The European Commission is coordinating a broad effort to curb disinformation on major digital platforms. On Monday, platforms such as Facebook and YouTube signed up to the EU's code of practice on disinformation and began implementing immediate measures to identify and label AI-generated content used to spread misleading information.
The step, announced in Brussels by Věra Jourová, Vice-President for Values and Transparency, at a meeting of the standing working group on the code of practice, marks a significant development. The new obligations, which will become mandatory when the forthcoming digital rules enter into force, reflect a shift toward more proactive content moderation in response to rapidly advancing technologies.
Officials stressed that the existing code did not adequately address the capabilities of modern AI tools, and the working group emphasized that it must evolve in step with technological and social change. The objective is clear: ensure that major technology companies help detect AI-generated misinformation and clearly label such material to inform and protect users. Early labeling is expected to help authorities keep pace with the rapid spread of AI applications online.
The Czech commissioner reiterated her commitment to upholding freedom of expression in Europe while acknowledging the need for careful regulation, arguing that the focus should be on codes of practice that balance free expression with safeguards against harmful content. These technologies, she noted, open new avenues for knowledge but also bring new risks for society, particularly the spread of disinformation.
Jourová reported that twelve more digital companies had joined, bringing the total number of signatories to forty-four. The growing coalition is expected to bridge the gap between the mandatory disinformation commitments of the Digital Services Act and their practical enforcement, and to promote consistent standards across platforms.
Alongside broader oversight of AI-generated content, the Vice-President for Values and Transparency urged companies to step up the fight against Russian disinformation amid the ongoing war in Ukraine. Information warfare, she said, is a critical front in the conflict, with pro-Kremlin sources disseminating disinformation on a large scale. The Commission called for robust verification measures and steady funding for fact-checking work to counter these campaigns.
Twitter and the Code of Practice
A central point of the meeting was Twitter's position following its exit from the code at the end of May. The group's stance is that Twitter will continue to be monitored and that the code's provisions will become mandatory for the company once the new law enters into force in August. This signals a firm push to extend the code's reach and ensure consistent standards for content moderation across platforms.
Jourová warned that Twitter's position and actions had drawn considerable attention and that the company's compliance with EU law would be scrutinized vigorously and without delay. The aim is to respond quickly and effectively to any non-compliance that raises the risk of disinformation slipping through or undermines user trust, with a faster, more decisive response where risk levels demand it.
The 2018 code, which currently covers roughly thirty digital service providers, includes major players such as Meta, Mozilla, Google, Microsoft and TikTok. Its ongoing expansion signals the EU's intent to harmonize practices across platforms and strengthen safeguards for users online, particularly against the misinformation and disinformation campaigns that threaten democratic discourse.