In a move that highlights ongoing efforts to curb online hate speech, a leading live video streaming platform joined the European Union Code of Conduct on countering illegal hate speech online. The signing added the company to the list of tech and social platforms that have committed to monitoring and addressing harmful content more proactively. The step aligns with broader EU initiatives aimed at creating safer digital spaces, particularly for younger users, who can be more vulnerable to abusive material and misleading information.
Earlier this year, the platform also formalized its participation in the EU Code of Practice on Disinformation, a separate framework that brings together major players across the tech landscape. Signatories of the hate speech Code of Conduct, including Facebook, Microsoft, Twitter, YouTube, Instagram, Snapchat, TikTok, and LinkedIn, are expected to uphold standards for rapid content assessment, user reporting, and transparent moderation as part of a collective effort to reduce the spread of false or harmful content online.
Věra Jourová, Vice-President for Values and Transparency at the European Commission, welcomed the development, noting that the platform's inclusion strengthens joint action against hate speech and abuse. Her remarks underscored the priority of safeguarding young audiences and keeping online spaces welcoming and safe for users of all ages. The broader implication is that visible improvements in content governance can help rebuild trust in digital services while maintaining a balanced approach to free expression.
Recent monitoring exercises assessing the Code's effectiveness show that participating ICT companies reviewed a large majority of notifications within 24 hours and removed a substantial share of the flagged content. These figures reflect a concerted effort to translate policy commitments into concrete moderation outcomes, though they also illustrate the ongoing challenge of distinguishing harmful material from legitimate expression across diverse languages and cultural contexts. The trend signals progress, yet it also invites continued refinement of criteria, workflows, and tooling so that moderation can scale without compromising user rights or access to information.
Experts and policymakers emphasize that the Code of Conduct operates in concert with other regulatory frameworks governing online platforms. In particular, the Digital Services Act sets out responsibilities that go beyond flagging or removing hate speech, requiring transparency, user empowerment, and accountability across the digital marketplace. Pairing voluntary commitments with binding regulatory standards is intended to create a cohesive system in which platforms, regulators, and civil society can collaborate to reduce online harm while preserving legitimate discourse and innovation. (European Commission, 2024)