Hate speech on Twitter, including racist, anti-Semitic, and homophobic attacks, rose sharply after the platform was acquired by Elon Musk, according to multiple studies compiled by The New York Times on Friday. The data paint a troubling picture of how harmful rhetoric proliferated in the early months of Musk's ownership, raising questions about the platform's ability to police content effectively.
Since taking over, the billionaire entrepreneur has promised to expand freedom of expression on the platform and proposed amnesty for thousands of accounts previously suspended over their posts, among them the account of former US President Donald Trump, who had been barred following the attack on the Capitol. Critics warn that such assurances could encourage violence, even as brands grow wary of advertising on the platform.
Meanwhile, Twitter's workforce has been cut substantially, with thousands of employees laid off, including many who handled content moderation. Musk has sought to reassure advertisers that the site would not descend into a "Wild West" environment, yet numerous companies have paused or significantly reduced their advertising in response.
Beyond the shift in moderation, observers note a rapid reappearance of accounts previously removed for extremist ties. Data from researchers studying extremism and disinformation indicate a spike in ISIS-linked profiles re-emerging in the weeks after the change in ownership compared with the prior period, intensifying concerns about how the platform handles violent extremism. A related issue concerns verification: some users have leveraged the new paid verification system to obtain the blue check mark and project legitimacy for controversial content and conspiracy-focused accounts.
Experts emphasize that the combination of reduced moderation staff and more permissive verification may correlate with shifts in the platform's content landscape. While the full impact remains under study, these patterns highlight the ongoing tension between safeguarding free expression and curbing hate speech and harmful misinformation on major social networks. Research groups tracking digital hate and extremism, including the Anti-Defamation League, the Center for Countering Digital Hate, and the Institute for Strategic Dialogue, have documented rapid changes in content dynamics following the platform's shifts in policy and leadership.