Twitter has changed its stance on misinformation about COVID-19. Since late November, the platform appears to have paused or rolled back some moderation policies that were used to contextualize or remove harmful claims related to the coronavirus. The move comes after months of debate about how social networks should handle false health information while preserving free expression.
In the early days of the pandemic, Twitter joined a wider industry effort to curb misinformation about the health crisis. The platform adopted measures to flag tweets that could spread false information about the virus, its health effects, treatment options, or vaccines. This approach mirrored actions taken by other major networks such as Facebook, YouTube, and Instagram, which sought to reduce the spread of dangerous or misleading content during a period of heightened uncertainty.
When posts or accounts violated these guidelines, Twitter typically issued a warning to the user, and repeated violations could lead to suspension. Between January 2020 and September 2022, the company said it had suspended thousands of accounts and removed a large volume of content under its health-information safeguards.
Policy Direction After the Change in Ownership
After ownership of Twitter shifted to a prominent tech entrepreneur, the platform signaled a renewed focus on freedom of expression as a guiding principle. The stated position was that content would be restricted chiefly where it broke the law, which suggested that disinformation about COVID-19 would not automatically qualify for removal. In the view of the new leadership, coronavirus-related misinformation could instead fall within the broad spectrum of contested and controversial discourse permitted on the platform.
In the weeks that followed, the platform announced an amnesty aimed at reinstating previously suspended accounts that had not engaged in illegal activity. The relaxation extended to users who denied the existence of the pandemic, attributed the spread of the virus to particular groups, regarded mask mandates as overreach, or linked high death tolls to far-reaching conspiracies. These ideas found growing support among some conservative and far-right users who welcomed the change in leadership.
Meanwhile, other major networks such as Facebook and Instagram continued to enforce their existing policies against health-related scams and dangerous misinformation while awaiting potential reviews or rulings by their oversight bodies. These platforms held a firmer line against unverified medical claims as the broader governance debate unfolded.