Reworking Platform Moderation: HRW Details on Meta and Palestinian Content


Concerns about censorship across platforms have grown as social networks increasingly limit voices that express support for Palestine. A new report from Human Rights Watch highlights how content that defends Palestinian rights or expresses sympathy for Palestinians has faced greater suppression on major platforms like Facebook and Instagram, sometimes even before users finish posting. The report suggests these actions are part of broader moderation practices that can chill free expression on the internet.

The 50-page study examines how Meta, the owner of Facebook and Instagram, has moved to restrict or remove content in response to events that began with the surprise Hamas attack and the Israeli military response. The violence that followed, including substantial casualties in Gaza and Israel, has intensified calls for accountability and for safety on social networks as users discuss ongoing crises.

NGOs have long criticized Meta for its content policies, and HRW’s latest evaluation calls these rules arbitrary and inconsistent. The organization argues the pattern of censorship stems from flawed policies, heavy reliance on automated moderation tools, and influence from governments seeking to shape what users can say online. The study also notes that decisions can appear ad hoc, with rapid takedowns following breaking news while similar posts in some cases remain visible.

Deborah Brown, HRW’s deputy director of technology and human rights, described Meta’s actions as adding to the suffering experienced by Palestinians at a moment of severe oppression. She noted that the platform’s handling of Palestinian-related content can suppress vital expressions of concern and documentation of humanitarian crises. The report frames this as a broader issue of how censorship practices may silence eyewitness accounts and public discourse around current events.

More than a thousand cases

HRW’s review covers 1,050 censorship incidents across more than 60 countries, revealing six recurring patterns: posts are removed; accounts are suspended or deleted; users are prevented from engaging with content; their ability to follow or tag other accounts is restricted; some features, such as live video, become inaccessible; and techniques such as shadow banning limit reach without warning. These tactics collectively reduce the visibility of content that documents or discusses the conflict and its human impact.

The document also criticizes Meta for applying lists of organizations designated as terrorist by the United States government in a way that curtails legitimate commentary about hostilities between Israel and Palestinian groups. In several instances, the platform reportedly removed journalism and documentation of injuries and deaths, or flagged content under policy areas such as violent content, hate speech, or nudity even when the material served a reporting purpose. HRW stresses that the way these rules are enforced often undermines both the platform’s credibility and the information ecosystem around major conflicts.

Meta has stated that it aims to follow human rights principles when responding to crises, claiming that its policies seek to balance safety with free expression. The company asserts that it uses basic guiding principles to address urgent situations as they unfold, while attempting to limit harm and prevent the spread of dangerous content. Critics, however, argue that such measures lack transparency and consistency, leaving users unclear about why certain posts disappear or why some accounts are restricted for extended periods.

Researchers call for clearer, more accountable moderation standards that protect essential reporting and personal testimony during times of emergency. They also urge platforms to publicly disclose the criteria used for urgent content decisions and to ensure that automated systems are supplemented by human review, especially when information involves casualties, battlefield updates, and humanitarian crises. In the meantime, users in North America and beyond continue to seek reliable channels for information and for speaking out about events in the Middle East, while demanding that digital spaces uphold basic rights to expression and access to information. [Citation: Human Rights Watch report on Meta moderation practices, 2024]
