How Social Media Platforms Failed to Avoid Censorship and to Limit Hate Speech and Disinformation During the Gaza War
LONDON: Tech giant Meta recently announced that, in an effort to curb anti-Semitism on its platforms, it will begin removing social media posts that use the term “Zionist” in contexts that refer to Jews and Israelis rather than representing supporters of a political movement.
The parent company of Facebook and Instagram previously said it would lift its blanket ban on the single most moderated term across all Meta platforms — “shaheed,” or “martyr” in English — after a year-long review by its Oversight Board found the approach was “overbroad.”
Similarly, TikTok, X and Telegram have long pledged to step up efforts to curb hate speech and the spread of misinformation on their platforms amid the ongoing war in Gaza.
These initiatives aim to create a safer and less toxic online environment. But as experts consistently point out, these efforts often fail, leading to empty promises and a worrying trend towards censorship.
“In short, social media platforms have not been very good at avoiding censorship or curtailing hate speech and disinformation about the war in Gaza,” Nadim Nashif, founder and director of 7amleh, a digital rights group advocating for Palestinians, told Arab News.
“Throughout the conflict, censorship and account takedowns have threatened efforts even to document human rights abuses on the ground.”
Nashif says hate speech and incitement to violence are still “rampant,” particularly on the Meta and X platforms, where anti-Semitic and Islamophobic content continues to “proliferate.”
Since the Hamas-led attack on October 7 that sparked the Gaza conflict, social media has been flooded with war-related content. In many cases, it has served as a vital window into the dramatic events unfolding in the region and has become a vital source of real-time news and accountability for Israeli actions.
Profiles supporting the actions of both Hamas and the Israeli government have been accused of sharing misleading and hateful content.
FAST FACT
1,050: Takedowns and other suppressions of Instagram and Facebook content posted by Palestinians and their supporters, documented by Human Rights Watch between October and November 2023.
Yet none of the social media platforms — including Meta, YouTube, X, TikTok, or messaging apps like Telegram — have publicly outlined policies designed to moderate hate speech and incitement to violence related to the conflict.
Instead, these platforms remain flooded with war propaganda, dehumanizing speech, genocidal statements, explicit calls for violence and racist hate speech. In some cases, the platforms take down pro-Palestinian content, block accounts and sometimes shadow-ban users from expressing their support for the people of Gaza.
On Friday, Turkey's telecommunications authority blocked access to Meta-owned social media platform Instagram. Local media reported that access was blocked in response to Instagram removing posts by Turkish users expressing condolences over the recent killing of Hamas political chief Ismail Haniyeh in Tehran.
The previous day, Malaysian Prime Minister Anwar Ibrahim accused Meta of cowardice after his Facebook post about the killing of Haniyeh was removed. “Let this serve as a clear and unequivocal message to Meta: Stop this display of cowardice,” Anwar, who has repeatedly condemned Israel's war in Gaza and its actions in the occupied West Bank, wrote on his Facebook page.
Meanwhile, footage of Israeli soldiers allegedly blowing up mosques and houses, burning copies of the Koran, torturing and humiliating blindfolded Palestinian prisoners, driving them handcuffed to the hoods of military vehicles and glorifying war crimes is freely available on mobile screens.
“Historically, platforms have been very bad at moderating content about Israel and Palestine,” Nashif said. “During the war in Gaza and the credible genocide going on, it just got worse.”
A Human Rights Watch report titled “Meta's Broken Promises,” published in December, accused the company of “systematic online censorship,” “inconsistent and opaque application of its policies,” and practices that silenced voices in support of Palestine and Palestinian human rights on Instagram and Facebook.
The report added that Meta's conduct “falls short of its human rights due diligence obligations” due to years of failed promises to address its “overbroad interference”.
Jacob Mukherjee, head of the political communications master's program at Goldsmiths, University of London, told Arab News: “I'm not sure to what extent you can really call them an effort to stop censorship.
“Meta promised to carry out various revisions – which, by the way, it has been promising for quite a few years since the last escalation of the Israeli-Palestinian conflict in 2021 – before October 7 of last year.
“But from what I can see, not much has changed, substantially speaking. Of course they had to respond to suggestions that they engaged in censorship, but from my perspective it was mainly a PR effort.”
Between October and November 2023, Human Rights Watch documented more than 1,050 takedowns and other suppressions of Instagram and Facebook content posted by Palestinians and their supporters, including content about human rights abuses.
Of these, 1,049 involved peaceful pro-Palestinian content that was censored or otherwise wrongfully suppressed, while one case involved the removal of pro-Israel content.
However, censorship seems to be only part of the problem.
7amleh's violence indicator, which tracks real-time data on violent content in Hebrew and Arabic on social media platforms, has recorded more than 8.6 million pieces of such content since the start of the conflict.
Nashif says the proliferation of violent and harmful content, mostly in Hebrew, is largely due to a lack of investment in moderation.
This content, which primarily targeted Palestinians on platforms such as Facebook and Instagram, was used by South Africa as evidence in its case against Israel at the International Court of Justice.
Meta is probably not alone in bearing responsibility for what South African lawyers have described as the first genocide broadcast live to cellphones, computers and television screens.
X also faced accusations from supporters of both Palestine and Israel that it was giving free rein to accounts known to spread disinformation and fake images, often shared by prominent political and media figures.
“One of the main problems with current content moderation systems is the lack of transparency,” said Nashif.
“When it comes to AI, platforms do not release clear and transparent information about when and how AI systems are implemented in the content moderation process. Policies are often opaque and allow platforms a lot of freedom to do as they see fit.”
For Mukherjee, moderation conducted behind a smokescreen of opaque policies is a deeply political issue, requiring these companies to strike a “balance” between political pressure and “managing the expectations and desires of the user base.”
He said: “In a way, these AI tools can be used to shield the real power holders, the people who run the platforms, from criticism and accountability, which is a real problem.
“These platforms are private monopolies that are essentially responsible for regulating an important part of the political public sphere.
“In other words, they help shape and regulate the arena in which conversations take place, in which people form their opinions, from which politicians feel the pressure of public opinion, and yet they are completely unaccountable.”
Although there have been examples of pro-Palestinian content being censored or removed, as revealed by Arab News in October, these platforms made it clear, long before the Gaza conflict, that it was ultimately not in their interest to remove content from their platforms.
“These platforms are not created for reasons of public interest or to ensure an informed and educated population that is exposed to a range of viewpoints and equipped to make informed decisions and form opinions,” Mukherjee said.
“The fact (is) that business models actually want there to be a lot of content, and if it's pro-Palestinian content, then so be it. Ultimately, it's still eyeballs and engagement on the platform, and content that drives strong sentiment, to use industry terms, gets engagement, and that means data, and that means money.”