But as the war unfolds, who can post such videos and what people can say about them will be determined in part by content moderation policies, which vary widely from social network to social network.
These policies can mean the difference between a particular video going viral or being removed from the site.
On Google’s YouTube and Meta’s Facebook and Instagram, users can support Israel, call for peace, and lament the plight of Palestinians. Expressing support for Hamas, however, is prohibited: both companies consider Hamas an extremist organization, which means members of the group cannot use the companies’ platforms, nor can anyone post videos or images created by Hamas there.
TikTok has previously declined to comment on which groups it designates as extremist organizations, but confirmed to the Post that Hamas is also banned from its platform.
Still, videos believed to have been shot by Hamas members, some showing the bodies of Israeli hostages and victims, have been published on all three platforms. In some cases this may be permitted under exceptions for newsworthiness or for “counter-speech,” in which people repost offensive content in order to denounce it.
In contrast, Telegram, an influential messaging platform, does little to moderate content. It openly hosts a Hamas channel that distributes gruesome footage and images of dead Israelis to more than 100,000 subscribers. And some of those posts are being rebroadcast on Elon Musk’s X (formerly Twitter), which nominally bans Hamas content but, having fired most of its employees, appears to have done relatively little to police it.
Experts say X, in particular, has become a hub for posts and videos that have been removed by other platforms for violating rules against graphic violence and hate speech. On Tuesday, EU Commissioner Thierry Breton posted a letter to Musk warning that regulators had “indications” the site may be in breach of European rules on violent and terrorist content, as well as disinformation.
In Israel, some authorities are encouraging parents to remove their children from social media to prevent them from being exposed to violent content, after Hamas leaders said they would broadcast the execution of Israeli hostages.
When deciding which posts to remove during a war, social media companies must weigh the goal of protecting users from violent, hateful, and misleading content against the goal of allowing free expression, including newsworthy material and potential evidence of war crimes, said Evelyn Douek, an assistant professor at Stanford Law School. And they often have to make those calls under time pressure, without complete information.
“In the midst of escalating conflict and humanitarian atrocities, there are no good options for platforms seeking to responsibly moderate content,” Douek said. “This is a very difficult problem, both technically and normatively.”
In the case of the Hamas-Israel war, such calls are complicated by the companies’ desire to avoid being seen as aiding a terrorist organization by allowing it to broadcast propaganda, threats, hostage videos, and even executions. Facebook has been sued in the past by families of people killed by Hamas. And earlier this year, Google, Twitter, and Meta defended themselves in the Supreme Court against claims that they materially supported the terrorist organization ISIS by hosting or promoting content such as recruitment videos. (The technology companies won both cases.)
But defining what qualifies as an extremist group is not always easy, and social media platforms have long faced intense scrutiny over which government officials, political movements, militaries, and violent regimes are given a voice and which are silenced. After the U.S. withdrew its troops from Afghanistan in 2021, social media companies were forced to make a high-stakes decision on whether to continue banning the Taliban as the group took over the government.
Facebook ultimately chose to ban the Taliban, but Twitter allowed the group to maintain an official presence as a de facto government.
“Platforms are notoriously opaque about what organizations they designate as dangerous or terrorist organizations,” Douek said. “This is also an area where platforms tend to be cautious for fear of legal liability.”
When it comes to content supporting Hamas, erring on the side of caution can mean removing videos showing atrocities. But it can also mean suppressing legitimate expression by people who support the liberation of Palestine.
“Within social media companies, the category you fall into determines how your speech is treated,” said Anika Collier Navaroli, a former Twitter content policy official. “Speech from a political party will be treated very differently from speech from a terrorist, and speech from a legitimate nation-state will be treated differently from speech from a group that is not recognized as a legitimate nation-state.”
Last year, the consultancy Business for Social Responsibility, commissioned by Meta, published a report finding that the social media giant had unfairly suppressed the freedom of expression of Palestinian users during the two-week war between Israel and Hamas in 2021.
The technology company was praised for allowing users to share firsthand accounts of the bloody conflict. However, the report documented how Meta mistakenly removed content from some users and was more likely to take action against content written in Arabic than in Hebrew.
Earlier this year, Meta relaxed its rules against praising dangerous groups or figures, allowing more posts that reference extremist groups in the context of discussion of politics or social issues, such as news reporting or academic commentary on current events.
Still, Amir al-Khatatbeh, who runs an Instagram account under the handle @Muslim with around 5 million followers, said he worries that similar dynamics are playing out in this war. “There are many people who have had their posts removed or who have been restricted from using Instagram’s live video feature for posting in support of Palestine,” he said.
On TikTok, both the #Israel and #Palestine hashtags have amassed tens of billions of views, and young people are turning to the platform for news and perspectives on the conflict. But at least one prominent account covering news from a Palestinian perspective received a notice on Monday that it would be permanently banned. TikTok spokesperson Jamie Favazza said Tuesday that the ban was a mistake and the account had been reinstated.
Favazza said that since the Hamas attack began, TikTok has shifted more content moderators to focus on posts about the conflict, including posts in Arabic and Hebrew. The platform is also blocking some hashtags associated with graphic violence and terrorist propaganda, such as videos of hostages and executions, and is working with fact-checkers to identify misinformation. Even so, a quick scan of popular searches such as “Israel” and “Gaza” on Tuesday turned up many unrelated videos of past conflicts presented as if they were news. In other videos, graphic footage of Israeli victims, likely originally produced by Hamas, racked up views under thinly veiled commentary denouncing the acts.
As for YouTube, spokesperson Jack Malon said the platform is working to connect users searching for war-related terms with trusted news sources. He added that YouTube removes hate speech targeting both Jewish and Palestinian communities.
In the first hours after the Hamas attack, graphic footage circulated on smaller platforms with permissive content rules, such as Gab and Telegram, said Yael Eisenstat, a vice president at the Anti-Defamation League and a former Facebook policy executive. From there, it inevitably gets reposted on mainstream platforms, where, depending on each company’s policies and enforcement, it can flourish or wither. Much of it found a home on X.
“Right now, even on YouTube and even on Meta, it’s harder to find things that are clearly violating, especially things that are more antisemitic,” Eisenstat said. “It’s very easy to find on X.”
On Telegram, an apparently official Hamas account with nearly 120,000 subscribers regularly posts gruesome videos of attacks on Israel. One video, viewed more than 77,000 times, shows an unidentified militant stomping on the face of a dead soldier. Many of the videos have been reposted on X. At least one was also posted on YouTube by Al Jazeera Arabic, a media outlet with 12.9 million subscribers, with some of the gore blurred out.
Telegram did not respond to a request for comment.
On Monday, X’s Safety account tweeted a change to the company’s policy allowing posts that would normally violate its rules to remain on the platform under a newsworthiness exception. “In situations like this, X believes that, while difficult, it’s in the public’s interest to understand what’s happening in real time,” the company said in a tweet.
Meta and TikTok partner with fact-checking agencies to label false and misleading information. X engages its users in a crowdsourced fact-checking project called Community Notes.
On Tuesday, participants in the project debated whether to apply fact-checking labels to a gruesome video posted by Donald Trump Jr., the former president’s son. The source of the video, which appeared to show militants firing at casualties on a concrete floor, was unclear. The video remained up Wednesday.
Drew Harwell and Cat Zakrzewski contributed to this report.