Faced with Jihadi Content, Facebook’s Methods Are Inscrutable

“Why would you disable my personal account? My real name is Isis Anchalee!” This user has reason to be upset. Despite three attempts to reactivate her Facebook page after it was suspended following the Nov. 13 attacks in Paris, the social network has still not granted her request.

And for good reason: her first name, a homonym of the English name for the Islamic State (ISIS, an acronym for Islamic State in Iraq and Syria), was enough to get her suspended from Mark Zuckerberg’s network.

As reported by The New York Times, a Facebook engineer relayed his apologies on Twitter and promised to look into the source of the problem.

The social network has until now sought to show its solidarity after the events in Paris. On Friday night, it activated Safety Check, a feature that lets users signal to loved ones that they are safe, and it offered to tint users’ profile pictures with the colors of the French flag to honor the victims of the attacks.

Despite this, discontent is growing over the company’s strange moderation choices: on the one hand, a quick show of support for France paired with the blind blocking of content containing the word ISIS; on the other, a reluctance to remove videos of jihadi content.

Futile Alerts

Matthieu Lietaert, the Belgian author of the book “Homo Cooperans 2.0,” published a widely shared post on his blog questioning Facebook’s attitude toward a video, posted by an Italian media outlet, that shows two young Belgian jihadis issuing a call to arms against Europe. After he alerted Facebook to its hateful content, he received a message from the company explaining that the video “does not violate [their] rules.”

In another message widely shared on social networks, an online moderator also expressed irritation after coming upon several videos in which the Islamic State group calls for jihad: “We reported each of these videos to Facebook so that they can all be removed. Facebook told us that they cannot delete the videos because ‘they do not violate their community standards.’”

Yet Facebook’s community standards are explicit: “Facebook removes any content that incites hatred.” A closer look at the social network’s rules, however, reveals that sharing such content does not automatically mean it will be censored: if a person shares hateful content in order to denounce it, Facebook does not delete it.

That is undoubtedly what explains the continued link to the video, originally published by an Italian member of the Five Star Movement.

Belated and Partial Moderation

On Tuesday, Nov. 17, after the two outcries on social networks, the video finally became inaccessible … but only in France, and at the government’s request. The social network confirmed: “There are dedicated teams working around the clock to take down this content; terrorism has no place on Facebook. But we look at the context and not just the video. If the purpose is to denounce it, we instead believe that these messages are educational and we keep them, unless the government asks for their removal.”

A policy of questionable effectiveness: the jihadi propaganda video had time to climb from 6 million to 14 million views in the 24 hours after the Nov. 13 attacks, and it remains accessible outside of France, even though authorities now believe the attacks were planned in Belgium.

An Administrative Block

An online petition has since been launched urging more responsiveness and severity in the moderation of jihadi content. Mr. Lietaert indicated late Wednesday afternoon that he “has not heard back from Facebook,” and the video was still visible in his country — an inconsistency that is difficult to justify in his eyes: “This is a crisis; we can learn from it and make clear to some people that they have responsibilities. It is useless to color profiles with the flag and to have a safety check if that is not consistent with background policy.”

On Nov. 19, the National Assembly voted on an amendment allowing the minister of the interior to “ensure the interruption of any online public communication service causing or advocating for the commission of acts of terrorism.” Legally, this extends to social networks.
