Halting the Spread of Hatred on Social Networks


Following the live video broadcast of the Christchurch massacre in New Zealand, it is imperative that we make web platforms more accountable so they do not serve as a sounding board for hate speech.

Live death. That is the nauseating sight that hundreds of thousands of people shared on the internet on March 15. The far-right terrorist Brenton Tarrant, who massacred 50 people in two Christchurch, New Zealand mosques, had taken the cynical step of filming his act and broadcasting it in real time on Facebook to give it even greater resonance.

The social network took almost half an hour to react, the time it took for a first internet user to report the video's content. By then it was too late. Owing to the viral nature of the internet, footage of the atrocity spread like wildfire, shared and reshared on Facebook, YouTube and various other platforms.

This scandal exposes the limited control social networks have over video sharing. Mark Zuckerberg’s team claims to have removed 1.5 million copies of the attack video, 1.2 million of which were blocked at upload, before they ever appeared on the platform. Significant, but insufficient. Beyond the hundreds of thousands of people who actually watched it, websites — both reputable and disreputable — can still duplicate and share the offensive content.

The problem goes beyond the incitement of deeply unhealthy voyeurism. Circulating the video also fuels a very dangerous proselytizing. Just hours after the killing, American “alt-right” sympathizers, neo-Nazis and extremists collectively shared the video to celebrate their new “hero,” some calling on others to take inspiration from his act.*

Eradicate the Extremist Ecosystem

Under the guise of respect for freedom of expression — the founding principle of the web and a cardinal value in the United States, home to most of these major platforms — the platforms contribute, in spite of themselves, to the dissemination of hate speech.

After years of acting irresponsibly, the giants of the internet have taken up moderation with more or less success. Automatic filtering to combat child pornography online works well on the whole. Since 2016, the same approach has been adapted to deal with terrorist content. The Christchurch tragedy shows that there is still a long way to go.

The difficulty is that this dynamic is built into the way these large platforms operate. Their model rests on appealing to emotion and permanent outrage to increase visibility and provoke as many reactions as possible, for better but, too often, for worse. Online communities, organized in silos, become closed circles of the like-minded, encouraging people to communicate only with those who resemble them, which ends up removing inhibitions by fueling speech freed from any contrary view. These viral mechanisms are plainly more effective at spreading extremist ideas than moderate positions.

It will be very difficult to eradicate the online extremist ecosystem, which thrives on encrypted forums where radical Islamists and far-right activists have every opportunity to disseminate their deadly paranoia in the name of a clash of civilizations. The priority is to prevent the large platforms from acting as sounding boards. The social networks must use their technological power to stem the phenomenon. As for political leaders, they must hold the large platforms responsible by enacting legislation dissuasive enough to ensure that a tragedy like Christchurch can never again be used or exploited.

*Editor’s note: “Alt-right” refers to a white nationalist movement.
