Mark, Please Do Not Intervene


Donald Trump's surprising triumph in the U.S. election sparked a wave of criticism from the media and news organizations over Facebook's editorial responsibility. The social network stands accused of having allowed, through its inaction, the spread of false news. According to some analysts, and even according to President Obama, the repetition of these stories and the so-called "filter bubble" managed to confuse the electorate, helping Trump defeat Hillary Clinton. An article published in the prestigious Columbia Journalism Review asserts flatly that the wave of false news stories contributed to Trump's success. It cites an analysis by Craig Silverman, a specialist in verification techniques and an editor at BuzzFeed, who studied content published during the presidential campaign by professional sources and by partisans of the left and the right (Hillary's and Trump's supporters) and showed that, sure enough, false and malicious stories were published. However, the study does not prove that either the false or the true stories had any real impact on the electoral decisions of the people exposed to them.

Now the media are calling for Facebook to implement verification mechanisms and to separate false news stories from true ones. Seriously, do we want Facebook to be the arbiter of truth? Mark Zuckerberg announced seven measures to facilitate the identification of false news and discourage its distribution. The first is that Facebook will deploy technology to predict which stories users will flag as false, even before they do so. It will also develop simple mechanisms for users to report a story to Facebook as false. The idea is that its own community, together with verification by independent fact-checking organizations, will flag a story's falsity, allowing Facebook to stop its distribution or label it with a warning. Although it is not stated, Facebook's algorithm will surely penalize reported stories and reduce their reach.

The problem with these measures is that a politically active community could denounce a true news story as false on Facebook simply because it does not serve its interests. And a true report based on information that cannot be independently verified (a leak, for example) would force independent fact-checkers to characterize it as unsubstantiated, devaluing it in the eyes of the public. In any case, the reader's judgment will be shaped by an opaque new mix of Facebook's algorithms, partisan action by the community, and the verdicts of independent fact-checking organizations.

Why deploy so many control mechanisms to do something that the credibility of newspaper brands and the reputation of their authors have done since the beginning of journalism? We believe, or stop believing, something because of who said it. Credibility is a delicate asset, built over time in a complicated relationship between media brands and their audience. That is why, if The New York Times said that a squadron of UFOs had flown over New York, we would believe it, but if a blog devoted to extraterrestrial life said the same, we would not. Clinton did not lose because of the influence of a handful of false news stories but because of something deeper that we have yet to understand.

The pleas for Facebook to intervene in news content run the risk of being answered. That would be very bad for the press and for freedom. Facebook should remain neutral.

Mark, please, do not intervene.
