During a recent US presidential election campaign, Donald Trump supporters could be seen at rallies wearing T-shirts emblazoned with a large letter Q. The shirts marked their wearers as followers of QAnon, a far-right movement built on unfounded conspiracy theories and devoid of credibility. According to QAnon, a cabal of Satan worshippers is trafficking children across the globe and plotting to overthrow Trump.
QAnon's roots predate Trump's election in 2016. A conspiracy theory circulating on Twitter under the hashtag #SpiritCooking claimed that Democratic candidate Hillary Clinton's campaign chairman was an occultist whose dinner party was in fact a secret satanic ritual. #SpiritCooking later evolved into "Pizzagate," built on the rumor that these satanic activities centered on a pizza restaurant in the Washington, D.C., area. Barely a month after the rumor began spreading online, a man walked into that restaurant and opened fire with an AR-15 rifle.
The rise of QAnon is not unique. Because the internet knows no borders, false information and conspiracy theories can emerge from anywhere. Many people worry that hostile foreign governments use such invisible weapons to attack and influence voters. Facebook CEO Mark Zuckerberg initially declined to take responsibility for such attacks, but a series of investigations later showed that social media platforms can indeed radicalize and harm people who hold differing views, especially those in psychologically fragile states.
Advances in artificial intelligence and data-tracking technology have only magnified the power of false information. The recommendation algorithms used by Facebook and YouTube trap users in echo chambers, feeding them ever more of what they already believe. Once you fall down a rabbit hole of misdirection and hate, your thinking hardens and escape becomes difficult. Politicians know this. Trump has repeatedly used his huge Twitter following to spread racism and conspiracy theories, forcing Twitter to flag some of his tweets as misleading.
Knowing they had to respond, social media companies modified their algorithms, hired large numbers of content moderators, developed automated systems to detect and remove extremist content and false information, updated their posting policies and revised their privacy rules, all in the hope of banning, or at least reducing, hateful, bullying or untrue content.
So far, this kind of malignant content has outlasted both the defenses and the will of the tech companies. Social media profit models depend on maximizing the time users spend on their platforms, and banning or aggressively deleting posts hurts that business model. Numerous studies have shown that false information comes disproportionately from right-wing groups, but cracking down too hard on such groups invites charges of political bias. Social media companies are damned if they do and damned if they don't.
Cyberbullying and false information are like climate change: global disasters of our own making that have created a present in which the truth is all but dead. They are rooted in humanity's worst instincts and have a long history. (An ancient Chinese saying holds that it takes only three people repeating a rumor to make it true.) Today, the omnipresence of technology has poisoned people's thinking. Yet demanding that tech companies solve the problems they created is not realistic. We can only count on people to discern the truth and wake up to it.
The author is president of the National Taipei University of Business.