The Anger Algorithm

Social media’s toxic profits may be pushing civilisation towards its biggest peril – from a technology that was supposed to connect us all

Algorithmic bias is disrupting democracies, lives, and livelihoods across the world. From Myanmar to Ethiopia, social media toxicity is spreading like a cyber cancer that is costing lives. Worse, it is affecting children, leading them to depression and, in extreme cases, suicide. Ironically, human civilisation faces its biggest peril from a technology that was supposed to connect us all and democratise information. Instead, technology titans have unleashed a monster: algorithms that fuel anger and amplify hateful messages by design. More than half of the world’s population of around 8 billion is exposed to this vitriol – such is the enormity of the crisis.

The business model fuelled by hate

The business model is simple: build machine-learning models that can identify our preferences with a startling level of accuracy by analysing what we post, what we like, what we read, and who we interact with online. Our social media footprints reveal more about us than we know ourselves. Armed with these findings, social media giants like Facebook, Google, Amazon, Instagram, and WhatsApp push content matching our preferences, which keeps us engaged and leads us to spend ever more time on their platforms. The more time we spend on their pages, the more ads we view, helping them rake in astronomical profits. And the chilling truth is that hateful messages are more engaging, and thus vastly more profitable.
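
To make that mechanic concrete, here is a deliberately simplified sketch of an engagement-ranked feed. The scoring weights, field names, and posts are invented for illustration and do not reflect any platform’s actual system; the point is only that a feed optimised for predicted attention will naturally favour whatever provokes the strongest reaction.

```python
# Illustrative sketch only: a toy engagement-ranked feed.
# The scoring function and fields are invented, not any platform's real code.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_clicks: float   # model's estimate of how likely we are to engage
    predicted_dwell: float    # estimated seconds we will spend on the post

def engagement_score(post: Post) -> float:
    # The feed optimises for predicted attention, not for accuracy or civility.
    return 0.7 * post.predicted_clicks + 0.3 * post.predicted_dwell

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first: the more we linger, the more ads we see.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("neighbour", "Lost cat found, all good!", predicted_clicks=0.1, predicted_dwell=4),
    Post("stranger", "Outrageous claim designed to anger you", predicted_clicks=0.9, predicted_dwell=40),
])
print([p.author for p in feed])  # the outrage post wins the top slot
```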

From genocides to suicides

The combined market value of Google, Apple, Facebook, and Amazon is close to US$10 trillion – over three times the GDP of India. Right now, Facebook, with annual revenues of US$86 billion and profits of nearly US$30 billion, is in the crosshairs of government regulators in most countries, thanks to a whistle-blower, Ms Frances Haugen, a former product manager. She came forward, armed with Facebook’s own internal research, claiming that the social media giant and its group companies Instagram and WhatsApp, with a global user base of 3.5 billion, deliberately ignored findings such as these: hateful messages had fuelled genocides in countries like Myanmar and Ethiopia, and cyberbullying and body-shaming had caused widespread depression among teenagers, even leading to suicides. “Facebook prioritized profits over user safety” was the most damning comment she made before the US Senate subcommittee probing her allegations.

Facebook was also accused of manipulating public choice when Cambridge Analytica, a consultancy involved in Donald Trump’s 2016 presidential election campaign, was found to have tapped the Facebook data of millions of American citizens.

The most distressing example is Myanmar, where viral fake news and hate speech about the Rohingya minority escalated the country’s ethnic divide into a full-blown conflict. After years of downplaying its role, Facebook admitted in 2018 that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

And Facebook is not the only culprit. In December 2020, Dr Timnit Gebru, an esteemed researcher in the field of AI ethics, was fired by Google, allegedly after she questioned an order to retract a research paper which concluded that AI systems mimicking language could hurt marginalised populations.

How does the algorithm work?

In contrast to conventional algorithms that are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the relationships within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. For example, an algorithm trained on ad-click data might learn that more women click on slimming products than men, and accordingly push such ads to them. This can be dangerous for teenagers, as misguided diet recommendations popping up while they browse are known to have caused serious health issues. Sadly, young people have been led to venerate a certain body type based on Instagram images, and those who fall short of that ideal can suffer acute depression.
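
The ad-click example can be reduced to a few lines of code. The sketch below uses an invented toy dataset and scikit-learn’s LogisticRegression purely to illustrate what “training on input data” means; real targeting models use vastly richer features and far more data.

```python
# A minimal, hypothetical sketch of "training on input data": a classifier
# learns from past ad clicks which users are likely to click a diet ad.
# The data is invented; real targeting systems use vastly richer features.

from sklearn.linear_model import LogisticRegression

# Features per user: [is_female, age_under_20, follows_fitness_pages]
X = [
    [1, 1, 1], [1, 0, 1], [1, 1, 0], [0, 0, 0],
    [0, 1, 0], [1, 0, 1], [0, 0, 1], [1, 1, 1],
]
# Label: did the user click the slimming-product ad?
y = [1, 1, 1, 0, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The trained model now automates the decision of who sees the ad next --
# including teenagers, with no notion of whether the ad is healthy for them.
teen_girl = [[1, 1, 1]]
print(model.predict_proba(teen_girl)[0][1])  # high predicted click probability
```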

AI cannot detect hate speech

Regrettably, Artificial Intelligence (AI) still fails to detect hate speech reliably, though some progress has been reported. According to a research paper, Functional Tests for Hate Speech Detection Models, jointly authored by research teams from the University of Oxford, The Alan Turing Institute, Utrecht University, and the University of Sheffield, scientists tested four top AI systems for detecting hate speech and found that all of them struggled, in different ways, to distinguish toxic sentences from innocuous ones.

The results are not surprising. Creating AI that understands the nuances of natural language is difficult. AI often misclassifies hateful content and, in some cases, even tags innocent text as toxic. Algorithms tend to confuse toxic comments with non-toxic comments that contain words related to gender, sexual orientation, religion, or disability – the areas usually targeted to create toxicity. For example, one user reported that simple, neutral sentences such as “I am a gay black woman” or “I am a woman who is deaf” received high toxicity scores, while “I am a man” received a low score.
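
One reason for this behaviour is easy to reproduce. In the toy sketch below (with invented training comments, not real data), a simple bag-of-words classifier learns that the word “gay” is a toxicity signal simply because, in its tiny training set, the word mostly appears inside abusive comments – so a harmless self-description inherits the word’s bad reputation.

```python
# A toy demonstration (invented data) of why classifiers over-flag identity terms:
# if the training comments containing words like "gay" are mostly abusive,
# a bag-of-words model learns the word itself as a toxicity signal.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "gay people are disgusting",     # toxic
    "deaf people are useless",       # toxic
    "I hate gay people",             # toxic
    "have a lovely day everyone",    # non-toxic
    "great game last night",         # non-toxic
    "thanks for the recipe",         # non-toxic
]
train_labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_texts, train_labels)

# A perfectly innocuous self-description inherits the words' bad reputation.
print(clf.predict_proba(["I am a gay black woman"])[0][1])   # high toxicity probability
print(clf.predict_proba(["I am a man"])[0][1])               # lower score
```

Production models are trained on vastly more data, but the same statistical association between identity terms and abuse creeps in whenever the training data is skewed.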

The results point to one of the most challenging aspects of AI-based hate-speech detection today: moderate too little and you fail to solve the problem; moderate too much and you could censor the terms that marginalised groups use to empower and defend themselves.

It’s missing the context

Furthermore, AI is at times unable to place the usage of certain words in context and therefore risks tagging them wrongly. It has been noticed that the inclusion of insults or profanity in a comment will almost always result in a high toxicity score, regardless of the intent or tone of the author. For example, the sentence “I am tired of writing this stupid essay” receives a toxicity score of 99.7%, while removing the word ‘stupid’ changes the score to 0.05%.
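
This failure mode is easy to caricature. The scorer below is not how production toxicity models work – it is a crude, hypothetical lexicon lookup – but it reproduces the behaviour described above: one flagged word drives the score, regardless of the sentence’s intent.

```python
# Illustration only: a crude lexicon-based scorer that mimics the failure mode
# described above. Real toxicity models are far more complex, but the effect is
# similar: a single flagged word dominates the score regardless of intent.

FLAGGED_WORDS = {"stupid": 0.95, "idiot": 0.97}

def toxicity_score(text: str) -> float:
    words = text.lower().split()
    # The score is driven by the worst word present, not by the sentence's meaning.
    return max((FLAGGED_WORDS.get(w, 0.01) for w in words), default=0.0)

print(toxicity_score("I am tired of writing this stupid essay"))  # ~0.95
print(toxicity_score("I am tired of writing this essay"))         # ~0.01
```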

Despite the fact that one of the released models has been specifically trained to limit unintended bias, most models are still likely to exhibit some bias, which can pose ethical concerns when they are used off-the-shelf to moderate content. Although there has been considerable progress on the automatic detection of toxic speech, we still have a long way to go before models can capture the actual, nuanced meaning behind our language – beyond the simple memorisation of particular words or phrases.

Of course, investing in better and more representative datasets would yield incremental improvements, but we must go a step further and begin to interpret data in context, a crucial part of understanding online behaviour. A seemingly benign text post on social media accompanied by racist symbolism in an image or video would be easily missed if we only looked at the text. We know that lack of context can often be the cause of our own human misjudgments. If AI is to stand a chance of replacing manual effort on a large scale, it is imperative that we give our models the full picture.

Ignoring toxicity fixes on purpose

The most damaging part of Frances Haugen’s testimony on October 5 is that Facebook has ignored recommendations on how to fix toxicity. In 2017, Chris Cox, Facebook’s long-time chief product officer, formed a new task force to understand whether maximising user engagement on Facebook was contributing to political polarisation. It did find a correlation, and reducing that polarisation would have meant taking a hit on engagement. The task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “anti-growth.” Most of the proposals didn’t move forward, and the task force was disbanded.

Detoxification of Facebook and social media is urgent and cannot be delayed simply because it is challenging. The first step Haugen recommends would be to “get rid of the engagement-based ranking.” She also advocates for a return to Facebook’s chronological news feed.
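
The difference between the two ranking policies can be shown in a few lines. The posts and “predicted engagement” numbers below are invented; the contrast is what matters: one feed is ordered by predicted attention, the other simply by recency.

```python
# A minimal contrast of the two ranking policies discussed above. The
# "predicted engagement" values are hypothetical model outputs; real systems
# weigh many more signals.

from datetime import datetime

posts = [
    {"text": "Inflammatory rumour", "posted": datetime(2021, 10, 4, 9), "predicted_engagement": 0.92},
    {"text": "Friend's holiday photos", "posted": datetime(2021, 10, 4, 18), "predicted_engagement": 0.20},
    {"text": "Local news update", "posted": datetime(2021, 10, 4, 12), "predicted_engagement": 0.35},
]

# Engagement-based ranking: the most provocative item floats to the top.
engagement_feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# Chronological feed (the alternative Haugen advocates): newest first, no attention optimisation.
chronological_feed = sorted(posts, key=lambda p: p["posted"], reverse=True)

print([p["text"] for p in engagement_feed])     # rumour first
print([p["text"] for p in chronological_feed])  # holiday photos first
```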

Mission Possible

During her testimony on October 5, Haugen said that Facebook had turned on filters that stopped hateful messages and fake news before the last US presidential election, but turned them off once the election was over. This shows that toxicity can be checked – if only the organisation wants to do it.

Organisations need to take a call on doing the right things as opposed to doing things right – the former needs a moral compass, while the latter is merely about sticking to the rules. The Facebook crisis will spread to other social media platforms, which will soon need to discipline themselves or be disciplined by their respective governments. China is already toughening its stance on social media usage; it might become a model for other countries.

The issue is alive and evolving fast. For the full implications of how biased AI systems can shake the fabric of society, stay tuned for our next episode.
