Misinformation & Disinformation in 2024

The disruptive capabilities of manipulated information are accelerating rapidly, and there is a risk that some governments will act too slowly, caught between preventing misinformation and protecting free speech


Sample these:

  • A few weeks ago, voters in the US state of New Hampshire woke up to robocalls featuring a voice that sounded like US President Joe Biden, urging them to skip voting in the state's Democratic presidential primary.
  • A wave of pornographic deepfake images of US pop star Taylor Swift, created using artificial intelligence, went viral on social media platforms this week. Before they could be taken down, the images had been viewed millions of times across the world.
  • In India, following actor Rashmika Mandanna's deepfake video controversy last November and a deepfake video of batsman Sachin Tendulkar, several deepfake videos of ICICI Prudential Mutual Fund Managing Director Nimesh Shah making stock recommendations for 2024 have surfaced as ads on the social media platform Facebook.

According to Cyfirma, a cybersecurity company, researchers have observed a 230% increase in deepfake usage by cybercriminals and scammers, and predict the technology could replace phishing within a couple of years.

The Severest Risk of 2024

Misinformation and disinformation has emerged as the most severe global risk anticipated over the next two years, with foreign and domestic actors alike expected to leverage it to further widen societal and political divides, according to the World Economic Forum's Global Risks Report 2024. As close to three billion people are expected to head to the electoral polls across several economies – including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and the United States – over the next two years, the widespread use of misinformation and disinformation, and tools to disseminate it, may undermine the legitimacy of newly elected governments.

Resulting unrest could range from violent protests and hate crimes to civil confrontation and terrorism. Beyond elections, perceptions of reality are also likely to become more polarised, infiltrating the public discourse on issues ranging from public health to social justice. And as truth is undermined, the risk of domestic propaganda and censorship will rise in turn. In response to mis- and disinformation, governments could be increasingly empowered to control information based on what they determine to be “true”. Freedoms relating to the internet, the press and access to wider sources of information – freedoms that are already in decline – risk descending into broader repression of information flows across a wider set of countries.

The report highlights that:

  • Misinformation and disinformation may radically disrupt electoral processes in several economies over the next two years.
  • A growing distrust of information, as well as media and governments as sources, will deepen polarised views – a vicious cycle that could trigger civil unrest and possibly confrontation.
  • There is a risk of repression and erosion of rights as authorities seek to crack down on the proliferation of false information – as well as risks arising from inaction.

Unprecedented Disruptive Capabilities

The disruptive capabilities of manipulated information are rapidly accelerating, as open access to increasingly sophisticated technologies proliferates and trust in information and institutions deteriorates. In the next two years, a wide set of actors will capitalise on the boom in synthetic content, amplifying societal divisions, ideological violence and political repression – ramifications that will persist far beyond the short term.

Misinformation and disinformation (#1) is a new leader of the top 10 rankings this year. No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites.

Governments Fight Back

To combat growing risks, governments are beginning to roll out new and evolving regulations to target both hosts and creators of online disinformation and illegal content. Nascent regulation of generative AI will likely complement these efforts. For example, requirements in China to watermark AI-generated content may help identify false information, including unintentional misinformation from AI-hallucinated content.
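To make the watermarking idea concrete, the sketch below illustrates one published approach to statistical text watermarking: a generator is nudged towards a pseudorandom "greenlist" of words, and a detector later tests whether greenlisted words are over-represented. This is a minimal, hypothetical illustration only – the function names and threshold are our own assumptions, and it is not the mechanism prescribed by China's rules or any specific regulation.

```python
# Hypothetical sketch: detecting a statistical "greenlist" watermark in text.
# Illustrative only; not the scheme used by any particular regulator or vendor.

import hashlib
import math


def in_greenlist(prev_word: str, word: str, fraction: float = 0.5) -> bool:
    """Deterministically assign `word` to the greenlist for context `prev_word`."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] / 255.0 < fraction


def greenlist_zscore(text: str, fraction: float = 0.5) -> float:
    """z-score of greenlist hits versus the rate expected in unwatermarked text."""
    words = text.lower().split()
    hits = sum(
        in_greenlist(prev, cur, fraction)
        for prev, cur in zip(words, words[1:])
    )
    n = max(len(words) - 1, 1)
    expected = n * fraction
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    # Unwatermarked text should hover near zero; a watermarked generator that
    # favours greenlisted words would push this score strongly positive.
    print(f"z-score: {greenlist_zscore(sample):.2f}")
```

In this toy setup, a strongly positive z-score (say, above 4) would suggest the text came from a watermarked generator, while ordinary human-written text scores near zero – which is what makes such labels potentially useful for flagging synthetic content at scale.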

Generally, however, the speed and effectiveness of regulation are unlikely to match the pace of development. Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years. Falsified information could be deployed in pursuit of diverse goals, from climate activism to conflict escalation.

New classes of crimes will also proliferate, such as non-consensual deepfake pornography or stock market manipulation. However, even as the insidious spread of misinformation and disinformation threatens the cohesion of societies, there is a risk that some governments will act too slowly, facing a trade-off between preventing misinformation and protecting free speech, while repressive governments could use enhanced regulatory control to erode human rights.

