The Silent Epidemic: AI-Generated Misinformation

Shree Priya Thakur | 28 March 2024


Summary

  • The WEF’s Global Risks Report 2024 ranks AI-generated misinformation as the second-largest current risk and the biggest short-term risk, with the potential to alter elections and induce societal polarisation.

  • AI-generated misinformation is uniquely dangerous because it can be generated on freely accessible platforms, dispersed at speed, and used to convincingly impersonate key individuals.

  • Countries differ in digital literacy penetration and in regulatory frameworks to govern AI misinformation, further widening the gap between the Global North and South; consensus-driven global frameworks and multistakeholder approaches are therefore essential.


The World Economic Forum's "Global Risks Report 2024" highlights "growing global fractures" as a defining challenge of the year, with "AI-generated misinformation and disinformation" ranked as a significant emerging risk, second only to "Extreme Weather Events." This concern is particularly acute in a year when over two billion individuals will participate in around 50 elections across economies constituting 60% of global GDP. The ability of AI-generated misinformation to manipulate election outcomes and exacerbate societal polarisation is alarming.

According to the United Nations Human Rights Council (UNHRC), misinformation is false or inaccurate information, whereas disinformation is the deliberate use of inaccurate information. The question is: if false content has always been part and parcel of election processes, what makes AI-generated misinformation so pervasive? Simply put, AI-generated synthetic information has changed the game by amplifying threats and fabricating trust. Through deepfakes such as tampered recordings of political representatives and altered video, generative AI can achieve photo-realistic results and convincingly impersonate targeted individuals. Fabricated footage of Ukrainian President Volodymyr Zelensky ordering troops to lay down their arms, US President Joe Biden asking people to boycott elections, and Pakistan’s electoral candidate Basharat Raja asking people to refrain from voting are but a few examples.

Several qualities make AI-generated misinformation a potent tool for disrupting electoral processes, deepening polarisation, and enhancing believability, as the WEF report highlights. Common platforms used for generating deepfakes (Synthesia, Deep Fake Web, etc.) require no niche skill set and are easy to use and freely accessible. These sophisticated technologies facilitate the seamless creation of manipulated images and remain “nascently governed.” The risks are exacerbated further because such information is emotively powerful and speedily dispersed, and because it enables a new category of harms, from non-consensual deepfakes to stock market manipulation.

As the “biggest short-term risk of 2024”, AI-generated misinformation is expected to disrupt the election super-year and perpetuate societal polarisation, creating ripples across the economic and geopolitical chessboard. In the electoral domain, almost three billion people are expected to vote over the next two years in major economies including India, the United Kingdom, and the United States. Misinformation and disinformation in this context can be detrimental, often leading voters to question the legitimacy of democratically elected governments.

Manifestations of its impact include rising unrest and ethnic violence. AI-generated misinformation spreads in high volume and at high speed, and is difficult to contain. Elections are particularly vulnerable because AI can tailor synthetic information to a specific audience, for example minority communities. Wide variation in digital literacy across countries (~79% in the Netherlands against ~38% in India) and populations largely apathetic to data privacy deepen the challenge of differentiating synthetic from authentic content. Long-term implications include the erosion of democratic processes and the promotion of civil confrontation, while the legislative frameworks needed to address misinformation and data privacy are absent in several countries (e.g., India, Singapore, and Japan have soft guidelines to regulate AI but no hard legislation).

In terms of societal polarisation, AI-induced synthetic information creates a self-perpetuating cycle, as “polarised societies trust information which confirms their beliefs”. Polarisation can take the form of strong political affiliations, a damaged perception of reality, and deteriorating social cohesion. Emotively heavy content, for example, can conceal facts and leverage ideologies and emotions to achieve political ends. AI-generated misinformation can alter public discourse and even promote discrimination in workplaces. False information can be weaponised to command and control as public trust shifts towards specific leaders or theorists. This is most noticeable in the form of digital authoritarianism, when flawed or hybrid democracies resort to suppressing dissent: in 2023, 2.4 billion people faced restrictions on internet usage.

The need of the hour is to strengthen fact-checking tools, develop consensus on the global governance of AI, and embark on mission-mode digital literacy programmes. AI-generated misinformation and disinformation is an invisible contagion to which no one is immune.



Forecast

  • Short-term:

    • AI-generated misinformation is highly likely to challenge social media companies’ ability to uphold integrity and carry out fact-checking in the face of intersecting influence campaigns.

    • Further, hybrid or flawed democracies are likely to face medium risk due to difficulties in managing real or perceived foreign influence in electoral processes. 

  • Medium-term:

    • Mature democracies are likely to face a legislative challenge in striking the right balance between regulatory control of AI and freedom of speech. The unchecked spread of falsified information can create echo chambers of misinformation, a breeding ground for fundamentalism, thus creating a high-risk situation.

  • Long-term:

    • Authoritarian digital norms are expected to take hold in hybrid democracies. The developed North is highly likely to export its digital norms to the developing South in the absence of a rules-based, consensus-oriented governance framework for AI misinformation.
