AI, elections, disinformation

GS Paper III

News Excerpt:

As India goes through its 18th general election, countering artificial intelligence (AI)-generated disinformation will be a major challenge for political parties and administrative authorities.


  • In March 2018, the Cambridge Analytica scandal brought into mainstream public discourse the impact of social media on electoral politics, and the possibility of manipulating the views of Facebook users using data mined from their private posts.
  • The shadow of large language models looms over elections around the world, and stakeholders are aware that even one relatively successful deployment of an artificial intelligence (AI)-generated disinformation tool could impact both campaign narratives and election results very significantly.

Three-way trouble 

  • AI can accelerate the production and diffusion of disinformation in broadly three ways, contributing to organised attempts to persuade people to vote in a certain way.
    • First, AI can magnify the scale of disinformation many thousands of times.
    • Second, hyper-realistic deepfakes of pictures, audio, or video could influence voters powerfully before they can possibly be fact-checked.
    • Third, and perhaps most importantly, AI enables microtargeting of voters.
  • AI can be used to inundate voters with highly personalized propaganda on a scale that could make the Cambridge Analytica scandal appear microscopic, as the persuasive ability of AI models would be far superior to the bots and automated social media accounts that are now baseline tools for spreading disinformation.
  • The risks are compounded by social media companies such as Facebook and Twitter significantly cutting their fact-checking and election integrity teams. 
    • While YouTube, TikTok and Facebook do require labelling of election-related advertisements generated with AI, that may not be a foolproof deterrent.

AI as an imminent danger

  • A new study published in PNAS Nexus predicts that disinformation campaigns will increasingly use generative AI to propagate election falsehoods. 
  • The research, which used “prior studies of cyber and automated algorithm attacks” to analyze, model, and map the proliferation of bad-actor AI activities online, predicts that AI will help spread toxic content across social media platforms on an almost-daily basis in 2024. 
    • The fallout could potentially affect election results in more than 50 countries.
  • The World Economic Forum’s Global Risks Perception Survey ranks misinformation and disinformation among the top 10 risks, with easy-to-use interfaces of large-scale AI models enabling a boom in false information and “synthetic” content — from sophisticated voice cloning to fake websites. 
    • The report also warned that disinformation in these elections could destabilise societies by discrediting and questioning the legitimacy of governments.

Potential displayed

  • Generative AI companies with the most popular visual tools prohibit users from creating “misleading” images.
    • However, researchers with the British nonprofit Centre for Countering Digital Hate (CCDH), who tested four of the largest AI platforms — Midjourney, OpenAI’s ChatGPT Plus, Stability AI’s DreamStudio, and Microsoft’s Image Creator — succeeded in making deceptive election-related images more than 40% of the time.
  • The researchers were able to create fake images of Donald Trump being led away by police in handcuffs and Joe Biden in a hospital bed. 
  • According to a report by the BBC quoting a public database, users of Midjourney have created fake photos of Biden handing wads of cash to Israeli Prime Minister Benjamin Netanyahu, and Trump playing golf with Russian President Vladimir Putin.

Regulatory tightrope

  • The Indian government has asked digital platforms to provide technical and business process solutions to prevent and weed out misinformation that can harm society and democracy. 
  • The Minister for IT and Communications has said that a legal framework against deepfakes and disinformation will be finalised after the elections.
  • The IT Ministry had issued an advisory to companies such as Google and OpenAI, and to those running foundational models and wrappers, that their services should not generate responses that are illegal under Indian laws or “threaten the integrity of the electoral process”. 
    • The advisory faced a backlash from some generative AI startups, including investors in the ecosystem abroad, over fears of regulatory overreach that could throttle the fledgling industry.

Way Forward:

  • Regulation of AI and disinformation is the need of the hour; the government should frame it in proper consultation with all the stakeholders involved.
  • A public awareness campaign will also be a key measure, informing the people of India about deepfakes and disinformation and encouraging them to verify the source and content of what they consume.
  • The Election Commission should ensure proper surveillance so that elections are conducted fairly and no candidate’s image or character is assassinated through fabricated content.


Hence, AI has a positive, even life-changing, impact on society, but its misuse poses a real threat. To harness its benefits, its negative uses must be curbed through proper regulation, laws, and enforcement.
