Shallow fakes or cheap fakes

GS Paper III

News Excerpt:

In the run-up to the General Elections, a new threat has emerged: misinformation spread through deepfakes and other forms of forged video.

More About the News: 

  • The upcoming 2024 elections are poised to set records, with an extraordinary number of voters expected to participate across more than 50 nations, representing half of the world's population. 
  • Misinformation tends to proliferate during elections, and this year presents an even greater challenge as deepfakes and generative artificial intelligence can contribute to the spread of false information. 
  • However, it's not just deepfakes that pose a significant concern; shallow fakes, or cheap fakes, are perhaps even more worrying. 

The main difference between deepfakes and shallow fakes lies in the level of sophistication and the technology used to create them:

  • Deepfakes: 
    • Deepfakes are created using artificial intelligence (AI) algorithms, particularly deep learning techniques, to superimpose or replace faces and voices in videos with high levels of realism. 
    • These videos often involve complex algorithms trained on large datasets to convincingly alter the appearance and voice of individuals. 
    • Deepfakes can be extremely difficult to detect with the naked eye and require specialized techniques to identify.
  • Shallow fakes: 
    • Shallow fakes are typically created using conventional editing tools or simple software that do not involve advanced AI algorithms. 
    • They may involve basic techniques such as splicing, cropping, or altering the speed of videos, as well as mis-captioning or mis-contextualizing existing content (a minimal illustration of such a basic edit follows this list).
    • While shallow fakes can still be deceptive, they are generally easier to detect compared to deepfakes because they lack the level of realism achieved through AI technology.
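
As noted above, shallow fakes rely on nothing more sophisticated than everyday editing operations. The minimal Python sketch below, which assumes the Pillow library and uses hypothetical file names, shows how a simple crop can strip away the context of a photograph, the kind of low-effort manipulation the term "cheap fake" refers to.

```python
# A minimal sketch of a "cheap fake" style edit: cropping a photo so that
# surrounding context is lost. Assumes the Pillow package is installed;
# the file names are hypothetical placeholders.
from PIL import Image

def crop_out_context(src_path: str, dst_path: str) -> None:
    """Keep only the left half of the frame, discarding whatever appears on the right."""
    img = Image.open(src_path)
    left_half = img.crop((0, 0, img.width // 2, img.height))
    left_half.save(dst_path)

if __name__ == "__main__":
    crop_out_context("rally_photo.jpg", "misleading_crop.jpg")
```

No AI is involved here, which is precisely why such edits are cheap to produce but, as the list above notes, also comparatively easy to detect.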

Impact on the electoral process: The Chief Election Commissioner recently acknowledged the complex challenge of addressing misinformation in the digital age, highlighting the importance of responsible behaviour from political parties in mitigating this threat during elections.

  • Misinformation: Shallow fakes can be used to spread false information about candidates, parties, or electoral processes. For example, manipulated images or videos could depict candidates in compromising situations or make false claims about their actions or statements.
  • Forms of Misinformation: 
    • A study conducted by the Reuters Institute for the Study of Journalism at Oxford University found that 59% of misinformation during the early stages of the pandemic involved re-configuration, where existing true information is spun, twisted, or recontextualized. This form of manipulation is distinct from completely fabricated content, which accounted for 38% of misinformation.
  • Absence of Deepfakes: The study did not find examples of deepfakes in the sample. Instead, simpler tools were used to create "cheap fakes," indicating that more sophisticated forms of AI-generated content were not as prevalent at that time.
  • Impact on Social Media Interactions: Reconfigured misinformation accounted for a significant portion (87%) of social media interactions, highlighting the effectiveness of this manipulation tactic in shaping public discourse and garnering engagement.
  • Manipulation of Perception: By selectively editing or misrepresenting content, shallow fakes can manipulate public perception of candidates or parties. 
    • For instance, morphed images or edited videos could portray candidates in a negative light or distort their statements to create a false narrative.
  • Influence on Voter Behaviour: 
    • Shallow fakes circulated on social media platforms can influence voter behaviour by shaping their opinions and attitudes towards political figures. 
    • Misleading content may sway voters' decisions or reinforce existing biases.
  • Speed and Virality: 
    • With the rapid dissemination of information on social media, shallow fakes can spread quickly and widely, reaching a large audience within a short period. 
    • This amplifies their potential impact on the electoral process, as false or misleading content can go viral and become ingrained in public discourse.
  • Platform Response: Platforms took action to remove significant quantities of reconfigured cheap fake and shallow fake content during the pandemic, reflecting efforts to curb the spread of misinformation.
  • Media Literacy Approach: This involves techniques such as the SIFT method, whose steps are to Stop and assess one's emotional response, Investigate the source, Find alternative coverage, and Trace the original image or video (a scripted illustration of the "Trace" step appears after this list).
  • Global Concerns: The World Economic Forum's Global Risk Report 2024 identifies India as facing the highest risk of misinformation and disinformation, emphasizing the global significance of addressing this issue.
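
As mentioned in the media literacy point above, the final SIFT step is to trace an image or video back to its original. One way a fact-checker might script part of that step is to compare a circulating image against a suspected source copy using a perceptual hash. The sketch below is only illustrative; it assumes the Pillow and imagehash packages, and the file names and distance threshold are arbitrary placeholders.

```python
# Illustrative sketch of the SIFT "Trace the original" step: compare a
# circulating image with a suspected source copy via perceptual hashing.
# Assumes the Pillow and imagehash packages; file names are hypothetical.
from PIL import Image
import imagehash

def likely_same_source(circulating_path: str, original_path: str, threshold: int = 8) -> bool:
    """Return True if the two images are perceptually close.

    A small Hamming distance between perceptual hashes suggests the viral
    image is a lightly altered copy of the original (for example, re-captioned
    or re-compressed); a large distance points to heavier manipulation or an
    unrelated image.
    """
    circulating_hash = imagehash.phash(Image.open(circulating_path))
    original_hash = imagehash.phash(Image.open(original_path))
    return (circulating_hash - original_hash) <= threshold

if __name__ == "__main__":
    print(likely_same_source("viral_post.jpg", "archive_copy.jpg"))
```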

Detection of Fakes:

  • Detection Tools: Various online tools and platforms have been developed to detect deepfakes and cheapfakes, including Sensity, Microsoft Video Authenticator, Fabula AI, and The Content Authenticity Initiative (CAI) by Adobe. These tools use AI algorithms and image recognition technology to identify manipulated media.
  • Detection Methods: Several detection methods have been proposed for identifying cheapfakes. These include:
    • Camera fingerprinting: Every camera leaves unique artifacts on images, which can be analyzed to detect tampering.
    • Editing clues: Techniques like photoshopping leave traces that can be detected.
    • Compression clues: Editing and re-saving an image or video adds a further round of compression, leaving inconsistent artifacts that can serve as evidence of tampering (see the sketch after this list).
    • Human observation: Some cheapfakes exhibit visible distortions or unnatural characteristics that are easily identifiable to the human eye.
  • Impact and Examples: Cheapfakes have been used for various purposes, including online fraud, political propaganda, and inciting violence.
    • Examples include spreading hoaxes via messaging apps like WhatsApp, manipulating images to create false narratives, and using cheapfakes for facial recognition attacks.
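
To make the "compression clues" idea above concrete, the sketch below applies a basic Error Level Analysis (ELA): the suspect image is re-saved as a JPEG at a known quality and subtracted from itself, so regions edited after the original compression tend to show a different error level. This is a simplified illustration, not a production detector; it assumes the Pillow package, and the file names and quality setting are placeholders.

```python
# A simplified "compression clue" check via Error Level Analysis (ELA).
# Assumes the Pillow package; file names and quality value are placeholders.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and return the per-pixel difference.

    Areas pasted or retouched after the original JPEG compression often
    show a noticeably different error level from the rest of the frame.
    """
    original = Image.open(path).convert("RGB")

    # Re-compress at a known quality and reload the result.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # The pixel-wise absolute difference highlights inconsistently compressed regions.
    return ImageChops.difference(original, recompressed)

if __name__ == "__main__":
    ela_map = error_level_analysis("suspect_photo.jpg")
    # Amplify the (usually faint) differences so they are easier to inspect visually.
    ela_map.point(lambda value: min(255, value * 10)).save("ela_map.png")
```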

Way Forward: 

  • The Indian government instructed “social media intermediaries” to remove morphed videos or deepfakes from their platforms within 24 hours of a complaint being filed, in accordance with a requirement outlined in the IT Rules, 2021. The Rules also prohibit hosting any content that impersonates another person and require social media firms to take down artificially morphed images when alerted.
  • The Indian IT Ministry has also issued notices to social media platforms stating that online impersonation is illegal under Section 66D of the Information Technology Act, 2000.
  • The EU has issued guidelines for the creation of an independent network of fact-checkers to help analyse the sources and processes of content creation. The EU’s code also requires tech companies including Google, Meta, and X to take measures in countering deepfakes and fake accounts on their platforms.
  • China has issued guidelines to service providers and users to ensure that any doctored content using deepfake tech is explicitly labelled and can be traced back to its source.
  • The United States of America has introduced the bipartisan Deepfake Task Force Act to counter deepfake technology.

Conclusion:

The way forward requires vigilance, ethics, and open communication between tech innovators, lawmakers, researchers, and the public. With thoughtful governance and innovation guided by shared human values, deepfakes could yet usher in a new age of creativity, access, and prosperity for all.
