THE DANGER OF DEEPFAKES

What is Deepfake Technology?

Deepfake technology uses powerful computers and deep learning to manipulate digital media, including video, images, and audio files.

Using this technology, cybercriminals employ artificial intelligence to overlay a digital composite onto an existing video, photograph, or audio recording.

Deepfakes, or AI-generated synthetic media, are advantageous in a number of fields, including accessibility, education, filmmaking, criminal forensics, and artistic expression.

However, hyper-realistic digital falsification has become far less resource-intensive to produce (thanks to cloud computing, AI algorithms, and abundant data) and can be used to weaken public trust in democratic institutions, harm reputations, and fabricate evidence.

Origin of the Word: The term "deepfake" first appeared in 2017, when an anonymous Reddit user going by the handle "Deepfakes" created and posted pornographic videos using Google's open-source deep-learning technology.

Threats associated with Deepfake Technology:

  • Deepfakes can portray someone as engaging in deviant behaviour, fuelling societal unrest and polarisation and even affecting the outcome of elections.
  • 96% of deep fakes are pornographic videos that dehumanize women and inflict emotional harm.
  • Deepfakes can potentially worsen the trust gap in traditional media, be used by nation-state actors to destabilize the target nation's institutions and be abused by non-state actors (terrorist groups) to incite anti-state feelings among the populace.
  • The danger of the Liar's Dividend: Denials gain more legitimacy when an unfavourable reality is discounted as a deep fake or fake news.
  • Fake news, alternative facts, and deep fakes are weaponised to discredit genuine media and the truth.
  • Deepfake faces have become impossible to tell apart from genuine ones, according to a study by academics from Lancaster University and UC Berkeley; many participants even rated the synthetic faces as more trustworthy. Deepfakes are used not only to propagate false information but also to con individuals.

How are countries combating deepfakes?

China: The country's cyber authority, the Cyberspace Administration of China, is implementing new rules to limit the use of deep synthesis technology to stop deception. According to the regulation, deep synthesis service providers and users must ensure that any content altered using the technology is clearly identified and can be traced back to its source.

The European Union: A revised Code of Practice mandates deep fake prevention measures from tech giants like Google, Meta, and Twitter on their services. These businesses risk fines equal to up to 6% of their annual global turnover if found to be in violation.

US: To help the Department of Homeland Security (DHS) combat deep fake technology, the United States introduced the bipartisan Deepfake Task Force Act. The law requires the DHS to conduct an annual assessment of deep fakes to evaluate the technology used, monitor how domestic and foreign companies use it and develop countermeasures to deal with the issue.

Deepfake videos intended to influence elections are illegal to publish and distribute, according to laws established in Texas and California. Distribution of non-consensual deepfake pornography is illegal in Virginia and is punishable by law.

India and Deepfake technology: 

  • India has no law that specifically prohibits the use of deepfake technology. However, misuse of the technology can be prosecuted under existing provisions covering cybercrime, copyright infringement, and defamation.
  • Intel has introduced FakeCatcher, its latest deep fake detection tool. The technology has a 96% accuracy rate and can identify deep fakes in milliseconds. The solution utilizes Intel hardware and software, runs on a server, and communicates via a web-based platform.
  • Intel claims FakeCatcher is the first detector able to operate in real time, which sets it apart from other systems: there is no need to upload footage for analysis and wait hours for the results.
  • According to Intel, most deep learning-based detectors hunt for signs of inauthenticity in raw data; FakeCatcher adopts a different strategy and searches for genuine cues in real videos. It evaluates uniquely human characteristics, examining the "subtle blood flow in the pixels of a video."
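Intel's exact FakeCatcher pipeline is proprietary, but the general idea it describes, recovering a heartbeat-like signal from skin-colour variation in video (sometimes called remote photoplethysmography), can be sketched in a few lines. The function names, frame data, and thresholds below are all illustrative assumptions, not Intel's implementation:

```python
import numpy as np

FPS = 30.0  # assumed frame rate of the clip

def pulse_signal(face_frames):
    """Mean green-channel intensity of the face region per frame;
    blood flow subtly modulates skin colour from frame to frame."""
    sig = np.array([f[..., 1].mean() for f in face_frames])
    return sig - sig.mean()  # drop the constant (DC) component

def dominant_bpm(sig):
    """Frequency with the strongest spectral peak, in beats per minute."""
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / FPS)
    return float(freqs[spectrum[1:].argmax() + 1] * 60)  # skip the DC bin

def looks_real(face_frames, lo=40.0, hi=180.0):
    """Flag the clip as plausibly real if its dominant 'pulse' falls in a
    physiologically believable range (roughly 40-180 bpm)."""
    return bool(lo <= dominant_bpm(pulse_signal(face_frames)) <= hi)

# Synthetic demo: a "real" face pulsing at 1.2 Hz (72 bpm) versus a
# "fake" with an unnatural 5 Hz flicker (300 bpm) instead of a pulse.
t = np.arange(150) / FPS  # five seconds of video
real = [np.full((8, 8, 3), 120.0) + 2.0 * np.sin(2 * np.pi * 1.2 * ti) for ti in t]
fake = [np.full((8, 8, 3), 120.0) + 2.0 * np.sin(2 * np.pi * 5.0 * ti) for ti in t]
print(looks_real(real))  # True  - plausible human pulse
print(looks_real(fake))  # False - flicker outside the human range
```

A production detector would first need face detection, skin-region tracking, and far more robust signal processing; this sketch only shows why a synthesized face, which lacks a coherent blood-flow signal, can betray itself in the frequency domain.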

Solutions:

  • Meaningful rules and regulations, developed in consultation with the technology sector, civil society, and governments, are needed to discourage the production and dissemination of malevolent deep fakes.
  • Simple, accessible technological tools to identify fake news, validate media, and highlight reliable sources.
  • Consumer media literacy is the most effective defence against deep fakes and misinformation.
  • Social media platforms are recognising the issue of deep fakes, and practically all of them now have a policy or appropriate-usage guidelines addressing them.
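One simple building block for the media-validation tools mentioned above is a cryptographic fingerprint: a publisher releases a digest of the original file, and any later alteration, including a deepfake overlay, changes the digest and fails the check (content-provenance standards such as C2PA build on this idea). A minimal sketch using Python's standard hashlib; the function names and sample bytes are illustrative:

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of the raw media file."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_untampered(media_bytes: bytes, published_digest: str) -> bool:
    """True only if the file matches the digest its publisher released."""
    return fingerprint(media_bytes) == published_digest

# A newsroom publishes a clip alongside its digest; viewers (or
# platforms) can recompute the digest to detect any modification.
original = b"raw bytes of the original broadcast clip"
digest = fingerprint(original)

print(is_untampered(original, digest))            # True
print(is_untampered(original + b"edit", digest))  # False
```

Hashing alone cannot say whether the original was authentic, only whether a copy was altered after publication, which is why it is usually paired with signed provenance metadata.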