Today's Editorial - 19 November 2023

How do deepfakes shrink online space for women?

News Excerpt:

Recently, a deepfake video of a famous actress went viral, raising concerns about the impact of deepfake technology on women and the broader issues of online harassment and violation of privacy.

What is a Deepfake?

  • Deepfake refers to a specific type of synthetic media (audio and visual) created or altered using deep learning techniques, a branch of Artificial Intelligence (AI) and Machine Learning. 
  • The term "deepfake" is derived from "deep learning" and "fake." 
  • It typically focuses on altering faces and voices in videos or images. 
  • These manipulated media can often appear highly realistic, making it challenging to discern the fabricated elements from the genuine ones.

Challenges posed by Deepfakes:

  • Prevalence of Non-Consensual Content: A study by Sensity AI, a company that monitors deepfakes, found that around 96% of deepfake content online was non-consensual pornography, overwhelmingly targeting women. 
    • This statistic underscores the disproportionate impact on women who become subjects of falsified explicit material without their consent.
  • Means of Online Harassment: Deepfakes have been highlighted as a means of amplifying online harassment and cyberbullying against women. 
    • They result in emotional distress, reputational damage, and personal trauma for the victims and can have profound consequences on their personal and professional lives. 
  • Privacy and Consent Violations: Deepfakes infringe upon individuals' rights to privacy and control over their images. 
    • They undermine consent by using someone's appearance without permission, leading to a loss of agency and autonomy over one's identity and personal data.
  • Digital Safety: As this technology makes distinguishing between authentic and manipulated content challenging, it contributes to a climate of distrust, affecting how people perceive and interact with media and information online.
    • A survey by Plan International revealed that 60% of girls and women experienced distrust of social media platforms, leading to a reduction in social media usage for one-fifth of them.

Legal Provisions: 

  • Section 66E of the IT Act, 2000 punishes the violation of privacy, i.e., intentionally capturing, publishing, or transmitting images of a person's private area without their consent. It could be invoked in cases involving the creation or dissemination of deepfake videos that violate someone's privacy. 
  • Section 66D of the IT Act, 2000 deals with cheating by personation using communication devices or computer resources. It could apply where individuals create or spread deepfake content with malicious intent to deceive or harm others. 
  • Section 51 of the Indian Copyright Act, 1957 addresses copyright infringement. If a deepfake involves the unauthorised use of someone's copyrighted images or videos, this provision could be used to take legal action against those responsible.

Way forward: 

  • Awareness and Education: Both men and women should be informed about the existence and potential risks of deepfake technology. They need to be taught how to identify manipulated content and the steps they can take to prevent its spread.
  • Cultural Shift and Responsibility: Initiating conversations about consent, respect, and responsible online behaviour is essential. 
    • Both men and women need to be part of this dialogue to foster a culture that respects privacy and condemns any form of digital harassment or violation. This cultural shift would also challenge the victim-blaming that exists in society.
  • Digital Literacy and Online Safety: Promoting digital literacy is crucial. This includes understanding privacy settings on social media platforms, employing strong security measures, and being cautious about sharing personal information online. 
  • Legal Support and Advocacy: There is a need for robust legal frameworks to address the issues surrounding deepfakes and non-consensual image sharing. Advocacy groups and legal entities can support victims and work towards creating and enforcing laws that protect individuals from such violations.
  • Psychological Support: Victims of deepfake incidents may experience psychological distress. Access to counselling, mental health support, and safe spaces to share experiences can help individuals cope with the emotional impact of such violations.
  • Technology Solutions: Innovations in technology, including AI and digital forensics, can play a role in developing tools to detect and counter deepfakes. Platforms and tech companies should invest in solutions to identify manipulated content and prevent its dissemination.

Conclusion:

Addressing the challenges posed by deepfake technology requires a multifaceted approach. To create a safer online environment for all, it is essential to promote the ethical use of AI and emphasise the importance of consent in society.