How Artificial Intelligence Regulation is Shaping Up Globally

GS Paper III

News Excerpt:

As Artificial Intelligence (AI) technology evolves and is adopted by businesses at a staggering speed, governments and intergovernmental bodies across the world are moving to understand it and potentially put regulations around it.

Recent Developments:

  • The AI space has seen certain developments crucial to its regulation in recent years:
    • The United Nations Resolution on Artificial Intelligence.
    • The AI Act by the European Parliament.
    • The laws introduced on AI in the U.K. and China.
    • The launch of the AI mission in India. 
  • These efforts to formalise AI regulations at the global level will be critical to various sectors of governance in all other countries.

United Nations Resolution on Artificial Intelligence:

  • The General Assembly has adopted the first UN resolution on AI, giving global support to an international effort to ensure that AI benefits all nations, respects human rights, and is “safe, secure and trustworthy”.
  • The resolution recognised that unethical and improper use of AI systems would impede the achievement of the 2030 Sustainable Development Goals (SDGs), weakening the ongoing efforts across all three dimensions — social, environmental, and economic.
  • It also mentioned the plausible adverse impact of AI on the workforce.
  • In addition to the workforce, the impact on small and medium enterprises also needs to be ascertained. 
  • Thus, being the first of its kind, the Resolution has shed light on the future implications of AI systems and the urgent need to adopt collaborative action.

The EU’s new Artificial Intelligence Act:

  • The European Parliament recently passed the AI Act, the first comprehensive law establishing rules and regulations governing AI systems. 
  • In its law, the EU has adopted a risk-based approach.
    • The Act classifies AI systems into four risk categories, namely unacceptable, high, limited, and minimal, prescribing guidelines for each. 
    • The Act prescribes an absolute ban on applications that threaten citizens’ rights, including manipulation of human behaviour, emotion recognition, mass surveillance, etc. 
  • While the Act allows exemptions from these bans for law enforcement purposes, it limits such deployment by requiring prior judicial or administrative authorisation.
  • The landmark legislation highlights two important considerations — acknowledging the compliance burden placed on business enterprises, and start-ups, and regulating the much-deliberated Generative AI systems such as ChatGPT. 
  • These two factors warrant the immediate attention of policymakers, given their disruptive potential and the challenges of keeping pace with such evolving systems. 

China’s stand on AI regulation:

  • China has released a regulatory framework addressing the following three issues:
    • Content moderation, which includes identification of content generated through any AI system; 
    • Personal data protection, with a specific focus on the need to procure users’ consent before accessing and processing their data; 
    • Algorithmic governance, with a focus on security and ethics, while developing and running algorithms over any gathered dataset.
  • Risk identification is evident in the approach adopted by China, which focuses on promoting AI tools and innovation with safeguards against any future harm to the nation’s social and economic goals.

The U.K.’s framework on AI:

  • The U.K. has adopted a principled and context-based approach in its ongoing efforts to regulate AI systems. 
  • The approach mandates consultations with regulatory bodies, expanding the government’s technical know-how and expertise in regulating complex technologies while bridging regulatory gaps.
  • The U.K. has thus resorted to a decentralised, soft-law approach rather than opting to regulate AI systems through stringent legal rules. This is in striking contrast to the EU approach.

India’s position:

  • Amid the global movement towards regulating AI systems, India’s response would be crucial, with the nation currently catering to one of the largest consumer bases and labour forces for technology companies. 
  • India is projected to be home to over 10,000 deep tech start-ups by 2030. 
  • In this direction, a ₹10,300 crore allocation was approved for the India AI mission to further its AI ecosystem through enhanced public-private partnerships and promote the start-up ecosystem. 
  • Amongst other initiatives, the allocation would be used to deploy 10,000 Graphics Processing Units, develop Large Multimodal Models (LMMs), and support AI-based research collaborations and innovative projects.

Way Forward:

  • With its economy expanding, India’s response must align with its commitment towards the SDGs while also ensuring that economic growth is maintained. 
  • This would require the judicious use of AI systems to offer solutions that further innovation while mitigating its risks. 
  • A gradual, phased approach appears more suitable for India’s efforts towards a fair and inclusive AI system. 
  • Considering AI’s impact on the labour force, it would be imperative, especially for developing and least developed countries, to devise a response, as the labour markets in such countries are increasingly vulnerable to the use of such systems.

Recent Copyright infringement case against OpenAI:

  • The dispute started with the New York Times's allegations that OpenAI, the creator of ChatGPT, and its partner, Microsoft, unlawfully utilised millions of its articles to train various AI technologies. 
  • OpenAI's Defence:
    • OpenAI has contended that the unlicensed use of copyrighted material for AI training can be seen as a transformative use, potentially qualifying for protection under the fair use doctrine under copyright law.

The Case's Impact on AI Development and Copyright Law:

  • The lawsuit between The NYT and OpenAI carries implications that extend far beyond the legal domain:
    • It will potentially influence AI development practices, the evolution of copyright law, and the digital content landscape.
    • The case spotlights the critical need for clear legal frameworks that adequately address the nuances of AI technologies and their interaction with copyrighted works.
    • The lawsuit may catalyse a shift towards more transparent and cooperative relationships between AI developers and content creators. 
    • It also serves as a bellwether for future disputes in the rapidly evolving intersection of technology and copyright law. 
  • This case will likely inform policy discussions, shape industry standards, and influence public perceptions about the ethical use of copyrighted content in AI training processes.
