The art and craft of deploying AI responsibly
Relevance: GS Paper III
Why in News?
The Ministry of Electronics and Information Technology (MeitY) recently issued an advisory to the Artificial Intelligence industry.
Broader picture:
- The recent advisory came amid controversial responses from Google’s Gemini chatbot about the Indian Prime Minister. More concerning was that the platform returned carefully moderated responses to similar queries about other global leaders.
- The advisory points to a growing concern over the unchecked proliferation of AI technologies.
- The advisory caused considerable concern in technology circles.
- However, the minister later clarified that the advisory is aimed at significant platforms: the requirement to seek permission from MeitY applies only to large platforms and will not apply to startups.
- The incident points to the broader, global challenge of ensuring that AI operates within ethical and legal constraints while continuing to innovate, and underscores that the government’s approach must be strategic rather than reactionary.
Understanding AI models:
- AI models today digest vast amounts of information, compressing it into internal representations that experts call “latent spaces.”
- Latent spaces are the engine rooms of AI: they are approximate, flexible representations of patterns in the training data, not exact records of it.
- Understanding that AI models are not designed to function like traditional look-up databases is essential.
- Expecting them to deliver precise answers or politically correct opinions without specific guidelines is a misunderstanding of their capabilities.
- To address these challenges, many AI companies have instituted guardrails.
- These mechanisms work outside the core AI model, screening queries before they reach it and filtering its outputs, so that responses remain generally safe for work, for example by declining certain opinions or refusing specific queries.
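The idea of guardrails sitting outside the core model can be sketched minimally as a wrapper that screens both the incoming query and the outgoing response. All names here (`BLOCKED_TOPICS`, `generate`, `guarded_generate`) are hypothetical placeholders for illustration; a real system would use trained safety classifiers rather than keyword matching.

```python
# Illustrative sketch of an external guardrail layer (not any vendor's
# actual implementation): the filter wraps the model, rejecting certain
# queries before they reach it and screening outputs before release.

BLOCKED_TOPICS = {"election prediction", "medical advice"}  # hypothetical policy list
REFUSAL = "I can't help with that request."

def generate(prompt: str) -> str:
    # Stand-in for the core AI model; a real system would call an LLM here.
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Guardrail 1: refuse disallowed queries before they reach the model.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL
    output = generate(prompt)
    # Guardrail 2: screen the model's output before returning it to the user.
    if any(topic in output.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL
    return output
```

The key design point the editorial alludes to: because the filter is separate from the model, its policy can be updated (or made jurisdiction-specific) without retraining the model itself.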
Case study -
European Union:
- The European Union has responded to similar challenges by introducing the AI Act, the first comprehensive AI law globally.
- It focuses on high-risk AI applications, particularly in sectors like education, healthcare, and policing, and mandates new standards for these applications.
- For example, specific uses of AI, like creating facial recognition databases or using emotion recognition technology in workplaces or schools, are banned.
- This act calls for greater transparency in AI model development and holds organisations accountable for any harm resulting from high-risk AI systems.
Japan:
- The Copyright Act of Japan allows the use of copyrighted works for information analysis without permission from creators, provided the service is limited to the minimum necessary and does not unreasonably harm creators’ interests.
- Yet, unresolved issues remain regarding the scope of these exceptions and the definition of “unreasonable harm,” especially in AI.
- Japan is navigating its course in this domain.
USA:
- AI transparency has become a focal point in the United States, with states like Pennsylvania introducing legislation to ensure transparency in using AI algorithms in insurance claim processing.
- Furthermore, the US is witnessing significant legal activities, such as class action lawsuits against major insurers, over the use of AI algorithms.
- The administration has also taken steps through an Executive Order to outline actions ensuring AI’s safe, secure, and trustworthy development across various sectors.
Way forward:
- Flexibility in regulation and innovation:
- For policymakers in India and beyond, the task is to create an environment where AI can thrive without compromising ethical standards or societal values.
- A strategic, responsible approach to AI is needed. This includes developing mechanisms for accountability, flexible regulations, public awareness initiatives, addressing legal and ethical concerns, and maintaining a balance between innovation and regulation.
- Government regulation and frameworks:
- The Government of India is working to create a comprehensive global regulatory framework for AI with a pro-growth, pro-jobs, and pro-safety stance.
- This framework is expected to be released in the June-July timeframe.
- Long-term impact assessment:
- Companies at the forefront of AI deployment must address the immediate concerns and anticipate the long-term impact of their AI platforms on society.
- They must implement robust guardrails to ensure the output is safe for work, culturally sensitive, and politically neutral.
- Corporate Responsibility:
- Responsible deployment of AI is as much an art as it is a business necessity.
- Businesses should push the envelope of innovation while also taking the lead in ethical considerations, especially now that the full impact of AI has yet to be adequately grasped.
- Public awareness and education:
- It is also critical for users to have at least a high-level understanding of AI’s capabilities and, more importantly, its limitations.
Conclusion:
The recent incident highlights a critical junction in AI governance. As a nation fuelling digital innovation with its talent pool, India will find that its stance on AI development has profound implications.
Beyond Editorial: Strategic positioning of India in AI:
India's Approach:
Key Government Initiatives Leveraging AI: UMANG (Unified Mobile Application for New-Age Governance):
DigiYatra:
Digital India Bhashini (National Language Translation Mission):
Applications of AI in Urban Governance:
Applications of AI in Health Care:
AI Applications in Agriculture:
AI-Based Attendance Monitoring (Shiksha Setu):