GS Paper II & III
News Excerpt:
Recently, the WHO released comprehensive guidance on the ethical use and governance of Large Multi-Modal Models (LMMs) in healthcare.
- This fast-growing generative Artificial Intelligence (AI) technology, capable of processing diverse data inputs like text, videos and images, is revolutionising healthcare delivery and medical research.
About LMMs:
- LMMs, known for their ability to mimic human communication and perform tasks without explicit programming, have been adopted more rapidly than any other consumer technology in history.
- Platforms such as ChatGPT, Bard and Bert have become household names since their recent introduction.
- WHO emphasised the importance of transparent information and policies for managing the design, development and use of LMMs to achieve better health outcomes and overcome persisting health inequities.
- Applications of LMMs in healthcare:
- Diagnosis and clinical care, such as responding to patients' written queries.
- Patient-guided use for investigating symptoms and treatments.
- Clerical and administrative tasks in electronic health records.
- Medical and nursing education with simulated patient encounters.
- Scientific research and drug development.
- Risks of LMMs in healthcare:
- According to WHO, generating false, inaccurate or biased statements could misguide health decisions.
- The data used to train these models can suffer from quality or bias issues, potentially perpetuating disparities based on race, ethnicity, sex, gender identity or age.
- There are broader concerns, such as the accessibility and affordability of LMMs.
- 'Automation bias' in healthcare can lead professionals and patients to overlook errors in LMM output.
- Cybersecurity is another critical issue, given the sensitivity of patient information and the reliance on the trustworthiness of these algorithms.
WHO Guidelines:
- WHO called for a collaborative approach involving governments, technology companies, healthcare providers, patients and civil society in all LMM development and deployment stages.
- Key recommendations for governments include:
- Investing in public infrastructure, such as computing power and public datasets, that adheres to ethical principles.
- Using laws and regulations to ensure LMMs meet ethical obligations and human rights standards.
- Assigning regulatory agencies to assess and approve LMMs for healthcare use.
- Introducing mandatory post-release audits and impact assessments.
- For developers, the WHO advises engaging a wide range of stakeholders, including potential users and healthcare professionals, from the early stages of AI development.
- It also recommends designing LMMs for well-defined tasks with the necessary accuracy and understanding of potential secondary outcomes.
Issues associated with AI:
- AI could exacerbate inequalities within and between countries:
- It might reduce demand for low-skilled workers, hurting disadvantaged groups and lower-income countries that rely on low-skill-intensive economic activities.
- This is particularly true for generative AI, which can automate high-skilled tasks, raising concerns about job displacement in clerical, technical and service jobs.
- Workers in low-income developing countries are less likely to be affected by automation, since fewer jobs there are AI-enabled, but they are also less likely to benefit from AI-driven productivity gains.
- AI could disproportionately affect women, potentially widening gender employment and wage gaps.
- Women are often overrepresented in roles with higher automation risks, but they are also more frequently employed in jobs requiring interpersonal skills that are hard to automate.
- Infrastructure gaps, such as access to digital education and the internet, limit these countries' ability to fully leverage AI advancements, potentially aggravating productivity and income disparities.
- The World Economic Forum's (WEF) Global Risks Report for 2024 identified AI-generated disinformation and misinformation, especially through manipulating media content and creating deep fakes, as one of the most significant global risks over the next two years.
- Quantum computing is also a potential disruptor due to security concerns such as "harvest now, decrypt later" attacks, in which criminals collect encrypted data today in order to decrypt it once sufficiently powerful quantum computers become available.
Conclusion:
WHO's new guidance offers a roadmap for harnessing the power of LMMs in healthcare while navigating their complexities and ethical considerations. This initiative marks a significant step towards ensuring AI technologies serve the public interest, particularly in the health sector.