WHO guidelines for multi-modal Generative AI in Healthcare

GS Paper II & III

News Excerpt:

Recently, WHO released comprehensive guidance on the ethical use and governance of Large Multi-Modal Models (LMMs) in healthcare.

  • This fast-growing generative Artificial Intelligence (AI) technology, capable of processing diverse data inputs like text, videos and images, is revolutionising healthcare delivery and medical research.

About LMMs: 

  • LMMs, known for their ability to mimic human communication and perform tasks without explicit programming, have been adopted more rapidly than any other consumer technology in history. 
    • Platforms like ChatGPT, Bard and Bert have become household names since their introduction in 2023. 
  • WHO emphasised the importance of transparent information and policies for managing the design, development and use of LMMs to achieve better health outcomes and overcome persisting health inequities.
  • Applications of LMMs in healthcare: 
    • Diagnosis and clinical care, such as responding to patients' written queries.
    • Patient-guided use for investigating symptoms and treatments.
    • Clerical and administrative tasks in electronic health records.
    • Medical and nursing education with simulated patient encounters.
    • Scientific research and drug development.
  • Risks of LMMs in healthcare:
    • According to WHO, generating false, inaccurate or biased statements could misguide health decisions. 
    • The data used to train these models can suffer from quality or bias issues, potentially perpetuating disparities based on race, ethnicity, sex, gender identity or age.
    • There are broader concerns, such as the accessibility and affordability of LMMs.
    • 'Automation bias' is a further risk: healthcare professionals and patients may come to rely on LMM outputs and overlook errors they would otherwise catch.
    • Cybersecurity is another critical issue, given the sensitivity of patient information and the reliance on the trustworthiness of these algorithms.

Background:

  • In May 2023, the WHO highlighted the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing and deploying AI for health. 
  • The six core principles identified by WHO are:
    • protect autonomy;
    • promote human well-being, human safety, and the public interest;
    • ensure transparency, explainability, and intelligibility;
    • foster responsibility and accountability;
    • ensure inclusiveness and equity;
    • promote AI that is responsive and sustainable.
  • In the document released in 2023, WHO listed concerns calling for rigorous oversight so that the technologies are used in safe, effective and ethical ways. These included:  
    • The data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness;
    • Large Language Models (LLM) generate responses that can appear authoritative and plausible to an end user; however, they may be completely incorrect or contain serious errors, especially for health-related responses;
    • LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response; 
    • LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content;
    • Policy-makers should ensure patient safety and protection while technology firms work to commercialise LLMs.
  • The European Union passed the AI Act to ensure that AI systems used in the EU are safe and respect fundamental rights and EU values.
  • The 2024 World Economic Situation and Prospects report highlighted that although AI will transform the labour market and enhance productivity, as past technological advances have, its impact may not be evenly distributed and may widen existing disparities, the experts pointed out.

WHO Guidelines:

  • WHO called for a collaborative approach involving governments, technology companies, healthcare providers, patients and civil society in all LMM development and deployment stages.
  • Key recommendations for governments include:
    • Investing in public infrastructure, like computing power and public datasets, that adheres to ethical principles;
    • Using laws and regulations to ensure LMMs meet ethical obligations and human rights standards;
    • Assigning regulatory agencies to assess and approve LMMs for healthcare use;
    • Introducing mandatory post-release audits and impact assessments.
  • For developers, the WHO advises engaging a wide range of stakeholders, including potential users and healthcare professionals, from the early stages of AI development. 
  • It also recommends designing LMMs to perform well-defined tasks with the accuracy those tasks require, together with an understanding of potential secondary outcomes.

Issues associated with AI:

  • AI may exacerbate inequalities within and between countries: 
    • It might reduce demand for low-skilled workers and negatively impact disadvantaged groups and lower-income countries reliant on low-skill-intensive economic activities. 
    • This is particularly true for generative AI, which can automate high-skilled tasks, raising concerns about job displacement in clerical, technical and service jobs.
    • Workers in low-income developing countries are less exposed to automation, since fewer of their jobs can be performed by AI, but they are also less likely to benefit from AI-driven productivity gains.
  • AI could disproportionately affect women, potentially widening gender employment and wage gaps:
    • Women are often overrepresented in roles with higher automation risks but also more frequently employed in jobs requiring interpersonal skills that are hard to automate.
  • Infrastructure gaps, such as limited access to digital education and the internet, constrain low-income countries' ability to fully leverage AI advancements, potentially aggravating productivity and income disparities.
  • The World Economic Forum's (WEF) Global Risks Report for 2024 identified AI-generated disinformation and misinformation, especially through manipulating media content and creating deep fakes, as one of the most significant global risks over the next two years. 
  • Quantum computing is also a potential disruptor due to security concerns such as "harvest attacks", where criminals collect encrypted data today in the expectation of decrypting it later with advanced quantum computers.

Conclusion:

WHO's new guidance offers a roadmap for harnessing the power of LMMs in healthcare while navigating their complexities and ethical considerations. This initiative marks a significant step towards ensuring AI technologies serve the public interest, particularly in the health sector.

 
