Today's Editorial - 11 May 2023

EU’s Artificial Intelligence Act

Source: Diksha Munjal, The Hindu

The story so far: After weeks of intense last-minute negotiations on how to bring general-purpose artificial intelligence systems (GPAIS), like OpenAI’s popular new chatbot ChatGPT, under the ambit of regulation, members of the European Parliament reached a preliminary deal on a new draft of the European Union’s ambitious Artificial Intelligence Act, first drafted two years ago.

Why regulate artificial intelligence?

As artificial intelligence technologies become omnipresent and their algorithms more advanced — capable of performing a wide variety of tasks including voice assistance, recommending music, driving cars, detecting cancer, and even deciding whether you get shortlisted for a job — the risks and uncertainties associated with them have also ballooned.

Many AI tools are essentially black boxes, meaning even those who design them cannot explain what goes on inside them to generate a particular output. Complex and unexplainable AI tools have already manifested in wrongful arrests due to AI-enabled facial recognition, discrimination and societal biases seeping into AI outputs, and most recently, in how chatbots based on large language models (LLMs) like Generative Pre-trained Transformer-3 (GPT-3) and GPT-4 can generate versatile, human-competitive and genuine-looking content, which may be inaccurate or use copyrighted material created by others.

Recently, industry stakeholders including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking AI labs to stop the training of AI models more powerful than GPT-4 for six months, citing potential risks to society and humanity. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. It urged global policymakers to “dramatically accelerate” the development of “robust” AI governance systems.

EU lawmakers this month also urged world leaders, including U.S. President Joe Biden, to hold a summit to brainstorm ways to control the development of advanced AI systems such as ChatGPT, saying they were developing faster than expected.

As for the AI Act, the legislation was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI and creating a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy. The legislation seeks to strike a balance between promoting “the uptake of AI while mitigating or preventing harms associated with certain uses of the technology”.

Similar to how the EU’s 2018 General Data Protection Regulation (GDPR) made it an industry leader in the global data protection regime, the AI law aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market” and ensure that AI in Europe respects the 27-country bloc’s values and rules. Notably, since recent AI developments have been concentrated in the U.S., the law seeks to facilitate the development of a single market for AI applications in Europe.

What does the Artificial Intelligence Act entail?

The Act broadly defines AI as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. Under this definition, it identifies AI tools based on machine learning and deep learning; knowledge- and logic-based approaches; and statistical approaches.

The Act’s central approach is the classification of AI technologies based on the level of risk they pose to the “health and safety or fundamental rights” of a person. There are four risk categories in the Act — unacceptable, high, limited, and minimal.
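
As a loose illustration of this tiered structure (the Act itself prescribes no particular implementation), the taxonomy could be modelled in code as below. The category names come from the Act; the example systems and their mapping are hypothetical, drawn from the examples discussed later in this article:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories named in the draft AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright, with narrow exceptions
    HIGH = "high"                  # allowed, subject to conformity assessments
    LIMITED = "limited"            # allowed, with transparency obligations
    MINIMAL = "minimal"            # allowed, largely unregulated

# Hypothetical examples mapped to tiers, for illustration only; actual
# classification would follow the Act's legal criteria, not a lookup table.
EXAMPLE_SYSTEMS = {
    "government social scoring of citizens": RiskTier.UNACCEPTABLE,
    "loan approval system": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}
```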

The Act prohibits using technologies in the unacceptable risk category with little exception. These include the use of real-time facial and biometric identification systems in public spaces; China-like systems of social scoring of citizens by governments leading to “unjustified and disproportionate detrimental treatment”; subliminal techniques to distort a person’s behaviour; and technologies which can exploit vulnerabilities of the young or elderly, or persons with disabilities.

The Act lays substantial focus on AI in the high-risk category, prescribing a number of pre- and post-market requirements for developers and users of such systems. Some systems falling under this category include biometric identification and categorisation of natural persons; AI used in healthcare, education, employment (recruitment), law enforcement, and justice delivery systems; and tools that provide access to essential private and public services (including access to financial services such as loan approval systems). The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies, or those under development, can be included if they meet the high-risk criteria.

Before high-risk AI systems can make it to the market, they will be subject to strict reviews known in the Act as ‘conformity assessments’ — algorithmic impact assessments to analyse the data sets fed to AI tools, biases, how users interact with the system, and the overall design and monitoring of system outputs. The Act also requires such systems to be transparent and explainable, to allow human oversight, and to give clear and adequate information to the user. Moreover, since AI algorithms are specifically designed to evolve over time, high-risk systems must also comply with mandatory post-market monitoring obligations, such as logging performance data and maintaining continuous compliance, with special attention paid to how these programs change through their lifetime.
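
To make the post-market monitoring obligation more concrete, here is a minimal sketch of what continuous performance logging for a deployed high-risk system might look like. This is an assumption-laden illustration: the Act does not specify a logging schema, and the function name, field names, and the loan-approval example below are invented for the sketch:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("post_market_monitoring")

def log_decision(model_version: str, input_summary: str,
                 output: str, human_reviewed: bool) -> None:
    """Append one decision record to the monitoring log (hypothetical
    schema) so auditors can trace behaviour over the system's lifetime."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,    # tracks drift as the model evolves
        "input_summary": input_summary,    # summary only; avoid raw personal data
        "output": output,
        "human_reviewed": human_reviewed,  # evidence of human oversight
    }
    logger.info(json.dumps(record))

# Example: a hypothetical loan-approval system logging one decision.
log_decision("credit-model-2.3", "applicant-features-hash=ab12f7", "approved", True)
```

Recording the model version alongside each decision is what would let an auditor see how the system’s behaviour shifts as it evolves through its lifetime.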

AI systems in the limited and minimal risk categories, such as spam filters or video games, can be used with a few requirements like transparency obligations. The EU’s regulatory framework proposal states that “as AI is a fast-evolving technology, the proposal has a future-proof approach, allowing rules to adapt to technological change”. The bloc’s established standard-setting bodies in each sector will set regulatory parameters for high-risk AI tech, meaning new and developing technologies will have set standards against which to chalk out their plans.

What is the recent proposal on General Purpose AI like ChatGPT?

As recently as February this year, general-purpose AI such as the language model-based ChatGPT, used for a plethora of tasks from summarising concepts on the internet to serving up poems, news reports, and even a Colombian court judgement, did not feature in EU lawmakers’ plans for regulating AI technologies. The bloc’s 108-page proposal for the AI Act, published two years earlier, included only one mention of the word “chatbot.”

By mid-April, however, members of the European Parliament were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago.

Lawmakers now target the use of copyrighted material by companies deploying generative AI tools, such as OpenAI’s ChatGPT or the image generator Midjourney, as these tools train on large sets of text and visual data from the internet. Such companies will have to disclose any copyrighted material used to develop their systems. Reuters reported that some lawmakers initially proposed banning the use of copyrighted material altogether, but this was abandoned in favour of a transparency requirement.
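
By way of illustration only, a disclosure under such a transparency requirement might amount to something like the manifest below; the draft does not specify a technical format, so the structure, field names, and example sources are assumptions:

```python
# Hypothetical training-data disclosure for a generative model. The draft
# only requires disclosing copyrighted material used in training; this
# structure and its field names are illustrative assumptions.
disclosure = {
    "model": "example-generative-model",
    "training_data_sources": [
        {"name": "licensed news archive", "contains_copyrighted_material": True},
        {"name": "public-domain book corpus", "contains_copyrighted_material": False},
        {"name": "web crawl snapshot", "contains_copyrighted_material": True},
    ],
}

# Under a transparency (rather than ban) approach, copyrighted sources
# simply have to be declared:
for source in disclosure["training_data_sources"]:
    if source["contains_copyrighted_material"]:
        print("Must disclose:", source["name"])
```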

While the current draft does not clarify what obligations GPAIS manufacturers would be subject to, lawmakers are also debating whether all forms of GPAIS will be designated high-risk. The draft could be amended multiple times before the Act actually comes into force, as it would require the consensus of all member countries and all three EU administrative bodies — the Parliament, Council, and Commission.

How has the AI industry reacted to the legislation?

While some industry players have welcomed the legislation, others have warned that broad and strict rules could stifle innovation. Companies have also raised concerns about transparency requirements, fearing that they could mean divulging trade secrets; the law’s explainability requirements have caused unease, as it is often not possible even for developers to explain the functioning of algorithms. Lawmakers and consumer groups, on the other hand, have criticised it for not fully addressing risks from AI systems.

The Act also delegates the process of standardisation or creation of precise technical requirements for AI technologies to the EU’s expert standard-setting bodies in specific sectors. A Carnegie Endowment paper points out, however, that the standards process has historically been driven by industry, and it will be a challenge to ensure governments and the public have a meaningful seat at the table.

Where does global AI governance currently stand?

The rapidly evolving pace of AI development has led to diverging global views on how to regulate these technologies. The U.S. does not currently have comprehensive AI regulation and has taken a fairly hands-off approach. The Biden administration released a Blueprint for an AI Bill of Rights (AIBoR). Developed by the White House Office of Science and Technology Policy (OSTP), the AIBoR outlines the harms AI poses to economic and civil rights and lays down five principles for mitigating them. Instead of a horizontal approach like the EU’s, the Blueprint endorses a sector-specific approach to AI governance, with policy interventions for individual sectors such as health, labour, and education, leaving it to sectoral federal agencies to come out with their own plans. The administration has described the AIBoR as guidance or a handbook rather than binding legislation.

On the other end of the spectrum, China over the last year came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI. It enacted a law to regulate recommendation algorithms, with a focus on how they disseminate information. The Cyberspace Administration of China (CAC), which drafted the rules, told companies to “promote positive energy”, not to “endanger national security or the social public interest”, and to “give an explanation” when they harm the legitimate interests of users. Observers have said the rules are a way of making AI tech companies toe the ruling Communist Party’s line.

Another piece of legislation targets deep synthesis technology used to generate deepfakes. To promote transparency and an understanding of how algorithms function, China’s AI regulator has also created a registry of algorithms, in which developers have to register their algorithms, information about the data sets they use, and potential security risks.
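
Purely as a sketch of what an entry in such a registry might hold, based on the three elements named above (the CAC’s actual schema is not described here, so the field names and example values are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmRegistration:
    """Hypothetical registry entry; the field names are illustrative
    assumptions, not the CAC's actual schema."""
    developer: str
    algorithm_name: str
    purpose: str
    data_sets: list[str]
    security_risks: list[str] = field(default_factory=list)

# Example entry for an imaginary recommendation algorithm.
entry = AlgorithmRegistration(
    developer="ExampleCo",
    algorithm_name="feed-ranker-v1",
    purpose="recommendation / information dissemination",
    data_sets=["user interaction logs"],
    security_risks=["possible amplification of harmful content"],
)
```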
