Regulate AI now
Source: Vaibhav Parikh, The Financial Express
Now that the initial shock and awe have waned, ChatGPT is increasingly being absorbed into sectors with significant job potential, such as customer service, therapy, clinical documentation, spam detection, human resources, coding and sentiment analysis.
However, with every innovation in artificial intelligence (AI), we are reminded of its threats. Generative AI (GAI), such as ChatGPT, has brought these threats closer home. Here, we will focus on two challenges: misinformation and copyright.
First, ChatGPT’s greatest strength, its ability to mimic a human actor through natural language, could also be its greatest undoing if bad actors manipulate it to spread misinformation. Misinformation, or ‘fake news’, arguably made its first foray into the mass psyche and mainstream discussion during the 2016 US presidential election, in which Democrat Hillary Clinton and Republican Donald Trump were the candidates. The 2019 general elections in India also saw a swathe of misinformation sweeping the web, earning them the label of India’s first ‘WhatsApp elections’. More recently, we saw dangerous rumours about the Covid-19 vaccine and unscientific treatments.
‘Fake news’ plays on people’s feelings of fear and shock. Coupled with the limited attention people invest in distinguishing truth from falsehood, it has the potential to create conflict and destabilise societies. ChatGPT’s human-like natural language processing makes it terrifyingly effective as a weapon for spreading high-quality misinformation on the web. With the appropriate prompts, it can spew nonsensical and factually incorrect outputs in a confident tone.
While doing so, it can imitate a desired writing style to lend further credibility. It can also widen the pool of potential miscreants by letting non-native speakers produce fluent text. Some guardrails are built in to prevent misuse, but there are concerns that users have found ways to bypass them.
NewsGuard, a prominent organisation that studies and combats misinformation, reported that when fed a series of leading prompts relating to 100 of the top false narratives, ChatGPT generated legitimate-seeming, even authoritative, responses to 80 of them.
Social media intermediaries use a mix of AI-detection tools, teams of fact-checkers and crowdsourcing to fight misinformation. However, the odds are stacked against fact-checkers, given the relatively small size of this cohort. With the scale and pace at which ChatGPT and future (read: better) GAI models generate content, those odds may grow steeper still.
Second, ChatGPT and other GAI models are at odds with our notion of copyright and the creator’s right to the exclusive benefits of creative work. This is causing an outcry in the creative industry. The data used for training GAI is often copyrighted in one way or another. While copyrighted work can be used under the ‘fair dealing’ doctrine of the Indian Copyright Act, it is unlikely that scraping billions of works from the web for commercial benefit would be protected; that would depend on the purpose and nature of the use and its impact on the market. In fact, Getty Images, a prominent stock-images website, and, separately, a trio of artists are suing a GAI company in the US for scraping their content without permission and using it for profit. Then there is the issue of the copyrightability of the output itself, especially where humans are involved in training the model and in fine-tuning and editing the end product.
Ultimately, only legislation or copyright litigation will give us some direction. Unfortunately, on the legislative front, most legislators severely lack an understanding of the implications of this rapidly growing technology; they are likely either to overreach or to underachieve. On the litigation front, cases would involve huge costs, take significant time and yield judgements likely restricted to narrow facts.
Lawmakers’ inability to devise concrete rules governing AI adds significant uncertainty for companies and investors. Countries and bodies such as the European Union, the Institute of Electrical and Electronics Engineers (IEEE) and the Organisation for Economic Co-operation and Development (OECD) have so far only come up with strategy papers, policies, vision documents and ethical guidelines, all non-binding.
It is no exaggeration to say that we are at the mercy of the self-restraint and proactivity shown by the tech industry itself. GAI portends a significant dehumanisation of civilisation, which can only be tackled by the right combination of legal devices and industry self-regulation. The EU is planning a regulation, the ‘AI Act’, that lays down specific requirements and obligations for actors in particular use cases. Coupled with legislation, measures such as ‘watermarking’ GAI outputs, mitigation strategies, a ‘humane-by-design’ ideology, self-throttling, limited access and documentation of training data would ensure that we innovate responsibly.