GS Paper - II
A chatbot with the ability to call out fake news and misinformation persuaded participants in a study to reconsider their beliefs, suggesting that artificial intelligence (AI) can be used as a tool to combat conspiracy theories and disinformation.
Conspiracy theories
- Conspiracy theories are thought to feed off the yearning of individuals for safety and stability in a world full of uncertainties.
- One potential application of this research, the researchers suggest, is using AI to debunk conspiracy theories in real life.
Why it is relevant and important
- The study shows that many people who strongly believe in “seemingly fact-resistant conspiratorial beliefs” can change their minds when presented with compelling evidence, the researchers wrote.
- From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence — it simply takes the right evidence to reach them.
- Studies have shown that almost 1 in every 2 Americans believes in some conspiracy theory; the claim that NASA “faked” the 1969 Moon landing, for instance, has endured for decades. During the Covid-19 pandemic, some conspiracy theorists claimed that vaccines were being used to inject chips into the body to enable mass surveillance; in Germany, such ideas triggered violent protests.
How the study was done
- The researchers said they sought to “leverage advancements in large language models (LLMs)”, a form of artificial intelligence (AI) that has access to vast amounts of information and the ability to generate bespoke arguments, “to try to directly refute” particular evidence each study participant cited as supporting their conspiratorial beliefs.
- Across two experiments, 2,190 Americans articulated, in their own words, a conspiracy theory they believed in, along with the evidence they thought supported it.
- They then “engaged in a three-round conversation with the LLM GPT-4 Turbo” chatbot, which the researchers “prompted to respond to this specific evidence while trying to reduce participants’ belief in the conspiracy theory”, the study says.
- The results were encouraging: across a wide range of conspiracy theories, “the treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average”, and the “effect persisted undiminished for at least 2 months”. Also, the study noted, “AI did not reduce belief in true conspiracies”.