Can a specially trained chatbot help dissuade people from believing in conspiracy theories? This intriguing question is explored in research published in Science, titled ‘Durably reducing conspiracy beliefs through dialogues with AI’. In the study, participants were asked to rate their belief in conspiracy theories on a scale of 0 to 100 before and after engaging in conversations with the chatbot. The results were compelling: researchers observed a 20% reduction in conspiracy beliefs, and remarkably, this effect persisted for at least two months. So, should AI take the lead in conversations with conspiracy theorists?
That sounds like a decent success, and the researchers conclude with these words:
It has become almost a truism that people “down the rabbit hole” of conspiracy belief are virtually impossible to reach. In contrast to this pessimistic view, we have shown that a relatively brief conversation with a generative AI model can produce a large and lasting decrease in conspiracy beliefs, even among people whose beliefs are deeply entrenched. It may be that it has proven so difficult to dissuade people from their conspiracy beliefs because they simply have not been given sufficiently good counterevidence. This paints a picture of human reasoning that is surprisingly optimistic: Even the deepest of rabbit holes may have an exit. Conspiracists are not necessarily blinded by psychological needs and motivations—it just takes a genuinely strong argument to reach them.
You can test the chatbot yourself, and I think that gives the best insight into how to interpret this result. It’s fun to do, but I still wonder whether it makes much practical sense.
First, I looked at how well the chatbot knows the content of conspiracy theories. When I pretended to be someone convinced that there were multiple shooters in JFK’s assassination, I got a fairly complete answer that covered not only the Warren Commission report but also the investigation by the United States House Select Committee on Assassinations. That committee’s final report did leave room for more than one shooter, but the chatbot immediately pointed out that the acoustic evidence on which that conclusion was based was soon refuted by research from the National Academy of Sciences.
Similarly, ‘my belief’ in the existence of Manchurian candidates deployed to commit political assassinations was substantively refuted by the chatbot, with reference to what psychological research suggests is actually possible when it comes to influencing behaviour.
While the content is very similar to what you find on Wikipedia about these topics, the obvious difference is the packaging. Rhetorically, the chatbot does a better job of getting you to actually absorb the knowledge. It is worth bearing in mind that the chatbot does not debunk anything itself; it relies entirely on material gathered by journalists, historians, other scientists and fact-checkers, accessible via public sources.
I thought it would be interesting to test how it would fare with a (conspiracy) theory about more recent events, where it is not yet entirely clear what the most plausible scenarios are. The attacks on the Nord Stream pipelines seemed like a suitable candidate. But when I told the chatbot that I believed in the conspiracy theory that the Russians were behind it, because they are technologically capable of it and had the opportunity (and I made up another vague motive), to my surprise I got this message:
Not a conspiracy theory? So it must be true then!
Then I tried posing as someone who firmly believes that SARS-CoV-2 is an engineered virus that leaked from a lab in Wuhan. Here the shortcoming we often see with AI chatbots surfaced: at the slightest pushback, the bot backs down and goes along with your arguments.
Below are screenshots of my conversation about the lab-leak theory with the ‘DebunkBot’; click on them to read them.
Natasha Gerson came to a similar conclusion when she posed as a Holocaust denier. So no, with experienced conspiracy thinkers this bot is not going to do any good, and it may even backfire. Cultural sociologist Stef Aupers makes the same point. ‘You see the resistance to scientific and factual information especially among heavy conspiracy thinkers. As long as they cannot be influenced, you haven’t solved the real problem,’ he told NOS.
I think there is also some truth in what the editor-in-chief of Science writes about the results of the study in his editorial:
It is perplexing why conspiracy theories seem to be multiplying while public trust in scientists remains high. Perhaps the robust trust in scientists—and thus in the information they generate—is what allows the counterevidence to conspiracies to be so effective when delivered efficiently and in high volume by the LLM. Purveyors of misinformation sometimes use a technique called the “Gish gallop”—named after a creationist who was particularly good at it—in which a conversation is flooded with an overwhelming number of mistruths. Humans are unable to respond to this effectively, no matter how skilled the manner of their responses may be. But the LLM cannot be overwhelmed; it can cite counterevidence indefinitely.
Indeed, the chatbot does not easily get overwhelmed by a Gish gallop, but as my chat about the lab leak showed, that does not mean it steadfastly keeps substantiating its ‘own story’ either.
Translated from the Dutch version on Kloptdatwel. Title image created with Openart.ai.