Debunkbot.com: Reducing Conspiracy Beliefs through Dialogue with AI

Join MIT Professor David Rand to explore how developments in AI, such as Large Language Models (LLMs) like GPT, can help create a better-informed public.

In this talk, Rand focuses on conspiracy theory beliefs, which are notoriously hard to correct. Influential hypotheses propose that such beliefs fulfill important psychological needs and therefore resist counterevidence. Yet previous failures to correct conspiracy beliefs may be due to counterevidence that was insufficiently compelling and tailored — an issue that LLMs can help address. Rand will discuss experiments in which his group engaged conspiracy believers in personalized, evidence-based dialogues with GPT-4 Turbo and found large and lasting reductions in belief. These findings suggest that many conspiracy theory believers can revise their views when presented with sufficiently compelling evidence, and they demonstrate that generative AI has potential for positive societal impact in this area.

Try it yourself at www.DebunkBot.com.

October 28
3 - 4:30pm
Free with Museum admission

Speaker

David Rand

Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences, MIT