Chatbots are surprisingly effective at debunking conspiracy theories


But facts aren’t dead. Our findings about conspiracy theories are the latest—and perhaps most extreme—in an emerging body of research demonstrating the persuasive power of facts and evidence. For example, while it was once believed that correcting falsehoods that align with one’s politics would just cause people to dig in and believe them even more, this idea of a “backfire” effect has itself been debunked: Many studies consistently find that corrections and warning labels reduce belief in, and sharing of, falsehoods—even among those who most distrust the fact-checkers making the corrections. Similarly, evidence-based arguments can change partisans’ minds on political issues, even when they are actively reminded that the argument goes against their party leader’s position. And simply reminding people to consider whether content is accurate before they share it can substantially reduce the spread of misinformation.

And if facts aren’t dead, then there’s hope for democracy—though this arguably requires a consensus set of facts from which rival factions can work. There is indeed widespread partisan disagreement on basic facts, and a disturbing level of belief in conspiracy theories. Yet this doesn’t necessarily mean our minds are inescapably warped by our politics and identities. When faced with evidence—even inconvenient or uncomfortable evidence—many people do shift their thinking in response. And so if it’s possible to disseminate accurate information widely enough, perhaps with the help of AI, we may be able to reestablish the factual common ground that is missing from society today.

You can try our debunking bot yourself at debunkbot.com.

Thomas Costello is an assistant professor in social and decision sciences at Carnegie Mellon University. His research integrates psychology, political science, and human-computer interaction to examine where our viewpoints come from, how they differ from person to person, and why they change—as well as the sweeping impacts of artificial intelligence on these processes.

Gordon Pennycook is the Dorothy and Ariz Mehta Faculty Leadership Fellow and associate professor of psychology at Cornell University. He examines the causes and consequences of analytic reasoning, exploring how intuitive versus deliberative thinking shapes decision-making to understand errors underlying issues such as climate inaction, health behaviors, and political polarization.

David Rand is a professor of information science, marketing and management communication, and psychology at Cornell University. He uses approaches from computational social science and cognitive science to explore how human-AI dialogue can correct inaccurate beliefs, why people share falsehoods, and how to reduce political polarization and promote cooperation.
