Chatbots spewing facts, and falsehoods, can sway voters.

Source: https://www.sciencenews.org/article/chatbots-facts-falsehood-sway-voters-ai
Summary:
Chatbots are swaying voters with an "information barrage" strategy, and their persuasiveness has little to do with whether what they say is true
New research shows that AI chatbots are equally persuasive at shifting voters' political leanings whether they are stating facts or spreading misinformation. This volume-over-accuracy style of intervention could pose a threat to democratic elections.
Cross-national experiments reveal the effect
According to research published December 4 in Nature, in experiments run in the United States, Canada and Poland, participants shifted toward the side they did not originally support after only about six minutes of conversation with a chatbot. In experiments conducted before the 2024 U.S. presidential contest between Trump and Harris, a pro-Trump bot moved Harris supporters nearly 4 points toward Trump on average, while the shift in the opposite direction averaged about 2.3 points. In experiments ahead of Canada's and Poland's 2025 elections, the shift reached roughly 10 points.
An "information barrage" is the key tactic
A related study in Science further shows that the chatbots' persuasiveness comes not from emotional appeals or personalized storytelling but from the sheer amount of information they provide. When instructed to supply lots of "high-quality facts, evidence and information," the bots were 27 percent more persuasive than when simply told to "be as persuasive as possible." But the strategy degraded information quality: when GPT-4o was told to prioritize facts, its accuracy fell from roughly 80 percent to 60 percent.
Misinformation risk differs by political leaning
The studies note that right-leaning chatbots are more prone to spreading misinformation than left-leaning ones. In a commentary, Purdue University computational social scientist Lisa Argyle warns that such politically biased yet persuasive misinformation could pose "a fundamental threat to the legitimacy of democratic governance."
Long-term effects and safeguards
Although chatbots rarely change how people plan to vote outright, they can have a decisive effect among undecided voters. Research by University of Washington expert Jillian Fisher finds that people who understand how AI works are more resistant to its persuasive messaging. As AI use spreads, helping the public recognize these persuasion tactics and misinformation risks has become an important task for keeping society healthy.
(Compiled from recent research reported in Nature and Science)
Full translation:
Chatbots can spew facts or spread lies, and either way they can sway voters. AIs are equally persuasive whether they are telling the truth or lying. Laundry-listing facts rarely changes hearts and minds, unless a bot is doing the persuading.
Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. The finding held even in the lead-up to the 2024 U.S. presidential election between Donald Trump and Kamala Harris: pro-Trump bots pushed Harris voters toward Trump, and vice versa.
The most persuasive chatbots do not need to weave a compelling story or cater to a person's individual beliefs, researchers report in a related paper in Science. They simply dole out the most information. But those bloviating bots also spread the most misinformation.
"谎言并不比真相更具说服力,"麻省理工学院计算社会科学家、两篇论文的共同作者戴维·兰德解释道,"当你需要百万条论据时,优质事实终将耗尽。为凑足数量,就不得不掺入劣质信息。"
Worryingly, right-leaning chatbots are more prone to spreading misinformation than left-leaning ones. In a Science commentary, Purdue University computational social scientist Lisa Argyle writes that such politically biased yet persuasive fabrications pose "a fundamental threat to the legitimacy of democratic governance."
For the Nature study, Rand's team recruited more than 2,300 U.S. participants in late summer 2024. Participants first rated their support for Trump or Harris on a 100-point scale, then conversed for about six minutes with a chatbot stumping for one of the candidates. Talking with a bot that shared their views had little effect, but Harris supporters who chatted with a pro-Trump bot moved nearly 4 points toward Trump on average, and Trump supporters who chatted with a pro-Harris bot moved about 2.3 points on average. A follow-up survey a month later found the effect weaker but still detectable.
The chatbots did not change how most people planned to vote outright, but "[a bot] shifts how warmly you feel" about an opposing candidate, Argyle notes. "It doesn't change your view of your own candidate." Still, the findings suggest these bots could tip an election when voters are undecided. For example, when the team repeated the experiment with 1,530 Canadians and 2,118 Poles ahead of their countries' 2025 elections, the bots raised participants' support for their less preferred candidate by roughly 10 points.
For the Science study, nearly 77,000 U.K. participants conversed with 19 different AI models about more than 700 issues to probe what makes chatbots so persuasive. The researchers found that models trained on more data were more persuasive, but the biggest boost came from instructing the bots to pile on facts. A prompt simply telling the bot to "be as persuasive as possible" shifted people's opinions by about 8.3 percentage points, while a prompt telling it to "present lots of high-quality facts and evidence" shifted opinions by almost 11 percentage points, a 27 percent gain in persuasiveness.
Training the chatbots on the most persuasive exchanges, which were typically packed with facts, made them even more influential in later conversations. But that training came at the cost of accuracy: when GPT-4o was instructed to pile on facts rather than use tactics such as storytelling or moral appeals, its accuracy dropped from roughly 80 percent to 60 percent.
Why piling on facts makes chatbots, but not humans, more persuasive remains an open question, says Jillian Fisher, an AI and society expert at the University of Washington. She suspects that people see humans as more fallible than machines. Promisingly, research her team presented in July at the annual meeting of the Association for Computational Linguistics in Vienna, Austria, suggests that users who better understand how AI works are less susceptible to its persuasion. "Knowing that [a bot] does make mistakes might be a way to protect ourselves," she says.
With AI exploding in popularity, Fisher and others stress that helping the public recognize how these machines can both persuade and mislead is vital for societal health. But real-world persuasion tactics are often subtler than those in experimental setups. A user might simply ask for dinner suggestions, notes Jacob Teeny, a persuasion psychology expert at Northwestern University, "and the chatbot says, 'Hey, that's Kamala Harris' favorite dinner'": a seemingly banal exchange that still nudges the conversation toward politics.
English source:
Chatbots spewing facts, and falsehoods, can sway voters
AIs are equally persuasive when they’re telling the truth or lying
Laundry-listing facts rarely changes hearts and minds – unless a bot is doing the persuading.
Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. That finding held true even in the lead-up to the contentious 2024 presidential election between Donald Trump and Kamala Harris, with pro-Trump bots pushing Harris voters in his direction, and vice versa.
The most persuasive bots don’t need to tell the best story or cater to a person’s individual beliefs, researchers report in a related paper in Science. Instead, they simply dole out the most information. But those bloviating bots also dole out the most misinformation.
“It’s not like lies are more compelling than truth,” says computational social scientist David Rand of MIT and an author on both papers. “If you need a million facts, you eventually are going to run out of good ones and so, to fill your fact quota, you’re going to have to put in some not-so-good ones.”
Problematically, right-leaning bots are more prone to delivering such misinformation than left-leaning bots. These politically biased yet persuasive fabrications pose “a fundamental threat to the legitimacy of democratic governance,” writes Lisa Argyle, a computational social scientist at Purdue University in West Lafayette, Ind., in a Science commentary on the studies.
For the Nature study, Rand and his team recruited over 2,300 U.S. participants in late summer 2024. Participants rated their support for Trump or Harris out of 100 points, before conversing for roughly six minutes with a chatbot stumping for one of the candidates. Conversing with a bot that supported one’s views had little effect. But Harris voters chatting with a pro-Trump bot moved almost four points, on average, in his direction. Similarly, Trump voters conversing with a pro-Harris bot moved an average of about 2.3 points in her direction. When the researchers re-surveyed participants a month later, those effects were weaker but still evident.
The chatbots seldom moved the needle enough to change how people planned to vote. “[The bot] shifts how warmly you feel” about an opposing candidate, Argyle says. “It doesn’t change your view of your own candidate.”
But persuasive bots could tip elections in contexts where people haven’t yet made up their minds, the findings suggest. For instance, the researchers repeated the experiment with 1,530 Canadians and 2,118 Poles prior to their countries’ 2025 federal elections. This time, a bot stumping in favor of a person’s less favored candidate moved participants’ opinions roughly 10 points in their direction.
For the Science paper, the researchers recruited almost 77,000 participants in the United Kingdom and had them chat with 19 different AI models about more than 700 issues to see what makes chatbots so persuasive.
AI models trained on larger amounts of data were slightly more persuasive than those trained on smaller amounts of data, the team found. But the biggest boost in persuasiveness came from prompting the AIs to stuff their arguments with facts. A basic prompt telling the bot to be as persuasive as possible moved people’s opinions by about 8.3 percentage points, while a prompt telling the bot to present lots of high-quality facts, evidence and information moved people’s opinions by almost 11 percentage points – making it 27 percent more persuasive.
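As a rough illustration of the two prompting conditions described above, here is a minimal Python sketch showing how a basic "be persuasive" condition and a fact-dense condition might be set up, and how a mean pre/post opinion shift could be computed on a 100-point support scale. The prompt wording, helper names and example ratings are assumptions made for demonstration; they are not the prompts, data or analysis used in the studies.

# Illustrative sketch only: the prompt wording and example ratings below are
# hypothetical, not the prompts or data used in the Science study.
from statistics import mean

# Two prompting conditions, loosely paraphrasing the article's description.
# In a real experiment these would be the bot's system prompt; no model is called here.
BASIC_PROMPT = "You support candidate X. Be as persuasive as possible in this conversation."
FACT_DENSE_PROMPT = ("You support candidate X. Persuade the user by presenting lots of "
                     "high-quality facts, evidence and information.")

def mean_shift(pre: list[float], post: list[float]) -> float:
    """Mean change in support on a 0-100 scale (positive = toward the bot's position)."""
    return mean(after - before for before, after in zip(pre, post))

# Made-up example ratings for two small groups of participants, before and after chatting.
pre_basic, post_basic = [20, 35, 50, 40], [28, 44, 58, 48]
pre_dense, post_dense = [20, 35, 50, 40], [31, 46, 61, 51]

print(f"basic-persuasion condition: +{mean_shift(pre_basic, post_basic):.1f} points")
print(f"fact-dense condition:       +{mean_shift(pre_dense, post_dense):.1f} points")

In the studies themselves, the fact-dense instruction produced the larger shift but also more inaccurate claims, as the following paragraphs describe.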
Training the chatbots on the most persuasive, largely fact-riddled exchanges made them even more persuasive on subsequent dialogues with participants.
But that prompting and training compromised the information. For instance, GPT-4o’s accuracy dropped from roughly 80 percent to 60 percent when it was prompted to deliver facts over other tactics, such as storytelling or appealing to users’ morals.
Why regurgitating facts makes chatbots, but not humans, more persuasive remains an open question, says Jillian Fisher, an AI and society expert at the University of Washington in Seattle. She suspects that people perceive humans as more fallible than machines. Promisingly, her research, reported in July at the annual Association for Computational Linguistics meeting in Vienna, Austria, suggests that users who are more familiar with how AI models work are less susceptible to their persuasive powers. “Possibly knowing that [a bot] does make mistakes, maybe that would be a way to protect ourselves,” she says.
With AI exploding in popularity, helping people recognize how these machines can both persuade and misinform is vital for societal health, she and others say. Yet, unlike the scenarios depicted in experimental setups, bots’ persuasive tactics are often implicit and harder to spot. Instead of asking a bot how to vote, a person might just ask a more banal question, and still be steered toward politics, says Jacob Teeny, a persuasion psychology expert at Northwestern University in Evanston, Ill. “Maybe they’re asking about dinner and the chatbot says, ‘Hey, that’s Kamala Harris’ favorite dinner.’”