
AI-powered disinformation swarms are threatening democracy.

Posted by qimuai · Reads: 23 · First-hand translation



Source: https://www.wired.com/story/ai-powered-disinformation-swarms-are-coming-for-democracy/

Summary:

New research warns that AI-driven disinformation is about to enter an entirely new phase, one that could pose a systemic threat to democratic societies. A paper published May 23 in the journal Science, coauthored by 22 scholars from across disciplines worldwide, argues that AI will fundamentally change how manipulation of public belief operates: a single operator with an AI system could command swarms of thousands of virtual accounts, each with memory and a distinct persona, that mimic human behavior on social media, adjust tactics in real time, and evolve on their own, at a scale and level of stealth far beyond the manual troll-farm model of Russia's Internet Research Agency in 2016.

The study finds that these "AI swarms" can not only imitate human social dynamics but also self-optimize through massive A/B testing, precisely delivering tailored messages to specific groups. Jonas Kunst, a professor of communication at BI Norwegian Business School and a coauthor of the paper, notes that existing monitoring systems struggle to identify this new form of information warfare, while social media platforms, driven by commercial interests, have little incentive to police it. More worryingly, the technology may already be in testing and is expected to affect major political events such as the 2028 US presidential election.

Although some experts remain optimistic about AI governance, scholars broadly agree that current defenses lag badly behind. The research team calls for an "AI Influence Observatory" led by academics and nongovernmental organizations, using standardized monitoring to improve society's capacity to respond. Nina Jankowicz, the former Biden administration disinformation official, warns that governments currently lack the political will to address AI's harms; without immediate action, societies may face a crisis amounting to the "end of democracy."


English source:

In 2016, hundreds of Russians filed into a modern office building on 55 Savushkina Street in St. Petersburg every day; they were part of the now-infamous troll farm known as the Internet Research Agency. Day and night, seven days a week, these employees would manually comment on news articles, post on Facebook and Twitter, and generally seek to rile up Americans about the then-upcoming presidential election.
When the scheme was finally uncovered, there was widespread media coverage and Senate hearings, and social media platforms made changes in the way they verified users. But in reality, for all the money and resources poured into the IRA, the impact was minimal—certainly compared to that of another Russia-linked campaign that saw Hillary Clinton’s emails leaked just before the election.
A decade on, while the IRA is no more, disinformation campaigns have continued to evolve, including the use of AI technology to create fake websites and deepfake videos. A new paper, published in Science on Thursday, predicts an imminent step-change in how disinformation campaigns will be conducted. Instead of hundreds of employees sitting at desks in St. Petersburg, the paper posits, one person with access to the latest AI tools will be able to command “swarms” of thousands of social media accounts, capable not only of crafting unique posts indistinguishable from human content, but of evolving independently and in real time—all without constant human oversight.
These AI swarms, the researchers believe, could deliver society-wide shifts in viewpoint that not only sway elections but ultimately bring about the end of democracy—unless steps are taken now to prevent it.
“Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level,” the report says. “By adaptively mimicking human social dynamics, they threaten democracy.”
The paper was authored by 22 experts from across the globe, drawn from fields including computer science, artificial intelligence, and cybersecurity, as well as psychology, computational social science, journalism, and government policy.
The pessimistic outlook on how AI technology will change the information environment is shared by other experts in the field who have reviewed the paper.
“To target chosen individuals or communities is going to be much easier and powerful,” says Lukasz Olejnik, a visiting senior research fellow at King’s College London’s Department of War Studies and the author of Propaganda: From Disinformation and Influence to Operations and Information Warfare. “This is an extremely challenging environment for a democratic society. We're in big trouble.”
Even those who are optimistic about AI’s potential to help humans believe the paper highlights a threat that needs to be taken seriously.
“AI-enabled influence campaigns are certainly within the current state of advancement of the technology, and as the paper sets out, this also poses significant complexity for governance measures and defense response,” says Barry O’Sullivan, a professor at the School of Computer Science and IT at University College Cork.
In recent months, as AI companies seek to prove they are worth the hundreds of billions of dollars that have been poured into them, many have pointed to the most recent crop of AI agents as evidence that the technology will finally live up to the hype. But the very same technology could soon be deployed, the authors argue, to disseminate disinformation and propaganda at a scale never before seen.
The swarms the authors describe would consist of AI-controlled agents capable of maintaining persistent identities and, crucially, memory, allowing for the simulation of believable online identities. The agents would coordinate in order to achieve shared objectives, while at the same time creating individual personas and output to avoid detection. These systems would also be able to adapt in real time to respond to signals shared by the social media platforms and in conversation with real humans.
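The combination the authors describe, a stable identity plus accumulated memory, can be illustrated with a minimal sketch. Everything here (class name, fields, the stubbed reply) is hypothetical; a real system would condition a language model on the persona and memory rather than format a string:

```python
from dataclasses import dataclass, field

@dataclass
class SwarmAgent:
    """Hypothetical sketch of a persistent-identity agent with memory."""
    handle: str                                  # stable account identity
    persona: str                                 # e.g. "local sports fan"
    memory: list = field(default_factory=list)   # everything the agent has seen

    def reply(self, post: str) -> str:
        # Persist the interaction so later replies stay consistent
        # with what this identity has already said and seen.
        self.memory.append(post)
        # Stub: a real agent would generate text from persona + memory.
        return f"[{self.persona}] reply #{len(self.memory)} to: {post[:30]}"

agent = SwarmAgent(handle="@fan_42", persona="local sports fan")
agent.reply("Big game tonight!")
agent.reply("Who's watching?")
assert len(agent.memory) == 2  # identity and memory persist across posts
```

The point of the sketch is only the data flow: because the memory travels with the identity, the account can hold a coherent persona over weeks of interaction, which is what makes it hard to distinguish from a human user.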
“We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated,” says Jonas Kunst, a professor of communication at BI Norwegian Business School and one of the coauthors of the report.
For experts who have spent years tracking and combating disinformation campaigns, the paper presents a terrifying future.
"What if AI wasn't just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none? That's the future this paper imagines—Russian troll farms on steroids,” says Nina Jankowicz, the former Biden administration disinformation czar who is now CEO of the American Sunlight Project.
The researchers say it’s unclear whether this tactic is already being used because the current systems in place to track and identify coordinated inauthentic behavior are not capable of detecting them.
“Because of their elusive features to mimic humans, it's very hard to actually detect them and to assess to what extent they are present,” says Kunst. “We lack access to most [social media] platforms because platforms have become increasingly restrictive, so it's difficult to get an insight there. Technically, it's definitely possible. We are pretty sure that it's being tested.”
Kunst added that these systems are likely to still have some human oversight as they are being developed, and predicts that while they may not have a massive impact on the 2026 US midterms in November, they will very likely be deployed to disrupt the 2028 presidential election.
Accounts indistinguishable from humans on social media platforms are only one issue. In addition, the ability to map social networks at scale will, the researchers say, allow those coordinating disinformation campaigns to target agents at specific communities, ensuring the biggest impact.
“Equipped with such capabilities, swarms can position for maximum impact and tailor messages to the beliefs and cultural cues of each community, enabling more precise targeting than that with previous botnets,” they write.
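Tailoring one core claim to the "beliefs and cultural cues of each community" reduces, at its simplest, to a dispatch over community-specific framings. The community labels and templates below are invented for illustration and do not come from the paper:

```python
# Toy illustration: one core claim, re-framed per target community.
# Community labels and templates are hypothetical.
TEMPLATES = {
    "rural":   "As folks in farm country know, {claim}",
    "urban":   "City commuters see it every day: {claim}",
    "retired": "For those of us on a fixed income, {claim}",
}

def tailor(claim: str, community: str) -> str:
    # Fall back to the bare claim for communities with no template.
    template = TEMPLATES.get(community, "{claim}")
    return template.format(claim=claim)

print(tailor("prices are out of control.", "rural"))
```

A real swarm would infer the community labels from the mapped social graph and generate the framings with a model rather than fixed templates, but the targeting logic is the same: the claim stays constant while its packaging varies per audience.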
Such systems could be essentially self-improving, using the responses to their posts as feedback to improve reasoning in order to better deliver a message. “With sufficient signals, they may run millions of micro A/B tests, propagate the winning variants at machine speed, and iterate far faster than humans,” the researchers write.
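The loop the researchers describe, posting variants, measuring engagement, and shifting output toward the winners, is essentially a multi-armed bandit problem. A minimal epsilon-greedy sketch, with simulated engagement rates standing in for real platform feedback (all numbers are invented for illustration):

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice over message variants.

    stats maps variant -> [successes, trials]. With probability epsilon
    explore a random variant; otherwise exploit the best observed rate.
    """
    if random.random() < epsilon or all(t == 0 for _, t in stats.values()):
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

# Simulated per-variant engagement rates (stand-ins for real feedback).
true_rates = {"A": 0.1, "B": 0.3, "C": 0.2}
stats = {v: [0, 0] for v in true_rates}

random.seed(0)
for _ in range(3000):
    v = pick_variant(stats)
    stats[v][0] += random.random() < true_rates[v]  # engagement signal
    stats[v][1] += 1

# Traffic concentrates on whichever variant has performed best so far.
best = max(stats, key=lambda v: stats[v][1])
```

Run at "machine speed" across thousands of accounts, each test is tiny, but the aggregate loop continuously concentrates output on whichever framing is currently winning, which is what makes the iteration faster than any human operation.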
In order to combat the threat posed by AI swarms, the researchers suggest the establishment of an “AI Influence Observatory,” which would consist of people from academic groups and nongovernmental organizations working to “standardize evidence, improve situational awareness, and enable faster collective response rather than impose top-down reputational penalties.”
One group not included is executives from the social media platforms themselves, primarily because the researchers believe that their companies incentivize engagement over everything else, and therefore have little incentive to identify these swarms.
“Let's say AI swarms become so frequent that you can't trust anybody and people leave the platform,” says Kunst. “Of course, then it threatens the model. If they just increase engagement, for a platform it's better to not reveal this, because it seems like there's more engagement, more ads being seen, that would be positive for the valuation of a certain company.”
As well as a lack of action from the platforms, experts believe that there is little incentive for governments to get involved. “The current geopolitical landscape might not be friendly for ‘Observatories’ essentially monitoring online discussions,” Olejnik says. Jankowicz agrees: “What's scariest about this future is that there's very little political will to address the harms AI creates, meaning [AI swarms] may soon be reality.”
