What we've been getting wrong about AI's "truth crisis"

Summary:
The AI "truth crisis" is deepening: fabricated content keeps shaping public perception even after it is debunked, and the defenders of truth are losing ground.
The long-forewarned era of "truth decay" appears to have fully arrived: AI-generated content not only deceives the public but continues to shape social beliefs after the lies are exposed, eroding the foundations of societal trust. A recent string of events shows that the technical tools and governance strategies we have relied on to meet this crisis are under severe strain.
Last week, the US Department of Homeland Security (DHS) was confirmed for the first time to be using AI video-generation tools from Google and Adobe to produce content it shares with the public. At the same time, immigration agencies have flooded social media with material promoting the mass-deportation agenda, some of it apparently AI-generated. These developments drew two opposite reactions. Some people were unsurprised, because the White House had earlier posted a digitally altered photo of an arrested protester that deliberately made her look "hysterical and in tears," and declined to say whether the alteration was intentional. Others saw no point in reporting such cases at all, since news organizations also use AI to edit content: MS Now (formerly MSNBC), for example, aired an AI-retouched image that made its subject look more "handsome," though it said it had not known about the edit.
The two cases are different in kind, yet together they expose a fundamental flaw in the current response. We pinned our hopes on technical schemes such as the Content Authenticity Initiative, which labels content with its provenance and any use of AI. In practice, Adobe automatically applies labels only to content that is wholly AI-generated; everything else depends on creators opting in, and social platforms can easily strip or hide the labels. More worrying still, research shows that even when people are told explicitly that a piece of content is fake, it continues to sway their judgments emotionally. The White House photo, which kept spreading widely after the manipulation was exposed, is a case in point.
This means the defensive logic we assumed has already failed. Society believed that as long as we could verify what is real, we could contain the harm of falsehood. What we face instead is a new reality in which influence does not disappear with exposure, doubt is easily weaponized, and the truth cannot act as a reset button. As the AI tools for generating and altering content become more capable, easier to use, and cheaper, the defenders of truth are falling far behind the pace at which the technology spreads.
As the disinformation expert Christopher Nehring puts it: "Transparency helps, but it isn't enough on its own. We have to develop a new masterplan of what to do about deepfakes." In an information environment where emotional pull beats fact-checking, rebuilding societal trust will require changes that go deeper than technical labels.
English source:
What we’ve been getting wrong about AI’s truth crisis
Even when content is revealed to be manipulated, it still shapes our beliefs. The defenders of truth are hopelessly behind.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
What would it take to convince you that the era of truth decay we were long warned about—where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process—is now here? A story I published last week pushed me over the edge. It also made me realize that the tools we were sold as a cure for this crisis are failing miserably.
On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public. The news comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda—some of which appears to be made with AI (like a video about “Christmas after mass deportations”).
But I received two types of reactions from readers that may explain just as much about the epistemic crisis we’re in.
One was from people who weren’t surprised, because on January 22 the White House had posted a digitally altered photo of a woman arrested at an ICE protest, one that made her appear hysterical and in tears. Kaelan Dorr, the White House’s deputy communications director, did not respond to questions about whether the White House altered the photo but wrote, “The memes will continue.”
The second was from readers who saw no point in reporting that DHS was using AI to edit content shared with the public, because news outlets were apparently doing the same. They pointed to the fact that the news network MS Now (formerly MSNBC) shared an image of Alex Pretti that was AI-edited and appeared to make him look more handsome, a fact that led to many viral clips this week, including one from Joe Rogan’s podcast. Fight fire with fire, in other words? A spokesperson for MS Now told Snopes that the news outlet aired the image without knowing it was edited.
There is no reason to collapse these two cases of altered content into the same category, or to read them as evidence that truth no longer matters. One involved the US government sharing a clearly altered photo with the public and declining to answer whether it was intentionally manipulated; the other involved a news outlet airing a photo it should have known was altered but taking some steps to disclose the mistake.
What these reactions reveal instead is a flaw in how we were collectively preparing for this moment. Warnings about the AI truth crisis revolved around a core thesis: that not being able to tell what is real will destroy us, so we need tools to independently verify the truth. My two grim takeaways are that these tools are failing, and that while vetting the truth remains essential, it is no longer capable on its own of producing the societal trust we were promised.
For example, there was plenty of hype in 2024 about the Content Authenticity Initiative, cofounded by Adobe and adopted by major tech companies, which would attach labels to content disclosing when it was made, by whom, and whether AI was involved. But Adobe applies automatic labels only when the content is wholly AI-generated. Otherwise the labels are opt-in on the part of the creator.
And platforms like X, where the altered arrest photo was posted, can strip content of such labels anyway (a note that the photo was altered was added by users). Platforms can also simply choose not to show the label; indeed, when Adobe launched the initiative, it noted that the Pentagon's website for sharing official images, DVIDS, would display the labels to prove authenticity, but a review of the website today shows no such labels.
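To make concrete why these labels are so easy to lose, here is a minimal, illustrative sketch, not Adobe's or the CAI's own tooling, and with a hypothetical input file named photo.jpg. Content Credentials travel as C2PA/JUMBF metadata embedded in the image file, so a crude byte-level check can show whether a given copy still carries them, and a plain re-encode of the kind many upload pipelines perform yields a copy with no credentials at all. Real verification would validate the signed manifest with the C2PA SDK or c2patool; this only demonstrates the fragility of metadata that must survive every hop.

```python
# Illustrative sketch only: a crude heuristic for spotting embedded C2PA
# Content Credentials, and a demonstration that a simple re-encode (as a
# social platform's upload pipeline might do) produces a copy without them.
from io import BytesIO
from PIL import Image  # pip install Pillow

def has_c2pa_marker(data: bytes) -> bool:
    """Crude heuristic: C2PA manifests are stored in JUMBF boxes labelled 'c2pa'.
    Presence of the byte string is a hint, not cryptographic verification."""
    return b"c2pa" in data

# "photo.jpg" is a hypothetical file assumed to carry Content Credentials.
with open("photo.jpg", "rb") as f:
    original = f.read()
print("original carries credential marker:", has_c2pa_marker(original))

# Re-encode the pixels; Pillow does not copy the metadata segments over,
# so the credential marker disappears from the new file.
buf = BytesIO()
Image.open(BytesIO(original)).save(buf, format="JPEG", quality=85)
print("re-encoded copy carries credential marker:", has_c2pa_marker(buf.getvalue()))
```

The point of the sketch is the design weakness the article describes: a provenance label that lives only in file metadata disappears as soon as any intermediary re-encodes or strips the file, whether or not anyone intended to deceive.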
Noticing how much traction the White House’s photo got even after it was shown to be AI-altered, I was struck by the findings of a very relevant new paper published in the journal Communications Psychology. In the study, participants watched a deepfake “confession” to a crime, and the researchers found that even when they were told explicitly that the evidence was fake, participants relied on it when judging an individual’s guilt. In other words, even when people learn that the content they’re looking at is entirely fake, they remain emotionally swayed by it.
“Transparency helps, but it isn’t enough on its own,” the disinformation expert Christopher Nehring wrote recently about the study’s findings. “We have to develop a new masterplan of what to do about deepfakes.”
AI tools to generate and edit content are getting more advanced, easier to operate, and cheaper to run—all reasons why the US government is increasingly paying to use them. We were well warned of this, but we responded by preparing for a world in which the main danger was confusion. What we’re entering instead is a world in which influence survives exposure, doubt is easily weaponized, and establishing the truth does not serve as a reset button. And the defenders of truth are already trailing way behind.
Update: This story was updated on February 2 with details about how Adobe applies its content authenticity labels.