
Fake AI Content About the Iran War Is All Over X

Posted by qimuai · 6 reads · first-hand translation



Source: https://www.wired.com/story/fake-ai-content-about-the-iran-war-is-all-over-x/

Summary:

Grok, the AI chatbot from Elon Musk's X, has repeatedly made serious factual errors while fact-checking posts about the Iran conflict on X (formerly Twitter). Asked to verify a post claiming Iranian missiles had struck Tel Aviv, Grok misidentified the video's location and date several times, then tried to cite an AI-generated fake image as "evidence." The episode underscores how deeply X has sunk into disinformation since the US and Israel attacked Iran on February 28.

As the conflict continues, AI-generated images and videos have flooded the platform, spread by paid blue-check accounts and official Iranian accounts seeking to exaggerate the damage inflicted. On March 2, for example, Iranian state media posted an AI-fabricated video of a high-rise building in Bahrain on fire; another fake image showing a US B-2 bomber shot down was viewed over a million times before it was deleted. Even crudely made AI content, such as a video purporting to show an Iranian missile factory deep inside a cave, has been widely shared.

Researchers at the Institute of Strategic Dialogue note that Iran is also using AI to push antisemitic narratives, generating inflammatory content such as depictions of Orthodox Jews leading American soldiers to war. A fake video involving President Trump was viewed 6.8 million times before it was taken down.

Non-AI disinformation has spread as well. After the February 28 strike on a primary school in Minab, Iran, which killed 168 people, 110 of them children, some pro-Trump accounts repurposed footage from elsewhere in the conflict to claim the Iranian government's missile hit the school. A video verified by The New York Times instead shows a Tomahawk cruise missile, a weapon used only by the US in this conflict, striking a naval base next to the school.

In response to the surge of AI fakes, X announced it would temporarily demonetize accounts posting unlabeled AI-generated conflict videos, but has not said how many accounts it has penalized. A number of Iranian officials had previously paid for blue-check verification, boosting their posts' reach and earning potential.

Analysts at the media watchdog NewsGuard warn that increasingly realistic AI content is easily mistaken for "evidence," while existing AI detection tools remain unreliable. Disinformation researcher Tal Hagin cautions that without timely regulation of AI abuse, a fact-based shared reality risks collapse.

Meanwhile, Meta's Oversight Board criticized the company on Tuesday, saying its approach to labeling AI content is not robust or comprehensive enough for the scale and speed of misinformation during crises. Meta said it welcomed the findings. Countering AI-amplified disinformation has become a shared challenge for social platforms worldwide.


English source:

When disinformation expert Tal Hagin asked Grok to verify a post on X about Iranian missiles that had supposedly struck Tel Aviv, Elon Musk’s AI-powered chatbot failed miserably.
Grok repeatedly misidentified the location and date for the video, which was originally shared on X by an Iranian state-owned media outlet on Sunday. Then, the chatbot tried to prove its point by sharing an AI-generated image.
“Now Grok is replying with AI slop of destruction,” Hagin wrote in response. “Cooked I tell you.”
The interaction neatly sums up just how unhinged from reality X has become since the US and Israel began their attack on Iran on February 28. As WIRED reported at the time, the social media platform was quickly flooded with disinformation by accounts sharing fake and repurposed videos.
As the conflict has continued, the flood has only gotten worse. In recent days, it’s been supercharged by AI images and videos, while Grok has repeatedly given false information when asked to verify claims made on the platform. AI images are being shared by paid accounts bearing blue check marks and Iranian officials seeking to portray exaggerated damage.
The proliferation of easy-to-access AI image- and video-generation tools has led to increasingly sophisticated fake content. On March 2, for example, Iranian officials and state media shared AI-generated videos of a high-rise building in Bahrain on fire. The videos and images appear realistic enough for many: One image of a US B-2 bomber being shot down by Iran with US troops detained was viewed over a million times before it was deleted, while images of members of Delta Force being captured by Iranian authorities were viewed over 5 million times before they were deleted.
Some of the AI content promoted on X is less realistic. One video, for example, purports to show Iranian forces manufacturing missiles deep inside a cave. However, the video was still being shared by multiple accounts and has been viewed over a million times.
AI is also being used by the Iranian government to push overtly antisemitic narratives, with accounts in a pro-regime propaganda network on X sharing AI-generated posts depicting Orthodox Jews leading American soldiers to war or celebrating American deaths, according to researchers from the Institute of Strategic Dialogue (ISD), who shared their analysis with WIRED.
A number of accounts in this pro-regime network also shared a fake video that supposedly showed a line of young girls walking past President Donald Trump wearing only underwear. The post was viewed over 6.8 million times, according to ISD, before being taken down, though it continues to be shared by other accounts on X.
“What is particularly unique about this war is the dramatic uptick in AI-generated content I find myself debunking,” Hagin tells WIRED. “This is likely due to AI being advanced enough to fool journalists, and the ease with which users can create this AI slop with zero consequences. The longer we go without regulations against AI abuse, the more harm will be caused. I see the proliferation of AI-based fake news pushing us over the edge of a fact-based world unless we enact change now.”
When the flood of AI-generated fakes began taking over the platform last week, X announced it would temporarily demonetize blue-check-mark accounts if they post AI-generated videos of armed conflict without a label. X did not respond to a request for comment about how many accounts it had demonetized since introducing the measure. Until recently, a number of Iranian officials appeared to be paying X for its premium service, which provided their accounts with blue check marks, boosted engagement, and created the potential to earn money for their posts.
Non-AI disinformation has also continued to flourish on X. In recent days, this has focused on the attack on a primary school in Minab, Iran, on February 28, where more than 168 people, 110 of them children, were killed. Pro-Trump accounts have reused footage from elsewhere in the current conflict to push the narrative that the Iranian government fired the missile that struck the school. A video released by an Iranian news agency on Sunday, and verified by The New York Times, shows a Tomahawk cruise missile hitting a naval base located next to the school. Despite Trump’s claim that Iran has them, the US is the only party in the conflict which uses Tomahawk missiles.
While much disinformation about the war, including AI-generated content, is being shared on X, on Tuesday, Meta’s Oversight Board criticized the company’s approach to labeling other AI-generated content. The board said Meta is “neither robust nor comprehensive enough to handle the scale and speed of AI-generated misinformation, particularly during crises and conflicts.” Meta said in a statement posted online that it welcomed the board’s findings.
“As AI-generated images and videos are increasingly sophisticated, users might not put into question visuals that are pushed as ‘evidence’ to support pro-Iran claims when they look so real,” Isis Blachez, an analyst with media watchdog NewsGuard, tells WIRED. “The resources at hand to evaluate such a content’s authenticity also have shortcomings. For instance, AI detection tools are not consistently successful at recognizing AI content.”
