
AI models spot deepfake images, but people catch fake videos

Published by qimuai · First-hand compilation


Source: https://www.sciencenews.org/article/ai-models-deepfakes

Summary:

A recent study comparing humans and machines at detecting deepfake content finds that AI is significantly more accurate than people at spotting forged images, while people hold the advantage at identifying forged videos. The results, from a team led by University of Florida psychologist Natalie Ebner, were published January 7 in Cognitive Research: Principles and Implications.

The team first asked about 2,200 participants and two machine learning algorithms to rate the realness of 200 face images on a scale from 1 (fake) to 10 (real). Humans were correct only about 50 percent of the time, no better than chance, while the two algorithms reached roughly 97 percent and 79 percent accuracy. In a follow-up test on 70 short videos of people speaking, about 1,900 human participants averaged 63 percent accuracy, while the algorithms fell back to around chance level.
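
The accuracy figures above presuppose some rule for turning a 1-to-10 realness rating into a binary fake/real decision before scoring. The study's exact procedure is not described here, so the following is a minimal, hypothetical sketch, assuming ratings are simply thresholded at the scale midpoint; the threshold, function names, and sample data are all illustrative, not the researchers' code:

```python
# A minimal sketch (not the study's code) of how 1-10 realness ratings
# could be reduced to binary fake/real calls and scored for accuracy.
# The 5.5 midpoint threshold and the sample data are illustrative assumptions.

def rating_to_call(rating: float, threshold: float = 5.5) -> str:
    """Map a 1-10 realness rating to a binary call via a midpoint threshold."""
    return "real" if rating > threshold else "fake"

def accuracy(ratings, truths, threshold: float = 5.5) -> float:
    """Fraction of faces whose thresholded call matches the ground truth."""
    calls = (rating_to_call(r, threshold) for r in ratings)
    return sum(c == t for c, t in zip(calls, truths)) / len(truths)

# Hypothetical example: four faces, two real and two AI-generated.
ratings = [8.2, 3.1, 6.0, 4.9]             # one rater's 1-10 realness scores
truths = ["real", "fake", "fake", "real"]  # ground truth per face

print(f"accuracy = {accuracy(ratings, truths):.2f}")  # prints 0.50: chance level
```

Under this reading, a rater whose scores cluster near the midpoint regardless of the face's true origin would land at the roughly 50 percent chance level the study reports for humans on images.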

Deepfakes, realistic AI-generated fake images, audio, and video, have already been used for financial fraud, election interference, and reputation attacks. As the forgeries grow ever more convincing, both humans and machines risk being fooled. Ebner says the team is now analyzing, from multiple angles, how human and machine decision-making differ, to explore how the two might work together against deepfakes. The researchers argue that a collaborative human-machine detection approach will be needed to manage the social risks the technology poses.

English source:

AI models spot deepfake images, but people catch fake videos
Researchers pitted humans against AI to see which was better at ferreting out synthetic media
AI systems are far better than people at spotting deepfake images, but when it comes to deepfake videos, humans may still have the edge. That’s the surprising twist from a new study that pits people against machines in the race to detect digital forgeries. The results suggest humans and machines will need to work together to identify and combat deepfakes going forward, psychologist Natalie Ebner and colleagues report January 7 in Cognitive Research: Principles and Implications.
Deepfakes are AI-generated images, audio and videos that can falsely represent what a person looks like, says or does and have already been used to commit financial fraud, influence elections and ruin reputations. They are becoming more convincing at an alarming rate, fooling humans and AI models alike.
To determine whether humans or machines were better at deepfake detection, Ebner and her colleagues first asked about 2,200 participants and two machine learning algorithms to rate the realness of 200 faces on a scale from 1 (fake) to 10 (real). Humans were able to spot deepfakes only at chance level, or about 50 percent of the time. But the machines performed better, with one algorithm getting the correct answer roughly 97 percent of the time and the other averaging 79 percent accuracy.
Next, the researchers asked about 1,900 human participants to watch 70 short videos of a person discussing a topic and then to rate how realistic the person’s face was. In a surprising twist, humans outperformed the algorithms in this task. Human participants got the right answer an average of 63 percent of the time while the algorithms performed at around chance level.
The researchers are now taking a deeper look at both human and AI decision-making. We want to know “what is the machine using, for it to be so much better under some conditions than the human? And how is it different from how the human reasons? What are we seeing in the brain that the human is becoming aware of and picking up on?” says Ebner, of the University of Florida in Gainesville. “We’re looking at all these different angles now in the human and in the machine to not just describe ‘yes’ or ‘no’ but to understand why are they coming to the yes and the no.”
That knowledge, the team argues, will help humans figure out how best to collaborate with AI to navigate our deepfake-saturated future.
