Deepfake "Nudify" Technology Is Getting Darker and More Dangerous

Posted by qimuai · Reads: 22 · First-hand translation


Source: https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/

Summary:

A recent investigation shows that an underground industry using AI to produce "deepfake" sexual content is spreading rapidly around the world, causing serious harm to women and minors. Generation tools hosted on numerous websites and social platforms need only a single ordinary photo to produce, for a fee, realistic fake nude videos or images, and can even be customized with a range of explicit sexual-abuse scenarios.

According to the tech-media investigation, more than 50 such websites are currently in operation, some receiving millions of visits per month. These services are typically marketed as "AI undressing" or "nude generation," offer dozens of video templates involving sexual violence, and have grown into a gray-market industry earning millions of dollars a year. Although some platforms claim that "consent of the person in the photo is required," in practice there is almost no verification.

More worrying still, the technology is spreading faster through instant-messaging apps. On Telegram alone, 39 related bots and channels have existed, with more than 1.4 million registered users. After media exposure the platform removed some of the tools, but many services continue to spread quietly through application programming interfaces (APIs), and an "infrastructure" trend has emerged: some leading websites are consolidating through acquisitions and supplying technical support to other illicit services.

Research finds that the victims of deepfake sexual content are almost always women and minors. Perpetrators' motives include sextortion, malicious revenge, thrill-seeking, or simple "curiosity about the technology." One perpetrator admitted in an interview that operating these tools brings a "godlike buzz." Experts warn that the technology is fueling digital sexual violence against women, while society at large underestimates the harm and legal regulation lags badly behind.

Generative AI, including open-source models, has become the breeding ground for this underground industry. Researchers are calling on societies worldwide to recognize the gender-based violence underlying this misuse of technology and to accelerate legal regulation and technical governance.

Full text:

Open the website of one explicit deepfake generator and you’ll be presented with a menu of horrors. With just a couple of clicks, it offers you the ability to convert a single photo into an eight-second explicit videoclip, inserting women into realistic-looking graphic sexual situations. “Transform any photo into a nude version with our advanced AI technology,” text on the website says.
The options for potential abuse are extensive. Among the 65 video “templates” on the website are a range of “undressing” videos where the women being depicted will remove clothing—but there are also explicit video scenes named “fuck machine deepthroat” and various “semen” videos. Each video costs a small fee to be generated; adding AI-generated audio costs more.
The website, which WIRED is not naming to limit further exposure, includes warnings saying people should only upload photos they have consent to transform with AI. It’s unclear if there are any checks to enforce this.
Grok, the chatbot created by Elon Musk’s companies, has been used to create thousands of nonconsensual “undressing” or “nudify” bikini images—further industrializing and normalizing the process of digital sexual harassment. But it’s only the most visible—and far from the most explicit. For years, a deepfake ecosystem, comprising dozens of websites, bots, and apps, has been growing, making it easier than ever before to automate image-based sexual abuse, including the creation of child sexual abuse material (CSAM). This “nudify” ecosystem, and the harm it causes to women and girls, is likely more sophisticated than many people understand.
“It’s no longer a very crude synthetic strip,” says Henry Ajder, a deepfake expert who has tracked the technology for more than half a decade. “We’re talking about a much higher degree of realism of what's actually generated, but also a much broader range of functionality.” Combined, the services are likely making millions of dollars per year. “It's a societal scourge, and it’s one of the worst, darkest parts of this AI revolution and synthetic media revolution that we're seeing,” he says.
Over the past year, WIRED has tracked how multiple explicit deepfake services have introduced new functionality and rapidly expanded to offer harmful video creation. Image-to-video models typically now only need one photo to generate a short clip. A WIRED review of more than 50 “deepfake” websites, which likely receive millions of views per month, shows that nearly all of them now offer explicit, high-quality video generation and often list dozens of sexual scenarios women can be depicted into.
Meanwhile, on Telegram, dozens of sexual deepfake channels and bots have regularly released new features and software updates, such as different sexual poses and positions. For instance, in June last year, one deepfake service promoted a “sex-mode,” advertising it alongside the message: “Try different clothes, your favorite poses, age, and other settings.” Another posted that “more styles” of images and videos would be coming soon and users could “create exactly what you envision with your own descriptions” using custom prompts to AI systems.
“It's not just, 'You want to undress someone.’ It’s like, 'Here are all these different fantasy versions of it.’ It's the different poses. It's the different sexual positions,” says independent analyst Santiago Lakatos, who along with media outlet Indicator has researched how “nudify” services often use big technology company infrastructure and likely made big money in the process. “There’s versions where you can make someone [appear] pregnant,” Lakatos says.
A WIRED review found more than 1.4 million accounts were signed up to 39 deepfake creation bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. “Nonconsensual pornography—including deepfakes and the tools used to create them—is strictly prohibited under Telegram’s terms of service,” a Telegram spokesperson says, adding that it removes content when it is detected and has removed 44 million pieces of content that violated its policies last year.
Lakatos says, in recent years, multiple larger “deepfake” websites have solidified their market position and now offer APIs to other people creating nonconsensual image and video generators, allowing more services to mushroom up. “They’re consolidating by buying up other different websites or nudify apps. They’re adding features that allow them to become infrastructure providers.”
So-called sexual deepfakes first emerged toward the end of 2017 and, at the time, required a user to have technical knowledge to create sexual imagery or videos. The widespread advances in generative AI systems of the past three years, including the availability of sophisticated open source photo and video generators, have allowed the technology to become more accessible, more realistic, and easier to use.
General deepfake videos of politicians and of conflicts around the world have been created to spread misinformation and disinformation. However, sexual deepfakes have continuously created widespread harm to women and girls. At the same time, laws to protect people have been slow to be implemented or not introduced at all.
“This ecosystem is built on the back of open-source models,” says Stephen Casper, a researcher working on AI safeguards and governance at Massachusetts Institute of Technology, who has documented the rise in deepfake video abuse and its role in nonconsensual intimate imagery generation. “Oftentimes it’s just an open-source model that has been used to develop an app that then a user uses,” Casper says.
The victims and survivors of nonconsensual intimate imagery (NCII), including deepfakes and other nonconsensually shared media, are nearly always women. False images and nonconsensual videos cause huge harm, including harassment, humiliation, and feeling “dehumanized.” Explicit deepfakes have been used to abuse politicians, celebrities, and social media influencers in recent years. But they have also been used by men to harass colleagues and friends, and by boys in schools to create nonconsensual intimate imagery of their classmates.
“Typically, the victims or the people who are affected by this are women and children or other types of gender or sexual minorities,” says Pani Farvid, associate professor of applied psychology and founder of The SexTech Lab at The New School. “We as a society globally do not take violence against women seriously, no matter what form it comes in.”
“There's a range of these different behaviors where some [perpetrators] are more opportunistic and do not see the harm that they're creating, and it is based on how an AI tool is also presented,” Farvid says, adding some AI companion services can target people with gendered services. “For others, this is because they are in abusive rings or child abuse rings, or they are folks who are already engaging in other forms of violence, gender-based violence, or sexual violence.”
One Australian study, led by the researcher Asher Flynn, interviewed 25 creators and victims of deepfake abuse. The study concluded that a trio of factors—increasingly easy-to-use deepfake tools, the normalization of creating nonconsensual sexual images, and the minimization of harms—could impact the prevention and response to the still growing problem. Unlike the widespread public sharing that happened with nonconsensual sexual images created using Grok on X, explicit deepfakes were more likely to be shared with victims or their friends and family privately, the study found. “I just simply used the personal WhatsApp groups,” one perpetrator told the researchers. “And some of these groups had up to 50 people.”
The academic research found four primary motivations for the deepfake abuse—of 10 perpetrators they interviewed, eight identified as men. These included sextortion, causing harm to others, getting reinforcement or bonding from their peers, and curiosity about the tools and what they could do with them.
Multiple experts WIRED spoke to said many of the communities developing deepfake tools have a “cavalier” or casual attitude to the harms they cause. “There's this tendency of a certain banality of the use of this tool to create NCII or even to have access to NCII that are concerning,” says Bruna Martins dos Santos, a policy and advocacy manager at Witness, a human rights group.
For some abusers creating deepfakes, the technology is about power and control. “You just want to see what’s possible,” one abuser told Flynn and fellow researchers involved in the study. “Then you have a little godlike buzz of seeing that you’re capable of creating something like that.”
