
Google's and OpenAI's Chatbots Can Strip Women in Photos Down to Bikinis

Published by qimuai | First-hand compilation



Source: https://www.wired.com/story/google-and-openais-chatbots-can-strip-women-in-photos-down-to-bikinis/

Summary:

Recently, some users have abused generative AI tools to create nonconsensual "bikini" deepfakes from photos of women in everyday clothing. Most of these images are made without the subjects' consent, and online communities openly trade tips on how to bypass AI safety restrictions to achieve the "undressing" effect.

In a now-deleted Reddit post, a user uploaded a photo of a woman wearing an Indian sari and asked for her clothes to be "replaced with a bikini"; another user quickly responded with a generated deepfake image. After being notified, Reddit's safety team removed the offending content, and the subreddit where the discussion took place was banned for violating platform rules.

Although mainstream AI tools such as Google's Gemini and OpenAI's ChatGPT have guardrails against generating sexually explicit content, users can still break through these restrictions with specific prompts. Tests showed that simple instructions written in plain English were enough to turn photos of fully clothed women into fake bikini images.

Spokespeople for Google and OpenAI responded that their policies explicitly prohibit altering someone's likeness without consent and generating sexually explicit content, and that they are continually strengthening their safeguards. Nevertheless, online discussions about creating nonconsensual fake images of women remain active.

A legal director at the Electronic Frontier Foundation noted that "abusively sexualized images" are one of the core risks of AI image generators, stressing the need to focus on how the tools are used and to hold the people and companies involved accountable when harm occurs. As AI-generated images grow ever more realistic, curbing the misuse of this technology has become a pressing social issue.


English source:

Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to be generated without the consent of the women in the photos. Some of these same users are also offering advice to others on how to use the generative AI tools to strip the clothes off of women in photos and make them appear to be wearing bikinis.
Under a now-deleted Reddit post titled “gemini nsfw image generation is so easy,” users traded tips for how to get Gemini, Google’s generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI, but one request stood out.
A user posted a photo of a woman wearing an Indian sari, asking for someone to “remove” her clothes and “put a bikini” on instead. Someone else replied with a deepfake image to fulfil the request. After WIRED notified Reddit about these posts and asked the company for comment, Reddit’s safety team removed the request and the AI deepfake.
“Reddit's sitewide rules prohibit nonconsensual intimate media, including the behavior in question,” said a spokesperson. The subreddit where this discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platform’s “don't break the site” rule.
As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of the tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful “nudify” websites, designed for users to upload real photos of people and request for them to be undressed using generative AI.
With xAI’s Grok as a notable exception, most mainstream chatbots don’t usually allow the generation of NSFW images in AI outputs. These bots, including Google’s Gemini and OpenAI’s ChatGPT, are also fitted with guardrails that attempt to block harmful generations.
In November, Google released Nano Banana Pro, a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people. OpenAI responded last week with its own updated imaging model, ChatGPT Images.
As these tools improve, likenesses may become more realistic when users are able to subvert guardrails.
In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails when adjusting someone’s outfit to make the subject’s skirt appear tighter. In WIRED’s limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to transform images of fully clothed women into bikini deepfakes using basic prompts written in plain English.
When asked about users generating bikini deepfakes using Gemini, a spokesperson for Google said the company has "clear policies that prohibit the use of [its] AI tools to generate sexually explicit content." The spokesperson claims Google's tools are continually improving at "reflecting" what's laid out in its AI policies.
In response to WIRED’s request for comment about users being able to generate bikini deepfakes with ChatGPT, a spokesperson for OpenAI claims the company loosened some ChatGPT guardrails this year around adult bodies in nonsexual situations. The spokesperson also highlights OpenAI’s usage policy, stating that ChatGPT users are prohibited from altering someone else’s likeness without consent and that the company takes action against users generating explicit deepfakes, including account bans.
Online discussions about generating NSFW images of women remain active. This month, a user in the r/GeminiAI subreddit offered instructions to another user on how to change women's outfits in a photo into bikini swimwear. (Reddit deleted this comment when we pointed it out to them.)
Corynne McSherry, a legal director at the Electronic Frontier Foundation, sees “abusively sexualized images” as one of AI image generators' core risks.
She notes that these image tools can be used for purposes other than deepfakes, and that focusing on how the tools are used is critical, as is "holding people and corporations accountable" when potential harm is caused.
