
The Custom AI Face-Swap Marketplace: Inside the Deepfakes of Real Women

Published by qimuai · First-hand translation



Source: https://www.technologyreview.com/2026/01/30/1131945/inside-the-marketplace-powering-bespoke-ai-deepfakes-of-real-women/

Summary:

AI model marketplace Civitai found to facilitate custom deepfakes, with over 90% targeting the likenesses of real women

A new investigation by researchers at Stanford and Indiana University shows that Civitai, an AI content marketplace backed by the prominent venture capital firm Andreessen Horowitz, hosts users trading custom model files that generate deepfakes of real people, much of it apparently intended to produce pornographic images the platform explicitly bans. The researchers argue this activity inflicts serious harm on women.

The study found that among user content requests, known as "bounties," posted between mid-2023 and the end of 2024, most involved animated content, but a significant share sought deepfakes of real people, and 90% of those targeted women. Although the platform says it bans sexually explicit deepfakes of real people, the researchers confirmed that many deepfake requests submitted before the ban, along with the winning model files that fulfilled them, remain available for purchase on the site.

The investigation further found that 86% of deepfake requests on the platform centered on instruction files called LoRAs. These files can coach mainstream AI models such as Stable Diffusion into generating content they were not trained to produce; users often combine them with other tools to create realistic, even pornographic, deepfake images. The researchers also found that the platform not only provides the trading infrastructure but publishes tutorial articles teaching users how to generate pornographic content, and that the volume of pornography on the site is rising.

Although Civitai announced in May 2025 that it would ban all deepfake content, and offers a takedown mechanism that requires the person depicted to file a manual request, the researchers argue the platform offloads moderation onto the public rather than enforcing its rules proactively. Legal experts note that while tech companies generally enjoy liability protections for user content, they may face legal risk if they knowingly facilitate illegal transactions.

Notably, the abuse of deepfake technology has yet to receive adequate attention from investors or content platforms. Civitai is not the only company in Andreessen Horowitz's portfolio with such problems, reflecting significant gaps in the industry's ethics oversight and content governance.

Neither Civitai nor Andreessen Horowitz has commented on the matter. As AI content generation spreads rapidly, preventing abuse and protecting individual rights has become an urgent challenge for the industry and regulators alike.


Original English source:

Inside the marketplace powering bespoke AI deepfakes of real women
New research details how Civitai lets users buy and sell tools to fine-tune deepfakes the company says are banned.
Civitai—an online marketplace for buying and selling AI-generated content, backed by the venture capital firm Andreessen Horowitz—is letting users buy custom instruction files for generating celebrity deepfakes. Some of these files were specifically designed to make pornographic images banned by the site, a new analysis has found.
The study, from researchers at Stanford and Indiana University, looked at people’s requests for content on the site, called “bounties.” The researchers found that between mid-2023 and the end of 2024, most bounties asked for animated content—but a significant portion were for deepfakes of real people, and 90% of these deepfake requests targeted women. (Their findings have not yet been peer reviewed.)
The debate around deepfakes, as illustrated by the recent backlash to explicit images on the X-owned chatbot Grok, has revolved around what platforms should do to block such content. Civitai’s situation is a little more complicated. Its marketplace includes actual images, videos, and models, but it also lets individuals buy and sell instruction files called LoRAs that can coach mainstream AI models like Stable Diffusion into generating content they were not trained to produce. Users can then combine these files with other tools to make deepfakes that are graphic or sexual. The researchers found that 86% of deepfake requests on Civitai were for LoRAs.
In these bounties, users requested “high quality” models to generate images of public figures like the influencer Charli D’Amelio or the singer Gracie Abrams, often linking to their social media profiles so their images could be grabbed from the web. Some requests specified a desire for models that generated the individual’s entire body, accurately captured their tattoos, or allowed hair color to be changed. Some requests targeted several women in specific niches, like artists who record ASMR videos. One request was for a deepfake of a woman said to be the user’s wife. Anyone on the site could offer up AI models they worked on for the task, and the best submissions received payment—anywhere from $0.50 to $5. And nearly 92% of the deepfake bounties were awarded.
Neither Civitai nor Andreessen Horowitz responded to requests for comment.
It’s possible that people buy these LoRAs to make deepfakes that aren’t sexually explicit (though they’d still violate Civitai’s terms of use, and they’d still be ethically fraught). But Civitai also offers educational resources on how to use external tools to further customize the outputs of image generators—for example, by changing someone’s pose. The site also hosts user-written articles with details on how to instruct models to generate pornography. The researchers found that the amount of porn on the platform has gone up, and that the majority of requests each week are now for NSFW content.
“Not only does Civitai provide the infrastructure that facilitates these issues; they also explicitly teach their users how to utilize them,” says Matthew DeVerna, a postdoctoral researcher at Stanford’s Cyber Policy Center and one of the study’s leaders.
The company used to ban only sexually explicit deepfakes of real people, but in May 2025 it announced it would ban all deepfake content. Nonetheless, countless requests for deepfakes submitted before this ban now remain live on the site, and many of the winning submissions fulfilling those requests remain available for purchase, MIT Technology Review confirmed.
“I believe the approach that they’re trying to take is to sort of do as little as possible, such that they can foster as much—I guess they would call it—creativity on the platform,” DeVerna says.
Users buy LoRAs with the site’s online currency, called Buzz, which is purchased with real money. In May 2025, Civitai’s credit card processor cut off the company because of its ongoing problem with nonconsensual content. To pay for explicit content, users must now use gift cards or cryptocurrency to buy Buzz; the company offers a different scrip for non-explicit content.
Civitai automatically tags bounties requesting deepfakes and lists a way for the person featured in the content to manually request its takedown. This system means that Civitai has a reasonably successful way of knowing which bounties are for deepfakes, but it’s still leaving moderation to the general public rather than carrying it out proactively.
A company’s legal liability for what its users do isn’t totally clear. Generally, tech companies have broad legal protections against such liability for their content under Section 230 of the Communications Decency Act, but those protections aren’t limitless. For example, “you cannot knowingly facilitate illegal transactions on your website,” says Ryan Calo, a professor specializing in technology and AI at the University of Washington’s law school. (Calo wasn’t involved in this new study.)
Civitai joined OpenAI, Anthropic, and other AI companies in 2024 in adopting design principles to guard against the creation and spread of AI-generated child sexual abuse material. This move followed a 2023 report from the Stanford Internet Observatory, which found that the vast majority of AI models named in child sexual abuse communities were Stable Diffusion–based models “predominantly obtained via Civitai.”
But adult deepfakes have not gotten the same level of attention from content platforms or the venture capital firms that fund them. “They are not afraid enough of it. They are overly tolerant of it,” Calo says. “Neither law enforcement nor civil courts adequately protect against it. It is night and day.”
Civitai received a $5 million investment from Andreessen Horowitz (a16z) in November 2023. In a video shared by a16z, Civitai cofounder and CEO Justin Maier described his goal of building the main place where people find and share AI models for their own individual purposes. “We’ve aimed to make this space that’s been very, I guess, niche and engineering-heavy more and more approachable to more and more people,” he said.
Civitai is not the only company with a deepfake problem in a16z’s investment portfolio; in February, MIT Technology Review first reported that another company, Botify AI, was hosting AI companions resembling real actors that stated their age as under 18, engaged in sexually charged conversations, offered “hot photos,” and in some instances described age-of-consent laws as “arbitrary” and “meant to be broken.”