
US investigators are using AI to detect child abuse images made by AI

Posted by qimuai · Reads: 4 · First-hand translation



Source: https://www.technologyreview.com/2025/09/26/1124343/us-investigators-are-using-ai-to-detect-child-abuse-images-made-by-ai/

Summary:

The Cyber Crimes Center, a unit of the US Department of Homeland Security, is testing artificial intelligence to cope with a surge in AI-generated child abuse content. In a filing posted on September 19, the agency disclosed a $150,000 contract with San Francisco-based Hive AI to use the company's algorithms to identify AI-generated images of suspected child sexual abuse.

According to the filing, the National Center for Missing and Exploited Children reported a 1,325% increase in incidents involving generative AI in 2024. Investigators note that the sheer volume of digital content circulating online forces law enforcement to rely on automated tools for efficient triage. The central goal of the new detection tool is to distinguish AI-generated images from material depicting real victims, so that investigative resources go first to rescuing children actually at risk.

Hive AI CEO Kevin Guo says the detection technology works by analyzing underlying combinations of pixels in an image and generalizes to new material without being trained specifically on child abuse content. The company has previously supplied deepfake-detection services to the US military and, together with the nonprofit Thorn, built a hash-based blocking system for known abusive content.

The contract was awarded without competitive bidding; the filing cites a 2024 University of Chicago study finding that Hive's detector outperformed four comparable products. The pilot will run for three months. The National Center for Missing and Exploited Children did not respond in time to questions about the effectiveness of such detection models.

Full translation:

US investigators are experimenting with artificial intelligence to identify child abuse images made by AI. Though AI is fueling a surge in synthetic child abuse imagery, it is also being tested as a way to stop harm to real victims.

Generative AI has enabled the production of child sexual abuse images to skyrocket, and according to a new government filing, the leading investigator of child exploitation in the US is now experimenting with AI to distinguish AI-generated material from images depicting real victims. The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco-based startup Hive AI for software that can identify whether a piece of content was AI-generated.

The filing, posted on September 19, is heavily redacted. Hive cofounder and CEO Kevin Guo told MIT Technology Review that he could not discuss the details of the contract, but confirmed it involves applying the company's AI detection algorithms to child sexual abuse material (CSAM). The filing cites data from the National Center for Missing and Exploited Children reporting a 1,325% increase in incidents involving generative AI in 2024. "The sheer volume of digital content circulating online necessitates the use of automated tools to process and analyze data efficiently," the filing reads.

The first priority of child exploitation investigators is to find and stop abuse that is currently happening, but the flood of AI-generated CSAM has made it hard for investigators to tell quickly whether images depict a real victim at risk. A tool that could reliably flag real victims would be a massive help in prioritizing cases. Identifying AI-generated images "ensures that investigative resources are focused on cases involving real victims, maximizing the program's impact and safeguarding vulnerable individuals," the filing reads.

Besides tools that generate videos and images, Hive AI offers a range of content moderation systems that can flag violence, spam, and sexual material, and even identify celebrities. Last December, MIT Technology Review reported that the company was selling its deepfake-detection technology to the US military. For detecting CSAM, Hive and the child safety nonprofit Thorn jointly offer a tool that companies can integrate into their platforms. It uses a "hashing" system, which assigns unique IDs to content known by investigators to be CSAM and blocks that material from being uploaded. Tools like this have become a standard line of defense for tech companies.
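As a rough illustration of how such a hash-matching defense works, here is a minimal Python sketch. The blocklist variable and the use of an exact SHA-256 digest are assumptions for illustration only; deployed systems such as the Hive/Thorn tool generally rely on perceptual hashing, so that resized or re-encoded copies of known material still match.

```python
import hashlib

# Hypothetical blocklist of fingerprints for files investigators have already
# confirmed as abusive. Illustrative only: real deployments typically use
# perceptual hashes rather than exact cryptographic digests.
KNOWN_ABUSE_HASHES: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    """Assign a unique ID to an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block_upload(file_bytes: bytes) -> bool:
    """Reject the upload when its fingerprint matches known material."""
    return fingerprint(file_bytes) in KNOWN_ABUSE_HASHES
```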

But such tools only determine whether a piece of content is CSAM; they cannot tell whether it was generated by AI. Hive has built a separate, general-purpose tool that determines whether images were AI-generated. According to Guo, it is not trained specifically on CSAM and does not need to be: "There's some underlying combination of pixels in this image that we can identify" as AI-generated, he says, and "it can be generalizable." This is the tool the Cyber Crimes Center will use to evaluate CSAM, and Hive benchmarks its detection tools for each customer's specific use case.
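Hive has not published how its detector works beyond Guo's description, so the following PyTorch sketch is only a generic illustration of a pixel-level binary classifier for separating AI-generated images from real ones; the architecture, names, and input size are all assumptions, not Hive's method.

```python
import torch
import torch.nn as nn

class SyntheticImageDetector(nn.Module):
    """Toy classifier that learns pixel-level statistics separating
    AI-generated images from camera-captured ones."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the model size-agnostic
        )
        self.head = nn.Linear(64, 1)  # one logit: probability the image is synthetic

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Score a batch of normalized RGB images (random placeholders here).
model = SyntheticImageDetector()
probs = torch.sigmoid(model(torch.rand(4, 3, 224, 224)))
```

A detector of this kind keys on artifacts left by the generator rather than on the image's subject matter, which is consistent with Guo's claim that the approach can generalize to domains it was never trained on.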

The National Center for Missing and Exploited Children, which participates in efforts to stop the spread of CSAM, did not respond to requests for comment on the effectiveness of such detection models in time for publication. In the filing, the government justifies awarding the contract to Hive without a competitive bidding process. Though parts of the justification are redacted, it mainly cites two points: a 2024 University of Chicago study that found Hive's AI detection tool outranked four other detectors at identifying AI-generated art, and Hive's contract with the Pentagon for identifying deepfakes. The trial will last three months.

Deep Dive
Artificial intelligence
In a first, Google has released data on how much energy an AI prompt uses
It's the most transparent estimate yet from one of the big AI companies, and a long-awaited peek behind the curtain for researchers.

The two people shaping the future of OpenAI's research
An exclusive conversation with Mark Chen and Jakub Pachocki, OpenAI's twin heads of research, about the path toward more capable reasoning models and superalignment.

Therapists are secretly using ChatGPT. Clients are triggered.
Some therapists are using AI during therapy sessions. They're risking their clients' trust and privacy in the process.

GPT-5 is here. Now what?
The much-hyped release makes several enhancements to the ChatGPT user experience. But it's still far short of AGI.

Stay connected
Subscribe to MIT Technology Review
Get the latest updates, special offers, top stories, and upcoming events.


MIT Technology Review
