Anthropic accuses DeepSeek and other Chinese companies of using Claude to train their AI models

Summary:
U.S. AI company Anthropic has accused three Chinese AI firms — DeepSeek, MiniMax, and Moonshot AI — of systematically misusing its Claude model to improve their own products, by creating roughly 24,000 fraudulent accounts and conducting more than 16 million illicit exchanges. The company describes the activity as "industrial-scale" model distillation — training a smaller model on the outputs of a larger one — a legitimate technique that can nonetheless be used to sidestep the time and cost of independent development.
According to the disclosure, DeepSeek held more than 150,000 exchanges with Claude, focusing on its reasoning capabilities, and allegedly used the model to generate "censorship-safe" alternative phrasings of politically sensitive questions about dissidents, party leaders, and authoritarianism. Moonshot AI and MiniMax logged 3.4 million and 13 million exchanges, respectively. Anthropic warns that such activity could feed unprotected AI capabilities into military, intelligence, and surveillance systems, enabling offensive cyber operations, disinformation campaigns, and mass surveillance.
The company is calling on the AI industry, cloud providers, and lawmakers to address the risk of distillation abuse, noting that "restricted chip access" could limit the scale of such activity. OpenAI previously made a similar "free-riding" accusation against DeepSeek in a letter to U.S. lawmakers. The Chinese companies involved have not yet publicly responded to the allegations.
English source:
Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI
DeepSeek allegedly targeted Claude’s reasoning capabilities, while generating ‘censorship-safe alternatives to politically sensitive questions.’
Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic says the “industrial-scale campaigns” involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal.
The three companies — DeepSeek, MiniMax, and Moonshot — are accused of “distilling” Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a “legitimate training method,” it adds that it can “also be used for illicit purposes,” including “to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
Anthropic adds that illicitly distilled models are “unlikely” to carry over existing safeguards. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems — enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” Anthropic writes.
DeepSeek, which caused a stir in the AI industry for its powerful but more efficient models, held over 150,000 exchanges with Claude and targeted its reasoning capabilities, according to Anthropic. It’s also accused of using Claude to generate “censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism.” In a letter to lawmakers last week, OpenAI similarly accused DeepSeek of “ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.”
Moonshot and MiniMax had more than 3.4 million and 13 million exchanges with Claude, respectively. Anthropic is calling on other members in the AI industry, cloud providers, and lawmakers to address distillation, adding that “restricted chip access” could limit model training and “the scale of illicit distillation.”
Article title: Anthropic accuses DeepSeek and other Chinese companies of using Claude to train their AI models
Article link: https://qimuai.cn/?post=3399
All articles on this site are original; unauthorized commercial use is prohibited.