OpenAI and Google employees rally behind Anthropic in its lawsuit against the Defense Department

Summary:
According to court filings, more than 30 employees of OpenAI and Google DeepMind filed a joint statement on Monday supporting AI company Anthropic's lawsuit against the U.S. Department of Defense. The Pentagon had earlier labeled Anthropic a "supply-chain risk," sparking a strong backlash across the tech industry.
The dispute arose after Anthropic refused to let the Defense Department use its AI technology for mass surveillance of American citizens or for autonomously operating weapons systems. The DOD argued that it should be able to use AI for any "lawful" purpose, unconstrained by a private contractor. Late last week, the Pentagon responded by tagging Anthropic as a "supply-chain risk" — a label usually reserved for foreign adversaries.
The joint statement reads: "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry." Signatories include Google DeepMind chief scientist Jeff Dean and other industry leaders. The statement stresses that if the DOD was dissatisfied with the agreed-upon terms of its contract with Anthropic, it could simply have canceled the contract and turned to another leading AI company rather than resorting to sanctions.
Notably, immediately after designating Anthropic a supply-chain risk, the DOD signed a deal with OpenAI, prompting protests from many of the ChatGPT maker's employees. The brief warns: "If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," and will chill open deliberation about the risks of AI systems.
The filing also notes that, in the absence of public law governing AI use, the contractual and technical restrictions developers impose on their systems have become a critical safeguard against catastrophic misuse. In recent weeks, many tech workers have reportedly signed open letters urging the DOD to withdraw the label and calling on company leaders to support Anthropic and to refuse unilateral military use of their AI systems.
The statement was filed hours after Anthropic brought two lawsuits against the DOD and other federal agencies; Wired was first to report the news.
English source:
More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic’s lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply-chain risk, according to court filings.
“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean.
Late last week, the Pentagon labeled Anthropic a supply-chain risk — usually reserved for foreign adversaries — after the AI firm refused to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or autonomously firing weapons. The DOD had argued that it should be able to use AI for any “lawful” purpose and not be constrained by a private contractor.
The amicus brief in support of Anthropic showed up on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was first to report the news.
In the court filing, the Google and OpenAI employees make the point that if the Pentagon was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.”
The DOD did, in fact, sign a deal with OpenAI within moments of designating Anthropic a supply-chain risk — a move many of the ChatGPT maker’s employees protested.
“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the brief reads. “And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.”
The filing also affirms that Anthropic’s stated red lines are legitimate concerns warranting strong guardrails. Without public law to govern AI use, it argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse.
Many of the employees who signed the statement also signed open letters over the last couple of weeks urging the DOD to withdraw the label and calling on the leaders of their companies to support Anthropic and refuse unilateral use of their AI systems.
Article link: https://qimuai.cn/?post=3530
All articles on this site are original; unauthorized commercial use is prohibited.