After the US military designated Anthropic a "supply-chain risk," the company fights back.

Source: https://www.wired.com/story/anthropic-supply-chain-risk-shockwaves-silicon-valley/
Summary:
Pentagon Designates AI Star Anthropic a "Supply-Chain Risk," Sending Shock Waves Through Silicon Valley
US Secretary of Defense Pete Hegseth on Friday directed the Pentagon to formally designate AI company Anthropic a "supply-chain risk." The decision sent shock waves through Silicon Valley, leaving many companies scrambling to assess whether they can keep using one of the industry's most popular AI models.
Hegseth announced on social media: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The move followed weeks of tense negotiations between the two sides. Anthropic had publicly argued this week that its Pentagon contracts should bar its technology from being used for mass surveillance of the American public or for fully autonomous weapons systems, while the Pentagon demanded that Anthropic agree to let the US military apply its AI to "all lawful uses," with no exceptions.
A supply-chain-risk designation allows the Pentagon to restrict or exclude vendors deemed to pose security vulnerabilities (such as risks involving foreign ownership, control, or influence) from defense contracts, and is intended to protect sensitive military systems and data. In a response on Friday evening, however, Anthropic said it would "challenge any supply chain risk designation in court," warning that the move would "set a dangerous precedent" for any American company that negotiates with the US government. The company also noted that it had received no direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI, and argued that the Secretary of Defense "lacks the statutory authority" to back up his statement.
The episode drew a strong reaction across Silicon Valley and the broader industry. Dean Ball, a senior fellow at the Foundation for American Innovation and former senior White House policy adviser for AI, called the move "the most shocking, damaging, and overreaching" thing the US government has ever done, tantamount to "sanctioning an American company." Y Combinator founder Paul Graham and others likewise voiced shock and dismay on social media, while OpenAI researcher Boaz Barak called it "a heavy blow to one of our leading AI companies, just about the worst possible own goal."
In a striking contrast, OpenAI CEO Sam Altman announced the same day that his company had reached an agreement with the Department of Defense to deploy its AI models in classified environments, explicitly including safety principles such as a prohibition on domestic mass surveillance and human responsibility for the use of force.
Legal Ambiguity and Market Confusion
For now, the situation is fraught with legal uncertainty. Citing the relevant statute, Anthropic argues the designation should apply only to the Pentagon's direct contracts with suppliers, not to contractors' use of its Claude software to serve other customers. Several federal contracting experts say it is currently impossible to determine which Anthropic customers, if any, must immediately cut ties with the company, and that Hegseth's announcement "is not grounded in any law we can divine right now."
Companies that both serve the US military and work with Anthropic, including Amazon, Microsoft, Google, and Nvidia, did not immediately respond to requests for comment. One tech executive said that until the Pentagon's directive goes beyond a social media post and becomes a formal document, their company is in a holding pattern, with lawyers examining the issue.
Analysts note that such designations typically do not take effect immediately: the government must first complete a risk assessment and notify Congress. But Greg Allen of the Center for Strategic and International Studies warned that the episode sends a strong signal to all tech companies that working with the Pentagon can mean losing commercial autonomy, chilling their willingness to cooperate.
Legal experts widely expect Anthropic to sue the government. A lawsuit, however, could take months or even years to resolve, and Anthropic's business could suffer severely in the meantime if companies are forced to sever ties. The dispute also poses urgent compliance questions for Nvidia, Amazon, Google, Palantir, and other major US military partners that work closely with Anthropic. The Pentagon declined to comment.
Original English article:
US secretary of defense Pete Hegseth directed the Pentagon to designate Anthropic a “supply-chain risk” on Friday, sending shock waves through Silicon Valley and leaving many companies scrambling to understand whether they can keep using one of the industry’s most popular AI models.
“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote in a social media post.
The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military could use the startup’s AI models. In a blog post this week, Anthropic argued its contracts with the Pentagon should not allow for its technology to be used for mass domestic surveillance of Americans or fully autonomous weapons. The Pentagon asked that Anthropic agree to let the US military apply its AI to “all lawful uses” with no specific exceptions.
A supply-chain-risk designation allows the Pentagon to restrict or exclude certain vendors from defense contracts if they’re deemed to pose security vulnerabilities, such as risks related to foreign ownership, control, or influence. It is intended to protect sensitive military systems and data from potential compromise.
Anthropic responded in another blog post on Friday evening, saying it would “challenge any supply chain risk designation in court,” and that such a designation would “set a dangerous precedent for any American company that negotiates with the government.”
Anthropic added that it hadn’t received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI models.
“Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement,” the company wrote.
The Pentagon declined to comment.
“This is the most shocking, damaging, and overreaching thing I have ever seen the United States government do,” says Dean Ball, a senior fellow at the Foundation for American Innovation and the former senior policy adviser for AI at the White House. “We have essentially just sanctioned an American company. If you are an American, you should be thinking about whether or not you should live here 10 years from now.”
People across Silicon Valley chimed in on social media, expressing similar shock and dismay. “The people running this administration are impulsive and vindictive. I believe this is sufficient to explain their behavior,” said Paul Graham, founder of the startup accelerator Y Combinator.
Boaz Barak, an OpenAI researcher, said in a post that “kneecapping one of our leading AI companies is right about the worst own goal we can do. I hope very much that cooler heads prevail and this announcement is reversed.”
Meanwhile, OpenAI CEO Sam Altman announced on Friday night that the company reached an agreement with the Department of Defense to deploy its AI models in classified environments, seemingly with carve-outs. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” said Altman. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Confused Customers
In its Friday blog post, Anthropic said a supply-chain-risk designation, under the authority of 10 USC 3252, only applies to Department of Defense contracts directly with suppliers, and doesn’t cover how contractors use its Claude AI software to serve other customers.
Three experts in federal contracts say it’s impossible at this point to determine which Anthropic customers, if any, must now cut ties with the company. Hegseth’s announcement “is not mired in any law we can divine right now,” says Alex Major, a partner at the law firm McCarter & English, which works with tech companies.
Amazon, Microsoft, Google, and Nvidia—all companies that provide services to the US military and work with Anthropic—did not immediately respond to WIRED’s request for comment. Anduril and Shield AI, two prominent AI-focused defense-tech companies, both declined to comment.
Supply-chain-risk designations typically do not go into effect immediately, and the US government is required to complete risk assessments and notify Congress before military partners have to cut ties with a company or its products, according to Charlie Bullock, senior research fellow at the Institute for Law and AI.
But the situation could still discourage other tech companies from working with the Pentagon, according to Greg Allen, senior adviser at the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS). “The Defense Department just sent a huge message to every company that if you dip your toe in the defense contracting waters, we will grab your ankle and pull you all the way in, anytime we want,” he says.
Several legal experts tell WIRED that Anthropic is likely to sue the government. Hegseth previously suggested that the DOD could attack Anthropic by invoking the Defense Production Act, which would force the company to provide its technology to the Pentagon. Allen says the flip-flopping undermines the Pentagon’s argument that Anthropic is a genuine supply chain risk.
A lawsuit could take months or years to resolve, however, and Anthropic’s business could suffer in the meantime if companies are forced to sever ties.
The dispute raises critical questions for a plethora of prominent US military partners, such as Nvidia, Amazon, Google, and Palantir, which work closely with Anthropic.
One tech executive, whose company’s software is used by the US military and requested anonymity because of the sensitivity of the situation, said that until the Department of Defense’s directive goes beyond a post on social media, their company is in a holding pattern and has lawyers examining the issue.
As a comparison, the executive pointed to Section 889 of the National Defense Authorization Act, a procurement prohibition that bars federal agencies from contracting with companies that use certain Chinese telecom equipment as a “substantial or essential component” of any system. If this new mandate is similar, that could be a “high bar to clear,” the executive says, because even if a tech company is using Anthropic’s Claude Code internally, it might not be defined as an “essential” part of the product it is ultimately selling to the government.
Article title: After the US military designated Anthropic a "supply-chain risk," the company fights back.
Article link: https://qimuai.cn/?post=3452
All articles on this site are original; please do not use them for any commercial purpose without authorization.