
New court filings reveal that the Pentagon told AI company Anthropic the two sides were nearly aligned, just one week after Trump declared the relationship dead.

Published by qimuai · first-hand compilation



Source: https://techcrunch.com/2026/03/20/new-court-filing-reveals-pentagon-told-anthropic-the-two-sides-were-nearly-aligned-a-week-after-trump-declared-the-relationship-kaput/

Summary:

Late Friday afternoon, March 20 local time, AI company Anthropic submitted two sworn declarations to a federal court in California, formally rebutting the U.S. Department of Defense's allegation that it poses an "unacceptable risk to national security." The company argues that the government's claims rest on technical misunderstandings and were never raised during the months of negotiations that preceded the dispute.

The declarations accompany Anthropic's reply brief in its lawsuit against the Department of Defense, filed ahead of a hearing on Tuesday, March 24 in San Francisco. The dispute dates back to late February, when the U.S. government publicly announced it was severing ties with Anthropic over the company's refusal to allow unrestricted military use of its AI technology.

The two executives who submitted declarations are Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy. Heck, a former National Security Council official in the Obama administration, personally attended the February 24 meeting between CEO Dario Amodei and senior Pentagon leadership. In her declaration she rebuts a central allegation in the government's filings, namely that Anthropic demanded some form of approval authority over military operations, calling it simply untrue. She also notes that the Pentagon's concern that Anthropic might disable or tamper with its technology mid-operation was never raised during negotiations and appeared for the first time in the court filings, leaving the company no opportunity to respond.

A striking detail in Heck's declaration: on March 4, the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic, the Under Secretary of Defense emailed Amodei to say the two sides were "very close" on the two key issues of autonomous weapons and mass surveillance of American citizens. The government now cites Anthropic's positions on exactly those two issues as evidence that the company is a national security threat. If those positions make Anthropic a threat, Heck asks, why did a Pentagon official say the two sides were nearly aligned immediately after the designation was finalized?

Ramasamy pushes back on technical grounds, drawing on extensive experience deploying AI for government customers at Amazon Web Services. He states that once Claude models are deployed by third-party contractors inside government-secured, "air-gapped" systems, Anthropic cannot remotely access them, disable them, or push unauthorized updates; any change to a model requires the Pentagon's explicit approval and its own action to install. The company cannot even see what government users type into the system. He also disputes the claim that Anthropic's foreign-national employees pose a security risk, noting that the relevant staff have passed U.S. government security clearance vetting, and adding that Anthropic is, to his knowledge, the only AI company whose models designed to run in classified environments were actually built by cleared personnel.

Anthropic's lawsuit argues that this supply-chain risk designation, the first ever applied to a U.S. company, amounts to government retaliation for its publicly stated views on AI safety, in violation of the First Amendment. In a 40-page filing earlier this week, the government rejected that framing entirely, saying Anthropic's restriction of military uses of its technology was a business decision rather than protected speech, and that the designation was a straightforward national security judgment, not punishment for the company's views.

Full translation:

Late Friday afternoon, AI company Anthropic filed two sworn declarations with a federal court in California, pushing back on the Pentagon's assertion that the company poses an "unacceptable risk to national security" and arguing that the government's case rests on technical misunderstandings and on claims never actually raised during the months of negotiations that preceded the dispute.

The declarations were filed alongside Anthropic's reply brief in its lawsuit against the U.S. Department of Defense, ahead of a hearing this coming Tuesday, March 24, before Judge Rita Lin in San Francisco. The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.

The two declarants are Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, its Head of Public Sector. Heck is a former National Security Council official who served at the White House under the Obama administration before moving to Stripe and then to Anthropic, where she runs the company's government relations and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and Under Secretary of Defense Emil Michael.

In her declaration, Heck calls out what she describes as a central falsehood in the government's filings: that Anthropic demanded some kind of approval role over military operations. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote. She also stresses that the Pentagon's concern that Anthropic might disable or alter its technology mid-operation was never raised during negotiations; it appeared for the first time in the government's court filings, giving the company no opportunity to respond.

Another detail in Heck's declaration sure to draw attention: on March 4, the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic, Under Secretary Michael emailed Amodei to say the two sides were "very close" on autonomous weapons and mass surveillance of Americans, the very two issues the government now cites as evidence that Anthropic is a national security threat. Heck attaches the email as an exhibit to her declaration.

The email stands in sharp contrast to Michael's subsequent public statements. On March 5, Amodei published a statement saying the company had been having "productive conversations" with the Pentagon; on March 6, Michael posted on X that "there is no active Department of War negotiation with Anthropic"; a week later, he told CNBC there was "no chance" of renewed talks. Heck's core question: if Anthropic's stance on those two issues is what makes it a national security threat, why was a senior Pentagon official saying the two sides were nearly aligned on exactly those issues right after the designation was finalized?

Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he leads the team that brought the Claude models into national security and defense settings, including the $200 million Pentagon contract announced last summer.

His declaration takes on the government's claim that Anthropic could interfere with military operations by disabling its technology or altering how it behaves, which he rebuts on technical grounds. Once Claude is deployed inside a government-secured, "air-gapped" system operated by a third-party contractor, he states, Anthropic has no access to it: no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any "operational veto" is a fiction, he suggests, explaining that a change to the model would require the Pentagon's explicit approval and manual installation. Anthropic, he adds, cannot even see what government users are typing into the system, let alone extract that data.

Ramasamy also disputes the government's claim that Anthropic's hiring of foreign nationals makes the company a security risk. He explains that Anthropic employees have undergone U.S. government security clearance vetting, the same background-check process required for access to classified information, adding that "to my knowledge," Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.

Anthropic's lawsuit argues that this supply-chain risk designation, the first ever applied to an American company, amounts to government retaliation for its publicly stated views on AI safety, in violation of the First Amendment. In a 40-page filing earlier this week, the government rejected that framing entirely, saying Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call, not punishment for the company's views.

English source:

Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon’s assertion that the AI company poses an “unacceptable risk to national security” and arguing that the government’s case relies on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute.
The declarations were filed alongside Anthropic’s reply brief in its lawsuit against the Department of Defense and come ahead of a hearing this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two people who submitted the declarations are Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector.
Heck is a former National Security Council official who worked at the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company’s government relationships and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and the Pentagon’s Under Secretary Emil Michael.
In her declaration, Heck calls out what she describes as a central falsehood in the government’s filings: that Anthropic demanded some kind of approval role over military operations. That claim, she says, simply isn’t true. “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role,” she wrote.
She also points out that the Pentagon’s concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during negotiations. Instead, she says, it appeared for the first time in the government’s court filings, which gave Anthropic no opportunity to respond.
Another detail in Heck’s declaration sure to draw attention is that on March 4 — the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic — Under Secretary Michael emailed Amodei to say the two sides were “very close” on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.
The email, which Heck attaches as an exhibit to her declaration, is worth reading alongside what Michael said publicly in the days afterward. On March 5, Amodei published a statement saying the company had been having “productive conversations” with the Pentagon. The day after that, Michael posted on X that “there is no active Department of War negotiation with Anthropic.” A week after that, he told CNBC there was “no chance” of renewed talks.
Heck’s point appears to be: If Anthropic’s stance on those two issues is what makes it a national security threat, why was the Pentagon’s own official saying the two sides were nearly aligned on exactly those issues right after the designation was finalized?
Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he’s credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer.
His declaration takes on the government’s claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says isn’t technically possible. Per his telling, once Claude is deployed inside a government-secured, “air-gapped” system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of “operational veto” is a fiction, he suggests, explaining that a change to the model would require the Pentagon’s explicit approval and action to install.
Anthropic, he says, can’t even see what government users are typing into the system, let alone extract that data.
Ramasamy also disputes the government’s claim that Anthropic’s hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting — the same background check process required for access to classified information, adding in his declaration that “to my knowledge,” Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.
Anthropic’s lawsuit argues that the supply-chain risk designation — the first ever applied to an American company — amounts to government retaliation for the company’s publicly stated views on AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call and not punishment for the company’s views.
