
The Trap Anthropic Built for Itself

Published by qimuai · Compiled and translated firsthand



Source: https://techcrunch.com/2026/02/28/the-trap-anthropic-built-for-itself/

Summary:

[Exclusive Deep Dive] Behind the U.S. Government's Ban on AI Giant Anthropic: A Regulatory Trap of Its Own Making

On Friday afternoon, breaking news shook Silicon Valley: the Trump administration, citing national security, announced it was severing all ties with San Francisco AI company Anthropic. Defense Secretary Pete Hegseth invoked a national security law to blacklist the company, founded by Dario Amodei in 2021, from doing business with the Defense Department. The trigger: the company refused to allow its AI to be used for two contested purposes — mass surveillance of U.S. citizens, and armed drones that can autonomously select and kill targets without human input.

The move leaves Anthropic facing the loss of a contract worth up to $200 million, and it will be barred from working with other defense contractors after President Trump posted on Truth Social directing all federal agencies to "immediately cease all use of Anthropic technology." (The company has said it will challenge the decision in court.)

MIT physicist and Future of Life Institute founder Max Tegmark was blunt in his assessment: "The road to hell is paved with good intentions." In an interview, he argued that Anthropic and its rivals brought this on themselves — the root of the problem traces back years, to the industry's collective resistance to binding regulation.

The Collapse of the Safety Pledges
Although Anthropic has positioned itself as a "safety-first" AI company, its collaboration with defense and intelligence agencies (dating back to at least 2024) already sat uneasily with that identity. Tegmark points to an industry-wide pattern: Google abandoned its "Don't be evil" motto, OpenAI dropped the word "safety" from its mission statement, xAI disbanded its safety team, and this week Anthropic walked back its core safety commitment (not to release powerful AI systems until it was sure they would cause no harm). All four AI giants have now broken their own safety promises.

A Dangerous Game in a Regulatory Vacuum
"America regulates AI systems more loosely than it regulates sandwiches," Tegmark quipped. "If you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, you'll be ordered to fix it immediately; but if you announce you're building AI girlfriends that may be linked to teen suicides, or a superintelligence that might overthrow the government, regulators can do nothing." He argues that these companies' long lobbying campaign against AI legislation is exactly what left them exposed: "If there were a law against building AI to kill Americans, the government couldn't suddenly demand it. They shot themselves in the foot."

The Logical Flaw Behind the "China Card"
To the AI companies' standard argument that regulation would hand the race to China, Tegmark counters that China is moving to ban anthropomorphic AI outright — not to please America, but because it believes the technology is harming its youth and weakening the country. "When people insist we must race to superintelligence to beat China, they ignore one fact: we don't actually know how to control it. The Chinese Communist Party would obviously never tolerate an AI company building something that could overthrow the regime — and the same outcome is a national security threat to the U.S. government, too."

Superintelligence: A National Security Threat, Not an Asset
Tegmark warns that uncontrollable superintelligence should be treated as a national security threat rather than a strategic asset. He cites a famous line from Anthropic founder Dario Amodei: "He said we'll soon have 'a country of geniuses in a data center' — the national security community might want to consider whether that 'country in a data center' belongs on its threat list."

Rapid Progress and the Jobs Question
AI is advancing far faster than expected. A paper Tegmark published with collaborators a few months ago found that GPT-4 was already 27% of the way to their definition of artificial general intelligence (AGI), and GPT-5 57%. "At this rate, the breakthrough may be close at hand. I tell my MIT students that even if it takes four more years, many jobs may no longer exist by the time they graduate."

A Moment for the Industry to Take Sides
After OpenAI announced its own deal with the Pentagon, Tegmark called on the industry to show where it really stands: "This is the moment everyone has to show their true colors. If the other giants stay silent, it will be an embarrassment for them as companies and a disappointment to their employees."

Is There a Way Out?
Tegmark sees a viable path: drop the corporate exemption and treat AI companies like any other industry — require something like clinical trials, reviewed by independent experts, to demonstrate they can control a powerful system before releasing it. "Then we get a golden age of AI rather than an existential crisis. That's not the path we're on right now, but there's still hope."

(Compiled from the TechCrunch interview.)


Original English text:

Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth had invoked a national security law to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.
It was a jaw-dropping sequence. Anthropic stands to lose a contract worth up to $200 million and will be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court.)
Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development.
His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Anthropic this week even dropped the central tenet of its own safety pledge — its promise not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm.
Now, in the absence of rules, there’s not a lot to protect these players, says Tegmark. Here’s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.
When you saw this news just now about Anthropic, what was your first reaction?
The road to hell is paved with good intentions. It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow the prosperity in America and make America strong. And here we are now where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously — without any human input at all — decide who gets killed.
Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?
It is contradictory. If I can give a little cynical take on this — yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises. First we had Google — this big slogan, ‘Don’t be evil.’ Then they dropped that. Then they dropped another longer commitment that basically said they promised not to do harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their whole safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment — the promise not to release powerful AI systems until they were sure they weren’t going to cause harm.
How did companies that made such prominent safety commitments end up in this position?
All of these companies, especially OpenAI and Google DeepMind but to some extent also Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’re going to regulate ourselves.’ And they’ve successfully lobbied. So we right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine’ — the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’
There’s food safety regulation and no AI regulation.
And this, I feel, all of these companies really share the blame for. Because if they had taken all these promises that they made back in the day for how they were going to be so safe and goody-goody, and gotten together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our most sloppy competitors’ — this would have happened instead. We’re in a complete regulatory vacuum. And we know what happens when there’s a complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it’s sort of ironic that their own resistance to having laws saying what’s okay and not okay to do with AI is now coming back and biting them.
There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it. If the companies themselves had earlier come out and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.
The companies’ counter-argument is always the race with China — if American companies don’t do this, Beijing will. Does that argument hold?
Let’s analyze that. The most common talking point from the lobbyists for the AI companies — they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined — is that whenever anyone proposes any kind of regulation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI girlfriends outright. Not just age limits — they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too.
And when people say we have to race to build superintelligence so we can win against China — when we don’t actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.
That’s compelling framing — superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?
I think if people in the national security community listen to Dario Amodei describe his vision — he’s given a famous speech where he says we’ll soon have a country of geniuses in a data center — they might start thinking: wait, did Dario just use the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I’m keeping tabs on, because that sounds threatening to the U.S. government. And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool. This is totally analogous to the Cold War. There was a race for dominance — economic and military — against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.
What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?
Six years ago, almost every expert in AI I knew predicted we were decades away from having AI that could master language and knowledge at human level — maybe 2040, maybe 2050. They were all wrong, because we already have that now. We’ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas. Last year, AI won the gold medal at the International Mathematics Olympiad, which is about as difficult as human tasks get. I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago giving a rigorous definition of AGI. According to this, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.
When I lectured to my students yesterday at MIT, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore. It’s certainly not too soon to start preparing for it.
Anthropic is now blacklisted. I’m curious to see what happens next — will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]
Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for the courage of saying that. Google, as of when we started this interview, had said nothing. If they just stay quiet, I think that’s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven’t heard anything from xAI yet either. So it’ll be interesting to see. Basically, there’s this moment where everybody has to show their true colors.
Is there a version of this where the outcome is actually good?
Yes, and this is why I’m actually optimistic in a strange way. There’s such an obvious alternative here. If we just start treating AI companies like any other companies — drop the corporate amnesty — they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be.
