
Where OpenAI's technology could show up in Iran

Posted by qimuai · first-hand compilation

Source: https://www.technologyreview.com/2026/03/16/1134315/where-openais-technology-could-show-up-in-iran/

Summary:

OpenAI accelerates its pivot toward the military; the Iran conflict could become a new proving ground for its AI

Just two weeks after announcing that the US Department of Defense may use its artificial intelligence in classified environments, OpenAI is rapidly extending its technology into the heart of military operations. As US strikes against Iran escalate, this AI giant, which once vowed to stay out of military work, may soon see its technology appear in operations against Iran, most likely in three areas.

Targeting and strike decisions
Although the agreement with the Pentagon has been signed, OpenAI's technology must first be integrated with the military's existing systems before it can be used for classified missions. According to a defense official, one likely use case looks like this: analysts feed a list of potential targets into the AI model, which analyzes the intelligence (including text, images, video, and logistics information) and recommends strike priorities. While a human would still review the final decision, the goal is to accelerate the entire "kill chain" from target identification to strike decision. This would mark the first time generative AI's recommendations are tested in earnest in a live conflict (Iran), in contrast to earlier AI systems used purely for data analysis (such as the Maven program).

Drone defense
At the end of 2024, OpenAI announced a partnership with defense technology company Anduril to use AI for time-sensitive analysis of drones attacking US forces and to help intercept them. OpenAI argued this did not violate its policy prohibiting "systems designed to harm others," because the targets are drones rather than people. Anduril's Lattice command system can integrate a wide range of defensive equipment; if OpenAI's conversational models are successfully plugged in, soldiers could obtain threat analysis and operational guidance quickly through natural language. The urgency of such technology was underscored when US forces in Kuwait suffered casualties after failing to intercept an Iranian drone.

Military logistics and back-office work
In February of this year, OpenAI's models were added to the Defense Department's GenAI.mil platform to draft policy documents and contracts and to assist with administrative support of missions. Although these back-office tasks seem removed from sensitive front-line decisions, their broad deployment signals that the Pentagon is pushing to embed AI into every aspect of military operations, from battlefield decisions to routine paperwork. Defense Secretary Pete Hegseth has been relentlessly promoting this all-in approach to AI across the military.

Controversy and motives
The speed of OpenAI's military pivot has drawn widespread scrutiny. CEO Sam Altman claims the agreement bars the technology from being used to build autonomous weapons, but in reality it only requires the military to follow its own (already permissive) guidelines; the company's pledge against domestic surveillance looks equally dubious. Analysts suggest the pivot may be driven by enormous revenue pressure, or by the ideological framing Altman often invokes: that "liberal democracies" and their militaries must have the most powerful AI to compete with China.

As OpenAI's technology is gradually embedded in US warfighting systems, how it actually performs in the Iran conflict, and which ethical boundaries its customers and employees will tolerate, will be urgent questions to watch.



Original English article:

Where OpenAI’s technology could show up in Iran
Three places to watch, from the margins of war to the center of combat.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious.
It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China.
The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?
Targets and strikes
Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to allow its AI to be used for “any lawful use,” President Trump ordered the military to stop using it, and Anthropic was designated a supply chain risk by the Pentagon. (Anthropic is fighting the designation in court.)
If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video.
A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions?
For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first.
It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.
Drone defense
At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people.
Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it focuses less on are conversational AI systems that allow soldiers to query those systems directly or receive guidance in natural language—an area where OpenAI’s models may fit.
The stakes are high. Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses.
Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts—$20 billion from the US Army just last week—to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack.
Back-office AI
In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military—contracts, logistics, purchasing—to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world.
Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok was going to be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, with the company announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions.
Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It serves the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.
