
This defense company has developed an AI agent system that can carry out demolition missions.

Published by qimuai · First-hand compilation



Source: https://www.wired.com/story/ai-lab-scout-ai-is-using-ai-agents-to-blow-things-up/

Summary:

Silicon Valley startup Scout AI recently staged an unusual demonstration at an undisclosed military base in California: its AI system directed a self-driving off-road vehicle and two attack drones to autonomously locate and destroy a truck hidden in open terrain.

Unlike most AI companies focused on text or office automation, Scout AI is working to apply large-model technology to physical combat. CEO Colby Adcock says its technology is built by specially training hyperscale foundation models, with the goal of turning a general-purpose chatbot into a “warfighter.” The company already holds four US Department of Defense contracts and is bidding to develop a drone-swarm control system.

The move reflects the accelerating fusion of AI across the US tech industry and military. University of Pennsylvania professor Michael Horowitz calls such efforts “necessary” for preserving the US military’s technological edge, but warns that the inherent unpredictability of large language models may create cybersecurity risks, and that demonstration results still fall short of military-grade reliability requirements.

Although Scout AI says its system adheres to the US military’s rules of engagement and international norms such as the Geneva Convention, the ethical controversy over autonomous weapons continues to build. Critics argue that broadly deploying AI with lethal autonomy could weaken safety oversight, and that letting algorithms judge who is a combatant raises grave moral challenges.

The war in Ukraine has already produced cases of modified commercial drones used in real combat. Scout AI cofounder Collin Otis stresses that the system’s advantage lies in dynamically adjusting its actions to real-time intelligence and commander intent rather than mechanically executing preset routines. Yet an AI’s latitude to interpret orders on its own could also lead to unforeseen consequences.

Analysts note that turning dazzling technical demonstrations into reliable, combat-ready systems will be the central challenge for such “AI plus defense” startups. And with the US loosening export restrictions on advanced AI chips to China, the global military AI race may grow even more complicated.


English source:

Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI’s agents are designed to seek and destroy things in the physical world with exploding drones.
In a recent demonstration, held at an undisclosed military base in central California, Scout AI’s technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.
“We need to bring next-generation AI to the military,” Colby Adcock, Scout AI’s CEO, told me in a recent interview. (Adcock’s brother, Brett Adcock, is the CEO of Figure AI, a startup working on humanoid robots). “We take a hyperscaler foundation model and we train it to go from being a generalized chatbot or agentic assistant to being a warfighter.”
Adcock’s company is part of a new generation of startups racing to adapt technology from big AI labs for the battlefield. Many policymakers believe that harnessing AI will be the key to future military dominance. The combat potential of AI is one reason why the US government has sought to limit the sale of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to loosen those controls.
“It's good for defense tech startups to push the envelope with AI integration,” says Michael Horowitz, a professor at the University of Pennsylvania who previously served in the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “That's exactly what they should be doing if the US is going to lead in military adoption of AI.”
Horowitz also notes, though, that harnessing the latest AI advances can prove particularly difficult in practice.
Large language models are inherently unpredictable and AI agents—like the ones that control the popular AI assistant OpenClaw—can misbehave when given even relatively benign tasks like ordering goods online. Horowitz says it may be especially hard to demonstrate that such systems are robust from a cybersecurity standpoint—something that would be required for widespread military use.
Scout AI’s recent demo involved several steps where AI had free rein over combat systems.
At the outset of the mission the following command was fed into a Scout AI system known as Fury Orchestrator:
A relatively large AI model with over 100 billion parameters, which can run either on a secure cloud platform or an air-gapped computer on-site, interprets the initial command. Scout AI uses an undisclosed open-source model with its restrictions removed. This model then acts as an agent, issuing commands to smaller, 10-billion-parameter models running on the ground vehicles and the drones involved in the exercise. The smaller models also act as agents themselves, issuing their own commands to lower-level AI systems that control the vehicles’ movements.
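The article gives no implementation detail beyond this three-tier hierarchy, but the pattern it describes (a large orchestrator model interpreting a mission command and passing natural-language tasking to smaller on-vehicle models, which in turn drive low-level control stacks) maps onto a familiar multi-agent design. The Python sketch below is a minimal illustration of that delegation chain under those assumptions only; every name in it (`Orchestrator`, `EdgeAgent`, `VehicleController`, the vehicle IDs) is hypothetical and is not Scout AI's actual software or API.

```python
# Hypothetical sketch of the three-tier agent hierarchy described above:
# a large orchestrator model plans the mission, smaller on-vehicle models
# task each platform, and deterministic controllers execute maneuvers.
# All class, method, and vehicle names here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Task:
    vehicle_id: str  # which ground vehicle or drone the task targets
    objective: str   # natural-language objective for the edge model


class VehicleController:
    """Lowest tier: the conventional autonomy stack (nav, flight control)."""

    def __init__(self, vehicle_id: str):
        self.vehicle_id = vehicle_id

    def execute(self, maneuver: str) -> None:
        print(f"[{self.vehicle_id}] executing: {maneuver}")


class EdgeAgent:
    """Middle tier: stand-in for a ~10B-parameter model on the vehicle."""

    def __init__(self, vehicle_id: str):
        self.vehicle_id = vehicle_id
        self.controller = VehicleController(vehicle_id)

    def handle(self, task: Task) -> None:
        # A real system would prompt the onboard model here; we fake the
        # decomposition of an objective into primitive maneuvers.
        for maneuver in (f"search assigned sector for: {task.objective}",
                         f"engage once confirmed: {task.objective}"):
            self.controller.execute(maneuver)


class Orchestrator:
    """Top tier: stand-in for the >100B-parameter planning model."""

    def __init__(self, agents: list[EdgeAgent]):
        self.agents = agents

    def run(self, command: str) -> None:
        # Stand-in for the large model interpreting the mission command
        # and issuing per-vehicle tasking to each edge agent.
        for agent in self.agents:
            agent.handle(Task(agent.vehicle_id, command))


if __name__ == "__main__":
    fleet = [EdgeAgent(vid) for vid in ("ugv-1", "drone-1", "drone-2")]
    Orchestrator(fleet).run("locate and neutralize the hidden truck")
```

The property the sketch tries to capture is that each tier passes only high-level, natural-language tasking downward, so an edge agent can rework its own maneuver list against what its sensors report rather than executing a fixed script; this is the local replanning Adcock contrasts with legacy autonomy later in the piece.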
Seconds after receiving marching orders, the ground vehicle zipped off along a dirt road that winds between brush and trees. A few minutes later, the vehicle came to a stop and dispatched the pair of drones, which flew into the area where it had been instructed that the target was waiting. After spotting the truck, an AI agent running on one of the drones issued an order to fly toward it and detonate an explosive charge just before impact.
The US and other militaries already have systems capable of autonomously exercising lethal force within limited parameters. Off-the-shelf AI could allow autonomy to be deployed more widely and with fewer safeguards, critics say. Some arms control experts and AI ethicists warn that using AI to control weapons systems will introduce new complexities and ethical risks, for example if AI is required to decide who is and isn’t a combatant.
The war in Ukraine has already shown how readily cheap, off-the-shelf hardware like consumer drones can be adapted for deadly combat. Some of these systems already feature advanced autonomy, although humans often make key decisions to ensure reliability.
Collin Otis, cofounder and CTO of Scout AI, says the company’s technology is designed to adhere to the US military’s rules of engagement as well as international norms like the Geneva Convention. Adcock says that Scout AI has four contracts with the Department of Defense already, and is vying for a new one to develop a system for controlling a swarm of unmanned aerial vehicles. It would take a year or more for the technology to be ready for deployment, he adds.
Adcock says that greater autonomy is what makes Scout AI’s system so promising. “This is what differentiates us from legacy autonomy,” he says. Those systems “can't replan at the edge based on information it sees and commander intent, it just executes actions blindly.” The notion of an AI system that is free to interpret orders also raises concerns about unintended outcomes.
As with regular software agents, however, the real challenge will be turning impressive demos into reliable systems. “We shouldn't confuse their demonstrations with fielded capabilities that have military-grade reliability and cybersecurity,” Horowitz says.
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

