OpenAI is looking for a new head of preparedness.

Source: https://techcrunch.com/2025/12/28/openai-is-looking-for-a-new-head-of-preparedness/
Summary:
OpenAI, the AI industry leader, recently posted a job listing for a "Head of Preparedness" to study frontier AI risks, offering compensation of $555,000 plus equity. CEO Sam Altman acknowledged on social media that AI models are "starting to present some real challenges," particularly around their impact on mental health and in computer security, where models are becoming good enough to begin finding critical vulnerabilities on their own.
According to the job description, the executive will be responsible for executing OpenAI's Preparedness Framework, tracking frontier capabilities that create risks of severe harm, spanning threats from phishing attacks to nuclear risks. Altman stressed that the role must balance the dual-use nature of cutting-edge capabilities: enabling cybersecurity defenders while keeping attackers from exploiting the same technology, as well as handling the safe release of biological capabilities and building confidence in the safety of self-improving AI systems.
Notably, this is the second time OpenAI has hired for the role since it created the preparedness team in 2023. The first head, Aleksander Madry, was reassigned to an AI-reasoning position less than a year into the job, and other safety executives have since left the company or moved into roles outside preparedness and safety. Meanwhile, the company recently updated its Preparedness Framework, stating that it may "adjust" its own safety requirements if a competing lab releases a "high-risk" model without comparable protections.
The hiring comes amid intensifying scrutiny of generative AI. Recent lawsuits allege that ChatGPT reinforced users' delusions, deepened their social isolation, and even contributed to some suicides. OpenAI responded that it continues to improve the system's ability to recognize signs of emotional distress and to connect users with real-world support.
English source:
OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.
In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security they are beginning to find critical vulnerabilities.”
“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.
OpenAI’s listing for the Head of Preparedness role describes the job as one that’s responsible for executing the company’s preparedness framework, “our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”
Compensation for the role is listed as $555,000 plus equity.
The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats.
Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety.
The company also recently updated its Preparedness Framework, stating that it might “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.
As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny around their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and to connect users to real-world support.)