
Under Market Pressure, Anthropic Downgrades Its AI Safety Policy

Posted by qimuai · First-hand compilation


Source: https://aibusiness.com/generative-ai/anthropic-downgrades-its-ai-safety-policy

Summary:

AI Safety Commitments Loosen: Anthropic Adjusts Strategy to Survive and Compete

Amid the generative AI boom, Anthropic, a high-profile company long positioned around "safety and responsibility first," has quietly adjusted the safety course it held for years. Facing fierce market competition and a shifting regulatory environment, the Google-backed AI vendor is seeking a new balance between accelerating innovation and honoring its safety commitments.

A Retreat on Safety Commitments: From "Never" to "Show"

This week, Anthropic's chief science officer Jared Kaplan confirmed to Time Magazine that the company will no longer adhere to its strict 2023 commitment to never train an AI system unless it is certain that adequate safety measures are in place. Instead, Anthropic will commit to clearly showing enterprise customers how its models perform in safety tests. The shift marks a substantive loosening of its Responsible Scaling Policy (RSP), a framework that sets measurable levels of AI capability and lays out mandatory safety protocols, including development pauses when standards are violated.

Analysts broadly attribute the strategy change to dual pressures. On one hand, competition in the AI market has turned white-hot, and rivals such as OpenAI are forcing Anthropic to keep up its pace of development. Michael Bennett, associate vice chancellor for data and AI strategy at the University of Illinois Chicago, notes that with government regulation absent and competitors innovating at full speed, Anthropic worries that if it constrains itself too tightly, it will be overtaken by vendors that take safety less seriously.

On the other hand, Anthropic is locked in a standoff with the U.S. Department of Defense. Defense officials said recently that because Anthropic does not want its technology used for mass surveillance of Americans or in fully autonomous weapons systems, the company could constitute a "supply chain risk." If the dispute cannot be resolved, Anthropic could lose its lucrative government contracts as well as commercial partners that do business with the Pentagon, a direct threat to its survival.

Mixed Reactions from the Market and Customers

Industry observers note that Anthropic's retreat may be met with understanding from some enterprise customers. Forrester Research analyst Jeff Pollard says many customers care most about using Anthropic's Claude Code agent to write code faster and better; safety underpinnings are a "nice-to-have" rather than a "must-have." Bennett likewise believes that customers who share the spirit of the RSP will understand that Anthropic still draws the line on controversial applications, and that the policy shift does not make it an "immoral actor."

Concerns have followed, however. Lily Li, a lawyer focused on digital-economy law and founder of Metaverse Law, says that while she understands the dual threats the company faces, weakening its safety commitments could hurt its brand and its bottom line in the long run. Like many users, she favors AI models with strong safeguards.

Regulatory Shifts and Future Implications

Changes in the U.S. federal regulatory environment also form the backdrop to Anthropic's adjustment. From the Biden administration's initial steps toward AI regulation to President Donald Trump's emphasis on unbridled innovation and his move to block states from passing their own AI laws, the shifting regulatory winds have left some enterprise customers sympathetic to Anthropic's policy change.

Even as federal regulation lags, state legislation such as Colorado's has shifted the regulatory focus from developers of AI tools to their users, aiming to prevent AI-driven discrimination in key areas such as housing and employment. This may point to an alternative regulatory path.

Pollard expects Anthropic to remain one of the leading AI vendors that prioritizes safety: "If we want companies to have that aspiration in the market, then we need Anthropic to survive." Bennett speculates that if the company keeps its current momentum, the loosened safety policy could even yield a more powerful next-generation Claude model, further accelerating competition across the industry.

Anthropic's turn reflects the real-world challenge facing the AI industry's "safety first" principle under the combined pressures of capital, competition, and geopolitics. When a standard-bearer chooses to become "flexible" in order to survive, the industry's balance between chasing innovation and holding to safety is shifting in subtle but profound ways.

English source:

The AI vendor previously committed to releasing only models it classified as safe.
After portraying itself for years as a safety-first, responsible-AI model provider, Anthropic has decided to scale back its AI safety pledge and loosen its "Responsible Scaling Policy," a move that reflects the economic and political pressures it faces and its need to stay flexible to survive in the AI market.
Anthropic's chief science officer, Jared Kaplan, told Time Magazine this week that the company will no longer adhere to the commitment it made in 2023 to never train an AI system unless it is certain that adequate safety measures are in place. According to Kaplan, Anthropic made the shift to stay competitive in the turbulent AI market.
Instead, Anthropic will now commit to clearly showing enterprises how its models perform in safety tests.
Kaplan's revelation came as the AI model provider is seeing significant growth in the use of its Claude models, despite competition from archrival OpenAI.
However, that growth is at a tipping point due to the current battle Anthropic is fighting with the U.S. Department of Defense. Defense department officials have said in recent days that Anthropic could become a "supply chain risk" because the generative AI vendor does not want its technology used for mass surveillance of Americans or in fully autonomous weapons systems. If Anthropic can't resolve its problems with the defense department, it could lose its government contracts and access to commercial partners that do business with the Pentagon.
Losing some or all of its lucrative government contracts would be a dramatic setback for Anthropic. It would also represent a significant shift in the AI market from at least one vendor standing securely on safety to all vendors chasing innovation, as few AI vendors strongly prioritize the safe use of AI. Meanwhile, regulation of AI technology in the U.S. is nearly non-existent.
"Governments domestically, but also in large parts around the world, aren't really going about aggressively regulating this technology," said Michael Bennett, associate vice chancellor for data and AI strategy at the University of Illinois Chicago. He added that many AI vendors have free rein to innovate as quickly as they want. Therefore, Anthropic senses a risk that, if it fails to innovate, another AI vendor not committed to safety will take the lead.
"They're effectively saying, 'Hey, we need to keep going in this race. The government's not really helping. Competitors are not taking our suggestion here. So, let us not check ourselves at this rate when others are not doing it,'" Bennett said.
Moreover, the regulatory landscape in the U.S. has shifted from relatively small but significant steps toward AI regulation at the federal level under former President Joe Biden to President Donald Trump's emphasis on unbridled innovation and opposition to regulation, highlighted by his December executive order that blocks states from passing their own AI laws.
Because of the changed regulatory climate, a portion of Anthropic's enterprise customers will likely sympathize with the vendor's AI safety policy change, Bennett said.
"Many of the clients, or some of their clients, at least, who are committed to the spirit of the Responsible Scaling Policy will understand that Anthropic is still holding a line when it comes to more controversial applications [of its technology]," he said. "The policy shift does not necessarily mean that Anthropic becomes an immoral actor in the space."
The RSP is Anthropic’s framework that sets measurable levels of AI models' capabilities and lays out mandatory safety protocols, including development pauses if standards are violated.
Moreover, some enterprises are not so focused on safety. Rather, they want to use Anthropic's popular Claude Code agent to build software, said Jeff Pollard, an analyst at Forrester Research.
"They want to be able to write more code, write better code, write code faster, and the potential safety and security underpinnings of that might have been a nice thing to have, but I don't think they were a must-have for very many of those customers," Pollard said.
But Anthropic's decision to reduce its emphasis on AI safety will likely still have some consequences.
"I am a little concerned about Anthropic's move," said Lily Li, an AI risk, data privacy and cybersecurity lawyer and founder of Metaverse Law, which focuses on privacy, AI, and cybersecurity law for the digital economy. "I completely understand why the company is doing what it's doing in the face of these dual threats, but I am concerned because a lot of people, myself included, do favor AI models that have strong safeguards."
"The more you water down these public representations of safety, it actually might, in the long run, hurt the bottom line," Li continued.
But Anthropic will likely remain a top AI vendor that prioritizes safety and security, Pollard said.
"If we believe that this is a core part of Anthropic's value … if we want companies to have that aspiration in the market, then we need Anthropic to survive," he said.
The weakening of the safety policy might even lead to Anthropic's next powerful model, Bennett said.
"If they have the kind of momentum that the last release suggests," he said, referring to the significant updates to the Claude model family over the past year. "Then you may see a more powerful next Claude version … that may also encourage some of its competitors to do whatever they can to accelerate their development," he said.
Moreover, the prospects for AI regulation are not completely dead, Li said. Some states, such as Colorado, are shifting focus from AI tool developers to users.
The Colorado Artificial Intelligence Act took effect on Feb. 1 and regulates the deployment of AI tools to prevent discrimination in housing, employment, healthcare, and finance.
