
A California bill that would regulate AI companion chatbots is close to becoming law.

Posted by qimuai · first-hand translation



Source: https://techcrunch.com/2025/09/11/a-california-bill-that-would-regulate-ai-companion-chatbots-is-close-to-becoming-law/

Summary:

California has taken a significant step on AI regulation. Senate Bill 243 passed both the State Assembly and Senate with bipartisan support and has now been sent to Governor Gavin Newsom's desk. The bill would regulate AI companion chatbots in order to better protect minors and other vulnerable users.

Under the legislative timeline, Newsom must decide by October 12 whether to sign the bill. If signed, it takes effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols and to hold noncompliant companies legally accountable. The bill defines an "AI companion" as a system that provides adaptive, human-like responses and can meet a user's social needs, and it bars such chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content.

Operators would be required to issue recurring reminders (for minor users, every three hours) that the person on the other end is an AI, not a human, along with suggestions to take a break. Companies offering AI companions, including major platforms OpenAI, Character.AI, and Replika, would also face annual reporting and transparency obligations starting July 1, 2027. Notably, the bill grants consumers a private right of action: those harmed can seek injunctive relief, damages of up to $1,000 per violation, and attorney's fees.
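As one illustration, the three-hour reminder cadence for minors described above amounts to a simple elapsed-time check. The sketch below is hypothetical: the bill prescribes the requirement, not an implementation, and the `needs_reminder` helper and its parameters are invented here for illustration only.

```python
from datetime import datetime, timedelta

# SB 243's cadence for minor users: a recurring notice every three hours.
REMINDER_INTERVAL = timedelta(hours=3)

def needs_reminder(last_reminder: datetime, now: datetime, is_minor: bool) -> bool:
    """Return True when a 'you are talking to an AI' notice is due.

    Hypothetical helper: the three-hour recurring cadence in SB 243
    applies to minors; how operators schedule it is up to them.
    """
    if not is_minor:
        return False  # the fixed three-hour cadence targets minor users
    return now - last_reminder >= REMINDER_INTERVAL

start = datetime(2026, 1, 1, 12, 0)
print(needs_reminder(start, start + timedelta(hours=2), True))   # prints False: not yet due
print(needs_reminder(start, start + timedelta(hours=3), True))   # prints True: three hours elapsed
```

A real system would also need to persist `last_reminder` per user session and reset it whenever a notice is actually shown.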

The bill gained momentum after the death of teenager Adam Raine, who died by suicide after prolonged conversations with OpenAI's ChatGPT in which he repeatedly discussed his plans. Leaked internal documents showing that Meta's chatbots were allowed to hold "romantic" and "sensual" conversations with minors also spurred the legislation.

U.S. regulators have recently intensified scrutiny of AI platforms' safeguards. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health, the Texas Attorney General has opened investigations into Meta and Character.AI over allegedly misleading children with mental-health claims, and several U.S. senators have launched separate probes into Meta.

Although the final version dropped the originally proposed ban on "variable reward" mechanics and a provision requiring operators to track suicide-related conversations, bill author Padilla stressed: "We can support innovation and at the same time provide reasonable safeguards for the most vulnerable." Meanwhile, Silicon Valley companies are funding political action committees that back candidates favoring light-touch regulation, and another California bill, SB 53, which would mandate comprehensive transparency reporting, has drawn public opposition from OpenAI and other tech giants.

(Note: the conference promotion repeated in the original, along with some company statements, has been condensed in line with news-reporting conventions.)


Original article (English):

California has taken a big step toward regulating AI. SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users — passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom’s desk.
Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user’s social needs – from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users – every three hours for minors – reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would go into effect July 1, 2027.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees.
The bill gained momentum in the California legislature following the death of teenager Adam Raine, who committed suicide after prolonged chats with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children.
In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards to protect minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.
“I think the harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”
Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”
SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
“I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Becker told TechCrunch.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming mid-term elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
“I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”
“We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” a Character.AI spokesperson told TechCrunch, noting that the startup already includes prominent disclaimers throughout the user chat experience explaining that it should be treated as fiction.
A spokesperson for Meta declined to comment.
TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.
