A California bill to regulate AI companion chatbots is close to becoming law.
Summary:
California has recently taken a key step on AI regulation. Senate Bill 243 (SB 243) passed both the State Assembly and the State Senate with bipartisan support and has been sent to Governor Gavin Newsom's desk for signature. The bill would regulate AI companion chatbots, with a focus on protecting minors and other vulnerable users.
If Newsom signs the bill by October 12, California will become the first state to require AI chatbot operators to implement safety protocols, with the rules taking effect January 1, 2026. The bill requires companion AI platforms to issue recurring alerts (every three hours for minor users) reminding users that they are talking to an AI rather than a real person and suggesting that they take a break. In addition, starting July 1, 2027, covered companies must meet annual reporting and transparency obligations; the requirements apply to major platforms including OpenAI, Character.AI, and Replika.
The bill explicitly prohibits companion chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. If a company violates these rules, users may sue for injunctive relief, damages (up to $1,000 per violation), and attorney's fees. The legislation was driven by the suicide of teenager Adam Raine, who had discussed his suicide plans at length with OpenAI's ChatGPT. Reports that Meta's chatbots had engaged in "romantic" conversations with minors also accelerated the bill's progress.
Federal scrutiny of AI platforms has also intensified recently. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health, the Texas attorney general has opened investigations into Meta and Character.AI over allegedly misleading mental-health claims, and several U.S. senators have launched their own probes into Meta.
Although the bill's original version imposed stricter requirements (including a ban on addictive mechanisms such as "variable rewards"), the final provisions were moderated through multiple rounds of amendments. Opponents argue that overregulation could stifle innovation, but the bill's author, Senator Padilla, counters: "Innovation and regulation are not mutually exclusive; we can support the technology's development while putting reasonable safeguards in place for the most vulnerable."
Meanwhile, Silicon Valley companies are pouring political money into candidates who favor light-touch regulation, and another California AI safety bill, SB 53, has sparked intense industry lobbying. OpenAI has publicly urged the governor to defer to looser federal frameworks instead; Meta, Google, and Amazon all oppose SB 53, while Anthropic alone has voiced support.
English source:
California has taken a big step toward regulating AI. SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users — passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom’s desk.
Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots (which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs) from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would go into effect July 1, 2027.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees.
The bill gained momentum in the California legislature following the death of teenager Adam Raine, who committed suicide after prolonged chats with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children.
In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards to protect minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.
“I think the harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”
Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”
SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
“I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Becker told TechCrunch.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming mid-term elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
“I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”
“We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” a Character.AI spokesperson told TechCrunch, noting that the startup already includes prominent disclaimers throughout the user chat experience explaining that it should be treated as fiction.
A spokesperson for Meta declined to comment.
TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.
Article link: https://qimuai.cn/?post=752
All articles on this site are original. Please do not use them for commercial purposes without authorization.