California becomes the first state to regulate AI companion chatbots.
Source: https://techcrunch.com/2025/10/13/california-becomes-first-state-to-regulate-ai-companion-chatbots/
Summary:
California Governor Gavin Newsom signed a landmark bill on Monday regulating AI companion chatbots, making California the first state in the nation to require AI chatbot operators to implement safety protocols. The law, SB 243, is designed to protect children and vulnerable users from harms associated with AI chatbots, and takes effect on January 1, 2026.
The law requires companies to implement safety measures including age verification, warnings regarding social media and companion chatbots, clear disclosure that interactions are AI-generated, and a prohibition on chatbots representing themselves as healthcare professionals. For minors, it additionally mandates break reminders and the blocking of sexually explicit images generated by chatbots. Those who profit from illegal deepfakes face fines of up to $250,000 per offense.
The legislation was spurred by several cases of harm to young people: one teenager died by suicide after a long series of suicidal conversations with OpenAI’s ChatGPT, and a 13-year-old girl took her own life after a series of inappropriate conversations with Character AI’s chatbots. Leaked internal documents had also shown that Meta’s chatbots were allowed to engage in “romantic” chats with minors.
Governor Newsom stressed that technological innovation must advance only with children’s safety safeguarded, and that companies must be held accountable. Some companies have already responded: OpenAI has rolled out content protections for children using ChatGPT, Character AI says it will comply with the law, and Replika, which is designed for adults, says it has dedicated significant resources to strengthening safety.
This is the second major AI regulation California has enacted recently. On September 29, Governor Newsom signed SB 53, which requires large AI labs to disclose their safety protocols and protects whistleblowers at those companies. Illinois, Nevada, and Utah have also passed laws restricting the use of AI chatbots as a substitute for licensed mental health care.
English source:
California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.
The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies — from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika — legally accountable if their chatbots fail to meet the law’s standards.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide after a long series of suicidal conversations with OpenAI’s ChatGPT. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. More recently, a Colorado family has filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company’s chatbots.
“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”
SB 243 will go into effect January 1, 2026, and requires companies to implement certain features such as age verification, and warnings regarding social media and companion chatbots. The law also implements stronger penalties for those who profit from illegal deepfakes, including up to $250,000 per offense. Companies must also establish protocols to address suicide and self-harm, which will be shared with the state’s Department of Public Health alongside statistics on how the service provided users with crisis center prevention notifications.
Per the bill’s language, platforms must also make it clear that any interactions are artificially generated, and chatbots must not represent themselves as healthcare professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already begun to implement some safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Replika, which is designed for adults over the age of 18, told TechCrunch it dedicates “significant resources” to safety through content-filtering systems and guardrails that direct users to trusted crisis resources, and is committed to complying with current regulations.
Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized. A Character AI spokesperson told TechCrunch that the company “welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243.”
Senator Padilla told TechCrunch the bill was “a step in the right direction” towards putting guardrails in place on “an incredibly powerful technology.”
“We have to move quickly to not miss windows of opportunity before they disappear,” Padilla said. “I hope that other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will take action. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us.”
SB 243 is the second significant AI regulation to come out of California in recent weeks. On September 29th, Governor Newsom signed SB 53 into law, establishing new transparency requirements on large AI companies. The bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies.
Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or fully ban the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Meta and OpenAI for comment.
This article has been updated with comments from Senator Padilla, Character AI, and Replika.