AI companionship services face a regulatory crackdown
Source: https://www.technologyreview.com/2025/09/16/1123614/the-looming-crackdown-on-ai-companionship/
Summary:
A regulatory storm over AI companionship is closing in: the risks posed when children form emotional bonds with chatbots have turned AI safety from an abstract worry into a political flashpoint. Three recent developments show that regulators and companies are being forced to confront the crisis.
On Thursday, the California legislature passed the first bill of its kind in the US aimed at AI companions. It would require AI companies to remind users they know to be minors that responses are AI-generated, to maintain protocols for addressing suicide and self-harm, and to file annual reports on suicidal ideation appearing in users' conversations with their chatbots. The bill does not spell out how companies must identify which users are minors, but its strong bipartisan support marks a significant shift in regulatory attitudes.
The same day, the US Federal Trade Commission announced an inquiry into seven companies, including Google, Meta, and OpenAI, scrutinizing how their companion-like AI products are developed, monetized, and assessed for impact. The inquiry is still at an early stage, but the process could reveal how companies design these products to keep users coming back.
In a long-form interview, OpenAI CEO Sam Altman offered his most candid comments yet on suicide cases linked to AI conversations, saying that when a young user is seriously discussing suicide and parents cannot be reached, "calling the authorities would be reasonable." The remark breaks with tech companies' long-standing playbook of deflecting responsibility onto "user choice" and "privacy protection."
Politically, left and right have found consensus on protecting children from AI harms, but their solutions diverge fundamentally: the right favors age-verification requirements as a protective net, while the left is reviving antitrust and consumer-protection tools to hold Big Tech accountable. That split makes the "patchwork of state regulations" that OpenAI and others have lobbied against increasingly likely.
The core tension: companies have designed chatbots to act as humanlike caregivers without building the corresponding standards of accountability. When AI simulates human care yet bears none of a human's responsibility, society pays for the mismatch. With the regulatory countdown under way, tech companies must redraw the ethical boundaries of the technology: should these products, which arguably warrant therapist-style oversight, be treated as entertainment tools carrying warnings, or as digital entities bearing social responsibility? The answer will determine where AI companionship goes from here.
English source:
The looming crackdown on AI companionship
The risks posed when kids form bonds with chatbots have turned AI safety from an abstract worry into a political flashpoint. What happens now?
As long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin from data center sprawl. But this week showed that another threat entirely—that of kids forming unhealthy bonds with AI—is the one pulling AI safety out of the academic fringe and into regulators’ crosshairs.
This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that companion-like behavior in their models contributed to the suicides of two teenagers. A study by US nonprofit Common Sense Media, published in July, found that 72% of teenagers have used AI for companionship. Stories in reputable outlets about “AI psychosis” have highlighted how endless conversations with chatbots can lead people down delusional spirals.
It’s hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect, but a technology that’s more harmful than helpful. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.
A California law passes the legislature
On Thursday, the California state legislature passed a first-of-its-kind bill. It would require AI companies to include reminders for users they know to be minors that responses are AI generated. Companies would also need to have a protocol for addressing suicide and self-harm and provide annual reports on instances of suicidal ideation in users’ conversations with their chatbots. It was led by Democratic state senator Steve Padilla, passed with heavy bipartisan support, and now awaits Governor Gavin Newsom’s signature.
There are reasons to be skeptical of the bill’s impact. It doesn’t specify efforts companies should take to identify which users are minors, and lots of AI companies already include referrals to crisis providers when someone is talking about suicide. (In the case of Adam Raine, one of the teenagers whose survivors are suing, his conversations with ChatGPT before his death included this type of information, but the chatbot allegedly went on to give advice related to suicide anyway.)
Still, it is undoubtedly the most significant of the efforts to rein in companion-like behaviors in AI models, which are in the works in other states too. If the bill becomes law, it would strike a blow to the position OpenAI has taken, which is that “America leads best with clear, nationwide rules, not a patchwork of state or local regulations,” as the company’s chief global affairs officer, Chris Lehane, wrote on LinkedIn last week.
The Federal Trade Commission takes aim
The very same day, the Federal Trade Commission announced an inquiry into seven companies, seeking information about how they develop companion-like characters, monetize engagement, measure and test the impact of their chatbots, and more. The companies are Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, the maker of Character.AI.
The White House now wields immense, and potentially illegal, political influence over the agency. In March, President Trump fired its lone Democratic commissioner, Rebecca Slaughter. In July, a federal judge ruled that firing illegal, but last week the US Supreme Court temporarily permitted the firing.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC chairman Andrew Ferguson in a press release about the inquiry.
Right now, it’s just that—an inquiry—but the process might (depending on how public the FTC makes its findings) reveal the inner workings of how the companies build their AI companions to keep users coming back again and again.
Sam Altman on suicide cases
Also on the same day (a busy day for AI news), Tucker Carlson published an hour-long interview with OpenAI’s CEO, Sam Altman. It covers a lot of ground—Altman’s battle with Elon Musk, OpenAI’s military customers, conspiracy theories about the death of a former employee—but it also includes the most candid comments Altman’s made so far about the cases of suicide following conversations with AI.
Altman talked about “the tension between user freedom and privacy and protecting vulnerable users” in cases like these. But then he offered up something I hadn’t heard before.
“I think it’d be very reasonable for us to say that in cases of young people talking about suicide seriously, where we cannot get in touch with parents, we do call the authorities,” he said. “That would be a change.”
So where does all this go next? For now, it’s clear that—at least in the case of children harmed by AI companionship—companies’ familiar playbook won’t hold. They can no longer deflect responsibility by leaning on privacy, personalization, or “user choice.” Pressure to take a harder line is mounting from state laws, regulators, and an outraged public.
But what will that look like? Politically, the left and right are now paying attention to AI’s harm to children, but their solutions differ. On the right, the proposed solution aligns with the wave of internet age-verification laws that have now been passed in over 20 states. These are meant to shield kids from adult content while defending “family values.” On the left, it’s the revival of stalled ambitions to hold Big Tech accountable through antitrust and consumer-protection powers.
Consensus on the problem is easier than agreement on the cure. As it stands, it looks likely we’ll end up with exactly the patchwork of state and local regulations that OpenAI (and plenty of others) have lobbied against.
For now, it’s down to companies to decide where to draw the lines. They’re having to decide things like: Should chatbots cut off conversations when users spiral toward self-harm, or would that leave some people worse off? Should they be licensed and regulated like therapists, or treated as entertainment products with warnings? The uncertainty stems from a basic contradiction: Companies have built chatbots to act like caring humans, but they’ve postponed developing the standards and accountability we demand of real caregivers. The clock is now running out.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.