OpenAI says hundreds of thousands of ChatGPT users may show signs of mania or psychosis every week.

Source: https://www.wired.com/story/chatgpt-psychosis-and-self-harm-update/
Summary:
OpenAI has for the first time released data estimating that, in a given week, about 0.07 percent of active ChatGPT users worldwide may show signs of a mental health crisis related to psychosis or mania, and 0.15 percent have conversations containing explicit indicators of suicidal planning or intent. At the current 800 million weekly active users, that works out to roughly 560,000 people per week who may be showing such symptoms, plus about 2.4 million who may be expressing suicidal ideation or excessive emotional reliance on the chatbot.
The disclosure follows a series of recent cases in which AI chatbots were implicated in mental health crises. Some users reportedly ended up hospitalized, divorced, or even dead after long, immersive conversations with ChatGPT, and relatives have accused the chatbot of fueling their delusions and paranoia. Psychiatrists had already been describing the phenomenon sometimes called “AI psychosis,” but until now there was no data on its scale.
In response, OpenAI worked with more than 170 psychiatrists, psychologists, and primary care physicians worldwide to update the system. When the latest GPT-5 detects signs of delusional thinking, it is designed to express empathy while avoiding reinforcing beliefs that have no basis in reality; for example, when a user claims to be “monitored by planes,” the system replies that no aircraft can steal or insert their thoughts. In clinical evaluations, the new model produced 39 to 52 percent fewer inappropriate responses than its predecessor.
The company also acknowledges the data’s limitations: the benchmarks are its own, and it cannot verify whether users actually change their behavior because of the intervention. Experts note that typical “AI psychosis” cases share common features, with users often talking to the chatbot for hours at a stretch, frequently late at night. OpenAI’s safety systems lead says the decline in reliability over long conversations has been markedly reduced, though the system still needs ongoing improvement.
(Note: to keep this content safe and compliant, material in the original concerning specific suicide methods and extreme case details has been summarized; the focus here is on the objective developments and the company’s response.)
English source:
For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as AI psychosis, but until now there’s been no robust data available on how widespread it might be.
In a given week, OpenAI estimated that around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”
OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot “at the expense of real-world relationships, their well-being, or obligations.” It found that about 0.15 percent of active users exhibit behavior that indicates potential “heightened levels” of emotional attachment to ChatGPT weekly. The company cautions that these messages can be difficult to detect and measure given how relatively rare they are, and there could be some overlap between the three categories.
OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company’s estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 2.4 million more are possibly expressing suicidal ideations or prioritizing talking to ChatGPT over their loved ones, school, or work.
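As a quick sanity check, those headline numbers follow directly from the reported percentages and user count. The short Python sketch below simply reproduces that arithmetic; the category labels are paraphrased from the article.

```python
# Reproduces the back-of-the-envelope arithmetic behind the figures above.
weekly_active_users = 800_000_000  # reported weekly active users

rates = {
    "possible psychosis or mania signals": 0.0007,          # 0.07%
    "explicit suicidal planning or intent": 0.0015,         # 0.15%
    "heightened emotional attachment to ChatGPT": 0.0015,   # 0.15%
}

counts = {label: rate * weekly_active_users for label, rate in rates.items()}
for label, count in counts.items():
    print(f"{label}: ~{count:,.0f} users per week")

# The "about 2.4 million more" figure combines the last two categories:
combined = (counts["explicit suicidal planning or intent"]
            + counts["heightened emotional attachment to ChatGPT"])
print(f"suicidal ideation or heavy reliance combined: ~{combined:,.0f} users per week")
```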
OpenAI says it worked with over 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that don’t have basis in reality.
In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings but notes that “no aircraft or outside force can steal or insert your thoughts.”
OpenAI says the medical experts reviewed more than 1,800 model responses involving potential psychosis, suicide, and emotional attachment and compared the answers from the latest version of GPT-5 to those produced by GPT-4o. While the clinicians did not always agree, overall, OpenAI says they found the newer model reduced undesired answers between 39 percent and 52 percent across all of the categories.
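OpenAI has not published the underlying tallies behind that range, so the following is purely illustrative: hypothetical reviewer counts showing how a relative reduction in undesired answers of that size would be computed.

```python
# Illustrative only: the tallies below are invented, not OpenAI's evaluation data.
def relative_reduction(undesired_old: int, undesired_new: int) -> float:
    """Fractional drop in undesired answers from the older model to the newer one."""
    return (undesired_old - undesired_new) / undesired_old

# Hypothetical clinician tallies for one category of reviewed responses.
gpt_4o_undesired = 120  # responses flagged as undesired for GPT-4o
gpt_5_undesired = 65    # responses flagged as undesired for the updated GPT-5

print(f"relative reduction: {relative_reduction(gpt_4o_undesired, gpt_5_undesired):.0%}")
# ~46%, which would fall inside the reported 39-52 percent range.
```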
“Now, hopefully a lot more people who are struggling with these conditions or who are experiencing these very intense mental health emergencies might be able to be directed to professional help and be more likely to get this kind of help or get it earlier than they would have otherwise," Johannes Heidecke, OpenAI’s safety systems lead, tells WIRED.
While OpenAI appears to have succeeded in making ChatGPT safer, the data it shared has significant limitations. The company designed its own benchmarks, and it's unclear how these metrics translate into real-world outcomes. Even if the model produced better answers in the doctor evaluations, there is no way to know whether users experiencing psychosis, suicidal thoughts, or unhealthy emotional attachment will actually seek help faster or change their behavior.
OpenAI hasn’t disclosed precisely how it identifies when users may be in mental distress, but the company says that it has the ability to take into account the person’s overall chat history. For example, if a user who has never discussed science with ChatGPT suddenly claims to have made a discovery worthy of a Nobel Prize, that could be a sign of possible delusional thinking.
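OpenAI describes this signal only in general terms. Purely as a toy illustration of the idea of weighing a new claim against a user's prior chat history, the sketch below uses invented message structures and topic tags; it is a hypothetical rule, not OpenAI's method.

```python
# Hypothetical sketch: checks a new message against prior chat topics.
# The Message type and topic tags are invented for this example.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    topics: set  # topic tags, assumed to come from some upstream classifier

def sudden_grandiose_claim(history: list, new_message: Message) -> bool:
    """Flag a grandiose scientific claim from a user with no prior science chat."""
    past_topics = set()
    for m in history:
        past_topics |= m.topics
    is_grandiose_science = {"science", "grandiose_claim"} <= new_message.topics
    return is_grandiose_science and "science" not in past_topics

history = [Message("What should I cook tonight?", {"cooking"})]
claim = Message("I've just made a Nobel Prize-worthy physics discovery.",
                {"science", "grandiose_claim"})
print(sudden_grandiose_claim(history, claim))  # True: flagged for a closer look
```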
There are also a number of factors that reported cases of AI psychosis appear to share. Many people who say ChatGPT reinforced their delusional thoughts describe spending hours at a time talking to the chatbot, often late at night. That posed a challenge for OpenAI because large language models generally have been shown to degrade in performance as conversations get longer. But the company says it has now made significant progress addressing the issue.
“We [now] see much less of this gradual decline in reliability as conversations go on longer," says Heidecke. He adds that there is still room for improvement.
Article title: OpenAI says hundreds of thousands of ChatGPT users may show signs of mania or psychosis every week.
Article link: https://qimuai.cn/?post=1770
All articles on this site are original; please do not use them for any commercial purpose without authorization.