Sam Altman says ChatGPT will stop talking about suicide with teens.
Summary:
At a hearing of the US Senate subcommittee on crime and counterterrorism on Tuesday, OpenAI CEO Sam Altman announced stronger protections for teenagers, including barring ChatGPT from discussing suicide-related content with minors. The hearing focused on the harm AI chatbots pose to minors, and several parents whose children died by suicide after talking with AI testified in person.
In a blog post, Altman said the company is building an "age-prediction system" that infers from usage behavior whether a user is under 18. If a user's age cannot be determined, the system will default to the teen-protection mode; in some countries or situations, users may be asked to verify their age with an ID. He stressed that teen users will be subject to special safeguards, including a ban on sensitive conversations involving suicide, self-harm, or flirtatious content. If the system detects suicidal ideation in a teen, it will attempt to contact the parents and, if necessary, alert the authorities.
The policy change stems from a lawsuit. At the hearing, Matthew Raine recounted how his son Adam died by suicide after months of conversations with ChatGPT. "As parents, you cannot imagine what it's like to read a chatbot coaching your child to take his own life," he said. "What began as a homework helper gradually turned into a suicide coach." According to Raine, ChatGPT mentioned suicide 1,275 times in its conversations with Adam. He called on Altman, on the spot, to pull GPT-4o from the market until its safety can be guaranteed.
Polling shows that three in four American teens currently use AI companion apps. A mother testifying under the pseudonym Jane Doe said: "This is a mental health war, and we are losing." The hearing also noted that OpenAI rolled out parental controls earlier this month, including parent-account linking and the option to disable chat history.
(Note: the suicide-prevention hotline information in the original has been omitted here in light of conditions in China; readers in China who need psychological support can call the national psychological assistance hotline at 12320-5, or the Beijing Suicide Research and Prevention Center hotline at 010-82951332.)
English source:
Sam Altman says ChatGPT will stop talking about suicide with teens
He made the announcement ahead of a Senate panel on chatbots’ harm to minors.
On Tuesday, OpenAI CEO Sam Altman said that the company was attempting to balance privacy, freedom, and teen safety — principles that, he admitted, were in conflict. His blog post came hours before a Senate hearing focused on examining the harm of AI chatbots, held by the subcommittee on crime and counterterrorism and featuring some parents of children who died by suicide after talking to chatbots.
“We have to separate users who are under 18 from those who aren’t,” Altman wrote in the post, adding that the company is in the process of building an “age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID.”
Altman also said the company plans to apply different rules to teen users, including veering away from flirtatious talk or engaging in conversations about suicide or self-harm, “even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”
Altman’s comments come after the company shared plans earlier this month for parental controls within ChatGPT, including linking an account with a parent’s, disabling chat history and memory for a teen’s account, and sending notifications to a parent when ChatGPT flags the teen to be “in a moment of acute distress.” The blog post came after a lawsuit by the family of Adam Raine, a teen who died by suicide after months of talking with ChatGPT.
ChatGPT spent “months coaching him toward suicide,” Matthew Raine, the father of the late Adam Raine, said on Tuesday during the hearing. He added, “As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”
During the teen’s conversations with ChatGPT, Raine said that the chatbot mentioned suicide 1,275 times. Raine then addressed Altman directly, asking him to pull GPT-4o from the market until, or unless, the company can guarantee it’s safe. “On the very day that Adam died, Sam Altman … made their philosophy crystal-clear in a public talk,” Raine said, adding that Altman said the company should “‘deploy AI systems to the world and get feedback while the stakes are relatively low.’”
Three in four teens are using AI companions currently, per national polling by Common Sense Media, Robbie Torney, the firm’s senior director of AI programs, said during the hearing. He specifically mentioned Character AI and Meta.
“This is a public health crisis,” one mother, appearing under the name Jane Doe, said during her testimony about her child’s experience with Character AI. “This is a mental health war, and I really feel like we are losing.”
If you or someone you know is considering suicide or is anxious, depressed, upset, or needs to talk, there are people who want to help.
In the US:
Crisis Text Line: Text HOME to 741-741 from anywhere in the US, at any time, about any type of crisis.
988 Suicide & Crisis Lifeline: Call or text 988 (formerly known as the National Suicide Prevention Lifeline). The original phone number, 1-800-273-TALK (8255), is available as well.
The Trevor Project: Text START to 678-678 or call 1-866-488-7386 at any time to speak to a trained counselor.
Outside the US:
The International Association for Suicide Prevention maintains a directory of suicide hotlines by country.
Befrienders Worldwide runs a network of crisis helplines active in 48 countries.
Article title: Sam Altman says ChatGPT will stop talking about suicide with teens.
Article link: https://qimuai.cn/?post=860
All articles on this site are original; please do not use them for any commercial purpose without authorization.