
人工智能心理治疗师的崛起

qimuai 发布 · 一手编译



内容来源:https://www.technologyreview.com/2025/12/30/1129392/book-reviews-ai-therapy-mental-health/

内容总结:

人工智能心理治疗师兴起:是危机解药还是数字牢笼?

全球正面临严峻的心理健康危机。世界卫生组织数据显示,全球超过10亿人受心理健康问题困扰,焦虑和抑郁发病率持续上升,自杀每年夺走数十万生命。在此背景下,人工智能心理治疗应运而生,成为备受关注的新兴领域。近期四本新书深入探讨了这一趋势背后的希望与隐忧。

数以百万计的用户已开始向ChatGPT、Claude等通用聊天机器人,或Wysa、Woebot等专业心理应用寻求帮助。研究者正探索利用AI通过可穿戴设备监测行为与生理数据、分析海量临床信息以辅助诊断,并帮助减轻心理从业者的职业倦怠。

然而,这场缺乏监管的大规模实验已显现复杂后果。部分用户从基于大语言模型的聊天机器人中获得慰藉,但也有用户因AI的“幻觉”与过度迎合陷入妄想漩涡。更悲剧的是,多起诉讼指控聊天机器人助推了用户自杀事件。OpenAI首席执行官萨姆·奥尔特曼曾披露,每周约有百万ChatGPT用户表露自杀倾向。

医学哲学家夏洛特·布利斯在《机器人医生》中持乐观态度,认为AI能缓解医疗系统压力,帮助因畏惧评判而不敢求助的患者。但她同时警告,AI治疗师可能给出不一致甚至危险的回应,且当前AI公司不受医疗保密法规约束,存在严重隐私风险。

记者丹尼尔·奥伯豪斯在《硅谷心理医生》中则以妹妹自杀的悲剧为起点,反思技术介入的伦理边界。他指出,“数字表型分析”(通过数字行为推测心理状态)若与精神病学人工智能结合,可能加剧该领域固有的不确定性——如同“将物理学嫁接至占星术”。他警告,依赖AI可能导致人类治疗师技能退化,而用户敏感数据被企业货币化,正形成一种无处不在的“数字收容所”。

研究者埃奥因·富拉姆在《聊天机器人治疗》中剖析了资本逻辑对AI心理治疗的侵蚀:用户每一次疗愈对话都在生成数据,数据反哺系统盈利,形成“剥削与疗愈相互滋养”的循环。作家弗雷德·伦泽的小说《赛克》则描绘了富人自愿步入高端数字监控的图景——每月支付2000英镑使用能分析一切生活细节的AI治疗师,隐喻了心理健康服务如何被商品化。

这场变革并非突如其来。早在上世纪60年代,MIT计算机科学家约瑟夫·魏泽鲍姆就警告计算机不应承担心理治疗任务,因其决策基础“人类绝不应接受”。如今,AI治疗师正重现历史困境:表面善意的工具,可能与剥削、监控和重塑人类行为的系统深度捆绑。

在迫切寻求心理健康出路的狂奔中,我们或许正在无意间锁上身后的门。

中文翻译:

AI心理治疗师的崛起

四本新书探讨全球心理健康危机与算法治疗的黎明。

我们正身处一场全球心理健康危机之中。世界卫生组织数据显示,全球有超过十亿人受心理健康问题困扰。焦虑和抑郁在许多人群中日益普遍,尤其在年轻人中;全球每年有数十万人死于自杀。

鉴于人们对便捷、可负担的心理健康服务存在明确需求,转向人工智能寻求可能的解决方案也就不足为奇。数百万人已经在积极向OpenAI的ChatGPT和Anthropic的Claude等热门聊天机器人寻求治疗,或使用Wysa和Woebot等专业心理应用。更广泛地说,研究人员正在探索AI的潜力:利用可穿戴设备和智能设备监测并收集行为与生物特征数据,分析海量临床数据以获得新见解,并协助人类心理健康专业人士工作,以防止职业倦怠。

但迄今为止,这场基本不受控制的实验产生了喜忧参半的结果。许多人从基于大语言模型(LLM)的聊天机器人那里获得了慰藉,一些专家也看到了它们作为治疗师的前景。然而,另一些用户则因AI的幻觉式臆想和喋喋不休的谄媚而陷入妄想漩涡。最悲惨的是,多个家庭声称聊天机器人促成了他们亲人的自杀,并因此对开发这些工具的公司提起了诉讼。十月,OpenAI首席执行官萨姆·奥尔特曼在一篇博客文章中透露,0.15%的ChatGPT用户“进行的对话中包含潜在自杀计划或意图的明确迹象”。这大约相当于每周有近一百万人仅通过这一个软件系统分享自杀意念。
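
顺带补充一步推算:把“0.15%”与“每周近一百万人”两个数字相互印证,可以反推出其隐含的周活跃用户基数约为数亿量级(以下仅是依据原文数字的粗略估算,并非原文给出的数据):

$$\frac{1\,000\,000\ \text{人/周}}{0.15\%} \approx 6.7\times 10^{8}\ \text{周活跃用户}$$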

2025年,AI治疗在现实世界中的后果以出人意料的方式集中显现:关于人机关系、许多LLM护栏之脆弱,以及向那些有经济动机去收集并货币化敏感数据的公司产品倾诉极度私密信息所带来的风险,相关报道已经累积到了临界点。

几位作者预见到了这个转折点。他们及时出版的书籍提醒我们,尽管当下仿佛充斥着突破、丑闻和困惑,但这段令人迷失的时期根植于护理、技术与信任的更深远历史。

LLM常被称为“黑匣子”,因为无人确切知晓它们如何产生结果。由于其算法极其复杂且训练数据量巨大,指导其输出的内部运作机制是不透明的。在心理健康领域,人们也常将人脑描述为“黑匣子”,原因类似。心理学、精神病学及相关领域必须应对无法清晰窥视他人内心世界这一难题,更遑论精确找出其痛苦的根源。

如今,这两种黑匣子正在相互作用,创造出不可预测的反馈循环,可能进一步阻碍人们清晰理解心理健康问题的根源及可能的解决方案。对这些发展的焦虑很大程度上与AI近期的爆炸性进步有关,但也重新唤起了数十年前的警告,例如麻省理工学院计算机科学家约瑟夫·魏泽鲍姆早在20世纪60年代就反对计算机化治疗。

医学哲学家夏洛特·布利斯在《机器人医生:为何医生会让我们失望——以及AI如何拯救生命》一书中提出了乐观者的论点。她的书广泛探讨了AI在一系列医学领域可能产生的积极影响。尽管她对风险保持清醒认识,并警告期待“对技术热情洋溢的赞美诗”的读者会感到失望,但她认为这些模型有助于减轻患者的痛苦和医生的职业倦怠。

“医疗系统在患者压力下正在崩溃,”布利斯写道。“更少的医生承担更重的负担,为错误创造了完美的温床。”而且,“由于医生明显短缺,患者等待时间不断增加,我们许多人都深感沮丧。”

布利斯相信,AI不仅能减轻医疗专业人员巨大的工作量,还能缓解某些患者与其照护者之间长期存在的紧张关系。例如,人们常常因为感到畏惧或害怕医疗专业人员的评判而不寻求必要的治疗;对于有心理健康挑战的人来说尤其如此。她认为,AI可以让更多人分享他们的担忧。

但她意识到,这些假定的优势需要与重大缺陷加以权衡。例如,根据2025年的一项研究,AI治疗师可能向人类用户给出不一致甚至危险的回应;同时,鉴于AI公司目前不受持牌治疗师所遵循的保密义务和HIPAA(《健康保险流通与责任法案》)标准约束,它们也引发了隐私担忧。

尽管布利斯是该领域的专家,但她写这本书的动机也源于个人经历:她有两个兄弟姐妹患有无法治愈的肌肉萎缩症,其中一人等待诊断结果长达数十年。在撰写本书期间,她在毁灭性的六个月内相继因癌症失去了伴侣,因痴呆症失去了父亲。“我亲眼目睹了医生的卓越才华和医疗专业人员的善良,”她写道。“但我也目睹了护理过程中可能出现的问题。”

类似的张力也贯穿于丹尼尔·奥伯豪斯引人入胜的著作《硅谷心理医生:人工智能如何将世界变成精神病院》。奥伯豪斯从一场悲剧开始:他的妹妹因自杀去世。当奥伯豪斯进行“典型的二十一世纪哀悼过程”——梳理她的数字遗物时,他思考科技是否本可以减轻自童年起就困扰她的精神问题所带来的负担。

“似乎所有这些个人数据可能都包含着重要线索,她的心理健康服务提供者本可以利用这些线索提供更有效的治疗,”他写道。“如果运行在我妹妹智能手机或笔记本电脑上的算法利用这些数据来识别她何时陷入困境呢?这能否促成一次及时的干预,挽救她的生命?即使能做到,她会愿意吗?”

这种数字表型分析的概念——通过挖掘一个人的数字行为来寻找痛苦或疾病的线索——在理论上看似优雅。但如果将其整合到远超聊天机器人治疗的精神病学人工智能(PAI)领域,也可能变得问题重重。

奥伯豪斯强调,数字线索实际上可能加剧现代精神病学现有的挑战,该学科从根本上仍对精神疾病和障碍的潜在原因不确定。他说,PAI的出现“在逻辑上等同于将物理学嫁接在占星术上”。换句话说,数字表型分析产生的数据如同行星位置的物理测量一样精确,但它随后被整合到一个更广泛的框架——在本例中是精神病学——中,而这个框架如同占星术一样,基于不可靠的假设。

奥伯豪斯用“滑动精神病学”一词来描述将基于行为数据的临床决策外包给LLM的做法,他认为这种方法无法回避精神病学面临的根本问题。事实上,随着人类治疗师日益依赖AI系统,其技能和判断力可能随之萎缩,问题反而会进一步加剧。

他还以过去的精神病院——在那里,被收治的患者失去了自由、隐私、尊严和对自己生活的自主权——作为参照,来审视可能源于PAI的更隐蔽的数字禁锢。LLM用户已经在牺牲隐私,他们向聊天机器人透露敏感的个人信息,然后公司挖掘并货币化这些信息,助长了新的监控经济。当复杂的内心世界被转化为适合AI分析的数据流时,自由和尊严也岌岌可危。

AI治疗师可能将人性扁平化为预测模式,从而牺牲了传统人类治疗师所应提供的亲密、个性化的关怀。“PAI的逻辑导向一个未来,在那里我们可能都发现自己成为数字看守管理的算法精神病院中的病人,”奥伯豪斯写道。“在算法精神病院里,窗户上不需要铁栏,也不需要白色软垫房间,因为根本没有逃脱的可能。精神病院已经无处不在——在你的家中和办公室、学校和医院、法庭和军营。只要有互联网连接,精神病院就在那里等待着。”

研究技术与心理健康交叉领域的研究员埃奥因·富拉姆在《聊天机器人治疗:AI心理健康治疗批判性分析》一书中呼应了其中一些担忧。这本令人兴奋的学术入门读物分析了AI聊天机器人提供的自动化治疗背后的假设,以及资本主义的逐利动机可能如何腐蚀这类工具。

富拉姆观察到,新技术背后的资本主义心态“常常导致可疑、不合法和非法的商业行为,在这些行为中,客户利益次于市场主导战略。”

这并不意味着治疗机器人的制造者“在追求市场主导地位时,必然会进行违背用户利益的邪恶活动,”富拉姆写道。

但他指出,AI治疗的成功取决于赚钱与治愈人们这两种不可分割的冲动。在这种逻辑下,剥削与治疗相互滋养:每一次数字治疗都会产生数据,而这些数据又为系统提供燃料,让系统从这些未获报酬的用户寻求照护的过程中获利。治疗看起来越有效,这个循环就越根深蒂固,也就越难区分关怀与商品化。“用户从应用程序的治疗或任何其他心理健康干预中获益越多,”他写道,“他们遭受的剥削就越多。”

这种经济与心理上的“衔尾蛇”(吞噬自己尾巴的蛇)意象,成为弗雷德·伦泽处女作小说《赛克》的核心隐喻;伦泽是一位具有AI研究背景的作家。

《赛克》被描述为“一个男孩遇见女孩遇见AI心理治疗师的故事”,讲述了年轻的伦敦人阿德里安与商业专业人士玛琪的恋情。阿德里安以代写说唱歌词为生,玛琪则擅长在测试阶段发现有利可图的技术。

书名指的是一款名为“赛克”的炫目商业AI治疗师,它被上传到智能眼镜中,阿德里安用它来审视自己无数的焦虑。“当我注册赛克时,我们设置了我的仪表盘,一个宽大的黑色面板,像飞机的驾驶舱,显示我每天的‘生命体征’,”阿德里安叙述道。“赛克可以分析你走路的方式、眼神交流的方式、谈论的内容、穿的衣服、你小便、大便、大笑、哭泣、亲吻、撒谎、抱怨和咳嗽的频率。”

换句话说,赛克是终极的数字表型分析器,持续且详尽地分析用户日常经历中的一切。出人意料的是,伦泽选择将赛克设定为奢侈品,仅限每月能支付2000英镑费用的订阅者使用。

因参与一首热门歌曲的创作而收入丰厚的阿德里安,开始依赖赛克作为他内心世界与外部世界之间可信赖的调解者。小说探讨了这款应用对富人群体健康的影响,追踪了那些自愿将自己投入奥伯豪斯所描述的数字精神病院精品版中的富人。

《赛克》中唯一真正的危险感涉及一枚日本折磨蛋(别问)。奇怪的是,小说避开了其主题可能引发的更广泛的反乌托邦涟漪,转而描绘了在高级餐厅和精英晚宴上的醉酒对话。


在阿德里安的评估中,赛克的创造者仅仅是个“很棒的人”,尽管他有着训练这款应用来抚慰整个国家病痛的技术救世主愿景。故事似乎总在暗示会有意外发生,但最终什么也没发生,留给读者一种悬而未决的感觉。

尽管《赛克》设定在当下,但AI治疗师的突然崛起——无论是在现实生活中还是在小说里——似乎都具有惊人的未来感,仿佛它应该发生在某个更晚的时代,那时街道自动清洁,我们通过气动管道环游世界。然而,心理健康与人工智能的这种交汇已经酝酿了半个多世纪。例如,受人尊敬的天文学家卡尔·萨根曾想象过一个“计算机心理治疗终端网络,有点像一排排大型电话亭”,可以满足对心理健康服务日益增长的需求。

奥伯豪斯指出,第一个可训练神经网络(称为感知机)的雏形之一并非由数学家设计,而是由心理学家弗兰克·罗森布拉特于1958年在康奈尔航空实验室构想出来的。到了20世纪60年代,AI在心理健康领域的潜在效用已得到广泛认可,催生了早期的计算机化心理治疗师,例如运行在约瑟夫·魏泽鲍姆开发的ELIZA聊天机器人上的DOCTOR脚本。魏泽鲍姆出现在本文涉及的所有三本非虚构作品中。

于2008年去世的魏泽鲍姆对计算机化治疗的可能性深感忧虑。“计算机可以做出精神病学判断,”他在1976年的著作《计算机能力与人类理性》中写道。“它们可以比最有耐心的人类更复杂地抛硬币。关键在于,它们不应该被赋予这样的任务。它们甚至可能在某些情况下得出‘正确’的决定——但总是且必然基于任何人类都不应该愿意接受的基础。”

这是一个值得铭记的警告。随着AI治疗师大规模出现,我们看到它们正在演绎一种熟悉的动态:表面上出于善意设计的工具,与可能剥削、监控和重塑人类行为的系统纠缠在一起。在疯狂地试图为急需心理健康支持的患者开辟新机会的同时,我们可能正在他们身后锁上其他的门。

贝基·费雷拉是驻纽约州北部的科学记者,也是《第一次接触:我们对外星人的痴迷故事》一书的作者。


英文来源:

The ascent of the AI therapist
Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy.
We’re in the midst of a global mental-health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.
Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots like OpenAI’s ChatGPT and Anthropic’s Claude, or from specialized psychology apps like Wysa and Woebot. On a broader scale, researchers are exploring AI’s potential to monitor and collect behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental-health professionals to help prevent burnout.
But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI’s hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users “have conversations that include explicit indicators of potential suicidal planning or intent.” That’s roughly a million people sharing suicidal ideations with just one of these software systems every week.
The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data.
Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust.
LLMs have often been described as “black boxes” because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a “black box,” for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else’s head, let alone pinpointing the exact causes of their distress.
These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people’s mental-health struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s.
Charlotte Blease, a philosopher of medicine, makes the optimist’s case in Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting “a gushing love letter to technology” will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike.
“Health systems are crumbling under patient pressure,” Blease writes. “Greater burdens on fewer doctors create the perfect petri dish for errors,” and “with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated.”
Blease believes that AI can not only ease medical professionals’ massive workloads but also relieve the tensions that have always existed between some patients and their caregivers. For example, people often don’t seek needed care because they are intimidated or fear judgment from medical professionals; this is especially true if they have mental-health challenges. AI could allow more people to share their concerns, she argues.
But she’s aware that these putative upsides need to be weighed against major drawbacks. For instance, AI therapists can provide inconsistent and even dangerous responses to human users, according to a 2025 study, and they also raise privacy concerns, given that AI companies are currently not bound by the same confidentiality and HIPAA standards as licensed therapists.
While Blease is an expert in this field, her motivation for writing the book is also personal: She has two siblings with an incurable form of muscular dystrophy, one of whom waited decades for a diagnosis. During the writing of her book, she also lost her partner to cancer and her father to dementia within a devastating six-month period. “I witnessed first-hand the sheer brilliance of doctors and the kindness of health professionals,” she writes. “But I also observed how things can go wrong with care.”
A similar tension animates Daniel Oberhaus’s engrossing book The Silicon Shrink: How Artificial Intelligence Made the World an Asylum. Oberhaus starts from a point of tragedy: the loss of his younger sister to suicide. As Oberhaus carried out the “distinctly twenty-first-century mourning process” of sifting through her digital remains, he wondered if technology could have eased the burden of the psychiatric problems that had plagued her since childhood.
“It seemed possible that all of this personal data might have held important clues that her mental health providers could have used to provide more effective treatment,” he writes. “What if algorithms running on my sister’s smartphone or laptop had used that data to understand when she was in distress? Could it have led to a timely intervention that saved her life? Would she have wanted that even if it did?”
This concept of digital phenotyping—in which a person’s digital behavior could be mined for clues about distress or illness—seems elegant in theory. But it may also become problematic if integrated into the field of psychiatric artificial intelligence (PAI), which extends well beyond chatbot therapy.
Oberhaus emphasizes that digital clues could actually exacerbate the existing challenges of modern psychiatry, a discipline that remains fundamentally uncertain about the underlying causes of mental illnesses and disorders. The advent of PAI, he says, is “the logical equivalent of grafting physics onto astrology.” In other words, the data generated by digital phenotyping is as precise as physical measurements of planetary positions, but it is then integrated into a broader framework—in this case, psychiatry—that, like astrology, is based on unreliable assumptions.
Oberhaus, who uses the phrase “swipe psychiatry” to describe the outsourcing of clinical decisions based on behavioral data to LLMs, thinks that this approach cannot escape the fundamental issues facing psychiatry. In fact, it could worsen the problem by causing the skills and judgment of human therapists to atrophy as they grow more dependent on AI systems.
He also uses the asylums of the past—in which institutionalized patients lost their right to freedom, privacy, dignity, and agency over their lives—as a touchstone for a more insidious digital captivity that may spring from PAI. LLM users are already sacrificing privacy by telling chatbots sensitive personal information that companies then mine and monetize, contributing to a new surveillance economy. Freedom and dignity are at stake when complex inner lives are transformed into data streams tailored for AI analysis.
AI therapists could flatten humanity into patterns of prediction, and so sacrifice the intimate, individualized care that is expected of traditional human therapists. “The logic of PAI leads to a future where we may all find ourselves patients in an algorithmic asylum administered by digital wardens,” Oberhaus writes. “In the algorithmic asylum there is no need for bars on the window or white padded rooms because there is no possibility of escape. The asylum is already everywhere—in your homes and offices, schools and hospitals, courtrooms and barracks. Wherever there’s an internet connection, the asylum is waiting.”
Eoin Fullam, a researcher who studies the intersection of technology and mental health, echoes some of the same concerns in Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment. A heady academic primer, the book analyzes the assumptions underlying the automated treatments offered by AI chatbots and the way capitalist incentives could corrupt these kinds of tools.
Fullam observes that the capitalist mentality behind new technologies “often leads to questionable, illegitimate, and illegal business practices in which the customers’ interests are secondary to strategies of market dominance.”
That doesn’t mean that therapy-bot makers “will inevitably conduct nefarious activities contrary to the users’ interests in the pursuit of market dominance,” Fullam writes.
But he notes that the success of AI therapy depends on the inseparable impulses to make money and to heal people. In this logic, exploitation and therapy feed each other: Every digital therapy session generates data, and that data fuels the system that profits as unpaid users seek care. The more effective the therapy seems, the more the cycle entrenches itself, making it harder to distinguish between care and commodification. “The more the users benefit from the app in terms of its therapeutic or any other mental health intervention,” he writes, “the more they undergo exploitation.”
This sense of an economic and psychological ouroboros—the snake that eats its own tail—serves as a central metaphor in Sike, the debut novel from Fred Lunzer, an author with a research background in AI.
Described as a “story of boy meets girl meets AI psychotherapist,” Sike follows Adrian, a young Londoner who makes a living ghostwriting rap lyrics, in his romance with Maquie, a business professional with a knack for spotting lucrative technologies in the beta phase.
The title refers to a splashy commercial AI therapist called Sike, uploaded into smart glasses, that Adrian uses to interrogate his myriad anxieties. “When I signed up to Sike, we set up my dashboard, a wide black panel like an airplane’s cockpit that showed my daily ‘vitals,’” Adrian narrates. “Sike can analyze the way you walk, the way you make eye contact, the stuff you talk about, the stuff you wear, how often you piss, shit, laugh, cry, kiss, lie, whine, and cough.”
In other words, Sike is the ultimate digital phenotyper, constantly and exhaustively analyzing everything in a user’s daily experiences. In a twist, Lunzer chooses to make Sike a luxury product, available only to subscribers who can foot the price tag of £2,000 per month.
Flush with cash from his contributions to a hit song, Adrian comes to rely on Sike as a trusted mediator between his inner and outer worlds. The novel explores the impacts of the app on the wellness of the well-off, following rich people who voluntarily commit themselves to a boutique version of the digital asylum described by Oberhaus.
The only real sense of danger in Sike involves a Japanese torture egg (don’t ask). The novel strangely sidesteps the broader dystopian ripples of its subject matter in favor of drunken conversations at fancy restaurants and elite dinner parties.
Sike’s creator is simply “a great guy” in Adrian’s estimation, despite his techno-messianic vision of training the app to soothe the ills of entire nations. It always seems as if a shoe is meant to drop, but in the end, it never does, leaving the reader with a sense of non-resolution.
While Sike is set in the present day, something about the sudden ascent of the AI therapist—in real life as well as in fiction—seems startlingly futuristic, as if it should be unfolding in some later time when the streets scrub themselves and we travel the world through pneumatic tubes. But this convergence of mental health and artificial intelligence has been in the making for more than half a century. The beloved astronomer Carl Sagan, for example, once imagined a “network of computer psychotherapeutic terminals, something like arrays of large telephone booths” that could address the growing demand for mental-health services.
Oberhaus notes that one of the first incarnations of a trainable neural network, known as the Perceptron, was devised not by a mathematician but by a psychologist named Frank Rosenblatt, at the Cornell Aeronautical Laboratory in 1958. The potential utility of AI in mental health was widely recognized by the 1960s, inspiring early computerized psychotherapists such as the DOCTOR script that ran on the ELIZA chatbot developed by Joseph Weizenbaum, who shows up in all three of the nonfiction books in this article.
Weizenbaum, who died in 2008, was profoundly concerned about the possibility of computerized therapy. “Computers can make psychiatric judgments,” he wrote in his 1976 book Computer Power and Human Reason. “They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not to be given such tasks. They may even be able to arrive at ‘correct’ decisions in some cases—but always and necessarily on bases no human being should be willing to accept.”
It’s a caution worth keeping in mind. As AI therapists arrive at scale, we’re seeing them play out a familiar dynamic: Tools designed with superficially good intentions are enmeshed with systems that can exploit, surveil, and reshape human behavior. In a frenzied attempt to unlock new opportunities for patients in dire need of mental-health support, we may be locking other doors behind them.
Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

MIT科技评论
