Therapists are secretly using ChatGPT in sessions, and clients are shaken.
Source: https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/
Summary:
Therapists' covert use of ChatGPT sparks a crisis of client trust
Several recent cases in which therapists used artificial intelligence during treatment without telling their clients have come to light, raising concerns within the profession about ethics and client privacy. The story began when Declan, a Los Angeles resident, discovered through a technical mishap during an online session that his therapist was entering their conversation into ChatGPT in real time and basing his interventions on the AI's suggestions. More unsettling still, Declan went on to have a "perfect" session by playing along with the AI's scripted prompts; afterward, the therapist admitted he had turned to the AI because he felt the therapy had hit a wall.
Such cases are not isolated. Several interviewees said that discovering their therapist had used AI to generate replies left them with a sense of betrayal and a crisis of trust. Notably, a 2025 study in PLOS Mental Health found that while AI-generated responses may be rated as more consistent with therapeutic best practice than those of human therapists, ratings drop sharply once people know a reply came from an AI. Adrian Aguilera, a clinical psychologist at the University of California, Berkeley, notes that authenticity is at the core of psychotherapy, and that using AI can send the signal that "you're not taking the relationship seriously."
Beyond the ethical issues, patient privacy is also at significant risk. Pardis Emami-Naeini, an assistant professor of computer science at Duke University, stresses that general-purpose chatbots are neither approved by the US Food and Drug Administration nor compliant with the Health Insurance Portability and Accountability Act (HIPAA), so therapists who enter patient information into them risk exposing sensitive data. The 2020 hack of a Finnish mental health company, in which tens of thousands of therapy records were leaked, stands as a cautionary precedent.
Although a growing number of companies, including Heidi Health and Upheal, are marketing HIPAA-compliant AI tools built for clinicians, a recent Stanford University study found that chatbots can blindly validate users' views and reinforce delusional thinking, among other risks. The American Counseling Association currently recommends against using AI for mental health diagnosis.
Experts advise that therapists who use AI tools must be transparent about it and obtain informed consent. As clinical psychologist Margaret Morris puts it, the few minutes saved may come at the cost of the most valuable thing in therapy: the bond of trust.
English source:
Therapists are secretly using ChatGPT. Clients are triggered.
Some therapists are using AI during therapy sessions. They’re risking their clients’ trust and privacy in the process.
Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.
“Suddenly, I was watching him use ChatGPT,” says Declan, 31, who lives in Los Angeles. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”
Declan was so shocked he didn’t say anything, and for the rest of the session he was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen. The session became even more surreal when Declan began echoing ChatGPT in his own responses, preempting his therapist.
“I became the best patient ever,” he says, “because ChatGPT would be like, ‘Well, do you consider that your way of thinking might be a little too black and white?’ And I would be like, ‘Huh, you know, I think my way of thinking might be too black and white,’ and [my therapist would] be like, ‘Exactly.’ I’m sure it was his dream session.”
Among the questions racing through Declan’s mind was, “Is this legal?” When Declan raised the incident with his therapist at the next session—“It was super awkward, like a weird breakup”—the therapist cried. He explained he had felt they’d hit a wall and had begun looking for answers elsewhere. “I was still charged for that session,” Declan says, laughing.
The large language model (LLM) boom of the past few years has had unexpected ramifications for the field of psychotherapy, mostly due to the growing number of people substituting the likes of ChatGPT for human therapists. But less discussed is how some therapists themselves are integrating AI into their practice. As in many other professions, generative AI promises tantalizing efficiency savings, but its adoption risks compromising sensitive patient data and undermining a relationship in which trust is paramount.
Suspicious sentiments
Declan is not alone, as I can attest from personal experience. When I received a recent email from my therapist that seemed longer and more polished than usual, I initially felt heartened. It seemed to convey a kind, validating message, and its length made me feel that she’d taken the time to reflect on all of the points in my (rather sensitive) email.
On closer inspection, though, her email seemed a little strange. It was in a new font, and the text displayed several AI “tells,” including liberal use of the Americanized em dash (we’re both from the UK), the signature impersonal style, and the habit of addressing each point made in the original email line by line.
My positive feelings quickly drained away, to be replaced by disappointment and mistrust, once I realized ChatGPT likely had a hand in drafting the message—which my therapist confirmed when I asked her.
Despite her assurance that she simply dictates longer emails using AI, I still felt uncertainty over the extent to which she, as opposed to the bot, was responsible for the sentiments expressed. I also couldn’t entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT.
When I took to the internet to see whether others had had similar experiences, I found plenty of examples of people receiving what they suspected were AI-generated communiqués from their therapists. Many, including Declan, had taken to Reddit to solicit emotional support and advice.
So had Hope, 25, who lives on the east coast of the US, and had direct-messaged her therapist about the death of her dog. She soon received a message back. It would have been consoling and thoughtful—expressing how hard it must be “not having him by your side right now”—were it not for the reference to the AI prompt accidentally preserved at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.”
Hope says she felt “honestly really surprised and confused.” “It was just a very strange feeling,” she says. “Then I started to feel kind of betrayed. … It definitely affected my trust in her.” This was especially problematic, she adds, because “part of why I was seeing her was for my trust issues.”
Hope had believed her therapist to be competent and empathetic, and therefore “never would have suspected her to feel the need to use AI.” Her therapist was apologetic when confronted, and she explained that because she’d never had a pet herself, she’d turned to AI for help expressing the appropriate sentiment.
A disclosure dilemma
Betrayal or not, there may be some merit to the argument that AI could help therapists better communicate with their clients. A 2025 study published in PLOS Mental Health asked therapists to use ChatGPT to respond to vignettes describing problems of the kind patients might raise in therapy. Not only was a panel of 830 participants unable to distinguish between the human and AI responses, but AI responses were rated as conforming better to therapeutic best practice.
However, when participants suspected responses to have been written by ChatGPT, they ranked them lower. (Responses written by ChatGPT but misattributed to therapists received the highest ratings overall.)
Similarly, Cornell University researchers found in a 2023 study that AI-generated messages can increase feelings of closeness and cooperation between interlocutors, but only if the recipient remains oblivious to the role of AI. The mere suspicion of its use was found to rapidly sour goodwill.
“People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, a clinical psychologist and professor at the University of California, Berkeley. “I think [using AI] can feel like, ‘You’re not taking my relationship seriously.’ Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”
In 2023, in the early days of generative AI, the online therapy service Koko conducted a clandestine experiment on its users, mixing in responses generated by GPT-3 with ones drafted by humans. They discovered that users tended to rate the AI-generated responses more positively. The revelation that users had unwittingly been experimented on, however, sparked outrage.
The online therapy provider BetterHelp has also been subject to claims that its therapists have used AI to draft responses. In a Medium post, photographer Brendan Keen said his BetterHelp therapist admitted to using AI in their replies, leading to “an acute sense of betrayal” and persistent worry, despite reassurances, that his data privacy had been breached. He ended the relationship thereafter.
A BetterHelp spokesperson told us the company “prohibits therapists from disclosing any member’s personal or health information to third-party artificial intelligence, or using AI to craft messages to members to the extent it might directly or indirectly have the potential to identify someone.”
All these examples relate to undisclosed AI usage. Aguilera believes time-strapped therapists can make use of LLMs, but transparency is essential. “We have to be up-front and tell people, ‘Hey, I’m going to use this tool for X, Y, and Z’ and provide a rationale,” he says. People then receive AI-generated messages with that prior context, rather than assuming their therapist is “trying to be sneaky.”
Psychologists are often working at the limits of their capacity, and levels of burnout in the profession are high, according to 2023 research conducted by the American Psychological Association. That context makes the appeal of AI-powered tools obvious.
But lack of disclosure risks permanently damaging trust. Hope decided to continue seeing her therapist, though she stopped working with her a little later for reasons she says were unrelated. “But I always thought about the AI Incident whenever I saw her,” she says.
Risking patient privacy
Beyond the transparency issue, many therapists are leery of using LLMs in the first place, says Margaret Morris, a clinical psychologist and affiliate faculty member at the University of Washington.
“I think these tools might be really valuable for learning,” she says, noting that therapists should continue developing their expertise over the course of their career. “But I think we have to be super careful about patient data.” Morris calls Declan’s experience “alarming.”
Therapists need to be aware that general-purpose AI chatbots like ChatGPT are not approved by the US Food and Drug Administration and are not HIPAA compliant, says Pardis Emami-Naeini, assistant professor of computer science at Duke University, who has researched the privacy and security implications of LLMs in a health context. (HIPAA is a set of US federal regulations that protect people’s sensitive health information.)
“This creates significant risks for patient privacy if any information about the patient is disclosed or can be inferred by the AI,” she says.
In a recent paper, Emami-Naeini found that many users wrongly believe ChatGPT is HIPAA compliant, creating an unwarranted sense of trust in the tool. “I expect some therapists may share this misconception,” she says.
As a relatively open person, Declan says, he wasn’t completely distraught to learn how his therapist was using ChatGPT. “Personally, I am not thinking, ‘Oh, my God, I have deep, dark secrets,’” he said. But it did still feel violating: “I can imagine that if I was suicidal, or on drugs, or cheating on my girlfriend … I wouldn’t want that to be put into ChatGPT.”
When using AI to help with email, “it’s not as simple as removing obvious identifiers such as names and addresses,” says Emami-Naeini. “Sensitive information can often be inferred from seemingly nonsensitive details.”
She adds, “Identifying and rephrasing all potential sensitive data requires time and expertise, which may conflict with the intended convenience of using AI tools. In all cases, therapists should disclose their use of AI to patients and seek consent.”
A growing number of companies, including Heidi Health, Upheal, Lyssn, and Blueprint, are marketing specialized tools to therapists, such as AI-assisted note-taking, training, and transcription services. These companies say they are HIPAA compliant and store data securely using encryption and pseudonymization where necessary. But many therapists are still wary of the privacy implications—particularly of services that necessitate the recording of entire sessions.
“Even if privacy protections are improved, there is always some risk of information leakage or secondary uses of data,” says Emami-Naeini.
A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients' treatment records being accessed, serves as a warning. People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as people's experiences of child abuse and addiction problems.
What therapists stand to lose
In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.
A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.
Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.
A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable.
Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT where he posed as a client having relationship troubles. He says he found the chatbot was a decent mimic when it came to “stock-in-trade” therapeutic responses, like normalizing and validating, asking for additional information, or highlighting certain cognitive or emotional associations.
However, “it didn’t do a lot of digging,” he says. It didn’t attempt “to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory.”
“I would be skeptical about using it to do the thinking for you,” he says. Thinking, he says, should be the job of therapists.
Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients, says Morris: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”