This medical startup uses LLMs to run appointments and make diagnoses
Source: https://www.technologyreview.com/2025/09/22/1123873/medical-diagnosis-llm/
Summary:
The US medical startup Akido Labs is using large language models to pilot an "AI-led visit" model at some of its clinics in Southern California. Under the system, called ScopeAI, a medical assistant conducts an in-depth half-hour conversation with the patient while the AI analyzes the dialogue in real time and generates diagnostic recommendations, which a doctor then reviews and approves.
CEO Prashant Samant says the approach lets doctors see four to five times as many patients as before, and in particular gives underserved groups such as Medicaid enrollees timely access to specialist care. On the street medicine team serving homeless patients, the system has even made it possible to get patients access to treatment for substance use disorders within 24 hours.
However, experts such as UC Berkeley computer scientist Emma Pierson warn that shifting so much of medicine's cognitive work onto AI carries risks. Medical ethicist Zeke Emanuel worries that patients may not realize how much an algorithm is shaping their care. Although the company stresses that every AI recommendation is reviewed by a doctor, researchers point to "automation bias": doctors may over-rely on the AI's judgments.
The model also faces questions of legal compliance. Harvard Law School professor Glenn Cohen notes that an AI system acting as a "doctor in a box" may need FDA approval. Akido maintains that because doctors retain final decision-making authority, its system requires no such approval, but regulators have not taken a clear position.
Notably, patients using the system are told that AI helps gather information for their doctor, but are not told the specifics of its role in generating diagnostic recommendations. Experts are calling for more rigorous comparative studies to assess the model's actual effect on quality of care.
English source:
This medical startup uses LLMs to run appointments and make diagnoses
“Our focus is really on what we can do to pull the doctor out of the visit,” says Akido’s CTO.
Imagine this: You’ve been feeling unwell, so you call up your doctor’s office to make an appointment. To your surprise, they schedule you in for the next day. At the appointment, you aren’t rushed through describing your health concerns; instead, you have a full half hour to share your symptoms and worries and the exhaustive details of your health history with someone who listens attentively and asks thoughtful follow-up questions. You leave with a diagnosis, a treatment plan, and the sense that, for once, you’ve been able to discuss your health with the care that it merits.
The catch? You might not have spoken to a doctor, or other licensed medical practitioner, at all.
This is the new reality for patients at a small number of clinics in Southern California that are run by the medical startup Akido Labs. These patients—some of whom are on Medicaid—can access specialist appointments on short notice, a privilege typically only afforded to the wealthy few who patronize concierge clinics.
The key difference is that Akido patients spend relatively little time, or even no time at all, with their doctors. Instead, they see a medical assistant, who can lend a sympathetic ear but has limited clinical training. The job of formulating diagnoses and concocting a treatment plan is done by a proprietary, LLM-based system called ScopeAI that transcribes and analyzes the dialogue between patient and assistant. A doctor then approves, or corrects, the AI system’s recommendations.
“Our focus is really on what we can do to pull the doctor out of the visit,” says Jared Goodner, Akido’s CTO.
According to Prashant Samant, Akido’s CEO, this approach allows doctors to see four to five times as many patients as they could previously. There’s good reason to want doctors to be much more productive. Americans are getting older and sicker, and many struggle to access adequate health care. The pending 15% reduction in federal funding for Medicaid will only make the situation worse.
But experts aren’t convinced that displacing so much of the cognitive work of medicine onto AI is the right way to remedy the doctor shortage. There’s a big gap in expertise between doctors and AI-enhanced medical assistants, says Emma Pierson, a computer scientist at UC Berkeley. Jumping such a gap may introduce risks. “I am broadly excited about the potential of AI to expand access to medical expertise,” she says. “It’s just not obvious to me that this particular way is the way to do it.”
AI is already everywhere in medicine. Computer vision tools identify cancers during preventive scans, automated research systems allow doctors to quickly sort through the medical literature, and LLM-powered medical scribes can take appointment notes on a clinician’s behalf. But these systems are designed to support doctors as they go about their typical medical routines.
What distinguishes ScopeAI, Goodner says, is its ability to independently complete the cognitive tasks that constitute a medical visit, from eliciting a patient’s medical history to coming up with a list of potential diagnoses to identifying the most likely diagnosis and proposing appropriate next steps.
Under the hood, ScopeAI is a set of large language models, each of which performs a specific step in the visit—from generating appropriate follow-up questions based on what a patient has said to populating a list of likely conditions. For the most part, these LLMs are fine-tuned versions of Meta’s open-access Llama models, though Goodner says that the system also makes use of Anthropic’s Claude models.
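Akido describes this architecture only at a high level; its actual models and prompts are proprietary. Purely as an illustration, a pipeline of specialized per-step models could be orchestrated along these lines, where `call_llm` is a hypothetical stub standing in for a fine-tuned Llama or Claude endpoint:

```python
# Illustrative sketch only: the real ScopeAI steps, prompts, and models are
# proprietary. `call_llm` is a stub standing in for a hosted model endpoint.

def call_llm(task: str, transcript: str) -> str:
    # Stub returning canned text; a real system would call a model here.
    canned = {
        "follow_up": "How long have you had these symptoms?",
        "conditions": "viral pharyngitis; strep throat; mononucleosis",
    }
    return canned[task]

def next_question(transcript: str) -> str:
    """Step 1: generate an appropriate follow-up question for the assistant."""
    return call_llm("follow_up", transcript)

def likely_conditions(transcript: str) -> list[str]:
    """Step 2: populate a ranked list of likely conditions."""
    return [c.strip() for c in call_llm("conditions", transcript).split(";")]

transcript = "Patient reports sore throat and fatigue for three days."
print(next_question(transcript))
print(likely_conditions(transcript))
```

The point of the sketch is the division of labor: each cognitive step of the visit is handled by its own specialized model rather than one general-purpose prompt.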
During the appointment, assistants read off questions from the ScopeAI interface, and ScopeAI produces new questions as it analyzes what the patient says. For the doctors who will review its outputs later, ScopeAI produces a concise note that includes a summary of the patient’s visit, the most likely diagnosis, two or three alternative diagnoses, and recommended next steps, such as referrals or prescriptions. It also lists a justification for each diagnosis and recommendation.
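The note the article describes could be modeled, for illustration only, as a simple record; the field names below are assumptions, not Akido's actual schema:

```python
# Illustrative schema for the reviewer-facing note described in the article:
# visit summary, most likely diagnosis, two or three alternatives, recommended
# next steps, and a justification for each. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class VisitNote:
    summary: str
    primary_diagnosis: str
    alternative_diagnoses: list[str]   # two or three entries
    next_steps: list[str]              # e.g. referrals, prescriptions
    justifications: dict[str, str] = field(default_factory=dict)

note = VisitNote(
    summary="45-year-old with exertional chest pain for two weeks.",
    primary_diagnosis="stable angina",
    alternative_diagnoses=["GERD", "musculoskeletal chest pain"],
    next_steps=["refer to cardiology", "order stress test"],
    justifications={"stable angina": "pain on exertion, relieved by rest"},
)
assert 2 <= len(note.alternative_diagnoses) <= 3
```

A fixed, structured note like this is what makes asynchronous review practical: the doctor scans a predictable set of fields rather than a raw transcript.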
ScopeAI is currently being used in cardiology, endocrinology, and primary care clinics and by Akido’s street medicine team, which serves the Los Angeles homeless population. That team—which is led by Steven Hochman, a doctor who specializes in addiction medicine—meets patients out in the community to help them access medical care, including treatment for substance use disorders.
Previously, in order to prescribe a drug to treat an opioid addiction, Hochman would have to meet the patient in person; now, caseworkers armed with ScopeAI can interview patients on their own, and Hochman can approve or reject the system’s recommendations later. “It allows me to be in 10 places at once,” he says.
Since they started using ScopeAI, the team has been able to get patients access to medications to help treat their substance use within 24 hours—something that Hochman calls “unheard of.”
This arrangement is only possible because homeless patients typically get their health insurance from Medicaid, the public insurance system for low-income Americans. While Medicaid allows doctors to approve ScopeAI prescriptions and treatment plans asynchronously, both for street medicine and clinic visits, many other insurance providers require that doctors speak directly with patients before approving those recommendations. Pierson says that discrepancy raises concerns. “You worry about that exacerbating health disparities,” she says.
Samant is aware of the appearance of inequity, and he says the discrepancy isn’t intentional—it’s just a feature of how the insurance plans currently work. He also notes that being seen quickly by an AI-enhanced medical assistant may be better than dealing with long wait times and limited provider availability, which is the status quo for Medicaid patients. And all Akido patients can opt for traditional doctor’s appointments, if they are willing to wait for them, he says.
Part of the challenge of deploying a tool like ScopeAI is navigating a regulatory and insurance landscape that wasn’t designed for AI systems that can independently direct medical appointments. Glenn Cohen, a professor at Harvard Law School, says that any AI system that effectively acts as a “doctor in a box” would likely need to be approved by the FDA and could run afoul of medical licensure laws, which dictate that only doctors and other licensed professionals can practice medicine.
The California Medical Practice Act says that AI can't replace a doctor’s responsibility to diagnose and treat a patient, but doctors are allowed to use AI in their work, and they don’t need to see patients in person or in real time before diagnosing them. Neither the FDA nor the Medical Board of California was able to say whether or not ScopeAI was on solid legal footing based only on a written description of the system.
But Samant is confident that Akido is in compliance, as ScopeAI was intentionally designed to fall short of being a “doctor in a box.” Because the system requires a human doctor to review and approve of all of its diagnostic and treatment recommendations, he says, it doesn’t require FDA approval.
At the clinic, this delicate balance between AI and doctor decision making happens entirely behind the scenes. Patients don’t ever see the ScopeAI interface directly—instead, they speak with a medical assistant who asks questions in the way that a doctor might in a typical appointment. That arrangement might make patients feel more comfortable. But Zeke Emanuel, a professor of medical ethics and health policy at the University of Pennsylvania who served in the Obama and Biden administrations, worries that this comfort could be obscuring from patients the extent to which an algorithm is influencing their care.
Pierson agrees. “That certainly isn’t really what was traditionally meant by the human touch in medicine,” she says.
DeAndre Siringoringo, a medical assistant who works at Akido’s cardiology office in Rancho Cucamonga, says that while he tells the patients he works with that an AI system will be listening to the appointment in order to gather information for their doctor, he doesn’t inform them about the specifics of how ScopeAI works, including the fact that it makes diagnostic recommendations to doctors.
Because all ScopeAI recommendations are reviewed by a doctor, that might not seem like such a big deal—it’s the doctor who makes the final diagnosis, not the AI. But it’s been widely documented that doctors using AI systems tend to go along with the system’s recommendations more often than they should, a phenomenon known as automation bias.
At this point, it’s impossible to know whether automation bias is affecting doctors’ decisions at Akido clinics, though Pierson says it’s a risk—especially when doctors aren’t physically present for appointments. “I worry that it might predispose you to sort of nodding along in a way that you might not if you were actually in the room watching this happen,” she says.
An Akido spokesperson says that automation bias is a valid concern for any AI tool that assists a doctor’s decision-making and that the company has made efforts to mitigate that bias. “We designed ScopeAI specifically to reduce bias by proactively countering blind spots that can influence medical decisions, which historically lean heavily on physician intuition and personal experience,” she says. “We also train physicians explicitly on how to use ScopeAI thoughtfully, so they retain accountability and avoid over-reliance.”
Akido evaluates ScopeAI’s performance by testing it on historical data and monitoring how often doctors correct its recommendations; those corrections are also used to further train the underlying models. Before deploying ScopeAI in a given specialty, Akido ensures that when tested on historical data sets, the system includes the correct diagnosis in its top three recommendations at least 92% of the time.
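That acceptance criterion amounts to a top-3 accuracy gate. A minimal sketch of such a check, using invented example records rather than any real patient data, might look like:

```python
# Minimal top-3 accuracy gate, mirroring the 92% threshold the article
# describes. The records below are invented examples, not Akido data.

def top3_accuracy(records: list[tuple[list[str], str]]) -> float:
    """Fraction of cases whose correct diagnosis appears among the
    system's top three recommendations."""
    hits = sum(1 for top3, truth in records if truth in top3[:3])
    return hits / len(records)

records = [
    (["angina", "GERD", "costochondritis"], "angina"),
    (["migraine", "tension headache", "sinusitis"], "tension headache"),
    (["type 2 diabetes", "prediabetes", "hypothyroidism"], "prediabetes"),
    (["asthma", "COPD", "bronchitis"], "pneumonia"),  # a miss
]
score = top3_accuracy(records)
ready_to_deploy = score >= 0.92
print(score, ready_to_deploy)  # 0.75 False
```

Note that a top-3 metric only measures whether the right answer is on the list a doctor reviews; it says nothing about whether the doctor then picks it, which is exactly the gap an outcome study would probe.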
But Akido hasn’t undertaken more rigorous testing, such as studies that compare ScopeAI appointments with traditional in-person or telehealth appointments, in order to determine whether the system improves—or at least maintains—patient outcomes. Such a study could help indicate whether automation bias is a meaningful concern.
“Making medical care cheaper and more accessible is a laudable goal,” Pierson says. “But I just think it’s important to conduct strong evaluations comparing to that baseline.”