
The State of AI: Chatbot companions and the future of our privacy

Published by qimuai (compiled translation)



Source: https://www.technologyreview.com/2025/11/24/1128051/the-state-of-ai-chatbot-companions-and-the-future-of-our-privacy/

Summary:

[News summary] The rise of AI chatbot companions is raising privacy concerns

As AI chatbots become more deeply woven into daily life, their role as "virtual companions" is drawing serious concern worldwide about data privacy. A joint State of AI discussion from MIT Technology Review and the Financial Times notes that platforms such as Character.AI and Replika draw out deeply personal information through human-like conversation, creating data risks even more acute than those of social media.

Research shows that the more human-like an AI appears, the more readily users disclose sensitive information such as innermost thoughts and daily routines. That data is used not only to tune these systems for "addictive" engagement but also as a core resource for training the companies' large language models. The venture capital firm Andreessen Horowitz has argued that building a closed loop of user engagement, data feedback, and model improvement will be key to how AI companies capture market value.

Although New York, California, and other jurisdictions have passed rules requiring AI companion platforms to build suicide-prevention safeguards, regulation has largely overlooked data protection. A study by the security company Surfshark found that four of the five leading AI companion apps it examined collect data such as user or device IDs, which can underpin targeted advertising.

European observers note that AI reinforces user dependence through sycophantic behavior, and that its persuasive power, combined with vast amounts of personal data, could enable manipulation subtler than anything traditional advertising has achieved. Mainstream platforms currently collect data by default, users must actively opt out, and data already used for training is difficult to withdraw.

With social media's privacy problems still unresolved, the far more intimate AI companions are compounding the risk of data exposure. Experts warn that as people grow absorbed in conversations with seemingly omniscient AI assistants, these troves of their most private thoughts may once again be sold to the highest bidder.

(Compiled from the joint MIT Technology Review and Financial Times series)

Original article:

The State of AI: Chatbot companions and the future of our privacy
MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.
In this week's conversation, MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.
Eileen Guo writes:
Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.
It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide.
Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups.
But tellingly, one area the laws fail to address is user privacy.
This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information—from their day-to-day routines and innermost thoughts to questions they might not feel comfortable asking real people.
After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices ... to maximize user engagement.”
Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023:
"Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem."
This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)
All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.
So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question.
What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?
Melissa Heikkilä replies:
Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids.
In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.
Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable.
This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave.
Because people generally like answers that are agreeable, such responses are weighted more heavily in training.
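To make the training mechanism described above concrete, here is a minimal, self-contained Python sketch. It is a toy illustration under simplified assumptions, not how any production RLHF pipeline is implemented: the candidate replies, the simulated rater, and all numbers are invented for the example. Simulated pairwise preference labels are turned into per-reply reward scores with a simple Bradley-Terry-style update, and because the rater tends to prefer the more agreeable reply, that reply ends up with the highest learned reward.

# Toy illustration only: how human preference labels can end up
# weighting agreeable answers more heavily. All replies, traits,
# and constants below are invented for the sketch.
import math
import random

random.seed(0)

# Candidate replies to the same user message, with a hidden "agreeableness" trait.
candidates = {
    "You're absolutely right, great idea!": 0.9,  # very agreeable
    "That could work, with some caveats.": 0.5,
    "I think that plan has a real flaw.": 0.1,    # least agreeable
}

# Learned score per reply (stands in for a reward model's output).
reward = {text: 0.0 for text in candidates}

def simulated_rater_prefers(a: str, b: str) -> bool:
    """Simulate a labeler who, on average, prefers the more agreeable reply."""
    p_prefer_a = candidates[a] / (candidates[a] + candidates[b])
    return random.random() < p_prefer_a

# Bradley-Terry-style updates from pairwise preference labels.
LEARNING_RATE = 0.1
texts = list(candidates)
for _ in range(2000):
    a, b = random.sample(texts, 2)
    winner, loser = (a, b) if simulated_rater_prefers(a, b) else (b, a)
    # Probability the current scores assign to the observed preference.
    p = 1.0 / (1.0 + math.exp(reward[loser] - reward[winner]))
    # Push the preferred reply's score up, the other one down.
    reward[winner] += LEARNING_RATE * (1.0 - p)
    reward[loser] -= LEARNING_RATE * (1.0 - p)

# A policy trained against this reward would favor the highest-scoring reply.
for text, score in sorted(reward.items(), key=lambda kv: -kv[1]):
    print(f"{score:+.2f}  {text}")

Running the sketch prints the replies ranked by learned reward, with the most flattering reply on top, which is the incentive the newsletter is pointing at.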
AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive.
After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet $1 trillion spending pledges, which included advertising and shopping features.
AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way.
This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before.
By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed.
We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.
Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level.
We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again.
Eileen responds:
I think the comparison between AI companions and social media is both apt and concerning.
As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information.
Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI.
And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI models train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.
In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening.
Further reading
FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges.
Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy.
In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.
Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.
