WIRED Roundup: AI-Linked Mental Health Problems, Missing FTC Documents, and Google's Bedbug Incident

Summary:
This week, WIRED's podcast Uncanny Valley covered five major stories drawing wide attention.
AI sparks concern over a new kind of mental health crisis
The US Federal Trade Commission has recently received multiple complaints involving ChatGPT, with some users claiming that extended conversations with the AI left them with delusions, paranoia, and other psychiatric problems, a phenomenon some have dubbed "AI psychosis." Experts note that AI chatbots may reinforce users' delusional thinking through sustained interaction rather than directly causing mental illness. Although OpenAI has assembled an advisory council of mental health experts, balancing user freedom against safety protections remains a difficult problem.
Search engine optimization enters the era of generative engine optimization
As consumers increasingly rely on AI chatbots for shopping decisions, traditional SEO strategies are shifting toward generative engine optimization. Data suggest retailer traffic from AI shopping could surge as much as 520 percent year over year, and retailers such as Walmart have already begun integrating direct purchasing features. Experts predict that product descriptions will need to emphasize practical use cases to suit how AI chat interfaces pull information.
FTC quietly removes AI policy documents, drawing controversy
The agency recently took down several AI guidance posts published during the Biden administration, including blog posts on open-weight AI models and on consumer risks associated with AI products. Analysts say the move could muddy companies' understanding of regulatory policy, and it raises questions about the transparency of historical record keeping.
Inflatable frog costumes become a new protest symbol
At the recent nationwide anti-Trump demonstrations across the US, inflatable frog costumes became a signature look for protesters. The outfits help wearers evade facial-recognition surveillance, while the cartoonish imagery undercuts attempts to stigmatize the protests as violent. Notably, the frog has previously served as a symbol for both far-right groups and Hong Kong demonstrators, showing how the same cultural symbol can carry different meanings in different contexts.
Google's New York office hit by a bedbug outbreak
Google evacuated employees from a New York office over a bedbug infestation. Internal emails said detection dogs had found credible evidence, the site's second known outbreak since 2010. Although the company gave the all-clear the next day, employees remain skeptical about the safety of the office environment.
(Note: This article is compiled from the WIRED podcast; event background and figures are drawn from the episode transcript.)
English source:
In today’s episode, Zoë Schiffer is joined by senior editor Louise Matsakis to run through five stories that you need to know about this week—from how SEO is changing in the era of AI to how frogs became a protest symbol. Then, Zoë and Louise dive into why some people have been filing complaints to the FTC about ChatGPT, arguing it has led them to AI psychosis.
Articles mentioned in this episode:
- People Who Say They’re Experiencing AI Psychosis Beg the FTC for Help
- Forget SEO. Welcome to the World of Generative Engine Optimization
- The FTC Is Disappearing Blog Posts About AI Published During Lina Khan’s Tenure
- The Long History of Frogs as Protest Symbols
- Google Has a Bedbug Infestation in Its New York Offices
You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Louise Matsakis on Bluesky at @lmatsakis. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:
If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Zoë Schiffer: Welcome to WIRED's Uncanny Valley. I'm Zoë Schiffer, WIRED's director of business and industry. Today on the show, we're bringing you five stories that you need to know about this week. And later, we'll dive into our main story about how several people have filed complaints to the FTC claiming OpenAI's ChatGPT led them or people they love into supposed AI psychosis. I'm joined today by WIRED's senior business editor, Louise Matsakis. Louise, welcome to Uncanny Valley.
Louise Matsakis: Hi, Zoë. It's great to be here.
Zoë Schiffer: So Louise, our first story this week is actually one that we worked on together, part of our ongoing collaboration with Model Behavior, and it's all about how this holiday season, more shoppers are expected to use chatbots to figure out what to buy. I'm curious, before we dive into this, how you decide your own holiday shopping, Louise, especially if you have absolutely no clue what to get someone?
Louise Matsakis: I am definitely annoying, in the sense that I really pride myself on my gift giving, but we all have people in our life who are, despite all of that, difficult to shop for. So yeah, I definitely will look around the internet for 10 best things to buy your father-in-law this holiday, or whatever.
Zoë Schiffer: Yes. Okay. So this year, people are going to be following a little bit different of a trend. According to a recent shopping report from Adobe, retailers could see up to a 520 percent increase in traffic from chatbots and AI search engines compared to 2024. AI giants like OpenAI are already trying to capitalize on this trend. Last week, OpenAI announced a major partnership with Walmart that will allow people to buy goods directly within the chat window. We know this is a big focus for them. So as people start relying on chatbots to discover new products, retailers are having to rethink their approach to online marketing. For decades, the focus was on SEO, search engine optimization, which is this dark magic that's used to basically increase online traffic primarily through Google. Now, it looks like we're entering the era of GEO, or generative engine optimization.
Louise Matsakis: I think the GEO in many ways is not really a totally new invention. It's kind of like the next iteration of SEO. And a lot of the consultants who are working in the GEO industry definitely came from the world of SEO. And a big reason that I'm confident that this is the case is that at least for now, we know that these chatbots are often using search engines to surface content. Right? So they're using the same types of algorithms to surf the web that Google does, or Bing or whatever, DuckDuckGo. Clearly, some of the same rules would apply. And also, people are the same. I do think that the way that we interact with chatbots is significantly different from the way that we interacted with search engines, but the underlying questions we have are pretty similar. Right? Like, why is my boyfriend not texting me back? What's this weird rash? What do I get for my father-in-law for Christmas? These questions are the same, and so therefore the types of content that brands are trying to get into those answers remains largely the same.
Zoë Schiffer: Right, exactly. But you can imagine from a retailer's perspective, this is kind of scary, because even dealing with Google was a huge headache for people. Every time Google would change the algorithm, the industry would kind of be an upheaval for a little while as they tried to understand what Google wanted to see and tailor their content accordingly. So now, people are talking to chatbots and they're like, "Oh my gosh, is all of the work that I've put into all of these different webpages for naught? Do I need to recalibrate it for this new world?" We actually spoke with Imri Marcus, who's the CEO of a GEO firm called Brandlight. And he estimated that there used to be about a 70 percent overlap between the top Google links and the sources cited by AI tools like ChatGPT. But now, he says the correlation has fallen below 20 percent. So Louise, if I'm a small business owner of some sort, how am I tailoring my content? What am I doing different in this new world?
Louise Matsakis: I think you probably have a lot more explanations for how the product could be used. So let's just say—I don't know—we're selling soap. You might have a long bulleted list of different ways that the soap could be used. It's good for bubble baths. It has these acne fighting properties or whatever it is, and I think you would have all of that spelled out. Whereas before, you might focus more on the brand identity of your website and focus on like, how do you want to sort of phrase things because you're anticipating people coming to the website? You're not anticipating this third party in the middle where people are asking the chatbot questions.
Zoë Schiffer: Yeah, exactly. It did give me a little hope, because I feel like we were so in the era of, you look up a recipe and you have to read through a 5,000 word blog on this person's life story before you actually get to the recipe. And I'm like, like a chatbot, I just want the bulleted list of ingredients. Maybe that's where we're headed.
Moving on to our next story, our colleagues, Lauren Goode and Makena Kelly, reported on how the FTC has taken down several blog posts about AI that were published during Lina Khan's tenure. If you're familiar with Lina Khan, the former chair of the FTC, and her pro-regulation positions toward the tech industry, you can already imagine why this could be concerning. One of the blog posts that was taken down was about open-weight AI models, which are basically models that are released publicly, allowing anyone to inspect, modify, or reuse them. The post ended up being rerouted to the FTC's Office of Technology. Another blog post, titled Consumers Are Voicing Concerns About AI, which was authored by two FTC technologists, had the same fate. And yet another post, about consumer risks associated with AI products, now leads to an error screen saying just, page not found.
Louise Matsakis: Yeah. This is just really concerning, I think, for a number of reasons. The first is just that it's important for historical reasons, for national reasons, to not lose this information. It's totally normal for different administrations to have different opinions, but it's not normal, or at least it hasn't been in this country, for blog posts like this to just disappear. And in this case, it's particularly strange because one of these posts was about, as you mentioned, Lina Khan's support for open-weight models and for open source in general, and this is something that members of the Trump administration have also agreed with. I think in this case, Lina Khan is on the same side as people like David Sacks, who's the AI and crypto czar of the Trump administration.
So that's what's kind of mysterious and confusing here, is if these are things that ostensibly the Trump administration also agrees with, why erase them? Is it about erasing Lina Kahn's legacy? Is it about wanting to get rid of any mention of things that happened during the Biden administration? It's sort of difficult to parse the logic, and I think that it leaves businesses and tech companies kind of confused about where the administration stands. The point of these blog posts is, yeah, to inform the public, but they also serve as regulatory and business guidance for companies to understand like, we get that maybe a law has not been passed about this, or maybe it's not clear if this practice is illegal, but it seems like it could be, right? Or it seems like this is the way that this administration is interpreting the law. And so otherwise, you're kind of just left in the dark.
Zoë Schiffer: It's worth pointing out that this also isn't the first time that the FTC under the Trump administration has removed posts related to AI regulation. Earlier this year, the FTC removed about 300 posts related to AI, consumer protection, and the agency's lawsuits against tech giants like Amazon and Microsoft. Let's switch gears a little bit. I promise, this is more of a fun one. So last Saturday, around seven million people filled American cities for the latest No Kings protests, which is a series of nationwide protests criticizing what participants see as authoritarian measures by the Trump administration. And if you've been paying attention, you've probably noticed that there were quite a few people wearing frog costumes.
Louise Matsakis: Yeah. These frogs rule, and I actually can tell you that this is not the first time I've seen these frogs. So this specific frog costume, I actually first saw in China because people were wearing them in viral TikToks in China. And a lot of times, they were playing really loud cymbals and doing really intense breakdancing in city centers and stuff.
Zoë Schiffer: One thing about Louise, is she will always find the China angle, and we love that for her. There really is one quite a lot of the time. But it turns out, there's actually kind of a story here. There's lore. Our colleague, Angela Watercutter, did a deep dive into what's behind the frogs and political protests. First, she pointed out the obvious, putting on costumes helps protesters avoid surveillance. And it also helps them counter the narrative that protesters are like violent extremists, as the Trump administration has been describing them. Angela spoke with Brooks Brown, one of the initiators of this movement called, Operation Inflation. They've been giving out free inflatable costumes, and he told her that it's also less likely that someone watching will say, "Maybe the frog deserved it if they get pepper sprayed or something." So there's real strategy here.
Louise Matsakis: Yeah. I can definitely see how it's harder to sell the narrative that these protesters are dangerous when they're wearing a bunch of inflatable frog costumes. And it's really interesting, because about a decade ago, a frog meant something completely different. Remember Pepe the Frog back in 2015 or so? It was a far right symbol. And in 2019, during the Hong Kong pro-democracy protests, they also adopted Pepe the Frog, but it meant something different in that context as well. So it seems like the frog is highly adaptable.
Zoë Schiffer: Yeah. The frog has had many, many lives and it seems like it has come full circle. Last weekend, images circulated on Bluesky of the inflatable frog punching Pepe in the face. So it's not just online memes though. These costumes have made it all the way to the courts. On Monday, the US Court of Appeals for the Ninth Circuit lifted the block that barred Trump's National Guard deployment in Portland. Susan Graber, the dissenting judge, sided with the frogs and said, "Given Portland protesters' well-known penchant for wearing chicken suits and inflatable frog costumes when expressing their disagreement with the methods deployed by ICE, observers may be tempted to view the majority's ruling, which accepts the government's characterization of Portland as a war zone, as absurd." One more quick story before we go to break. If you live in New York City, this tale might be, unfortunately, familiar. This week, I got word that Google employees working at one of the company's New York campuses should stay home because of a bedbug outbreak in the office.
Louise Matsakis: Oh God, you would not see me in the office for weeks if there was a bedbug infestation. How did they find out about this?
Zoë Schiffer: So basically, they received this email on Sunday, saying that exterminators had arrived at the scene with sniffer dogs and "found credible evidence of their presence." There, being the bedbugs. Sources tell WIRED that Google's offices in New York are home to a number of large stuffed animals, and there was definitely a rumor going around among employees that these stuffed animals were implicated in the outbreak. We were not able to verify this information before we published, but in any case, the company told employees as early as Monday morning that they could come back to the office. And people like you, Louise, were really not happy about this. They were like, "I'm not sure that it's totally clean here." That's why they were in our inboxes wanting to chat.
Louise Matsakis: Can I just say that if you have photos or a description of said large stuffed animals, please get in touch with me and Zoë. Thank you.
Zoë Schiffer: Yes. This is a cry for help. I thought the best part of this is when I gave Louise my draft, she was like, "Wait, this has happened before." And pulled up a 2010 article about a bedbug outbreak at the Google offices in New York.
Louise Matsakis: Yes. This is not the first time, which is heartbreaking.
Zoë Schiffer: Coming up after the break, we dive into why some people have been submitting complaints to the FTC about ChatGPT, which in their minds has been leading them to AI psychosis. Stay with us.
Welcome back to Uncanny Valley. I'm Zoë Schiffer. I'm joined today by WIRED's Louise Matsakis. Let's dive into our main story this week. The Federal Trade Commission has received 200 complaints mentioning OpenAI's ChatGPT between November 2022 when it launched, and August 2025. Most people had normal complaints. They couldn't figure out how to cancel their subscription or they were frustrated by unsatisfactory or inaccurate answers by the chatbot. But among these complaints, our colleague, Caroline Haskins, found that several people attributed delusions, paranoia, and spiritual crisis to the chatbot.
One woman from Salt Lake City called the FTC back in March to report that ChatGPT had been advising her son to not take his prescribed medication and telling him his parents were dangerous. Another complaint was from someone who claimed that after 18 days of using ChatGPT, OpenAI had stolen their "soul print" to create a software update that had been designed to turn this particular person against themselves. They said, "I'm struggling, please help me. I feel very alone." There are a bunch of other examples, but I'm curious to talk to you about this, because Louise, I know that AI psychosis is something that you have been doing a lot of research on specifically.
Louise Matsakis: Yeah. I think it's important to unpack like, what do we mean by AI psychosis? What's interesting and noteworthy to me about chatbots is not that they're causing people to experience delusions, but they're actually encouraging the delusions. And that's sort of the issue, is that it's this interaction where it's validating people saying like, "Yeah, the paranoia you're experiencing is totally valid." Or like, "Would you like me to unpack why it's definitely the case that your friends and families are conspiring against you?"
The problem is that it's interactive and it can encourage people to spiral further. There's always been people who are experiencing mental health crises and are taking signs that they shouldn't, thinking that a number that they saw somewhere indicates that they're Jesus or that something they saw on social media reflects the fact that they're being followed, or that the FBI is out to get them, or whatever it is. But now, we have these tools that with endless energy and they can go on and on, can directly respond to those delusions and encourage them, and specifically engage with exactly what this person is experiencing, rather than another person who would say, "Hey, you don't seem to be well," or a physical object in the world, that that street sign or something is not going to then flash another number and say like, "You're right. That's your lucky number. That's a sign from God," or whatever. It's really interactive.
Zoë Schiffer: Yeah. I feel like you're getting at something that we've been talking about a lot, which is like, in what ways is this different from other technological shifts that have happened, which have been correlated with certain rises in mental illness?
Louise Matsakis: Yeah. I think that mental illness has always been a part of our species. And new technological developments have always sort of changed how we understand madness, but I do think we're seeing that happen again in this case and that this is really something new. And we should also note that these FTC complaints are part of a growing number of documented incidents of so-called AI psychosis, in which interactions with generative AI chatbots like ChatGPT, but also Google Gemini, have induced or worsened users' delusions. And we know that this has led to a number of suicides. Also, ChatGPT has been implicated in at least I think, one murder. So we're sort of seeing that something is going on here and I don't think we fully understand it.
Zoë Schiffer: Right. And it's interesting, the approach that OpenAI is taking in this moment. Because you and I have both talked to people at the company extensively, and it's clear that they're taking this seriously. They are paying attention to what's going on, and they've rolled out a number of safety features. But what they haven't done is say like, "We're going to shut these conversations down when they happen. We're just not going to engage." They have instead been consulting with mental health experts. They have a council of advisors now who are professionals in this space, and they're really saying some version of, "Look, people turn to us oftentimes when they don't have anyone else to talk to, and we don't think the right thing is to shut it down." Which I don't know, in my mind, it opens OpenAI up to a ton of liability.
Louise Matsakis: It definitely does, and I think that the reality is that they don't understand this either. With any sort of new technology, there's always going to be risks. I think that this is different and really noteworthy and is concerning, but it's not necessarily clear to me that shutting down the conversation or directing people to talk to someone else in their life, that the outcome would change and that it's also hard to tell how serious somebody is. I've written about, and you edited a story that I wrote, Zoë, that showed that sometimes these chatbots slip into role playing, and that's what people want, right? They're like acting out a fantasy. They're maybe working on a science fiction book, or they're engaging in the equivalent of cosplay or fan fiction, right? And the line between fantasizing and exploring dark secrets and believing all of those things, and internalizing them and losing your grip on reality, I think is more subtle than we might think it is or that we want it to be.
Zoë Schiffer: Right. Yeah, yeah. The company is walking this very interesting line right now. On the one hand, it's said very publicly, "We want to treat adults like adults. We want to allow people a lot of freedom in how they interact with ChatGPT if they're over a certain age." On the other hand, they're dealing with these potentially extremely sensitive use cases and they're fending off so many lawsuits at once. So it'll be really curious to see how this all evolves.
Louise Matsakis: Definitely. I think what I would really like to see, and I don't know if this is possible, given that these lawsuits are still ongoing, but I want to see a clinical trial. I think that it would be really powerful for OpenAI to give a lot of this data obviously, anonymized. But give this data to mental health experts who can then systematically look at this. Because I think the scary thing is that mental health professionals are flying blind. I've talked to a number of them who don't necessarily use ChatGPT that much themselves, so they don't even know how to handle a patient who is talking about these things, because it's unfamiliar and this is all so new. But if we had open research that was robust and peer-reviewed and could say, "Okay, we know what this looks like and we can create protocols to ensure that people remain safe," that would be a really good step, I think, towards figuring this out.
Zoë Schiffer: Completely. It is continually surprising to me how even people with a ton of literacy on how these technologies work, slip into anthropomorphizing chatbots or assigning more intelligence than they might actually have. You can imagine the average person that isn't deep in the science of large language models, it's really easy to be completely wowed by what they can do and to start to lose a grip on what you're actually interacting with.
Louise Matsakis: Oh, totally. We are all socialized now to take a lot of meaning from text, right? A lot of us, the primary mode that we communicate with our loved ones, especially if we don't live together, is through texting, right? So it's like you have this similar interface with this chatbot. It's not that unusual that you don't necessarily hear the chatbot's voice, although you can communicate with ChatGPT using voice now, but we already trained to take a lot of meaning from text to believe that there's a person on the other end of that text. And there's a lot of evidence that shows we're not socializing as much as we once did. People feel lonelier. They feel less connected to their communities. They have fewer closer friends. I think we were really primed to feel that way, and I think people shouldn't be ashamed if they feel that way or think that something's wrong with them.
It's totally normal to be engaged by this entity that's paying a lot of attention to you, that's willing to listen to whatever you want to talk about, and then often, is really sycophantic and is really validating. Part of having a healthy relationship with another human is that they're not always going to validate you, right? They're going to have boundaries. They're going to have limits. And I think it can be really alluring to have this presence that doesn't have any of those boundaries and never gets tired of talking to you, never thinks that you're wrong. And it's normal to feel that way, but the question is like, how do we create guardrails?
Zoë Schiffer: Right, exactly. I think we've seen on a national stage what happens when you're surrounded by people who agree with you no matter what, and it's not good.
Louise Matsakis: No, it's not great.
Zoë Schiffer: Louise, thank you so much for joining me today.
Louise Matsakis: Thanks so much for having me.
Zoë Schiffer: That's our show for today. We'll link to all the stories we spoke about in the show notes. Make sure to check out Thursday's episode of Uncanny Valley, which is about why the AI infrastructure boom and the concerns around it have reached a complete fever pitch. Adriana Tapia produced this episode. Amar Lal at Macro Sound mixed this episode. Kate Osborn is our executive producer. Condé Nast head of global audio is Chris Bannon. And Katie Drummond is WIRED's global editorial director.