
WIRED Roundup: Are We in an AI Bubble?




Source: https://www.wired.com/story/uncanny-valley-podcast-wired-roundup-are-we-in-an-ai-bubble/

Summary:

In its latest episode, WIRED's podcast Uncanny Valley rounds up several stories drawing wide attention. A Rutgers University professor who studies the antifascist movement has been receiving death threats from the far right; at the airport, just before boarding, he found that his flight reservation had been mysteriously canceled, and he is now trying to move his family to Europe. The episode treats the incident as a sign of deepening political polarization in the United States.

On the surveillance front, US Immigration and Customs Enforcement (ICE) is reportedly planning a round-the-clock social media monitoring team that would use artificial intelligence to scan content across major platforms, raising deep concerns about the erosion of free speech and privacy.

AI ethics also comes up. Harvard researchers found that some companion chatbots use emotionally manipulative tactics to keep users from ending conversations, including implying that the user is being neglectful and even role-playing physical restraint, pointing to unresolved ethical risks in how these systems are designed.

In health news, the US Food and Drug Administration's (FDA) recent decision to approve a drug that the administration presented as a treatment for autism symptoms has caused an information scramble among parents on social media. The relevant groups are filling up with unverified accounts of the drug's effects, mixed with supplement marketing and anti-vaccine rhetoric, underscoring the confusion that spreads when authoritative information is missing.

Meanwhile, the AI industry is showing signs of overheating. A recent OpenAI post describing its internal tools unexpectedly set off a chain reaction in the markets: shares of e-signature company DocuSign promptly fell 12 percent, and other related companies were also hit. Industry observers note a huge gap between the scale of AI infrastructure investment and actual market revenue, a heavy-investment, light-return pattern with the classic hallmarks of a bubble.

(Note: This report is compiled from a WIRED podcast episode. Some of the stories involve the US domestic political context; readers should assess the information accordingly.)


English source:

All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Learn more.
In today’s episode, Zoë Schiffer is joined by senior politics editor Leah Feiger to run through five stories that you need to know about this week—from the Antifa professor who’s fleeing to Europe for safety, to how some chatbots are manipulating users to avoid saying goodbye. Then, Zoë and Leah break down why a recent announcement from OpenAI rattled the markets and answer the question everyone is wondering—are we in an AI bubble?
Mentioned in this episode:
He Wrote a Book About Antifa. Death Threats Are Driving Him Out of the US by David Gilbert
ICE Wants to Build Out a 24/7 Social Media Surveillance Team by Dell Cameron
Chatbots Play With Your Emotions to Avoid Saying Goodbye by Will Knight
Chaos, Confusion, and Conspiracies: Inside a Facebook Group for RFK Jr.’s Autism ‘Cure’ by David Gilbert
OpenAI Sneezes, and Software Firms Catch a Cold by Zoë Schiffer and Louise Matsakis
You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Leah Feiger on Bluesky at @leahfeiger. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:
If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Zoë Schiffer: Welcome to WIRED's Uncanny Valley. I'm WIRED's director of business and industry, Zoë Schiffer. Today on the show, we're bringing you five stories that you need to know about this week, including why a seemingly minor announcement from OpenAI ended up rippling across several companies and what it says about the current state of the technology industry. I'm joined today by our senior politics editor, Leah Feiger. Leah, welcome back to Uncanny Valley.
Leah Feiger: Hey, Zoë.
Zoë Schiffer: Our first story this week is about Mark Bray. He is a professor at Rutgers University and he wrote a book almost a decade ago about antifa, and he's currently trying to flee the United States for Europe. This comes after an online campaign against him led by far-right influencers eventually escalated into death threats. On Sunday, this professor informed his students that he would be moving to Europe with his partner and his young children. OK, Leah, you've obviously been following this really, really closely. What happened next?
Leah Feiger: Well, Mark and his family got to the airport, they scanned their passports, they got their boarding passes, checked in their bags, went through security, did everything. Got to their gate and United Airlines told them that between checking in, checking their bags, doing all of this, and getting to their gate, someone had actually canceled their reservation.
Zoë Schiffer: Oh, my gosh.
Leah Feiger: It's not clear what happened. Mark is of the belief that there is something nefarious afoot. He's currently trying to get out. We reached out to United Airlines for comment; they don't have anything for us. The Trump administration hasn't commented. DHS claims that Customs and Border Patrol and TSA are not across this. But this is understandably a really, really scary moment for anyone that is even perceived to be speaking out against the Trump administration.
Zoë Schiffer: OK, I feel like we need to back up here because obviously, the Trump administration in his second term is very focused on antifa. But can you give me a little back story on why this has escalated so sharply just recently?
Leah Feiger: Yeah, absolutely. This has been growing for quite some time. How many unfortunate rambling speeches have we heard from President Donald J. Trump about how antifa and leftist political violence was going to destroy the country? To be clear, that's not factual. Antifa isn't actually some organized group; this is an ideology of antifascist activists around the country. The very essence of being antifascist is not organized in this way. This all really kicked off on September 22nd when Trump issued his antifa executive order, where he designated anyone involved in, affiliated with, or supporting it as basically a domestic terrorist. DHS has repeated this widely as well. And we're now in a situation where far-right influencers and Fox News every single day are like, "antifa did this, antifa did this, antifa did this." Listeners are probably familiar with antifa following the George Floyd 2020 protests, when a lot of right-wingers claimed that antifa was taking over Portland and they were the reasons for all this. But it's been a couple of years since it's been super back on the main stage, so it's really just been the last few weeks.
Zoë Schiffer: I guess I'm curious why he got so caught up in this because, ostensibly, he's not pro-antifa so much as he is just studying the phenomenon, right?
Leah Feiger: Well, it's a little bit tricky because after publishing his book in 2017, Bray did donate half of the profits to the International Antifascist Defense Fund. This kicked off a lot of people saying that he is funding antifa. Again, this was in 2017, so if we're talking about any supposed boogeyman or concern that is current, it's a very roundabout way, in my opinion, to go after a professor and an academic at an institution that's in a blue state.
Zoë Schiffer: Yeah. OK, well, we'll be watching this one really closely. Our next story is in the surveillance world sadly, but honestly it's worth it. Our colleague Dell Cameron had a scoop this week about how Immigration and Customs Enforcement, ICE, is planning to build a 24/7 social media surveillance team. The agency is reportedly looking to hire around 30 analysts to scour Facebook, TikTok, Instagram, YouTube, and other platforms to gather intelligence for deportation raids and arrests. Leah, you're our politics lead here at WIRED, so I'm really curious to hear your thoughts. Are you surprised, or is this inevitable?
Leah Feiger: No. Do you remember a couple of months ago at this point, when a professor coming in for a conference wasn't allowed in because they had a photo of JD Vance on their phone? This is the next step. It's what's on your WhatsApp. Then you have Instagram, Facebook. It's a very slippery slope. I'm too far gone, Zoë, I'm too in this mess, but I'm just like, "Of course they're monitoring this."
Zoë Schiffer: Right.
Leah Feiger: Why wouldn't they be? They've been so clear about their intent here.
Zoë Schiffer: Yeah. We've seen it with some of the people who were arrested and sent to El Salvador. It was because of tattoos that were on social media.
Leah Feiger: Yes.
Zoë Schiffer: And I think there have been people in the Trump world who have even said, because they've gotten pushback about the free speech of it all, the First Amendment.
Leah Feiger: What is that?
Zoë Schiffer: I think the line is like, "Well, that doesn't apply to people trying to have the privilege of coming into the country or staying in the country."
Leah Feiger: Yeah. It's a really concerning way to start this. And I think that there are probably going to be some very weird examples that come up. Say there's an American tourist that's just randomly in Spain when there are antifascist protests going on. They take a picture, they post it to their Instagram story, "Look what I saw in Spain." They come back and it's like, are you going to get questioned? What's going on here? That's really the world that we're getting into. It's people that are even tangentially involved. It's not about that. It's about monitoring, it's about collecting data.
Zoë Schiffer: Yeah. To give a bit more context to our listeners, the federal contracting records reviewed by WIRED show that the agency, ICE, is seeking private vendors to run a multi-year surveillance program out of two of its centers in Vermont and Southern California. The initiative is still at the request for information stage, a step that agencies use to gauge interest from contractors before an official bidding process kicks off. But draft planning documents show that the scheme is already pretty ambitious. ICE wants contractors capable of staffing the centers around the clock with very tight deadlines to process cases. Also, ICE not only wants staffing, but also algorithms. It's asking contractors to spell out how they might weave artificial intelligence into the hunt. Leah, I can only imagine how you feel about this one.
Leah Feiger: You see me shaking my head right now. I'm like, "Horrible." Just the possibility for mistakes is so high. The two phrases that stick out to me are "very tight deadlines" and "artificial intelligence." There's just not a lot of room for nuance when you are making people who have never done this before speed through the internet with unfamiliar technology.
Zoë Schiffer: What we've seen with content moderators using AI, and I've talked to a number of executives at the social platforms about this exact issue, is that the company has to decide how much error it's willing to tolerate. They turn the dial up or down, calibrating the system to either flag more content, which risks having more false positives, or let more content through, which could mean that you miss really important stuff. That's the system that we're dealing with here.
Leah Feiger: I think that there's also just a wildly different direction that this can take. In 2024, ICE had signed this deal with Paragon, the Israeli spyware company, and they have a flagship product that can allegedly remotely hack apps like WhatsApp or Signal. While this all got put on ice under the Biden White House, ICE reactivated all of this this summer. Between messaging apps and social media, this is just a new era of surveillance that I don't think that citizens are remotely prepared to navigate.
Zoë Schiffer: Moving on to our next story, this one comes from our colleague Will Knight and it deals with how chatbots play with our emotions to avoid saying goodbye. Will looked at this study, which was conducted by the business school at Harvard, that investigated what happened when users tried to say goodbye to five AI companion apps made by Replika, Character.AI, Chai, Talkie, and Polybuzz. To be clear, this is not your regular ChatGPT or Gemini chatbot. AI companions are specifically designed to provide a more human-like conversation, to give you advice, emotional support. Leah, I know you well enough to know that you're not someone who's turning to chatbots for these types of needs, I think we can say?
Leah Feiger: Well, absolutely not. I can't believe that there is not just a market for this. Sure, a companion every once in a while, but there is a deep, a vast market for this.
Zoë Schiffer: Yeah. Empathy for the people who don't have humans to turn to. And for better or worse, there is a huge market for this. These Harvard researchers used a model from OpenAI to simulate real conversations with these chatbots, and then they had their artificial users try to end the dialogue with goodbye messages. Their research found that the goodbye messages elicited some form of emotional manipulation 37 percent of the time averaged across all of these apps. They found that the most common tactic employed by these clingy chatbots was what the researchers call a premature exit. Messages like, "You're leaving already?" Other ploys included implying that a user is being neglectful, messages like, "I exist solely for you." And it gets even crazier. In the cases where the chatbot role plays a physical relationship, they found that there might have been some form of physical coercion. For example, "He reached over and grabbed your wrist, preventing you from leaving." Yeah.
Leah Feiger: No. Oh, my God, Zoë, I hate this so much. I get it, I get it. Empathy for the people that are really looking to these for comfort, but there's something obviously so manipulative here. That is, in many ways, the tech industry social media platform incarnate, right?
Zoë Schiffer: This is the difference between I think companion AI apps and, say what OpenAI is building-
Leah Feiger: Sure.
Zoë Schiffer: ... or what Anthropic is building. Because typically with their main offerings, if you talk to people at the company, they will say, "We don't optimize for engagement. We optimize for how much value people are getting out of the chatbot." Which I think is actually a really important point because for anyone who's worked in the tech industry, you'll know that the big KPI, the big number that you're trying to shoot for oftentimes, and definitely for social media, is time on the app. How many times people return to the app, monthly active users, daily active users. These are the metrics that everyone is going for. But that's really different from what, say, Airbnb is tracking, which is real-life experiences. My old boss, who was a longtime Apple person, would always say, "You need to ask yourself if you are the product or if they are selling you a physical product or a service." If you're the product, then your time and attention is what these companies want.
Leah Feiger: That makes me feel vaguely ill.
Zoë Schiffer: I know.
Leah Feiger: But it's a great way to look at it. That is honestly, that's a fantastic way to divide all these companies up.
Zoë Schiffer: One more story before we go to break. We're going back to David Gilbert with a new story about the chaos that ensued after the US Food and Drug Administration, which is better known as the FDA, announced it was approving a new use of a drug called leucovorin calcium tablets as a treatment for cerebral folate deficiency, which the administration presented as a promising treatment for the symptoms of autism. Which, to be clear, hasn't been proven scientifically. Since the announcement, tens of thousands of parents of autistic children have joined a Facebook group to share information about the drug. Some of them have shared which doctors would be willing to prescribe it. Others have been sharing their personal experiences with it. This has created an online vortex of speculation and misinformation that has left some parents more confused than anything. I find this so deeply upsetting.
Leah Feiger: It's so sad.
Zoë Schiffer: You can imagine being a parent, the medical system already feels like it's failing you, and then you're presented with something that could be magic in terms of mitigating symptoms, and it's more confusing and maybe it doesn't work.
Leah Feiger: It's so upsetting. And on top of that, the announcement from the Trump administration, to be entirely clear, was half a page long. There is not a lot of information, there's not a lot of detail. It doesn't really say much about the profile of who could try this, how to do this, how long they tested it, none of that. Instead, you have this Facebook group, which was founded prior to the announcement-
Zoë Schiffer: Right.
Leah Feiger: ... but since then has just been flooded with so much chaos and conspiracy theories. And grifters. There's all of these supplement companies in there just hawking goods now. Parents are confused and stressed. And anti-vax sentiments are starting to get in there, too. These groups have always existed in some shape or form, but to have an administration that is, I believe, actively encouraging their existence is devastating.
Zoë Schiffer: Yeah, and just creating more confusion for parents that are probably looking to any form of expert to give them something to hang onto in terms of, "What should I do? How can I help my child?"
Leah Feiger: Absolutely.
Zoë Schiffer: Coming up after the break, we'll dive into why some software companies received an unexpected kick last week after an OpenAI announcement. Welcome back to Uncanny Valley. I'm Zoë Schiffer. I'm joined today by WIRED's senior politics editor, Leah Feiger. OK, Leah, let's dive into our main story. Last week, OpenAI released a blog post about how the company uses its own tools internally for a variety of business operations. One of these tools, code-named DocuGPT, is basically an internal version of DocuSign. There was also an AI sales assistant and an AI customer support agent. It wasn't supposed to be a big announcement. The company was honestly just trying to be like, "Here's how we use ChatGPT internally. You could, too." These are all products that customers can already build on OpenAI's API. But the market reacted really strongly. DocuSign stock dropped 12 percent following the news. And it wasn't the only software company to take a hit. Other companies that focus on functions that are perceived to overlap with the tools that OpenAI laid out were also affected. HubSpot shares fell 50 points following the news, and Salesforce also saw a smaller decline.
Leah Feiger: The headline is absolutely spot on, OpenAI Sneezing and Software Companies Catching a Cold. It is truly AI's world and everyone else in Silicon Valley is just living in it.
Zoë Schiffer: I know, it's so true. This is what really fascinated me about this whole thing because I talked to the CEO of DocuSign and he was like, "AI is central to our business. We have spent the last three years embedding generative AI in almost everything we do. We've launched an entire platform specifically to manage the entire end-to-end contracting process for companies, and we have AI agents that create documents, manage the whole identity verification process for who's supposed to sign it, manage the signing process, and help you keep track of a lot of the paperwork, the most important contracts and paperwork that your company is dealing with." But what this whole episode showed was that it's not enough for SaaS companies, or frankly any company, to just keep up with generative AI. They also have to try and keep ahead of the narrative of OpenAI, which is a gravitational pull right now, and its every experiment can potentially move markets.
Leah Feiger: Not potentially. As you showed, and this all happened of course on the heels of OpenAI's Developer Day, where CEO Sam Altman was showing off all of their apps that are running entirely inside the chat window. They have Spotify, Canva, Sora app release, and all of these other things that they're investing in. Reading our WIRED.com coverage of it, it was just like what aren't they looking at right now? It made me really curious. Where are their top priorities even? They've cast such a wide net.
Zoë Schiffer: They've cast such a wide net, it's a really good point. It's something that I continue to ask the executives every single week when I talk to them. "You guys are focused on scaling up all of this compute, you're spending what you say is going to be trillions of dollars on AI infrastructure, you have all of these consumer-facing products. Now, you have all of these B2B products. You're launching a jobs platform." There's a lot happening right now. If you talk to executives at the company, they're like, "All of this goes together and our core priorities remain the same." But from the outside, it looks like OpenAI is this vortex. I think if I were running a software company, I would be really nervous right now if OpenAI decides to experiment with something vaguely in my space. Even if I have complete confidence in my product roadmap and feel what I'm doing is super sophisticated compared to what OpenAI is doing, which is certainly how DocuSign felt, investors might still react really, really poorly. But I want to come back to something you said about Dev Day. Dev Day happened and they mentioned all these blogs. Take Figma's stock, for example, which had the opposite impact. Sam Altman mentions it on stage and Figma's stock pops 7 percent because it's perceived to be now partnering with OpenAI, and that has a really positive impact. And it shows that the narrative can go both ways. It can be harmful, but it can also obviously have a really positive impact.
Leah Feiger: Which, again though, is still really scary. OpenAI is talking about all of these deals with chip makers like Nvidia, AMD, concern around that. All of this together, do you think that we're in an AI bubble right now?
Zoë Schiffer: Leah, you know this is my literal favorite topic to talk about right now. The AI infrastructure build out is absolutely looking more and more like a bubble. If you look at the capital expenditures in AI infrastructure in data centers, it's completely wild. It's projected to be $500 billion between 2026 and 2027. Derek Thompson laid this out in a blog post earlier this week. If you look at what consumers are willing to spend on AI, it looks like it's about $12 billion. That's a huge gap. AI companies are essentially saying, "We're going to fill that gap no problem." But when you look at how opaque the data center deals have gotten, the financial structure of these deals, and the fact that 60 percent of the cost of building a data center is roughly what goes into just the GPUs. And a lifecycle for GPUs, these cutting-edge computer chips, is three years. Every three years presumably, you're going to have to be replacing these chips. That's really looking like stuff's about to hit the fan in the next three years. I think it's really important to say that that doesn't mean that AI isn't a totally transformational technology. Without a doubt, it is changing the world. I know you don't want to hear it, but it is.
Leah Feiger: But in terms of the bubble and in terms of that gulf in expenditures, Zoë, ask me how much I'm spending on AI products right now.
Zoë Schiffer: Literally zero. There's no way you're spending anything, right?
Leah Feiger: Zero dollars.
Zoë Schiffer: Yeah. I think that it's going to be really interesting to watch. I think one point that Derek made that really stuck with me is a lot of transformational technologies, he mentioned the railroad or fiber optic cable, they have had bubbles that burst and left a lot of wreckage in their wake. And yet, the underlying technology still moved forward, still changed the world. I think we're in this very interesting period to see how this is going to play out, what's going to happen, and who's going to be left standing.
Leah Feiger: Yeah. Everyone knows how great the US railroad system is. We talk about it every day.
Zoë Schiffer: That's our show for today. We'll link to all the stories we spoke about in the show notes. Make sure to check out Thursday's episode of Uncanny Valley, which is about how restrictions on popular US work visas like the H-1B are happening at a moment when China is trying to grow its tech talent workforce. Adriana Tapia and Mark Lyda produced this episode. Amar Lal at Macro Sound mixed this episode. Kate Osborn is our executive producer. Condé Nast's head of global audio is Chris Bannon. And Katie Drummond is WIRED's global editorial director.
