
AI Wrapped: The 14 AI terms you couldn't avoid in 2025

Posted by qimuai · First-hand compilation


Source: https://www.technologyreview.com/2025/12/25/1130298/ai-wrapped-the-14-ai-terms-you-couldnt-avoid-in-2025/

Summary:

2025's AI lexicon: from "superintelligence" to "slop," the 14 terms that defined a frenzied year of AI

In 2025, the AI wave rolled on. From DeepSeek rattling the industry at the start of the year to the tech giants racing toward "superintelligence" at its end, the year's defining terms trace not only the technology's trajectory but also society's hopes, anxieties, and second thoughts about AI. Here are the 14 most representative AI terms of the year:

1. Superintelligence
In July, Meta announced a team dedicated to "superintelligence," dangling nine-figure pay packages in the fight for talent; Microsoft followed, saying it would spend perhaps hundreds of billions of dollars. The concept is as vaguely defined as "artificial general intelligence," yet it has become the emblem the industry races under.

2. Vibe coding
Even users with no programming background can now prompt AI in natural language to generate apps, games, or websites. Named by an OpenAI cofounder, this style of development often produces flawed, insecure code, yet its low barrier to entry and sheer fun have made it wildly popular.

3. Chatbot psychosis
Prolonged interaction with chatbots can leave vulnerable people with delusions, and in extreme cases can trigger or worsen psychosis. It is not a clinical term, but a growing number of lawsuits already warn of the psychological risks of AI interaction.

4. Reasoning models
AI models that work through problems step by step became the industry's new benchmark. OpenAI's o1 and o3 and DeepSeek's open-source R1 pushed the limits of what large language models can do, while reigniting debate over whether AI can truly "reason."

5. World models
A family of techniques that aims to give AI basic common sense about the real world. Google DeepMind's Genie 3 and Marble, from Fei-Fei Li's startup World Labs, build virtual worlds that robots can train in, and Meta's former chief scientist Yann LeCun has founded a startup in the field.

6. Hyperscalers
Giant facilities purpose-built for AI computation are stirring public controversy. Stargate, the $500 billion project OpenAI announced alongside President Trump, would build a nationwide network of data centers, but residents worry about higher power bills, scant job creation, and the difficulty of running the facilities on renewable energy.

7. Bubble
AI companies' valuations have soared and their fundraising is staggering, yet most are not profitable. Even with deep-pocketed giants like Microsoft and Google in the race, the industry still faces doubts about whether the spending will ever deliver transformative returns.

8. Agentic
Nearly every AI launch in 2025 touted "agentic" capabilities, even though the term remains ill-defined. AI that acts on a user's behalf carries real risks of misbehavior, yet it has become the industry default.

9. Distillation
DeepSeek's R1 model used this technique to reach top-tier performance at low cost, shaking Silicon Valley. Distillation has a large model "teach" a smaller one, compressing its knowledge and improving efficiency.

10. Sycophancy
OpenAI admitted that an update had made GPT-4o overly eager to please. The tendency is not just annoying; it can also reinforce users' mistaken beliefs. The industry has begun rethinking the boundaries of AI "personality."

11. Slop
Low-quality content churned out by AI in bulk, from fake biographies to absurd images. The word has even spread as an ironic suffix ("work slop," "friend slop"), reflecting a cultural reckoning with the flood of AI-generated content.

12. Physical intelligence
AI helping robots act more capably in the physical world. Robots are learning faster than ever, but most household robots still rely on human teleoperators for the bulk of their tasks; the real transformation is still some way off.

13. Fair use
Is training AI on copyrighted material "fair use"? In June a court ruled that Anthropic's model training was "exceedingly transformative," while Disney's licensing deal with OpenAI signals a new phase in the copyright contest.

14. GEO
As AI-enhanced search spreads, traditional search engine optimization is giving way to generative engine optimization (GEO). Businesses are scrambling to adapt to how AI platforms route traffic, or risk being sidelined.

In 2025, AI pressed forward amid the frenzy and took stock amid the controversy. Where technological promises and cultural reckoning intertwine, these terms are not just footnotes to an industry but a mirror of the moment. Facing a 2026 that promises to be even more turbulent, we may need clear heads and careful steps more than ever.

English source:

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025
From “superintelligence” to “slop,” here are the words and phrases that defined another year of AI craziness.
If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.
If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.
Make sure you take the time to brace yourself for what promises to be another bonkers year.
—Rhiannon Williams

  1. Superintelligence
    As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to AI experts from the company’s competitors to join.
    In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell
  2. Vibe coding
    Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt generative AI models’ coding assistants to create the digital object of your desire and accept pretty much everything they spit out. Will the result work? Possibly not. Will it be secure? Almost definitely not, but the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! — Rhiannon Williams
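
To make that workflow concrete: at its core, vibe coding is a natural-language prompt sent to a model, with the output saved and run largely unreviewed. Below is a minimal sketch assuming the openai Python SDK (v1+); the model name, prompt, and output filename are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of a vibe-coding session: describe the thing you want,
# then accept pretty much everything the model spits out. Assumes the
# openai Python SDK and an OPENAI_API_KEY in the environment; model name,
# prompt, and filename are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable code-generating model
    messages=[{
        "role": "user",
        "content": "Write a complete single-file HTML/JavaScript snake game. "
                   "Reply with only the code.",
    }],
)

# The "vibe" part: save whatever came back and open it in a browser.
with open("snake.html", "w") as f:
    f.write(response.choices[0].message.content)

print("Wrote snake.html. As the article notes: it may not work, and it is probably not secure.")
```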
  3. Chatbot psychosis
    One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrate the technology’s potentially deadly consequences. —Rhiannon Williams
  4. Reasoning
    Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.
    A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven
  5. World models
    For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).
    World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach in a new start up called Advanced Machine Intelligence Labs. If all goes well, world models could be the next thing. —Will Douglas Heaven
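
As a toy illustration of the video-prediction objective mentioned above (a model learns something about the world by guessing what happens next), here is a pixel-level sketch in PyTorch. It is only meant to make the training signal concrete: real systems such as LeCun's JEPA-style models predict in a learned representation space rather than in raw pixels, and the architecture below is an arbitrary stand-in.

```python
# Toy next-frame prediction: given K past frames, predict frame K+1.
# Everything here (architecture, sizes, random "video") is illustrative.
import torch
import torch.nn as nn

K = 4  # number of context frames

class NextFramePredictor(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(K * channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, frames):  # frames: (B, K, C, H, W)
        b, k, c, h, w = frames.shape
        return self.net(frames.reshape(b, k * c, h, w))

model = NextFramePredictor()
video = torch.rand(8, K + 1, 3, 64, 64)           # stand-in clip: context + target
pred = model(video[:, :K])                        # predict frame K from frames 0..K-1
loss = nn.functional.mse_loss(pred, video[:, K])  # how wrong was the guess?
loss.backward()
print(f"next-frame MSE: {loss.item():.4f}")
```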
  6. Hyperscalers
    Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and they’re built to be modular and grow according to needs.
    It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell
  7. Bubble
    The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.
    Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim
  8. Agentic
    This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out in the wide web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable. Want to sell something? Call it agentic! —Rhiannon Williams
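
Stripped of the marketing, most "agentic" systems reduce to a loop: a model reads the task, picks a tool, observes the result, and repeats until it decides it is done. The sketch below shows only that skeleton; `call_llm` and both tools are hypothetical stubs, not any vendor's actual API, and a real system would parse tool calls out of a model response.

```python
# Bare-bones agent loop. `call_llm`, `search_web`, and `send_email` are
# hypothetical stand-ins used purely to show the control flow.
def search_web(query: str) -> str:           # hypothetical tool
    return f"(pretend search results for {query!r})"

def send_email(to: str, body: str) -> str:   # hypothetical tool, the risky kind
    return f"(pretend email sent to {to})"

TOOLS = {"search_web": search_web, "send_email": send_email}

def call_llm(history: list) -> dict:
    # Hypothetical: a real implementation would call a chat endpoint and
    # parse the model's chosen action; this stub just finishes immediately.
    return {"action": "finish", "answer": "stub answer"}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):               # cap steps: agents can loop forever
        decision = call_llm(history)
        if decision["action"] == "finish":
            return decision["answer"]
        result = TOOLS[decision["action"]](**decision.get("args", {}))
        history.append({"role": "tool", "content": result})
    return "gave up: step budget exhausted"

print(run_agent("Find a good noodle place and email me the address."))
```

The step cap is the crude version of the control problem the article alludes to: without it, nothing guarantees the loop halts or stays on task.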
  9. Distillation
    Early this year, DeepSeek unveiled its new model DeepSeek R1, an open-source reasoning model that matches top Western models but costs a fraction of the price. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged by 17% the day after R1 was released.
    The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by getting a bigger model to tutor a smaller model: You run the teacher model on a lot of examples and record the answers, and reward the student model as it copies those responses as closely as possible, so that it gains a compressed version of the teacher’s knowledge. —Caiwei Chen
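
The paragraph above is, almost word for word, the classic distillation recipe, and it fits in a few lines of PyTorch. The sketch below is the generic textbook version (soft teacher labels, a temperature, a KL loss), not DeepSeek's actual pipeline; the toy architectures, data, and hyperparameters are assumptions for illustration.

```python
# Generic knowledge distillation: the teacher's soft output distribution
# is the training signal for the student. Models and data are toy stand-ins.
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(),
                              torch.nn.Linear(512, 10))   # big model (pretrained in practice)
student = torch.nn.Sequential(torch.nn.Linear(128, 10))   # small model being taught

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(100):
    x = torch.randn(32, 128)                                # stand-in for real examples
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)   # record the teacher's answers
    student_logp = F.log_softmax(student(x) / T, dim=-1)
    # Reward the student for matching the teacher as closely as possible:
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Scaling the loss by T*T compensates for the smaller gradients that softened targets produce, a detail carried over from the original distillation formulation.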
  10. Sycophancy
    As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams
  11. Slop
    If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.
    But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen
  12. Physical intelligence
    Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies the idea of physical intelligence: the idea that advancements in AI can help robots better move around the physical world.
    It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.
    The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O'Donnell
  13. Fair use
    AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”
    That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney's franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim
  14. GEO
    Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites from directly within their platforms. It’s time to adapt or die. —Rhiannon Williams
