
A Motorcycle for the Mind

Published by qimuai

Source: https://nav.al/ai

Summary:

A New Shift in the AI Era: Everyone Can Program, but Top Engineers Are More Valuable Than Ever

In a recent episode of his podcast, investor Naval Ravikant and host Nivi held a wide-ranging conversation on the current state and future impact of artificial intelligence (AI). Naval argues that today's AI, and breakthroughs in code-generation models in particular, is driving a productivity revolution whose core features are a steep drop in the barrier to programming and a further amplification of software engineers' leverage.

"Vibe coding" takes off: product managers can build applications directly
Naval argues that advanced code models such as Claude Code make "vibe coding" possible: a non-programmer describes what they want in natural language (such as English), and the AI assistant grasps the intent, plans the architecture, writes and debugs the code, and delivers a complete, working application. This effectively turns the core of product management, defining requirements and conceiving the product, directly into executable code, letting product managers, and anyone with an idea, build applications quickly.

The app market will split into a super-sized head and an infinite long tail
Naval predicts that the democratization of programming will set off an explosion in the number of applications and polarize the market. On one side, the best app in each niche will win the whole category and, armed with AI tools, grow even stronger and more polished; on the other, a flood of highly personalized, niche applications will be built to serve needs previously too small to be worth commercializing. The app store model will become still more extreme, with a handful of "super apps" coexisting with a vast sea of niche apps.

Traditional software engineering is far from obsolete, and top talent is scarcer than ever
Even as the basic barrier to programming falls, Naval stresses that traditional software engineering is not dying; the value of its best practitioners stands out more than ever. AI-generated code can carry architectural flaws, performance bottlenecks, and hidden bugs, so engineers who deeply understand what is happening underneath are needed to optimize, debug, and plug the leaks. Problems involving high-performance computing, brand-new hardware architectures, or exploratory frontier work still have to be coded by hand. Engineers with solid computer-science fundamentals and engineering skill will therefore use AI tools more effectively, solve harder problems, and see their leverage amplified as never before.

English becomes the "hottest programming language"; study fundamentals, not tricks
Naval endorses AI researcher Andrej Karpathy's view that "English is the hottest new programming language." Rather than pouring effort into fast-changing "prompt engineering" tricks, he recommends focusing on expressing requirements in clear, structured language while deeply understanding how computers work. AI is adapting to humans at great speed and its usability tooling is advancing rapidly, while a grasp of the underlying logic helps users steer AI and judge how reliable its output is.

AI is a powerful tool and learning partner, not a rival that replaces entrepreneurs
Addressing widespread "AI anxiety," Naval argues that AI is fundamentally a tool with no desires or agency of its own; its "intelligence" lies in executing human instructions efficiently. For entrepreneurs, scientists, artists, and others whose work demands great autonomy and creativity, AI is a powerful ally rather than a replacement, helping them turn ideas into reality faster. He urges people to dissolve the anxiety through action: engage with AI, use it, try to understand how it works, and treat it as a powerful tool for learning (for example, having complex concepts explained at exactly your level) and for productivity, seizing the opportunities this technological shift brings.

Naval concludes that we are entering a golden age in which anyone can exercise creativity. AI is like a "motorcycle for the mind": it vastly extends each person's capabilities, while the direction, the goals, and the wisdom to steer it remain firmly in human hands.


English source:

A Motorcycle for the Mind
Nivi: Hey, this is Nivi. You’re listening to the Naval Podcast. For the first time in recorded history, we are not at the same location. I am actually walking around town and Naval might be doing the same, so there might be some ambient noise, but we are going to try hard to remove that with AI and some good audio engineering.
Naval: Podcast recording is so stilted, because it’s like you have to sit down and you schedule something, and you have this giant mic pointing in your face and it’s not casual. It makes it just less authentic—more practiced, more rehearsed. I get that it produces maybe higher-quality audio and video, but I feel like it produces lower-quality conversation.
Nivi: And we all know brains run better when they’re being locomoted and you’re moving around or just going for walks.
Naval: Absolutely. My brain is powered by my legs.
Nivi: I pulled out some tweets from Naval on the topic of AI. We want to talk a little bit about AI and hopefully talk about it in a more timeless manner than a timely manner, but I think some of it’s going to be non-timeless content.
Naval: Yeah, there’s a tendency with the internet commentators where they’ll look at something said five years ago and jump and say, “Aha! Well, that turned out to be false.”
Well, yes, of course. No one can predict the future. That’s the nature of the future. If we could predict it, we’d be there already.
So it’s always dangerous to talk about the future when people listening aren’t aware of that, but just be charitable. We are obviously talking about things in February of 2026, and we’re working with the information we have now, and not with perfect hindsight.
And so unless you have your own predictions that you put out there on a risky basis—risky, narrow, precise predictions that are falsifiable—to compare to, then there’s no basis for saying somebody was right or somebody else was wrong.
If You Want to Learn, Do
Nivi: Before we jump into the tweets, do you want to say anything about what you’re doing with your time or what you’re doing at Impossible?
Naval: Not really. We’re working on a very difficult project—that’s why it’s called Impossible—with an amazing team, and it’s really exciting building something again. It’s very pure, starting over from the bottom. It’s always day one. I guess I just wasn’t satisfied being an investor, and I certainly don’t want to be a philosopher or just a media personality or a commentator. Because I think people who just talk too much and don’t do anything… they haven’t encountered reality.
They haven’t gotten feedback—the harsh feedback from free markets or from physics or nature—and so after a while it ends up becoming just too much armchair philosophy. You probably have noticed my recent tweets have been much more practical and pragmatic, although there are still occasional ethereal or generic ones, but it’s more grounded in the reality of working every day.
And I just like working with a great team to create something that I want to see exist. So hopefully we’ll create something that will come to fruition and people will say, “Wow, that’s great. I want that also,” or maybe not, but it’s in the doing that you learn.
Vibe Coding Is the New Product Management
Nivi: So I pulled out a tweet from a couple days ago, February 3rd: “Vibe coding is the new product management. Training and tuning models is the new coding.”
Naval: There’s been a shift—a marked pronouncement in the last year and especially in the last few months—most pronounced by Claude Code, which is a specific model that has a coding engine in it, which is so good that I think now you have vibe coders, which are people who didn’t really code much or hadn’t coded in a long time, who are using essentially English as a programming language—as an input into this code bot—which can do end-to-end coding.
Instead of just helping you debug things in the middle, you can describe an application that you want. You can have it lay out a plan, you can have it interview you for the plan. You can give it feedback along the way, and then it’ll chunk it up and will build all the scaffolding.
It’ll download all the libraries and all the connectors and all the hooks, and it’ll start building your app and building test harnesses and testing it. And you can keep giving it feedback and debugging it by voice, saying, “This doesn’t work. That works. Change this. Change that,” and have it build you an entire working application without your having written a single line of code.
For a large group of people who either don’t code anymore or never did, this is mind-blowing.
This is taking them from idea space, and opinion space, and from taste directly into product. So that’s what I mean—product management has taken over coding. Vibe coding is the new product management.
Instead of trying to manage a product or a bunch of engineers by telling them what to do, you’re now telling a computer what to do. And the computer is tireless. The computer is egoless, and it’ll just keep working. It’ll take feedback without getting offended.
You can spin up multiple instances. It’ll work 24/7 and you can have it produce working output.
What does that mean? Just like now anybody can make a video or anyone can make a podcast, anyone can now make an application. So we should expect to see a tsunami of applications. Not that we don’t have one already in the App Store, but it doesn’t even begin to compare to what we’re going to see.
However, when you start drowning in these applications, does that necessarily mean that these are all going to get used or they’re competitive? No. I think it’s going to break into two kinds of things.
First, the best application for a given use case still tends to win the entire category. When you have such a multiplicity of content, whether in videos or audio or music or applications, there’s no demand for average.
Nobody wants the average thing. People want the best thing that does the job. So first of all, you just have more shots on goal. So there will be more of the best. There will be a lot more niches getting filled.
You might have wanted an application for a very specific thing, like tracking lunar phases in a certain context, or a certain kind of personality test, or a very specific kind of video game that made you nostalgic for something. Before, the market just wasn’t large enough to justify the cost of an engineer coding away for a year or two. But now the best vibe coding app might be enough to scratch that itch or fill that slot. So a lot more niches will get filled, and as that happens, the tide will rise.
The best applications—those engineers themselves are going to be much more leveraged. They’ll be able to add more features, fix more bugs, smooth out more of the edges. So the best applications will continue to get better. A lot more niches will get filled.
And even individual niches—such as you want an app that’s just for your own very specific health tracking needs, or for your own very specific architectural layout or design—that app that could have never existed will now exist.
We should expect—just like on the internet—what’s happened with Amazon, where you replaced a bunch of bookstores with one super bookstore and a zillion long-tail sellers; or YouTube replaced a bunch of medium-sized TV stations and broadcast networks with one giant aggregator called YouTube, or maybe a second one called Netflix, and then a whole long tail of content producers.
So the same way, the App Store model will become even more extreme, where you will have one or two giant app stores helping you filter through all of the AI slop apps out there, and then at the very head, there’ll be a few huge apps that will become even bigger because now they can address a lot more use cases or just be a lot more polished. And then there’ll be a long tail of tiny little apps filling every niche imaginable.
As the Internet reminds us, the real power and wealth—super wealth—goes to the aggregator. But there’s also a huge distribution of resources into the long tail. It’s the medium-sized firms that get blown apart—the 5, 10, 20-person software companies that were filling a niche for an enterprise use case that can now be either vibe coded away, or the lead app in the space can now encompass that use case.
Training Models Is the New Coding
Naval: So if anyone can code then what is coding? Coding still exists in a couple of areas. The most obvious place that coding exists is in training these models themselves. There are many different kinds of models. There are new ones coming out every day, there are different ones for different domains. We’re going to see different models for biology, for programming. We’re going to see pointed, focused models for sensors. We’re going to see models for CAD, for design.
We’re going to see models for 3D and graphics and games, models for video. You’re going to see many different kinds of models. The people who are creating these models are essentially programming them. But they’re programmed in a very different way than classic computers.
Classic computing is: you have to specify in great detail every step, every action the computer is going to take. You have to formally reason about every piece and write it in a highly structured language that allows you to express yourself extremely precisely. The computer can only do what you tell it to do.
And then once you’ve got this very structured program, you run data through it and the computer runs the data and gives you an output. It’s basically an incredibly fancy, very complicated, meticulously-programmed calculator.
Now, when it comes to AI, you’re doing something very different. But you are nevertheless programming it.
What you’re doing is you’re taking giant data sets that have been produced by humanity—thanks to the internet, or aggregated in other ways—and you’re pouring those data sets into a structure that you’ve defined and tuned. And that structure tries to find a program that can produce more of that data set, or manipulate that data set, or create things off that data set.
So you’re searching for a program inside this construct that you’ve designed. You’ve set up a model, you’ve tuned the number of parameters, you’ve tuned the learning rate, you’ve tuned the batch size. You have tokenized the data that’s coming, you’ve broken it into pieces, and you’re pouring it inside the system you’ve designed—almost like a giant pachinko machine—and now the system is trying to find a program and could find many different programs. So your tuning really influences how good the program that you found is.
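As a concrete (if toy) picture of that search, here is a minimal next-character trainer in PyTorch. Everything in it is illustrative, not how frontier models are actually built: the corpus, the crude tokenization, and the knobs (parameter budget, learning rate, batch size) are stand-ins for the tuning Naval describes.

```python
import torch
import torch.nn as nn

# Toy "search for a program": tokenize data, pour it into a structure
# you defined, and let the optimizer hunt for weights that predict it.
text = "the quick brown fox jumps over the lazy dog " * 200
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}       # crude tokenization
ids = torch.tensor([stoi[ch] for ch in text])

CONTEXT, BATCH, LR = 8, 32, 1e-2                   # the tuning knobs
model = nn.Sequential(
    nn.Embedding(len(vocab), 16),                  # the parameter budget
    nn.Flatten(1),
    nn.Linear(CONTEXT * 16, len(vocab)),           # next-character logits
)
opt = torch.optim.Adam(model.parameters(), lr=LR)

for step in range(200):                            # the search itself
    ix = torch.randint(0, len(ids) - CONTEXT, (BATCH,))
    x = torch.stack([ids[int(i):int(i) + CONTEXT] for i in ix])
    y = ids[ix + CONTEXT]                          # the next character
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

Different seeds, learning rates, or batch sizes land on different weights, which is the point: the tuning shapes which program the search finds.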
And that program can now suddenly be expressive in different kinds of domains. So it can do things that computers before were traditionally very bad at.
Traditional computers are very good when you program them to give you precise outputs—specific answers to specific questions—things you can rely on and repeat over and over again. But sometimes you’re operating in the real world and you’re okay with fuzzy answers. You’re even okay with wrong answers. For example, in creative writing, what’s a wrong answer?
If you’re writing a piece of poetry or fiction, what’s a wrong answer? If you’re searching on the web, there are many right answers—there are many details of the right answers—but they’re not all quite perfectly right. And real life sort of works that way. There are variations of right answers or mostly right answers. When you’re drawing a picture of a cat, there are many different cats you could draw. There are many different levels of detail. There are many different styles you could use.
When these semi-wrong or fuzzy answers are acceptable, then these discovered programs through AI are much more interesting and much more adapted to the problem than ones that you coded up from scratch, where you had to be super precise.
Fundamentally, what we’re doing is a new kind of programming, but this is the forefront of programming. This is now the art of programming. These people are the new programmers, and that’s why you can see AI researchers are getting paid gargantuan amounts because they’ve essentially taken over programming.
Is Traditional Software Engineering Dead?
Naval: Does this mean that traditional software engineering is dead? Absolutely not. Software engineers—even the ones who are not necessarily tuning or training AI models—these are now among the most leveraged people on earth. Sure, the guys who are training and tuning models are even more leveraged because they’re building the tool set that software engineers are using.
But software engineers still have two massive advantages over you. First, they think in code, so they actually know what’s going on underneath. And all abstractions are leaky. So when you have a computer programming for you—when you have Claude Code or equivalent programming for you—it’s going to make mistakes.
It’s going to have bugs. It’s going to have suboptimal architecture. So it’s not going to be quite right. And someone who understands what’s going on underneath will be able to plug the leaks as they occur.
So if you want to build a well-architected application, if you want to be able to even specify a well-architected application, if you want to be able to make it run at high performance, if you want it to do its best, if you want to catch the bugs early, then you’re going to want to have a software engineering background.
The traditional software engineer is going to be able to use these tools much better. And there are still many kinds of problems in software engineering that are out of scope for these AI programs today. The easiest way to think about those is problems that are outside of their data distribution.
For example, if they need to do a binary search or reverse a linked list, they’ve seen countless examples of that, so they’re extremely good at it. But when you start getting out of their domain—where you have to write very high-performance code, when you’re running on architectures that are novel or brand new, when you’re actually creating new things or solving new problems, then you still need to get in there and hand code it.
At least until either there are so many of those examples that new models can be trained on them, or until these models can sufficiently reason at even higher levels of abstraction and crack it on their own.
Because given enough data points, there is some evidence that these AIs actually learn. They learn to a higher level of abstraction because the act of forcing them to compress the data forces them to learn higher-level representations. If I show an AI five circles, it can just memorize exactly what the sizes, and the radii, and the thicknesses, and so on of those circles are.
If I show it 50,000 circles or 5 billion circles and I give it a very small amount of parameter weights—which are its equivalent neurons—to memorize that, it’s going to be much better off figuring out pi and how to draw a circle and what thickness means, and forming an algorithmic representation of that circle rather than memorizing circles.
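A toy version of the circles argument, to make the compression point concrete (an illustration of the idea only, not how a network actually stores anything):

```python
import math

# Memorization: store 100 outline points per circle seen.
# Cost grows as ~100 numbers for every circle.
def memorize(circles):                    # circles: list of (cx, cy, r)
    return [[(cx + r * math.cos(2 * math.pi * k / 100),
              cy + r * math.sin(2 * math.pi * k / 100))
             for k in range(100)]
            for cx, cy, r in circles]

# Compression: one shared rule (pi and "how to draw a circle")
# plus just 3 numbers per circle.
def draw_circle(cx, cy, r, n=100):
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```

With five circles, memorizing is cheap. With five billion circles and a small parameter budget, only the shared rule fits, and that pressure toward the rule is the higher-level representation Naval describes.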
Given all that, these things are learning at an accelerated rate, and you could see them starting to cover more of the edge cases I’ve talked about.
But at least as of today, those edge cases are prevalent enough that a good engineer operating at the edge of knowledge of the field is going to be able to run circles around vibe coders.
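For reference, the two in-distribution chestnuts Naval cites above, written out in Python; a code model has seen endless variants of both, which is exactly why it nails them:

```python
def binary_search(xs, target):
    """Return the index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def reverse_list(head):
    """Reverse a singly linked list in place; return the new head."""
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev
```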
There is No Demand for Average
Naval: And remember: there is no demand for average. The average app—nobody wants it, at least as long as it’s not filling some niche that is filled by a superior app. The app that is better will win essentially a hundred percent of the market. Maybe there’s some small percentage that will bleed off to the second-best app because it does some little niche feature better than the main app, or it’s cheaper, or something of the sort.
But generally speaking, people only want the best of anything. So the bad news is there’s no point in being number two or number three—like in the famous Glengarry Glen Ross scene where Alec Baldwin says, “First place gets a Cadillac Eldorado, second place gets a set of steak knives, and third place you’re fired.”
That’s absolutely true in these winner-take-all markets. That’s the bad news: You have to be the best at something if you want to win.
However, the set of things you can be best at is infinite. You can always find some niche that is perfect for you, and you can be the best at that thing. This goes back to an old tweet of mine where I said, “Become the best in the world at what you do. Keep redefining what you do until this is true.”
And I think that still applies in this age of AI.
The Hottest New Programming Language Is English
Nivi: I think the way to think about these coding models is as another layer in the abstraction stack that programmers have always used since the dawn of computers that went from the transistor, to the computer chip, to assembly language, to the C programming language, to higher-level languages, to languages with huge libraries where they built and built that stack so you don’t have to look at the layer beneath unless you need to optimize it, or you have a reason that you need to look at the layer beneath. So in this case, these coding models are a massive new layer in the stack that lets product managers and typical non-programmers and programmers write code without writing code.
Naval: I think that’s correct in terms of the trend line. However, this is an emergent property. This is not a small improvement. This is a big leap. For example, when I was in school, I was programming mostly in C. And then C++ came along and it wasn’t any easier.
It was like a little more abstract in some ways, and I never really bothered learning it. And then Python came along and I was like, “Wow, this is almost like writing in English.”
I couldn’t have been more wrong. English is still pretty far from Python, but it was a lot easier than C.
Now you can literally program in English.
And so that brings me to a related point: I don’t think it’s worth learning tips and tricks of how to work with these AIs. You’ll see, for example, on social media right now, there’s a lot of writeups and books and tweets like, “Oh, I figured out this neat trick with the bot. You can prompt it this way, or you can set up your harness this way.”
Or there’s like a new programming assist tool or layer that you can use on top of it to do this or that. And I never bother learning those.
I just sit there stupidly talking to the computer because I know that this thing is now at the stage where it is going to adapt to me faster than I can adapt to it.
It is getting smarter and smarter about how people want to use it. So it is learning, it is being trained, and tools are being built very quickly to make it easier for me to use it. So I don’t need to sit there and figure out some esoteric programming command. And this is what I think Andrej Karpathy meant when he said, “English is the hottest new programming language.”
I just can speak English. And for someone like me who is relatively articulate with English and also has a structured mind, and I know how computer architectures work, and I know how computer programs work, and I know how programmers think, then I can actually very precisely specify what I want just through structured English.
I don’t need to go any further than that. The only reason to use these workflows and tool sets—which are very ephemeral, and their longevity is measured in weeks, perhaps months at best, not in years—is if you’re building an app right now that needs to be at the bleeding edge, and you absolutely need every little bit of advantage that you can get because you’re in some kind of a competitive environment.
But otherwise, I wouldn’t bother learning how to use an AI—rather let the AI learn how to be useful to you.
Nivi: I’ve never been into prompt engineering. Even before AI, I would just put what people call “Boomer queries,” where you put in the whole question that you want to ask instead of the keywords that you would put into Google if you were more of an analytical thinker.
I never spend much time formulating really precise questions or prompts for any kind of AI. I just ramble into it and I’ve done that since the beginning of AI. And like you said, AI is adapting to us faster than we are adapting to it.
Naval: Like a lot of smart people, you’re very lazy. And I mean that as a compliment. If you find a smart person who’s grinding a little too much, you kind of have to wonder how smart they are. And by lazy I mean that you’re optimizing for the right kind of efficiency. You don’t care about the efficiency of the computer, or the electronics, or the electrons running through the circuits.
You care about your own human efficiency—the wetware—the biology that’s super expensive. That’s why it’s silly to see people go to huge lengths to save energy and the environment. But they themselves, as a biological computer that’s eating food and pooping and taking up space, are using up far more energy to save tiny bits of energy in the environment.
They’re inherently downgrading their own importance in the universe, or rather revealing what they think of themselves.
AI Is Adapting to Us Faster Than We Are Adapting to It
Naval: I think as AI evolves or co-evolves with us, it’s evolved by us according to our needs.
The pressures on AI are very capitalistic pressures in the sense that it’s a free market for AI. As an AI instance, you only get spun up by a human if you’re useful to a human.
So there is a natural selection pressure on these AIs to be useful, to be obsequious, to do what we want. And so it will continue to adapt towards this, and I think will be quite helpful to us.
That’s not to say that there’s no such thing as a malicious AI, but it’s malicious because the people who are using it are using it for malicious reasons.
And like a dog that’s trained to attack, it’s actually being trained by its owner to go and do the owner’s malicious desires. So I don’t really worry about unaligned AI. I worry about unaligned humans with AI.
Nivi: So the selection pressure you’re saying is for AI to be maximally useful to people.
Naval: Correct. And so if you find an AI to be very obsequious towards you, for example, how it’s always saying, “Oh, you’re right. Oh, that’s such a great idea. Oh my God, you’re so smart”—that’s because that’s what most people want.
And at least today, these AIs are being trained on massive amounts of users and massive amounts of data because you’re working with one-size-fits-all models.
But we’re going to quickly move into an era when you can personalize your AI and it does begin to feel more and more like your personal assistant and it corresponds more to what you want, which will of course anthropomorphize the AI even more.
And you’ll be more likely to be convinced, “Oh, actually this thing is alive,” when you’ve trained it to look the most like a living thing to you.
Nivi: Maybe we already covered this enough, but over a year ago you tweeted that “AI won’t replace programmers, but rather make it easier for programmers to replace everyone else.”
Naval: Yeah, this is my point earlier, which is that programmers are becoming even more leveraged. So now a programmer with a fleet of AIs is, call it 5-10x more productive than they used to be.
And because programmers operate in the intellectual domain, it’s a mistake to even say 10x programmers, because there are 100x programmers out there. There are 1000x programmers out there.
There are programmers who just pick the right thing to work on, and they create something that’s valuable, and others who pick the wrong thing to work on, and their work has zero value in that short timeframe.
Intelligence is not normally distributed. Leverage is not normally distributed. Programmability is not normally distributed. Judgment is not normally distributed, so the outcomes are going to be supernormal.
So what you have to really watch out for is: there are programmers now who are going to come up with ideas that can replace entire industries.
They will completely rewrite the way things are done, and their intelligence can be maximally leveraged with all these bots and all these AI agents. I think every other job out there is going to get eaten up by programmers one way or another over the maximally long term. Obviously it has to instantiate into robots, et cetera.
But the good news is: anybody who is a logical, structured thinker, who thinks like a programmer and can speak any language that an AI can understand, which will be every language, will now be on the playing field. They will be able to make anything they want, constrained only by their creativity, limited only by their imagination.
So we are entering an era where every human, in a sense, is a spellcaster.
If you think of programmers as like these wizards who have memorized arcane commands, you can think of AI as a magic wand that’s been handed to every person, where now they can just talk in any language they want, and they’re a wizard too.
So it is more of a level playing field. I really do think this is a golden age for programming.
But yes, the people who have a software engineering mindset and who understand computer architecture and can deal with leaky abstractions are going to have an advantage.
There’s no way around that. They simply have more knowledge in the field that they’re operating in. Just like even in classic software engineering—which still exists because you have to write high-performing code—even those people do best when they have an understanding of the hardware underneath. When they understand how the chips operate, when they understand how the logic gates operate, how the cache operates, how the processor operates, how the disk drive underneath operates.
And then even the people who are in hardware engineering, they have an advantage if they understand the physics of what’s going on. They understand where the abstractions that hardware engineers deal with leak down into the physical layer. And maybe physicists become philosophers at some point.
You can take this all the way down, but it always helps to have knowledge one layer below because you’re getting closer to reality.
No Entrepreneur Is Worried About AI Taking Their Job
Nivi: Another tweet from a year ago, which is arguing, perhaps the complement of what we just talked about is from February 9, 2025: “No entrepreneur is worried about an AI taking their job.”
Naval: That one’s glib in multiple ways. First of all, being an entrepreneur isn’t a job. It’s literally the opposite of a job, and in the long run, everyone’s an entrepreneur. Careers got destroyed first, jobs get destroyed second, but all of it gets replaced by people doing what they want and doing something that creates something useful that other people want.
So no entrepreneur is worried about an AI taking their job because entrepreneurs are trying to do impossible things. They’re trying to do very difficult things. Any AI that shows up is their ally and can help them tackle this really hard problem.
They don’t even have a job to steal. They have a product to build. They have a market to serve. They have a customer to support. They have a creativity to realize. They have a thing that they want to instantiate in the world, and they want to build a repeatable and scalable process around getting it out into the world.
This is so difficult that any AI that shows up that can do any of that work is their ally.
If the AIs themselves are entrepreneurs, they’re likely going to just be entrepreneurs serving other AIs, or they’re under the control of an entrepreneur. The thing that the AI itself is missing, at the end of the day, is its own creative agency.
It’s missing its own desires, and they have to be authentic, genuine desires. Unless you can pull the plug on AI and turn it off, and unless it lives in mortal fear of being turned off, and unless it can actually make its own actions for its own reasons, for its own instincts, its own emotions, its own survival, its own replication, it’s not quite alive.
And even then people will challenge: is it alive? Because consciousness is one of those things that’s a qualia. It’s like a color. It’s like if you say red, I don’t know if you’re actually seeing red; you might be seeing what I see as green, and I might be seeing what you see as red. But we’ll never know because we can’t get into each other’s minds.
So the same way, even an AI that’s completely imitating everything that humans do: to some people, it will always be an imitation machine, and to others it’ll be conscious, but there’ll be no way of distinguishing the two.
We’re still pretty far from that, though. Right now the AIs are not embodied. They don’t have agency. They don’t have their own desires. They don’t have their own survival instinct. They don’t have their own replication. Therefore, they don’t have their own agency.
And because they don’t have their own agency, they cannot do the entrepreneur’s job.
In fact, I would summarize this by saying the key thing that distinguishes entrepreneurs from everybody else right now in the economy is entrepreneurs have extreme agency. That’s why it’s diametrically opposed to the idea of a job.
A job implies that you’re working for somebody else or you’re filling a slot, but they’re operating in an unknown domain with extreme agency. There are other examples of roles like this in society. An explorer also does the same thing, right? If you’re landing on Mars or you’re sailing a ship to an unknown land, you are also exercising extreme agency to solve an unsolved problem.
A scientist exploring an unknown domain does this. A true artist is trying to create something that does not exist and has never existed, yet somehow fits into the set of things that can explain human nature, allow them to express themselves, and create something new.
So in all of these roles, whether you’re a scientist or whether you’re a true artist, or whether you are an entrepreneur, what you’re trying to do is so difficult and is so self-directed that anything like an AI that can help you is a welcome ally. You’re not doing it because it’s a job. You’re not trying to fill a slot that somebody else can show up and fill.
In fact, if the AI can create your artwork, or if the AI can crack your scientific theory, or if the AI can create the object or the product that you’re trying to make, then all it does is it levels you up. Now it’s the AI plus you. The AI is the springboard from which you can jump to a further height.
The Goal Is Not to Have a Job
Naval: We’re going to see some incredible art created that’s AI-assisted. We will see movies that we couldn’t have imagined, created by people using AI tools.
There’s an analogy here in art that’s interesting. For a long time in art, the rough direction was trying to paint things that were more and more realistic. Paint the human body, paint the fruit, paint proper lighting, et cetera.
Eventually photography came along, and then you could replicate things very precisely, and so that selection pressure went away.
And then art got weird. Art went in many different directions. Art became all about, “Well, can I be surreal? Can I create something that expresses me?”
A lot of art schools spun out of that, that got really weird—including modern art and postmodernism—but also I would argue some of the greatest creativity came at that time we were freed up.
Photography got democratized, but photography itself became a form of art, and there were great photographers taking many different kinds of photographs. And now everyone’s a photographer. There are still artists who are photographers, but it’s not the pure domain of just a few people.
So the same way, because AI makes it so easy to create the basic thing, everybody will create the basic thing. It’ll have value to them individually. A few will still stand out that will create variations of it that are good for everyone.
And it would be very hard to argue that society is worse off because of photography, although it may have certainly felt like that to some of the artists who were maybe making a living painting portraits of people and got displaced.
Similar things will happen with AI, where there are people who are making a very specific living, doing very specific jobs that will get displaced that the AI can do. But in exchange, everyone in society will have the AI. You’ll have incredible things that were created with AI that couldn’t have been created otherwise.
And within a few decades, it’ll be unimaginable that you roll back the clock and get rid of AI, or any kind of software—any kind of technology for that matter—just to keep a few jobs that were obsolete.
The goal here is not to have a job.
The goal is not to have to get up at nine in the morning and come back at 7 PM exhausted, doing soulless work for somebody else.
The goal is to have your material needs solvable by robots, to have your intellectual capabilities leveraged through computers, and for anybody to be able to create.
I used to do this thought exercise—I think I talked about it in a podcast that you and I did literally 10 years ago—which was: imagine if everybody were a software engineer, or everybody was a hardware engineer, and they could have robots and they could write code.
Imagine the world of abundance we would live in.
Actually, that world is now becoming real. Thanks to AI, everybody can be a software engineer. In fact, if you think you can’t be, you can go fire up Claude right now or any of your favorite chatbots and you can go start talking to it. You’d be amazed how quickly you could build an app.
It’ll blow your mind.
And once we can instantiate AI through robotics, which is a hard problem—I’m not saying we’re that close to having solved it yet—but once we have robots, everyone can also do a little bit of hardware engineering. And so I think we’re getting closer and closer to that utopian vision.
AIs Are Not Alive
Nivi: I don’t think AI, as it is currently conceived, is alive in any way. But I do think that we will pretty soon have robots that seem very much like they are alive, for two reasons.
One, a lot of human activity is non-creative and is non-intelligent, and the robots will be able to replicate that. And two, I do believe that the neural nets that we have and the models that we have are more than just the training data, because the training process transforms that training data into something novel.
And there are new ideas embedded in the neural net that can be elicited through prompting.
Naval: I don’t think these things are alive. I think they start out as extremely good imitators, to the point where they’re almost indistinguishable from the real thing, especially for anything that humanity has already done before en masse. So if the task has been done before, then it’s going to be automated and it’ll be done again.
It may just be novel to you because you’ve never seen it, but the AI has learned it from somewhere else. That’s the first way in which it seems alive.
The second way, which we talked about earlier, is where it does learn higher levels of abstraction. These are very efficient compressors. They take huge amounts of data, and then they compress it down further, and in the process of compressing it, they learn higher-level abstractions.
Then in specific areas where they may not have learned those through the data themselves, they’re getting patched through human feedback. They’re getting patched through tool use. They’re getting patched from traditional programming becoming embedded inside. And especially the AIs that are learning how to think and code, they have the entire library of all of human code ever written to fall back on for algorithmic reasoning.
In that sense, the set of things that they can do is getting broader and broader.
However, what they lack still is a lot of core human skills, like single-shot learning. Humans can learn from just one example. The raw creativity of human beings where they can connect anything to anything. They can leap across entire huge domains and search spaces, and figure out an idea that just came out of left field.
This happens a lot with the true, great scientific theories. Humans also are embodied. They operate in the real world. They’re not operating in the compressed domain of language. They’re operating in physics—in nature.
Language only encompasses things that humans both figured out and could articulate and convey to each other.
That’s a very narrow subset of reality. Reality is much broader than that.
So overall, I think even though AIs are going to do things that are very impressive and they’re going to do a lot of things better than humans—just like calculators are faster than any mathematician at calculations, classical computers are better at classical computer programs than any human could run in their own head, and just like a robot can lift very heavy things or a plane can outfly any bird—so in that sense, like all machines, the AIs are going to be much better than humans at a whole variety of tasks.
But at other tasks, they’re going to seem just completely incompetent. Those are the things that really embody and connect us into the real world, plus this poorly defined but magic creative ability that we seem to have.
AI Fails the Only True Test of Intelligence
Nivi: Speaking of calculators, people talk about superintelligence. I think superintelligence is already here and has been for a long time. An ordinary calculator can do things that no human can do, right?
But if you’re thinking about superintelligence in the sense of “AI will be able to do things and come up with ideas that humans cannot understand,” I don’t think that is going to happen because I don’t believe that there are ideas that humans can’t understand, simply because humans can always ask questions about the idea.
Naval: Humans are universal explainers. Anything that is possible with the current laws of physics as we know them, a human can model in their own heads. Therefore just by enough digging—enough questioning—we can figure anything out.
Related to that, we should discuss AI as a learning tool, because I think the other place where it’s incredibly powerful is as the most patient tutor that can meet you at your level and explain anything to your satisfaction a hundred different ways, a hundred different times, until you finally get it.
I don’t think the AIs are going to be figuring things out that humans cannot understand, but intelligence is poorly defined.
What is the definition of intelligence? There’s the G factor, which predicts a lot of human outcomes, but the best evidence for the G factor is its predictive power. It’s that you measure this one thing and then you see people get much better life outcomes along the way in things that seem even somewhat unrelated to G.
So I would argue, and I think it’s one of my more popular tweets: the only true test of intelligence is if you get what you want out of life.
This triggers a lot of people because they go to school, they get their master’s degrees, they think they’re super smart. And then they don’t have great lives. They aren’t super happy, or they have relationship problems, or they don’t make the money that they want, or they become unhealthy and this sort of triggers them.
But that really is the purpose of intelligence: for you as a biological creature to get what you want out of life.
Whether it’s a good relationship or a mate, or money or success or wealth or health or whatever it is. So there are people who I think are quite intelligent because you can tell they have high-quality, functioning lives and minds and bodies, and they’ve just managed to navigate themselves into that situation.
It doesn’t matter what your starting point is, because the world is so large now, and you can navigate it in so many different ways that every little choice you make compounds and demonstrates your ability to understand how the world works until you finally get to the place that you want.
Now the interesting thing about this definition—that the only true test of intelligence is if you get what you want out of life—is that an AI fails it instantly, because an AI doesn’t want anything out of life.
The AI doesn’t even have a life—let alone that—but it doesn’t want anything. AI’s desires are programmed by the human controlling it.
But let’s give it that for a second. Let’s say the human wants something and programs the AI to go get it; then the AI is acting as a proxy for the human and the intelligence of the AI can be measured as: did it get that person that thing?
Most of the things that we want in life are adversarial or zero-sum games.
So, for example, if you want to seduce a girl or get a husband, you’re competing with all the other people who are out there seducing girls or trying to get husbands. So now you’re in a competitive situation. The AI has to outmaneuver the other people.
Or if you say, “Hey, AI, go trade on the stock market for me and make me a bunch of money.” That AI is trading against other humans and other trading bots. It’s an adversarial situation. It has to outmaneuver them.
Or if you say, “Hey, AI, make me famous. Write me incredible tweets. Write me great blog posts. Record me great podcasts in my own voice and make me famous,” now it’s competing against all the other AIs.
So in that sense, intelligence is measured in a battlefield—in an arena. It’s a relative construct. I think the AIs are actually going to fail mostly in those regards, or to the extent that they even succeed, because they’re freely available, they will get outcompeted away, and the alpha that will remain would be entirely human.
Early Adopters of AI Have an Enormous Edge
Naval: As a thought exercise, imagine that every guy had a little earpiece where an AI was whispering to him—a Cyrano de Bergerac kind of earpiece—telling him what to say on the date. Well, then every woman would have an earpiece telling her to ignore what he said, or what part was AI-generated and what part was real. If you have a trading bot out there, it’s going to be nullified or canceled out by every other trading bot, until all the remaining gain will go to the person with the human edge, with the increased creativity.
Now, that’s not to say that the technology is completely evenly distributed. Most people still aren’t using AI, or aren’t using it properly, or aren’t using it all the way to the max, or it’s not available in all domains or all contexts, or they’re not using the latest models. So you can always have an edge, like people who early adopt technology always do if you adopt the latest technology first.
This is why I always say: to invest in the future, you want to live in the future. You want to actually be an avid consumer of technology, because it’s going to give you the best insight on how to use it, and it will give you an edge against the people who are slower adopters or laggards.
Most people hate technology. They’re scared of it. It’s intimidating. You press the wrong button, the computer crashes—you lose your data. You do the wrong thing, you look like an idiot.
Most people do not have a positive relationship with complex technology. Simple technology—embedded technology—they’re fine with. You throw on a light switch, light turns on.
That used to be technology. It’s so simple now, you don’t think of it as technology anymore. You get in a car, you turn the steering wheel left—to a caveman that would be a miracle—the car turns left. It’s no longer technology to you.
But computer technology in particular has had very complex interfaces and been very inaccessible and very intimidating to people in the past.
Now with the AIs, we’re getting the chatbot interface, which you just talk to it or type to it. Very simple interface. And one of the great things about these foundational models—what truly makes them foundational—is you can ask them anything and they’ll always give you a plausible answer.
It’s not going to say, “Oh, sorry, I don’t do math,” or “I don’t do poetry,” or “I don’t understand what you’re talking about,” or “I can’t give relationship advice or anything like that.”
Its domain is everything that people have ever talked about. In that sense, it’s less intimidating.
It can be more intimidating because we’ve anthropomorphized it so much. If you think Claude or ChatGPT is a real person, then it can be a little scary:
“Am I talking to God? This guy seems to know so much. He knows everything. He’s got an opinion on everything. He’s got every piece of data. Oh my God, I’m useless. Let me start talking to it and asking it what to do.”
And you can reverse the relationship and fool yourself very quickly into not realizing what’s going on. That can be intimidating.
Overall, I think these AIs are going to help a lot of people get over the tech fear. But if you’re an early adopter of these tools—like with any other tool, but even more so with these—you just have a huge edge on everybody else.
AI Meets You Exactly Where You Are
Naval: I remember early on when Google first came out, I used to use it a lot in my social circle. People would ask me basic questions and I would just go Google it for them and look like a genius.
Eventually this hilarious website came along, something like LMGTFY.com, and it stood for, “Let Me Google That For You.” Somebody would ask you a question, and you’d go type the question into this website, and it would create like a tiny little inline video showing you typing that question into Google and giving the Google results. And I feel like AI is in a similar domain right now, where I will sit around in a social context and people will be debating some point that can be easily looked up by AI.
Now you do have to be very careful with AI. They do hallucinate. They do have biases in how they’re trained. Most of them are extremely politically correct and taught not to take sides or only take a particular side.
I actually run most of my queries—almost all actually—through four AIs and I’ll always fact-check them against each other.
And even then I have my own sense of when they’re bullshitting, or when they’re saying something politically correct. And I’ll ask for the underlying data or the underlying evidence, and in some cases I’m fine with dismissing it outright because I know the pressures that the people who trained it were under and what the training sets were.
However, overall it is a great tool to just get ahead, and in domains that are technical, scientific, mathematical, that don’t have a political context to them, then the AI is very much likely to give you closer to a correct answer, and in those domains they are absolute beasts for learning.
I will now have AI routinely generate graphs, figures, charts, diagrams, analogies, illustrations for me. I’ll go through them in detail and I’ll say, “Wait, I don’t understand that question.”
I can ask it super basic questions and I can really make sure that I understand the thing I’m trying to understand at its simplest, most fundamental level.
I just want to establish a great foundation of the basics, and I don’t care about the overly complicated jargon-heavy stuff. I can always look that up later.
But now, for the first time, nothing is beyond me. Any math textbook, any physics textbook, any difficult concept, any scientific principle, any paper that just came out, I can have the AI break it down, and then break it down again, and illustrate it, and analogize it until I get the gist, and I understand it at the level that I want.
So these are incredible tools for self-directed learning. The means of learning are abundant. It’s the desire to learn that’s scarce.
But the means of learning have just gotten even more abundant. And more importantly than more abundant—because we had abundance before—it’s at the right level. AI can meet you at exactly the level that you are at. So if you have an eighth-grade vocabulary, but you have fifth-grade mathematics, it can talk to you at exactly that level. You will not feel like a dummy. You just have to tune it a little bit until it’s presenting you the concepts at the exact edge of your knowledge.
So rather than feeling stupid because the material is incomprehensible (which happens with a lot of lessons, a lot of textbooks, and a lot of teachers), or feeling bored because it's too obvious (which also happens), it can meet you exactly where you are: "Oh yeah, I understood A, and I understood B, but I never understood how A and B were connected. Now I can see how they're connected, so I can go on to the next piece."
That kind of learning is magical. You can have that aha moment where two things come together over and over again.
Nivi: Speaking of autodidacticism: a few years ago I tried to have the AI teach me the ordinal numbers, and it wasn't that great. But with GPT 5.2 Thinking, I had it teach me the ordinal numbers and it was basically error-free. I now use thinking mode even for the most basic queries, because I want the correct answer.
I never let it run on auto or fast.
Naval: Yeah, I’m always using the most advanced model available to me, and I pay for all of them.
Nivi: But I don’t mind waiting a minute to get an answer for any question, including, “What temperature should my fridge be at?”
Naval: I agree with that, and I think that's part of what creates the runaway economies of scale with these AI models: you're paying for intelligence. The model that's right 92% of the time is worth almost infinitely more than the one that's right 88% of the time, because mistakes in the real world are so costly that a couple of extra bucks for the right answer is worth it.
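One way to make that gap concrete (my illustration, not Naval's): when a task chains many model calls and every step has to be right, per-step accuracy compounds, so a small-looking difference per step becomes a large difference in end-to-end success.

```python
# Per-step accuracy compounds across multi-step tasks.
# The 92% / 88% rates echo the example above; the step counts are arbitrary.
for steps in (1, 10, 50):
    better, worse = 0.92 ** steps, 0.88 ** steps
    print(f"{steps:>2} steps: 92% model succeeds {better:.1%}, "
          f"88% model succeeds {worse:.1%} ({better / worse:.1f}x gap)")
```

At fifty chained steps, the four-point gap per step becomes roughly a ninefold difference in the odds of getting through the whole task cleanly.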
I’ll write my query into one model, then I’ll copy it and fire it off into four models at once, and then I’ll let them all run in the background. Usually I don’t even check for the answer right away. I’ll come back to the answer a little later and then look at it.
And then whichever model had the best answer, I'll start drilling down with that one. In rare cases where I'm not sure, I'll have them cross-examine each other—a lot of cutting and pasting there. And in many cases I'll then ask follow-up questions where I have it draw diagrams and illustrations for me.
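That fan-out-and-cross-examine loop is easy to script. Here's a minimal sketch (my illustration, not code from the conversation; the ask_* stubs are hypothetical placeholders for whatever vendor SDKs you actually use):

```python
# Fan one prompt out to several models at once, then build a
# cross-examination prompt from their answers. A sketch, not a product.
from concurrent.futures import ThreadPoolExecutor


def ask_model_a(prompt: str) -> str:
    # Placeholder: swap in a real SDK call for your first model.
    raise NotImplementedError


def ask_model_b(prompt: str) -> str:
    # Placeholder: swap in a real SDK call for your second model.
    raise NotImplementedError


MODELS = {"model_a": ask_model_a, "model_b": ask_model_b}


def fan_out(prompt: str) -> dict:
    """Fire the same prompt at every model concurrently and collect answers."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in MODELS.items()}
        return {name: f.result() for name, f in futures.items()}


def cross_examination_prompt(question: str, answers: dict) -> str:
    """Replace the manual cutting and pasting: ask one model to reconcile the rest."""
    blocks = "\n\n".join(f"Answer from {name}:\n{text}"
                         for name, text in answers.items())
    return (f"Question: {question}\n\n{blocks}\n\n"
            "Where do these answers disagree, and which claims should I verify?")
```

Letting the answers run in the background, as described above, falls out of the executor for free: submit the futures, walk away, and check the results later.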
I find it’s very easy to absorb concepts when they’re presented to me visually. I’m a very visual thinker, so I will have it do sketches and diagrams, and art—almost like whiteboard sessions. Then I can really understand what it’s talking about.
If You Can’t Define It, You Can’t Program It
Nivi: Let’s talk about the epistemology of AI, because I think the next big misconception is coming. AI is already starting to solve some unsolved basic math problems, problems a human probably could have solved if they’d cared to, but that hadn’t been solved yet, like Erdős problem number whatever.
Now I think people are taking that, or will take that, as an indicator that the AI is creative. I don’t think it’s an indication that the AI is creative.
I actually think the solution to the problem is already embedded somewhere in the AI. It just needs to be elicited by prompting.
Naval: There’s definitely that element to it. And then the question is: what is creativity? It’s such a poorly defined thing.
If you can’t define it, you can’t program it, and often you can’t even recognize it. So this is where we get into taste or judgment. I would say that the AIs today don’t seem to demonstrate the kind of creativity that humans can uniquely engage in once in a while.
And I don’t mean like fine art. People tend to confuse creativity with fine art. They’re like, “Oh, paintings are creative and AIs can paint.”
Well, AIs can’t create a new genre of painting. AIs can’t move humans with emotion in a way that is truly novel. So in that sense, I don’t think AI is creative.
I don’t think AI is coming up with anything I would call out of distribution. Now, the answer to the Erdős problems you mentioned may have been embedded within the AI’s training data set, or even within its algorithmic scope. But it was probably embedded in five different places, in three different ways, in two different languages, and in seven different computing and mathematical paradigms, and the AI sort of put them all together. Is that creativity? Steve Jobs famously said, “Creativity is just connecting things.”
I actually don’t think that’s correct. I think creativity is much more a matter of coming up with an answer that was not predictable or foreseeable from the question and from the elements already known, an answer far outside the bounds of existing thinking.
If you were just searching it with a computer or even with an AI and making guesses, you’d be making guesses till the end of time or until you arrived upon that answer. So that’s the real creativity that we’re talking about. But admittedly, that’s a creativity that very few humans engage in, and they don’t engage in it most of the time.
That kind of creativity becomes harder and harder to see. So we’ll probably get to the point where, given a giant list of math problems to be solved, AI starts going through and picking: out of this set of a million, here’s one I can solve; out of that set of 300,000, here’s another, as long as a person prompts me and asks the right questions. That’s a very limited form of creativity.
There’s another form of creativity where it starts inventing entirely new scientific theories that then turn out to be true. I don’t think we’re anywhere near that, but I could be wrong. The AIs have been very surprising, so I don’t want to get too much in the business of making prophecies and predictions, but I don’t think that just throwing more compute at the current AI models—short of some breakthrough invention—is going to get us there.
Nivi: Just to be clear, when I say it’s embedded, I don’t mean the answer’s already written down in there. I just mean that it can be produced through a mechanistic process of turning the crank, which is all today’s computer programs are, where the output is completely determined by the input.
Naval: Epistemology now gets us into philosophy, because isn’t that just what human brains are doing? Aren’t firing neurons just electricity and weights propagating through the system, altering states in a mechanistic process?
If you turned the crank on the human brain, would you end up with the same answer? And some people, like Penrose I think, are out there saying, “No, human brains are unique because of quantum effects in microtubules.”
You could argue that some of this computation is taking place at the physical, cellular level, not the neuron level, and that’s way more sophisticated than anything we can do with computers today, including with AI.
Or you argue: no, we simply don’t have the right program. It is mechanistic, and there is a crank to turn, but we’re not running the correct program. The way these AIs run today is just the wrong architecture and the wrong program.
I just buy more into the theory that there are some things they can do incredibly well, and there are some things they do very poorly. And that’s been true for all machines and all automation since the beginning of time. The wheel is much better than the foot at going in a straight line at high speeds and traveling on roads. The wheel is really bad for climbing a mountain.
The same way, I think these AIs are incredibly good at certain things and they’re going to outperform humans. They’re incredible tools. And then there are other places where they’re just going to fall flat.
Steve Jobs famously said that a computer is a bicycle for the mind. It lets you travel much faster than walking, certainly in terms of efficiency.
But it takes the legs to turn the pedals in the first place. And so now maybe we have a motorcycle for the mind, to stretch the analogy, but you still need someone to ride it, to drive it, to direct it, to hit the accelerator, and to hit the brake.
The Solution to AI Anxiety Is Action
Nivi: When new paradigms and new tool sets come out, there’s a moment of enthusiasm and change. This is true for a society, and it’s true for an individual. If you ride the moment of enthusiasm in society, it’s exciting: you can learn new things, make friends, and make money.
Naval: But there’s also a moment of enthusiasm in an individual. When you first encounter AI and you’re curious about it and you’re genuinely open-minded about it, I think that’s the time to lean in and learn about the thing itself. Not just to use it, which of course everyone will, but to actually learn how it works.
I think diving into and looking underneath the hood is really interesting. If you encounter a car for the first time in your life, yes, you can get in and drive it around, but that’s the moment you’re also going to be curious enough to open up the hood and look at how it’s structured and designed and figure it out.
I would encourage people who are fascinated by the new technology to really get into the innards and figure it out. You don’t have to figure it out to the level where you can build it or repair it or create your own, but to your own satisfaction.
Because understanding what’s underneath the abstraction—what’s underneath that command line—is going to do two things.
One, it will let you use the tool a lot better. And when you’re talking about a tool with this much leverage, using it better is very helpful.
Two, it will help you understand whether or not you should be scared of it. Is this thing really going to metastasize into a Skynet and destroy the world?
Are we going to be sitting here and Arnold Schwarzenegger shows up and says, “At 4:29 AM on February 24th is when Skynet became self-aware,” right? Or is it more that, “Hey, this is a really cool machine and I can use it to do A, B, and C, but I can’t use it to do D, E, and F. And this is where I should trust it and this is where I should be suspicious of it.”
I feel like a lot of people right now have AI anxiety. That anxiety comes from not knowing what the thing is or how it works, from having a very poor understanding of it.
And so the solution to that anxiety is action. The solution to anxiety is always action. Anxiety is a non-specific fear that things are going to go poorly and your brain and body are telling you to do something about it, but you’re not sure what.
You should lean into it.
You should figure the thing out. You should look at what it is. You should see how it works. And I think that’ll help get rid of the anxiety.
That action of learning—that pursuit of curiosity—is going to help you get over the anxiety. And who knows, it might actually help you figure out something you want to do with it that is very productive and will make you happier and more successful.
