Is AI the end of software engineering, or the next step in its evolution?
Source: https://www.theverge.com/ai-artificial-intelligence/767973/vibe-coding-ai-future-end-evolution
Summary:
AI coding assistants: the end of software engineering, or the next stage of its evolution?
When the author first used ChatGPT to write code in early 2023, it reminded him of the classic horror story "The Monkey's Paw": a cursed talisman that seems to grant wishes, but always by the most malevolent path. AI coding tools can indeed generate code quickly, but the output is often over-engineered and littered with irrelevant fragments, leaving developers to spend considerable time untangling and debugging it.
The new mode known as "vibe-coding" lets users with no programming background build software from natural-language instructions; Google has even released a dedicated app for it called Opal. In essence it continues the long lineage of no-code tools, and it resembles the "shotgun debugging" familiar to programmers: arbitrarily tweaking code when the brain is fried and hoping for a miracle.
AI coding currently works best within a constrained problem space. For example, twelve lines of code that took 480 milliseconds to run sequentially can be rewritten to run in parallel, cutting the total time to about 40 milliseconds. It is like using a high-precision 3D printer to build aircraft parts: it produces a hydraulic seal flawlessly, but ask it to print an entire cockpit and you may get a dangerous contraption with a dead dashboard and knobs strung together at random.
Experienced developers find that AI's most useful feature is often understanding code rather than writing it: when getting up to speed on a new codebase, AI can generate a flowchart of how the major components fit together, saving hours of reading.
The core challenge of software engineering has never been building standalone units but making systems work together, the difference between renovating a single apartment and coordinating a building-wide fire-suppression system. AI can quickly stand up a self-contained program, like a pop-up store, but building a complex system, like a new airport terminal, still requires deep engineering experience.
On security, the recent Tea app incident that exposed tens of thousands of users' driver's licenses was widely blamed on vibe-coding, but it turned out vibe-coding was likely not the cause. In fact, AI can help write more secure code: asked to create a database for driver's licenses, a capable model may proactively flag the need for encryption and even configure decryption to require two-person approval.
As the next stage of abstraction, the progression of programming languages from assembly to Python to AI is like moving from "rotate your body 60 degrees and go 10 feet," to "turn right on 14th Street," to simply telling the GPS, "take me home."
Linux creator Linus Torvalds has stressed that programming requires "taste," the kind of intuition forged by 3AM on-call alerts that a model cannot simply zero-shot. AI has progressed from operating on single files to reasoning across multiple codebases, like a chess engine going from a single pawn's viewpoint to surveying the whole board, yet such judgment calls still rest on human experience.
Notably, AI may change how software engineers develop. Learners today rarely spend torturous dorm-room hours hand-coding Dijkstra's algorithm or a red-black tree to internalize the fundamentals of computer science the way earlier generations did. Yet just as you cannot learn to dunk by watching NBA highlight reels, programming skills can only be internalized by typing the code yourself.
Software engineers may split into two types: "urban planners" who focus on the system's overall architecture, and "miniaturists" who refine the inner workings of the code. AI coding may be a godsend for the former but could squeeze out the latter.
When a doyen like Brian W. Kernighan live-codes a complex parser in the vi editor on a bare-bones terminal, that aesthetic sense of programming as craft may end up as artisan legend of the digital age. Looking back on the days of manually debugging concurrency issues and writing server code from scratch, we may feel the same mix of reverence and regret for a lost craft that we now feel toward the Bell Labs greats.
(This piece draws on the author's first-hand experience as a developer to examine the deeper impact of AI coding tools on the software engineering profession.)
Original article:
The first time I used ChatGPT to code, back in early 2023, I was reminded of “The Monkey’s Paw,” a classic horror story about an accursed talisman that grants wishes, but always by the most malevolent path — the desired outcome arrives after exacting a brutal cost elsewhere first. With the same humorless literalness, ChatGPT would implement the change I’d asked for, while also scrambling dozens of unrelated lines. The output was typically over-engineered, often barnacled with irrelevant fragments of code. There were some usable lines in the mix, but untangling the mess felt like a detour.
Is AI the end of software engineering or the next step in its evolution?
With vibe-coding, anyone can become a coder. But can they grow into a software engineer?
When I started using AI-assisted tools earlier this year, I felt decisively outmatched. The experience was like pair-programming with a savant intern — competent yet oddly deferential, still a tad too eager to please and make sweeping changes at my command. But when tasked with more localized changes, it nailed the job with enviable efficiency.
The trick is to keep the problem space constrained. I recently had it take a dozen lines of code, each running for 40 milliseconds in sequence — time stacking up — and run them all in parallel so the entire job finished in the time it used to take for just one. In a way, it’s like using a high-precision 3D printer to build an aircraft: use it to produce small custom parts, like hydraulic seals or O-rings, and it delivers flawlessly; ask it for something less localized like an entire cockpit, and you might get a cockpit-shaped death chamber with a nonfunctional dashboard and random knobs haphazardly strung together. The current crop of models is flexible enough for users with little-to-no coding experience to create products of varying quality through what’s called — in a billion-dollar buzzword — vibe-coding. (Google even released a separate app for it called Opal.)
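To make the arithmetic concrete, here is a minimal, hypothetical sketch of that kind of change in Python. The twelve stand-in calls of roughly 40 milliseconds each are invented for illustration, not lifted from any real codebase; run in sequence they take about 480 milliseconds, while fanned out across a thread pool the batch finishes in roughly the time of one call.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def slow_call(i):
        # Stand-in for one of the dozen independent ~40 ms operations.
        time.sleep(0.040)
        return i

    # Sequential: the waits stack up to roughly 12 x 40 ms = 480 ms.
    start = time.perf_counter()
    results = [slow_call(i) for i in range(12)]
    print(f"sequential: {time.perf_counter() - start:.3f}s")

    # Parallel: the whole batch finishes in roughly the time of one call.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=12) as pool:
        results = list(pool.map(slow_call, range(12)))
    print(f"parallel:   {time.perf_counter() - start:.3f}s")

The speedup holds only because the calls are independent and spend their time waiting rather than computing, which is exactly the sort of constraint that keeps the problem space small enough for a model to get right.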
Yet, one could argue that vibe-coding isn’t entirely new. As a tool for nonprofessionals, it continues a long lineage of no-code applications. As a mode of programming that involves less prefrontal cortex than spinal reflex, any honest programmer will admit to having engaged in a dishonorable practice known as “shotgun debugging.” Like mindlessly twisting a Rubik’s Cube and wishing the colors would magically align, a programmer, brain-fried after hours of fruitless debugging, starts arbitrarily tweaking code — deleting random lines, swapping a few variables, or flipping a Boolean condition — re-runs the program, and hopes for the correct outcome. Both vibe-coding and shotgun debugging are forms of intuitive flailing, substituting hunches and luck for deliberate reasoning and understanding.
As it happens, it’s not considered good form for a self-respecting programmer to engage in shotgun debugging. Soon, I came to see that the most productive form of AI-assisted coding may be an editorial one — much like how this essay took shape. My editor assigned this piece with a few guiding points, and the writer — yours truly — filed a serviceable draft that no sober editor would run as-is. (Before “prompt and pray,” there was “assign and wait.”)
Likewise, a vibe-coder — a responsible one, that is — must assume a kind of editorship. The sprawling blocks of code produced by AI first need structural edits, followed by line-level refinements. Through a volley of prompts — like successive rounds of edits — the editor-coder minimizes the delta between their vision and the output.
Often, what I find most useful about these tools isn’t even writing code but understanding it. When I recently had to navigate an unfamiliar codebase, I asked for it to explain its basic flow. The AI generated a flowchart of how the major components fit together, saving me an entire afternoon of spelunking through the code.
I’m of two minds about how much vibe-coding can do. The writer in me celebrates how it could undermine a particular kind of snobbery in Silicon Valley — the sickening smugness engineers often show toward nontechnical roles — by helping blur that spurious boundary. But the engineer in me sees that as facile lip service, because building a nontrivial, production-grade app without grindsome years of real-world software engineering experience is a tall order.
I’ve always thought the best metaphor for a large codebase is a city. In a codebase, there are literal pipelines — data pipelines, event queues, and message brokers — and traffic flows that require complex routing. Just as cities are divided into districts because no single person or team can manage all the complexity, so too are systems divided into units such as modules or microservices. Some parts are so old that it’s safer not to touch them, lest you blow something up — much like the unexploded bombs still buried beneath European cities. (Three World War II-era bombs were defused in Cologne, Germany, just this summer.)
If developing a new product feature is like opening a new airline lounge, a more involved project is like building a second terminal. In that sense, building an app through vibe-coding is like opening a pop-up store in the concourse — the point being that it’s self-contained and requires no integration.
Vibe-coding is good enough for a standalone program, but the knottiest problems in software engineering aren’t about building individual units but connecting them to interoperate. It’s one thing to renovate a single apartment unit and another to link a fire suppression system and emergency power across all floors so they activate in the right sequence.
These concerns extend well beyond the interior. The introduction of a single new node into a distributed system can just as easily disrupt the network, much like the mere existence of a new building can reshape its surroundings: its aerodynamic profile, how it alters sunlight for neighboring buildings, the rerouting of pedestrian traffic, and the countless ripple effects it triggers.
I’m not saying this is some lofty expertise, but rather the tacit, hard-earned kind — not just knowing how to execute, but knowing what to ask next. You can coax almost any answer out of AI when vibe-coding, but the real challenge is knowing the right sequence of questions to get where you need to go. Even if you’ve overseen an interior renovation, without standing at a construction site watching concrete being poured into a foundation, you can’t truly grasp how to create a building. Sure, you can use AI to patch together something that looks functional, but as the software saying goes: “If you think good architecture is expensive, try bad architecture.”
If you were to believe Linus Torvalds, the creator of Linux, there’s also a matter of “taste” in software. Good software architecture isn’t just drawn up in one stroke but emerges from countless sound — and tasteful — micro-decisions, something models can’t zero-shot. Such intuition can only be developed as a result of specific neural damage from a good number of 3AM on-call alerts.
Perhaps these analogies will only go so far. A few months ago, an AI could reliably operate only on a single file. Now, it can understand context across multiple folders and, as I’m writing this, across multiple codebases. It’s as if the AI, tasked with its next chess move, went from viewing the board through the eyes of a single pawn to surveying the entire game with strategic insight. And unlike artistic taste, which has infinitely more parameters, “taste” in code might just be the sum of design patterns that an AI could absorb from O’Reilly software books and years of Hacker News feuds.
When the recent Tea app snafu exposed tens of thousands of its users’ driver’s licenses — a failure that a chorus of online commenters swiftly blamed on vibe-coding — it felt like the moment that vibe-coding skeptics had been praying for. As always, we could count on AI influencers on X to grace the timeline with their brilliant takes, and on a certain strain of tech critics — those with a hardened habit of ritual ambulance chasing — to reflexively anathematize any use of AI. In a strange inversion of their usual role as whipping boys, software engineers were suddenly elevated to guardians of security, cashing in on the moment to punch down on careless vibe-coders trespassing in their professional domain.
When it was revealed that vibe-coding likely wasn’t the cause, the incident revealed less about vibe-coding than it did about our enduring impulse to dichotomize technical mishaps into underdogs and bullies, the scammed and fraudsters, victims and perpetrators.
At the risk of appearing to legitimize AI hype merchants, the security concerns around vibe-coding, in my estimation, are something of a bogeyman — or at least the net effect may be non-negative, because AI can also help us write more secure code.
Sure, we’ll see blooper reels of “app slop” and insecure code snippets gleefully shared online, but I suspect many of those flaws could be fixed by simply adding “run a security audit for this pull request” to a checklist. Already, automated tools are flagging potential vulnerabilities. Personally, using these tools has let me generate far more tests than I would normally care to write.
Further, if a model is good enough, when you ask, “Hey, I need a database where I can store driver’s licenses,” an AI might respond:
“Sure, but you forgot to consider security, you idiot. Here’s code that encrypts driver’s license numbers at rest using AES-256-GCM. I’ve also set up a key management system for storing and rotating the encryption key and configured it so decrypting anything requires a two-person approval. Even if someone walks off with the data, they’d still need until the heat death of the universe to crack it. You’re welcome.”
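For the curious, here is a minimal sketch of what that imagined reply would boil down to in Python, using the cryptography package's AES-256-GCM primitive. The helper names are invented for illustration; in a real system the key would be fetched from a key-management service and rotated there, and the two-person approval for decryption is an access-control policy layered on top rather than anything this snippet implements.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Illustration only: in production the key lives in a KMS and is rotated there.
    key = AESGCM.generate_key(bit_length=256)

    def encrypt_license(plaintext: str, key: bytes) -> bytes:
        nonce = os.urandom(12)  # a fresh nonce for every record
        ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
        return nonce + ciphertext  # store the nonce alongside the ciphertext

    def decrypt_license(blob: bytes, key: bytes) -> str:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

    stored = encrypt_license("D1234-56789-01234", key)
    assert decrypt_license(stored, key) == "D1234-56789-01234"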
In my day job, I’m a senior software engineer who works on backend mainly, on machine learning occasionally, and on frontend — if I must — reluctantly. In some parts of the role, AI has brought a considerable sense of ease. No more parsing long API docs when a model can tell me directly. No more ritual shaming from Stack Overflow moderators who deemed my question unworthy of asking. Instead, I now have a pair-programmer who doesn’t pass judgment on my career-endingly dumb questions.
Unlike writing, I have little attachment to blocks of code and will readily let AI edit or regenerate them. But I am protective of my own words. I don’t use AI for writing because I fear losing those rare moments of gratification when I manage to arrange words where they were ordained to be.
For me, this goes beyond sentimental piety because, as a writer who doesn’t write in his mother tongue — “exophonic” is the fancy term — I know how quickly an acquired language can erode. I’ve seen its corrosive effects firsthand in programming. The first language I learned anew after AI arrived was Ruby, and I have a noticeably weaker grasp of its finer points than any other language I’ve used. Even with languages I once knew well, I can sense my fluency retreating.
David Heinemeier Hansson, the creator of Ruby on Rails, recently said that he doesn’t let AI write code for him and put it aptly: “I can literally feel competence draining out of my fingers.” Some of the trivial but routine tasks I could once do under general anesthesia now give me a migraine at the thought of doing them without AI.
Could AI be fatal to software engineering as a profession? If so, the world could at least savor the schadenfreude of watching a job-destroying profession automate itself into irrelevance. More likely in the meantime, the Jevons Paradox — greater efficiency fuels more consumption — will prevail, negating any productivity gain with a higher volume of work.
Another way to see this is as the natural progression of programming: the evolution of software engineering is a story of abstraction, taking us further from the bare metal to ever-higher conceptual layers. The path from assembly language to Python to AI, to illustrate, is like moving from giving instructions such as “rotate your body 60 degrees and go 10 feet,” to “turn right on 14th Street,” to simply telling a GPS, “take me home.”
As a programmer from what will later be seen as the pre-ChatGPT generation, I can’t help but wonder if something vital has been left behind as we ascend to the next level of abstraction. This is nothing new — it’s a familiar cycle playing out again. When C came along in the 1970s, assembly programmers might have seen it as a loss of finer control. Languages like Python, in turn, must look awfully slow and restrictive to a C programmer.
Hence it may be the easiest time in history to be a coder, but it’s perhaps harder than ever to grow into a software engineer. A good coder may write competent code, but a great coder knows how to solve a problem by not writing any code at all. And it’s hard to fathom gaining a sober grasp of computer science fundamentals without the torturous dorm-room hours spent hand-coding, say, Dijkstra’s algorithm or a red-black tree. If you’ve ever tried to learn programming by watching videos and failed, it’s because the only way to internalize it is by typing it out yourself. You can’t dunk a basketball by watching NBA highlight reels.
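For anyone who skipped that dorm-room rite of passage, here is roughly what the exercise looks like: a minimal sketch of Dijkstra's shortest-path algorithm over an adjacency list, the kind of thing you only internalize by typing it out yourself. The toy graph at the end is invented purely for illustration.

    import heapq

    def dijkstra(graph, source):
        # graph: {node: [(neighbor, weight), ...]} with non-negative weights
        dist = {node: float("inf") for node in graph}
        dist[source] = 0
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist[node]:
                continue  # stale heap entry; a shorter path was already found
            for neighbor, weight in graph[node]:
                new_dist = d + weight
                if new_dist < dist[neighbor]:
                    dist[neighbor] = new_dist
                    heapq.heappush(heap, (new_dist, neighbor))
        return dist

    toy = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
    print(dijkstra(toy, "a"))  # {'a': 0, 'b': 2, 'c': 3}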
The jury is still out on whether AI-assisted coding speeds up the job at all; at least one well-publicized study suggests it may be slower. I believe it. But I also believe that for AI to be a true exponent in the equation of productivity, we need a skill I’ll call a kind of mental circuit breaker: the ability to notice when you’ve slipped into mindless autopilot and snap out of it. The key is to use AI just enough to get past an obstacle and then toggle back to exercising your gray matter again. Otherwise, you’ll lose the kernel of understanding behind the task’s purpose.
On optimistic days, I like to think that as certain abilities atrophy, we will adapt and develop new ones, as we’ve always done. But there’s often a creeping pessimism that this time is different. We’ve used machines to take the load off cognition, but for the first time, we are offloading cognition itself to the machine. I don’t know which way things will turn, but I know there has always been a certain hubris to believing that one’s own generation is the last to know how to actually think.
Whatever gains are made, there’s a real sense of loss in all this. In his 2023 New Yorker essay “A Coder Considers the Waning Days of the Craft,” James Somers nailed this feeling after finding himself “wanting to write a eulogy” for coding as “it became possible to achieve many of the same ends without the thinking and without the knowledge.” It has been less than two years since that essay was published, and the sentiments he articulated have only grown more resonant.
For one, I feel less motivated to learn new programming languages for fun. The pleasure of learning new syntax and the cachet of gaining fluency in niche languages like Haskell or Lisp have diminished, now that an AI can spew out code in any language. I wonder whether the motivation to learn a foreign language would erode if auto-translation apps became ubiquitous and flawless.
Software engineers love to complain about debugging, but beneath the grumbling, there was always a quiet pride in sharing war stories and their clever solutions. With AI, will there be room for that kind of shoptalk?
There are two types of software engineers: urban planners and miniaturists. Urban planners are the “big picture” type, more focused on the system operating at scale than on fussing over the fine details of code — in fact, they may rarely write code themselves. Miniaturists bring a horologist’s care for a fine watch to the inner workings of code. This new modality of coding may be a boon for urban planners, but leave the field inhospitable to miniaturists.
I once had the privilege of seeing a great doyen of programming in action. In college, I took a class with Brian W. Kernighan, a living legend credited with making “Hello, world” into a programming tradition and a member of the original Bell Labs team behind Unix. Right before our eyes, he would live-code on a bare-bones terminal, using a spartan code editor called vi — not vim, mind you — to build a parser for a complex syntax tree. Not only did he have no need for modern tools like IDEs, he also replied to email using an email client running in a terminal. There was a certain aesthetic to that.
Before long, programming may be seen as a mix of typing gestures and incantations that once qualified as a craft. Just as we look with awe at the old Bell Labs gang, the unglamorous work of manually debugging concurrency issues or writing web server code from scratch may be looked upon as heroic. Every so often, we might still see the old romantics lingering over each keystroke — an act that’s dignified, masterful, and hopelessly out of time.