Google's answer to the AI arms race: promoting the key figure behind its data center technology.

Summary:
Google has made a major personnel move in AI infrastructure, elevating its core architect Amin Vahdat to chief technologist for AI infrastructure. The newly created position reports directly to CEO Sundar Pichai, underscoring just how much weight the company is putting on infrastructure strategy in the AI arms race.
According to an internal memo, the appointment comes as Google plans to pour up to $93 billion into capital expenditures by the end of 2025. Parent company Alphabet expects that figure to grow substantially next year, and Vahdat is a key steward of this enormous investment.
The computer scientist, who holds a PhD from UC Berkeley, began his career as a research intern at Xerox PARC in the early 1990s. Before joining Google in 2010 as an engineering fellow and vice president, he held faculty positions at Duke University and UC San Diego. Over the past 15 years, Vahdat has been laying the foundations of Google's AI infrastructure, and his roughly 395 published papers center on making computing more efficient at massive scale.
Eight months ago, as VP of ML, Systems, and Cloud AI, Vahdat unveiled the seventh-generation TPU, Ironwood, at Google Cloud Next. The numbers he cited were staggering: more than 9,000 chips per pod delivering 42.5 exaflops of compute, more than 24 times the power of the world's most powerful supercomputer at the time. "Demand for AI compute has increased by a factor of 100 million in just eight years," he said at the event.
Away from the spotlight, Vahdat has long led the engineering that keeps Google competitive: the custom TPU chips built for AI training and inference, and the ultra-fast internal network known as Jupiter, which now scales to 13 petabits per second, enough, in his own comparison, "to support a video call for all 8 billion people on Earth simultaneously." This invisible underlying architecture carries everything from YouTube and Search to AI training systems across hundreds of data centers worldwide.
Vahdat has also been deeply involved in the ongoing development of Borg, Google's cluster management system, and led the development of Axion, the company's first custom Arm-based general-purpose CPU for data centers. In a market where competition for top AI talent has turned white-hot, the promotion is both a recognition of his technical leadership and a sign of Google's determination to retain key strategic people. As industry observers put it: when a company spends 15 years turning someone into the linchpin of its AI strategy, keeping that person becomes the obvious choice.
Full translation:
Google just took a key step in the AI infrastructure arms race. According to an internal memo first reported by Semafor, the company has elevated Amin Vahdat to chief technologist for AI infrastructure, a newly created position that reports directly to CEO Sundar Pichai. The move signals how strategically important this work has become: Google plans to spend up to $93 billion in capital expenditures by the end of 2025, and parent company Alphabet expects that number to jump substantially next year.
Vahdat is no newcomer. The computer scientist, who holds a PhD from UC Berkeley and started out as a research intern at Xerox PARC in the early 1990s, has spent the past fifteen years quietly building Google's AI backbone. Before joining Google in 2010 as an engineering fellow and VP, he was an associate professor at Duke University and later a professor and SAIC Chair at UC San Diego. His academic record is formidable: roughly 395 published papers, with research consistently focused on making computing efficient at very large scale.
Vahdat already has a high profile inside Google. Just eight months ago, in his role as VP and GM of ML, Systems, and Cloud AI, he unveiled the company's seventh-generation TPU, Ironwood, at Google Cloud Next. The specs he cited were staggering: more than 9,000 chips per pod delivering 42.5 exaflops of compute, which he said was more than 24 times the power of the world's No. 1 supercomputer at the time. "Demand for AI compute has increased by a factor of 100 million in just eight years," he told the audience.
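Those headline figures are easy to sanity-check. The back-of-the-envelope sketch below is not from the article; it simply works out what the "24 times" and "factor of 100 million" claims imply, assuming the exaflops figures are compared at the same precision:

```python
# Back-of-the-envelope check of the Ironwood figures quoted above.
pod_exaflops = 42.5        # stated compute per Ironwood pod
ratio_vs_top_machine = 24  # "more than 24 times" the leading supercomputer

# Implied compute of the reference supercomputer (order of magnitude only).
implied_top_machine = pod_exaflops / ratio_vs_top_machine
print(f"Implied top supercomputer: ~{implied_top_machine:.2f} exaflops")

# "A factor of 100 million in just eight years" as a compound annual growth rate.
growth_factor, years = 1e8, 8
annual_growth = growth_factor ** (1 / years)
print(f"Implied growth in AI compute demand: ~{annual_growth:.0f}x per year")
```

That works out to a reference machine of roughly 1.8 exaflops and a sustained tenfold increase in AI compute demand every year for eight years.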
As Semafor notes, Vahdat has been orchestrating the unglamorous but essential work that keeps Google competitive, including the custom TPU chips for AI training and inference that give Google an edge over rivals like OpenAI, and the super-fast internal network known as Jupiter, which lets all of its servers talk to each other and move massive amounts of data around. (In a blog post late last year, Vahdat said Jupiter now scales to 13 petabits per second of bandwidth, theoretically enough to support a video call for all 8 billion people on Earth simultaneously.) It is this invisible plumbing that ties YouTube, Search, and AI training systems together across hundreds of data center fabrics worldwide.
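The video-call comparison also holds up arithmetically. A rough sketch, assuming a video call needs on the order of 1.5 Mbps per participant (an assumption not taken from the article):

```python
# Rough check of the "video call for all 8 billion people" comparison.
jupiter_bps = 13e15   # Jupiter bandwidth: 13 petabits per second
people = 8e9          # world population

per_person_bps = jupiter_bps / people
print(f"Bandwidth available per person: ~{per_person_bps / 1e6:.1f} Mbps")
# ~1.6 Mbps each, roughly what a standard-quality video call consumes,
# so the comparison is plausible as an order-of-magnitude illustration.
```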
Vahdat has also been deeply involved in the ongoing development of Borg, Google's cluster management system, the "brain" that coordinates all the work running across its data centers and decides which server should run which task, when, and for how long. He has also said he oversaw the development of Axion, Google's first custom Arm-based general-purpose CPU designed for data centers, which the company unveiled last year and continues to build out.
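To make the "which servers run which tasks" role concrete, here is a deliberately tiny scheduling sketch. It is not Borg's actual algorithm or API (Borg is internal to Google and vastly more sophisticated); it only illustrates the kind of decision a cluster manager makes, matching task resource requests against machines with spare capacity:

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    free_cpus: float
    free_ram_gb: float
    tasks: list = field(default_factory=list)

@dataclass
class Task:
    name: str
    cpus: float
    ram_gb: float

def schedule(tasks, machines):
    """Greedy best-fit placement: put each task on the feasible machine
    with the least leftover CPU. A loose stand-in for the placement
    decisions a real cluster manager makes at far larger scale."""
    placements = {}
    for task in tasks:
        feasible = [m for m in machines
                    if m.free_cpus >= task.cpus and m.free_ram_gb >= task.ram_gb]
        if not feasible:
            placements[task.name] = None  # pending: nothing fits right now
            continue
        best = min(feasible, key=lambda m: m.free_cpus - task.cpus)
        best.free_cpus -= task.cpus
        best.free_ram_gb -= task.ram_gb
        best.tasks.append(task.name)
        placements[task.name] = best.name
    return placements

machines = [Machine("m1", free_cpus=8, free_ram_gb=32),
            Machine("m2", free_cpus=16, free_ram_gb=64)]
tasks = [Task("web-frontend", cpus=4, ram_gb=8),
         Task("batch-index", cpus=12, ram_gb=48),
         Task("cache", cpus=2, ram_gb=16)]
print(schedule(tasks, machines))
# {'web-frontend': 'm1', 'batch-index': 'm2', 'cache': 'm1'}
```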
In short, Vahdat is central to Google's AI story.
Indeed, in a market where top AI talent commands astronomical pay and is courted constantly, the decision may also be about retention. When a company has spent fifteen years turning someone into the linchpin of its AI strategy, it makes sure they stay.
English source:
Google just made a major move in the AI infrastructure arms race, elevating Amin Vahdat to chief technologist for AI infrastructure, a newly created position reporting directly to CEO Sundar Pichai, according to an internal memo first reported by Semafor. It’s a signal of just how critical this work has become as Google pours up to $93 billion into capital expenditures by the end of 2025 — a number that parent company Alphabet expects will be a whole lot bigger next year.
Vahdat isn’t new to the game. The computer scientist, who holds a PhD from UC Berkeley and started as a research intern at Xerox PARC back in the early ’90s, has been quietly building Google’s AI backbone for the past 15 years. Before joining Google in 2010 as an engineering fellow and VP, he was an associate professor at Duke University and later a professor and SAIC Chair at UC San Diego. His academic credentials are formidable — with what appears to be around 395 published papers — and his research has always focused on making computers work more efficiently at massive scale.
Vahdat already maintains a high profile with Google. Just eight months ago, at Google Cloud Next, he unveiled the company’s seventh-generation TPU, called Ironwood, in his role as VP and GM of ML, Systems, and Cloud AI. The specs he rattled off at the event were staggering, too: over 9,000 chips per pod delivering 42.5 exaflops of compute — more than 24 times the power of the world’s No. 1 supercomputer at the time, he said. “Demand for AI compute has increased by a factor of 100 million in just eight years,” he told the audience.
Behind the scenes, as noted by Semafor, Vahdat has been orchestrating the unglamorous and essential work that keeps Google competitive, including those custom TPU chips for AI training and inference that give Google an edge over rivals like OpenAI as well as the Jupiter network, the super-fast internal network that allows all its servers to talk to each other and move massive amounts of data around. (In a blog post late last year, Vahdat said that Jupiter now scales to 13 petabits per second, explaining that’s enough bandwidth to theoretically support a video call for all 8 billion people on Earth simultaneously.) It’s the invisible plumbing connecting everything from YouTube and Search to Google’s massive AI training operations across hundreds of data center fabrics worldwide.
Vahdat has also been deeply involved in the ongoing development of the Borg software system, Google’s cluster management system that acts as the brain coordinating all the work happening across its data centers and whose job is to figure out which servers should run which tasks, when, and for how long. And he has said he oversaw the development of Axion, Google’s first custom Arm-based general-purpose CPUs designed for data centers, which the company unveiled last year and continues to build.
In short, Vahdat is central to Google’s AI story.
Indeed, in a market where top AI talent commands astronomical compensation and constant recruitment, Google’s decision to elevate Vahdat to the C-suite may also be about retention. When you’ve spent 15 years building someone into a linchpin of your AI strategy, you make sure they stay.
Article title: Google's answer to the AI arms race: promoting the key figure behind its data center technology.
Article link: https://qimuai.cn/?post=2397
All articles on this site are original; please do not use them for any commercial purpose without authorization.