
While OpenAI races to build AI data centers, Nadella reminds us that Microsoft already has them.




Source: https://techcrunch.com/2025/10/09/while-openai-races-to-build-ai-data-centers-nadella-reminds-us-that-microsoft-already-has-them/

Summary:

Microsoft CEO Satya Nadella recently announced on social media that the company has deployed its first massive AI system cluster. This infrastructure, which Nvidia calls an AI "factory," is the first of a series of such systems to be rolled out across Microsoft Azure's global data centers, dedicated to running OpenAI workloads.

Each system consists of more than 4,600 Nvidia GB300 rack computers equipped with the much-in-demand Blackwell Ultra GPU and interconnected via Nvidia's ultra-fast InfiniBand networking technology. Microsoft has pledged to deploy "hundreds of thousands of Blackwell Ultra GPUs" worldwide, a striking scale.

The timing of the announcement is notable. It comes just after OpenAI, Microsoft's partner and well-known frenemy, signed two data center deals with Nvidia and AMD. By some estimates, OpenAI has racked up roughly $1 trillion in commitments to build its own data centers, and CEO Sam Altman said this week that more projects are on the way.

Microsoft's message is that it already operates more than 300 data centers across 34 countries and that this existing infrastructure is "uniquely positioned" to meet the demands of frontier AI. The company also says these massive AI systems can run next-generation models with "hundreds of trillions of parameters."

More details on how Microsoft is ramping up its AI buildout are expected later this month. Microsoft CTO Kevin Scott is confirmed to speak at TechCrunch Disrupt, held October 27-29 in San Francisco.

Translation:

Microsoft CEO Satya Nadella on Thursday tweeted a video of the company's first deployed massive AI system, or AI "factory" as Nvidia likes to call them. He promised it is the "first of many" such Nvidia AI factories to be deployed across Microsoft Azure's global data centers to run OpenAI workloads.

Each system is a cluster of more than 4,600 Nvidia GB300 rack computers sporting the much-in-demand Blackwell Ultra GPU and connected via Nvidia's super-fast networking technology, InfiniBand. (Beyond AI chips, Nvidia CEO Jensen Huang also had the foresight to corner the InfiniBand market with the company's $6.9 billion acquisition of Mellanox in 2019.)

Microsoft promises it will deploy "hundreds of thousands of Blackwell Ultra GPUs" as it rolls these systems out globally. The scale of these systems is eye-popping (the company also shared plenty of technical details for hardware enthusiasts to peruse), and the timing of the announcement is just as noteworthy.

It comes just after OpenAI, Microsoft's partner and well-documented frenemy, inked two high-profile data center deals with Nvidia and AMD. By some estimates, OpenAI has racked up about $1 trillion in commitments to build its own data centers, and CEO Sam Altman said this week that more are coming.

Microsoft clearly wants the world to know that it already has the data centers, more than 300 across 34 countries, and that they are "uniquely positioned" to "meet the demands of frontier AI today." The company says these monster AI systems can also run the next generation of models with "hundreds of trillions of parameters."

We will hear more about how Microsoft is ramping up to serve AI workloads later this month. Microsoft CTO Kevin Scott will speak at TechCrunch Disrupt, held October 27-29 in San Francisco.

English source:

Microsoft CEO Satya Nadella on Thursday tweeted a video of his company’s first deployed massive AI system — or AI “factory” as Nvidia likes to call them. He promised this is the “first of many” such Nvidia AI factories that will be deployed across Microsoft Azure’s global data centers to run OpenAI workloads.
Each system is a cluster of more than 4,600 Nvidia GB300 rack computers sporting the much-in-demand Blackwell Ultra GPU chip and connected via Nvidia’s super-fast networking tech called InfiniBand. (Besides AI chips, Nvidia CEO Jensen Huang also had the foresight to corner the market on InfiniBand when his company acquired Mellanox for $6.9 billion in 2019.)
Microsoft promises that it will be deploying “hundreds of thousands of Blackwell Ultra GPUs” as it rolls out these systems globally. While the size of these systems is eye-popping (and the company shared plenty more technical details for hardware enthusiasts to peruse), the timing of this announcement is also noteworthy.
It comes just after OpenAI, its partner and well-documented frenemy, inked two high-profile data center deals with Nvidia and AMD. In 2025, OpenAI has racked up, by some estimates, $1 trillion in commitments to build its own data centers. And CEO Sam Altman said this week that more were coming.
Microsoft clearly wants the world to know that it already has the data centers — more than 300 in 34 countries — and that they are “uniquely positioned” to “meet the demands of frontier AI today,” the company said. These monster AI systems are also capable of running the next generation of models with “hundreds of trillions of parameters,” it said.
We expect to hear more about how Microsoft is ramping up to serve AI workloads later this month. Microsoft CTO Kevin Scott will be speaking at TechCrunch Disrupt, which will be held October 27 to October 29 in San Francisco.



