
Qualcomm Enters the AI Infrastructure Market with AI Chips

Published by qimuai · Reads: 9 · First-hand translation



Source: https://aibusiness.com/generative-ai/qualcomm-enters-ai-infrastructure-market-with-ai-chips

Summary:

Qualcomm officially unveiled two AI accelerator chips on Monday, marking the 40-year-old semiconductor and telecommunications company's push into the AI hardware market, in direct challenge to industry giants Nvidia and AMD.

Of the two, the AI200 carries 768GB of low-power memory per card, is optimized for large language models and multimodal AI applications, and is expected to ship in 2026. The AI250 claims ten times the memory bandwidth of current products, targets large transformer models at lower power consumption, and is slated for 2027. Both chips are built on Qualcomm's in-house Hexagon neural processing unit architecture, a technology refined over more than a decade of iteration.

Industry observers note that the move was all but inevitable. Since acquiring CPU maker Nuvia in 2021, Qualcomm has been laying the groundwork for the data center processor market. Analysts say the new chips share their architectural lineage with the Snapdragon processors found in phones and laptops, scaled up for higher performance.

The AI chip market itself is diversifying. Beyond Nvidia's continued lead, old-line chipmakers such as AMD and Intel have also entered the fray. Analysts stress that while Qualcomm has deep experience in mobile devices, it still has to prove itself in the data center; its core advantage may lie in its accumulated expertise in power efficiency.

Notably, to gain a foothold in AI infrastructure, Qualcomm must still clear several hurdles: winning the trust of hyperscale cloud providers, building out a mature software ecosystem, and demonstrating that its chips can run large-scale inference workloads efficiently and reliably. Nvidia is widely expected to respond quickly with products of its own, and AMD's expanding AI lineup adds further competitive pressure.



English source:

The new chips launch the 40-year-old semiconductor and telecommunications vendor into a new market.
Qualcomm introduced new AI accelerator chips on Monday, a move that puts the vendor in a position to compete directly with AI hardware and software giant Nvidia and longtime chipmaker AMD.
The AI200 chip features 768GB of LPDDR (low-power double data rate) memory per card and is optimized for large language models and multimodal AI applications. It will be available in 2026. The AI250 delivers ten times the effective memory bandwidth of current AI accelerator chips, according to Qualcomm, while lowering power consumption, and is designed for large transformer models. It will be available in 2027.
Qualcomm built both chips using its Hexagon neural processing unit (NPU), an AI accelerator it built in 2018 on an architecture dating to 2007, and then significantly updated in 2020.
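To see why the capacity and bandwidth figures above dominate inference economics, here is a back-of-envelope sketch. All parameter counts, byte widths, and bandwidth numbers below are illustrative assumptions, not Qualcomm specifications; only the 768GB per-card figure comes from the article.

```python
# Back-of-envelope sizing for LLM inference on an accelerator card.

def weight_footprint_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in GB."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

def decode_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
    """Roofline for autoregressive decoding at batch size 1: each generated
    token must stream the full weight set from memory once, so throughput
    is bandwidth-bound."""
    return bandwidth_gb_s / weights_gb

CARD_MEMORY_GB = 768  # AI200 per-card LPDDR capacity cited in the article

# Hypothetical model sizes, stored at 1 byte per parameter (8-bit weights).
for params_b in (70, 405, 700):
    gb = weight_footprint_gb(params_b, bytes_per_param=1.0)
    print(f"{params_b}B params @ 1 byte/param: {gb:.0f} GB, "
          f"fits on one card: {gb <= CARD_MEMORY_GB}")

# Why a 10x memory-bandwidth claim matters: batch-1 decode speed scales
# roughly linearly with bandwidth when weight traffic dominates.
weights_gb = weight_footprint_gb(70, 1.0)
for bw in (500, 5000):  # hypothetical GB/s figures, not vendor numbers
    print(f"{bw} GB/s -> ~{decode_tokens_per_sec(bw, weights_gb):.0f} tokens/s")
```

The sketch ignores KV-cache growth, activation memory, and batching, all of which shift the numbers in practice; it only illustrates why per-card capacity and bandwidth are the levers vendors compete on.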
Expected Step
While this seems like a new venture for Qualcomm, it was to be expected, said Ryan Shrout, president and analyst at Signal65, a division of Futurum Group. When Qualcomm acquired CPU manufacturer Nuvia in 2021, it was already planning to develop a new class of data center CPUs.
These two new chip designs seem to be influenced by that acquisition, Shrout said.
"The AI compute chip -- driven by an NPU -- appears to share a lot of architectural similarities with the NPUs already found in smartphones and laptops," powered by Qualcomm's Snapdragon chips, he said. "Just scaled up for higher performance."
The new chips also exemplify the growing trend of AI infrastructure in the computer industry, Shrout continued. Old-line chipmakers like AMD and Intel this year jumped into the AI chip market that Nvidia has dominated in recent years, and have already made some headway.
"Any company with Qualcomm's capabilities naturally wants to participate in that growth opportunity," he said.
Aiming for the AI data center and infrastructure market is a good business opportunity for Qualcomm.
A Need to Prove Itself
While Qualcomm is relatively new to the market, it has proven successful with Snapdragon-powered smartphones, wearables, and IoT systems.
However, past success does not guarantee future success.
"They still have a lot to prove in this space since it's not one where they've historically been successful, but their broad experience gives them a unique foundation," Shrout said. "If they execute well, they could carve out a compelling niche focused on efficiency and power-optimized AI compute."
Moreover, while Qualcomm's entrance into the infrastructure market poses a challenge to Nvidia, the AI hardware and software giant is unlikely to ignore it.
Nvidia will likely respond to Qualcomm's new chips with its own new offerings, Shrout said.
However, while Nvidia is the larger target, Qualcomm is also competing against AMD's expanding AI portfolio.
Qualcomm now needs to earn the trust of hyperscalers that could be big customers, Shrout said.
"Building the hardware is only part of the equation; establishing confidence in the software ecosystem and demonstrating that their inference workloads can run efficiently and reliably at scale will be critical," he added. "Competing against those entrenched players will take time, strong partnerships, and a proven record of performance and efficiency."
