
Microsoft AI launches its first in-house models

Published by qimuai · first-hand translation



Source: https://www.theverge.com/news/767809/microsoft-in-house-ai-models-launch-openai

Summary:

Microsoft's AI division on Thursday unveiled its first two in-house AI models: the MAI-Voice-1 speech model and MAI-1-preview. According to the company, MAI-Voice-1 can generate a minute's worth of audio in under one second on a single GPU, and it already powers product features such as Copilot Daily; MAI-1-preview, meanwhile, offers a glimpse of the next generation of the Copilot assistant.

The launch marks a new twist in Microsoft's complicated partnership with OpenAI, as its in-house models will compete directly with GPT-5, DeepSeek, and other leading AI models. MAI-1-preview was trained on roughly 15,000 Nvidia H100 GPUs and focuses on following instructions and answering everyday queries; it is already being tested in certain text scenarios within the Copilot assistant and has entered public testing on the AI benchmarking platform LMArena.

Microsoft AI chief Mustafa Suleyman has previously stressed that the company's core strategy is to build consumer-facing AI companion products rather than enterprise applications. Going forward, Microsoft plans to create greater value for users by orchestrating a range of specialized models.



English source:

Microsoft AI launches its first in-house models
Microsoft’s complicated partnership with OpenAI is adding a new twist as it releases AI models that will compete with GPT-5, DeepSeek, and all the rest.
Microsoft’s AI division announced its first homegrown AI models on Thursday: MAI-Voice-1 and MAI-1-preview. The company says its new MAI-Voice-1 speech model can generate a minute’s worth of audio in under one second on just one GPU, while MAI-1-preview “offers a glimpse of future offerings inside Copilot.”
Microsoft already uses MAI-Voice-1 to power a couple of its features, including Copilot Daily, which has an AI host recite the day’s top news stories, and to generate podcast-style discussions to help explain topics.
You can try MAI-Voice-1 out for yourself on Copilot Labs, where you can enter what you want the AI model to say, as well as change its voice and style of speaking. In addition to this model, Microsoft introduced MAI-1-preview, which it says it trained on around 15,000 Nvidia H100 GPUs. It’s built for users in need of an AI model capable of following instructions and “providing helpful responses to everyday queries.”
Microsoft AI chief Mustafa Suleyman said during an episode of Decoder last year that the company’s internal AI models aren’t focused on enterprise use cases. “My logic is that we have to create something that works extremely well for the consumer and really optimize for our use case,” Suleyman said. “So, we have vast amounts of very predictive and very useful data on the ad side, on consumer telemetry, and so on. My focus is on building models that really work for the consumer companion.”
Microsoft AI plans on rolling out MAI-1-preview for certain text use cases in its Copilot AI assistant, which currently relies on OpenAI’s large language models. It has also started publicly testing its MAI-1-preview model on the AI benchmarking platform LMArena.
“We have big ambitions for where we go next,” Microsoft AI writes in the blog post. “Not only will we pursue further advances here, but we believe that orchestrating a range of specialized models serving different user intents and use cases will unlock immense value.”



