
Finding Return on AI Investments Across Industries

Published by qimuai · first-hand compiled translation



Source: https://www.technologyreview.com/2025/10/28/1126693/finding-return-on-ai-investments-across-industries/

Summary:

[Tech Innovation Watch] How can enterprises crack the AI ROI puzzle? An Intel report proposes three core principles

Three years after ChatGPT set off the global AI boom, businesses face a sobering reality. According to a recent MIT report, as many as 95% of AI pilot projects fail to scale or to deliver clear, measurable returns. The figure drew heated discussion at The Wall Street Journal's technology leadership summit, where several technology leaders even advised companies to stop agonizing over measuring AI ROI.

Against this backdrop, Intel has published a report charting three paths for improving the return on enterprise AI investment:

The first principle is to rethink the value of data. Enterprises should recognize that their data assets are significant negotiating leverage. Provided confidentiality is protected, they can negotiate win-win arrangements with model vendors, trading selective data access for services or price offsets. Recent deals in which Anthropic and OpenAI paid heavily for enterprise data confirm both the scarcity and the commercial value of high-quality non-public data.

Second, follow a "stability first" principle. By one count, 182 new large models appeared worldwide in 2024, a pace of iteration far faster than enterprise system update cycles. Successful deployments focus AI on a company's specific business scenarios and run it routinely in the background. Abstracting business workflows away from direct model API calls preserves system stability while leaving room for later upgrades.

The third principle can be summed up as "mini-van economics." Rather than chasing cutting-edge specifications, enterprises should size systems to what the business can actually absorb. Just as a Ferrari cannot exploit its performance in a school zone, an over-provisioned AI system merely drives up operating costs. In practice, designs that match human reading speed (under 50 tokens per second) often achieve scale at minimal cost.

The report concludes that enterprises should start from real needs, build implementations with technical independence, and leverage the strategic value of their own data within the AI ecosystem in order to navigate the AI transition steadily.

(This article is compiled from sponsored content produced by Intel.)


English source:

Sponsored
Finding return on AI investments across industries
Taking the time to make a use case for AI will propel companies further and improve the return on investment in this fast-changing technology.
Provided by Intel
The market is officially three years post-ChatGPT, and many pundit bylines have shifted to terms like "bubble" to explain why generative AI has not realized material returns outside a handful of technology suppliers.
In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of all AI pilots failed to scale or deliver clear and measurable ROI. McKinsey earlier published a similar trend report indicating that agentic AI would be the way forward to achieve huge operational benefits for enterprises. At The Wall Street Journal's Technology Council Summit, AI technology leaders recommended that CIOs stop worrying about AI's return on investment, because measuring gains is difficult and, were they to try, the measurements would be wrong.
This places technology leaders in a precarious position: robust tech stacks already sustain their business operations, so what is the upside of introducing new technology?
For decades, deployment strategies have followed a consistent cadence where tech operators avoid destabilizing business-critical workflows to swap out individual components in tech stacks. For example, a better or cheaper technology is not meaningful if it puts your disaster recovery at risk.
While the price might increase when a new buyer takes over mature middleware, losing part of your enterprise data midway through a technology transition is far more costly than paying a higher price for a stable technology you have run your business on for 20 years.
So, how do enterprises get a return on investing in the latest tech transformation?
First principle of AI: Your data is your value
Most of the articles about AI data relate to engineering tasks to ensure that an AI model infers against business data in repositories that represent past and present business realities.
However, one of the most widely deployed use cases in enterprise AI begins with prompting an AI model by uploading file attachments into it. This step narrows the model's range to the content of the uploaded files, improving response accuracy and reducing the number of prompts required to get the best answer.
This tactic relies upon sending your proprietary business data into an AI model, so there are two important considerations to take in parallel with data preparation: first, governing your system for appropriate confidentiality; and second, developing a deliberate negotiation strategy with the model vendors, who cannot advance their frontier models without getting access to non-public data, like your business’ data.
Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet.
Most enterprises would automatically prioritize confidentiality of their data and design business workflows to maintain trade secrets. From an economic value point of view, especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy. Rather than approaching model purchase/onboarding as a typical supplier/procurement exercise, think through the potential to realize mutual benefits in advancing your suppliers’ model and your business adoption of the model in tandem.
Second principle of AI: Boring by design
According to Information is Beautiful, 182 new generative AI models were introduced to the market in 2024 alone. When GPT-5 came to market in 2025, many of the models from the prior 12 to 24 months were rendered unavailable until subscription customers threatened to cancel. Those customers' previously stable AI workflows were built on models that no longer worked. Their tech providers assumed customers would be excited about the newest models and did not realize the premium that business workflows place on stability. Video gamers, by contrast, are happy to upgrade their custom builds throughout the entire lifespan of their gaming rigs, and will overhaul the whole system just to play a newly released title.
However, that behavior does not translate to business run-rate operations. While many employees may use the latest models for document processing or generating content, back-office operations cannot sustain swapping out a tech stack three times a week to keep up with the latest model drops. The back-office work is boring by design.
The most successful AI deployments have focused on business problems unique to the enterprise, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense auditors of manually cross-checking individual reports, while keeping the final decision in a human's responsibility zone, combines the best of both.
The important point is that none of these tasks require constant updates to the latest model to deliver that value. This is also an area where abstracting your business workflows from using direct model APIs can offer additional long-term stability while maintaining options to update or upgrade the underlying engines at the pace of your business.
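The abstraction described above can be sketched as a thin internal gateway: workflows depend on a stable in-house interface, and the concrete model backend behind it can be swapped on the business's own schedule. The class and backend names below are hypothetical, not from the article or any vendor SDK.

```python
# Minimal sketch of decoupling business workflows from direct model APIs.
# Workflows call ModelGateway.complete(); which engine answers is an
# operational decision made behind the facade, not in workflow code.
from typing import Callable, Dict, Optional


class ModelGateway:
    """Stable internal facade; workflows depend on this, not on vendor SDKs."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        self._active: Optional[str] = None

    def register(self, name: str, complete_fn: Callable[[str], str]) -> None:
        # complete_fn wraps one vendor/model client behind a uniform signature.
        self._backends[name] = complete_fn

    def activate(self, name: str) -> None:
        if name not in self._backends:
            raise KeyError(f"unknown backend: {name}")
        self._active = name

    def complete(self, prompt: str) -> str:
        # Workflows only ever call this; upgrades happen behind it.
        return self._backends[self._active](prompt)


gateway = ModelGateway()
gateway.register("model-v1", lambda p: f"[v1] {p}")  # stand-in clients
gateway.register("model-v2", lambda p: f"[v2] {p}")
gateway.activate("model-v1")  # stable default for back-office jobs
```

Upgrading the underlying engine then becomes a single `gateway.activate("model-v2")` call made at the pace of the business, with no changes to the workflows themselves.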
Third principle of AI: Mini-van economics
The best way to avoid upside-down economics is to design systems to align to the users rather than vendor specs and benchmarks.
Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on new supplier-led benchmarks rather than starting their AI journey from what their business can consume, at what pace, on the capabilities they have deployed today.
While Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack ample trunk space for groceries. Keep in mind that every remote server and model a user touches layers on cost, so design for frugality by reconfiguring workflows to minimize spending on third-party services.
Too many companies have found that their customer support AI workflows add millions of dollars of operational run-rate costs, and they end up spending more development time and money reworking the implementation for OpEx predictability. Meanwhile, the companies that decided a system should run at the pace a human can read (less than 50 tokens per second) were able to successfully deploy scaled-out AI applications with minimal additional overhead.
There are so many aspects of this new automation technology to unpack—the best guidance is to start practical, design for independence in underlying technology components to keep from disrupting stable applications long term, and to leverage the fact that AI technology makes your business data valuable to the advancement of your tech suppliers' goals.
This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.
