Building a High-Performance Data and AI Organization (2nd edition)

Summary:
[Exclusive survey] Lagging data infrastructure is dragging down AI strategy; only 2% of companies report significant returns
A new research report from MIT Technology Review Insights finds that although AI technology is advancing at unprecedented speed, enterprises' data management capabilities have failed to keep pace, leaving the real-world benefits of AI strategies broadly below expectations. The survey, conducted in partnership with Databricks, covered 800 senior data and technology executives and reveals the transformation challenges enterprises face amid the AI wave.
The survey shows that while core AI capabilities such as multimodal processing and autonomous reasoning have made breakthrough progress, data quality remains the key bottleneck constraining AI effectiveness. Compared with the first edition of the study in 2021, the share of companies rating themselves as data management "high achievers" has actually fallen (12% in 2025 vs. 13% in 2021), and data teams commonly face talent shortages, stale data, and complex security and compliance requirements.
This lag in data infrastructure directly undermines AI outcomes. Only 2% of the executives surveyed believe their AI strategy has delivered significant business returns; roughly two-thirds of companies have deployed generative AI, but only 7% have scaled it broadly. The study identifies weak data lineage capabilities and difficulty accessing real-time data as the main obstacles to deeper AI adoption.
The report also warns that when accelerating technology meets data management bottlenecks, companies risk a vicious cycle of rising AI investment and stagnant returns. With AI tools such as ChatGPT already reaching sensitive areas like medical advice and language preservation, shortcomings in data governance could further amplify ethical risks. Building a data strategy that keeps pace with AI development has become an urgent challenge for companies worldwide.
Sponsored
Building a high performance data and AI organization (2nd edition)
What it takes to deliver on data and AI strategy.
In partnership with Databricks
Four years is a lifetime when it comes to artificial intelligence. Since the first edition of this study was published in 2021, AI’s capabilities have been advancing at speed, and the advances have not slowed since generative AI’s breakthrough. For example, multimodality—the ability to process information not only as text but also as audio, video, and other unstructured formats—is becoming a common feature of AI models. AI’s capacity to reason and act autonomously has also grown, and organizations are now starting to work with AI agents that can do just that.
Amid all the change, there remains a constant: the quality of an AI model’s outputs is only ever as good as the data
that feeds it. Data management technologies and practices have also been advancing, but the second edition of this study suggests that most organizations are not leveraging them fast enough to keep up with AI’s development. As a result of this and other hindrances, relatively few organizations are delivering the desired business results from their AI strategy. No more than 2% of the senior executives we surveyed rate their organizations highly in terms of delivering results from AI.
To determine the extent to which organizational data performance has improved as generative AI and other AI advances have taken hold, MIT Technology Review Insights surveyed 800 senior data and technology executives. We also conducted in-depth interviews with 15 technology and business leaders.
Key findings from the report include the following:
• Few data teams are keeping pace with AI. Organizations are doing no better today at delivering on data strategy than in pre-generative AI days. Among those surveyed in 2025, 12% are self-assessed data “high achievers” compared with 13% in 2021. Shortages of skilled talent remain a constraint, but teams also struggle with accessing fresh data, tracing lineage, and dealing with security complexity—important requirements for AI success.
• Partly as a result, AI is not fully firing yet. There are even fewer “high achievers” when it comes to AI. Just 2% of respondents rate their organizations’ AI performance highly today in terms of delivering measurable business results. In fact, most are still struggling to scale generative AI. While two-thirds have deployed it, only 7% have done so widely.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.