The era of agentic chaos and how data will save us

Summary:
The agentic era is here: data foundations decide enterprise AI success
As AI moves from assistive tooling into the operational core of the enterprise, autonomous AI agents are driving a new wave of productivity gains. A mid-sized company may soon run thousands of agents, taking over end-to-end processes from lead generation and supply chain optimization to customer service and financial reconciliation. Yet research from Boston Consulting Group shows that despite heavy investment, roughly 60% of companies see minimal revenue and cost gains from AI. The decisive gap between winners and laggards is not budget or model choice but whether they have built a unified, trusted, context-rich data foundation.
The agent reliability challenge: more than models
Reliable enterprise agents rest on four pillars: models, tools, context, and governance. Take the simple task of ordering a pizza: the model interprets the instruction, the tool calls the ordering API, context supplies personal preferences (say, pepperoni on Friday nights at 7pm), and governance verifies that the pizza actually arrived. A gap in any one of these can lead an agent astray.
Model capability is advancing rapidly and tool integration keeps getting easier, yet many enterprise agents still struggle under "data debt": legacy silos and departmental walls leave data contradictory, duplicated, and fragmented. A handful of agents can get by in this environment, but as deployments scale, inconsistent data sets off a chain reaction: agents work at cross purposes, make contradictory decisions, even breach compliance requirements, and ultimately erode trust in the enterprise.
Building unified context: from experiments to value at scale
Leading companies have recognized that in an agent-driven era, data itself is core infrastructure. They are investing first in unified data management platforms so that every agent reasons and acts on the same accurate, real-time, complete business context. This not only prevents internal chaos; it lets the enterprise deploy thousands of agents safely and efficiently and realize value at scale.
Agents will redefine how enterprises operate, and context intelligence will decide who leads. For companies intent on riding this wave, strengthening the data foundation is no longer optional; it is the necessary path. Only by turning scattered data into coordinated intelligence can the full potential of the agentic era be unlocked.
English source:
Sponsored
The era of agentic chaos and how data will save us
Autonomous agents will soon run thousands of enterprise workflows, and only organizations with unified, trusted, context-rich data will prevent chaos and unlock reliable value at scale.
Provided by Reltio
AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now.
The agent explosion is coming
Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience.
The transformation toward an agent-driven enterprise is inevitable. The economic benefits are too significant to ignore, and the potential is becoming a reality faster than most predicted. The problem? Most businesses and their underlying infrastructure are not prepared for this shift. Early adopters have found unlocking AI initiatives at scale to be extremely challenging.
The reliability gap that’s holding AI back
Companies are investing heavily in AI, but the returns aren't materializing. According to recent research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment. The leaders, however, achieved five times the revenue increases and three times the cost reductions of the rest. Clearly, there is a massive premium for being a leader.
What separates the leaders from the pack isn't how much they're spending or which models they're using. Before scaling AI deployment, these “future-built” companies put critical data infrastructure capabilities in place. They invested in the foundational work that enables AI to function reliably.
A framework for agent reliability: The four quadrants
To understand how and where enterprise AI can fail, consider four critical quadrants: models, tools, context, and governance.
Take a simple example: an agent that orders you pizza. The model interprets your request ("get me a pizza"). The tool executes the action (calling the Domino's or Pizza Hut API). Context provides personalization (you tend to order pepperoni on Friday nights at 7pm). Governance validates the outcome (did the pizza actually arrive?).
Each dimension represents a potential failure point:
- Models: The underlying AI systems that interpret prompts, generate responses, and make predictions
- Tools: The integration layer that connects AI to enterprise systems, such as APIs, protocols, and connectors
- Context: The information agents need before making decisions to see the full business picture, including customer histories, product catalogs, and supply chain networks
- Governance: The policies, controls, and processes that ensure data quality, security, and compliance
This framework helps diagnose where reliability gaps emerge. When an enterprise agent fails, which quadrant is the problem? Is the model misunderstanding intent? Are the tools unavailable or broken? Is the context incomplete or contradictory? Or is there no mechanism to verify that the agent did what it was supposed to do?
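To make the four quadrants concrete, here is a minimal, purely illustrative Python sketch of an agent loop in which each quadrant is a separate, inspectable stage. Every name in it (interpret_request, CONTEXT_STORE, place_order, verify_delivery) is a hypothetical placeholder rather than anything from the article or any specific product; a real agent would call an LLM and live APIs, but the separation of responsibilities is the point.

```python
# Illustrative sketch only: a minimal agent loop with the four quadrants kept
# as distinct, inspectable stages. All names here are hypothetical placeholders,
# not APIs from the article or from any particular vendor.

from dataclasses import dataclass

@dataclass
class Decision:
    intent: str      # what the model thinks the user wants (MODEL)
    order: dict      # the concrete action that was executed (TOOL)
    verified: bool   # whether the outcome passed validation (GOVERNANCE)

# CONTEXT: a stand-in for the unified business context shared by all agents.
CONTEXT_STORE = {
    "user:42": {"usual_order": "pepperoni", "usual_time": "Fri 19:00"},
}

def interpret_request(prompt: str) -> str:
    """MODEL: parse the request into an intent (a real system would call an LLM)."""
    return "order_pizza" if "pizza" in prompt.lower() else "unknown"

def place_order(intent: str, profile: dict) -> dict:
    """TOOL: execute the action via an integration layer (here, a stub)."""
    return {"item": profile.get("usual_order", "margherita"), "status": "placed"}

def verify_delivery(order: dict) -> bool:
    """GOVERNANCE: validate the outcome against policy before trusting it."""
    return order.get("status") == "placed"

def run_agent(prompt: str, user_id: str) -> Decision:
    intent = interpret_request(prompt)                       # model
    profile = CONTEXT_STORE.get(user_id, {})                 # context
    order = place_order(intent, profile)                     # tool
    return Decision(intent, order, verify_delivery(order))   # governance

print(run_agent("get me a pizza", "user:42"))
```

Framed this way, a failure can be localized: a wrong intent is a model problem, a failed API call is a tool problem, a stale or missing profile is a context problem, and an unchecked outcome is a governance problem.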
Why this is a data problem, not a model problem
The temptation is to think that reliability will simply improve as models improve. And indeed, model capability is advancing exponentially: the cost of inference has fallen nearly 900-fold in three years, hallucination rates are on the decline, and AI’s capacity to perform long tasks doubles every six months.
Tooling is also accelerating. Integration frameworks like the Model Context Protocol (MCP) make it dramatically easier to connect agents with enterprise systems and APIs.
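As a rough illustration of why a shared protocol helps, MCP frames tool use as standardized JSON-RPC 2.0 messages instead of bespoke per-system integrations. The Python snippet below sketches the general shape of such a tool-call request; the tool name and arguments are invented for this example, and exact fields can vary by protocol version and server.

```python
# Rough sketch of the shape of an MCP-style tool call. MCP frames requests as
# JSON-RPC 2.0 messages (method "tools/call" with a tool name and arguments);
# the specific tool ("crm.lookup_customer") and its arguments are invented here
# purely for illustration.
import json

tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm.lookup_customer",           # hypothetical tool exposed by a server
        "arguments": {"customer_id": "C-1042"},  # hypothetical arguments
    },
}

# An agent runtime would send this over the MCP transport (e.g. stdio or HTTP)
# and receive a JSON-RPC response containing the tool's result.
print(json.dumps(tool_call_request, indent=2))
```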
If models are powerful and tools are maturing, then what is holding back adoption?
To borrow from James Carville, “It is the data, stupid.” The root cause of most misbehaving agents is misaligned, inconsistent, or incomplete data.
Enterprises have accumulated data debt over decades. Acquisitions, custom systems, departmental tools, and shadow IT have left data scattered across silos that rarely agree. Support systems do not match what is in marketing systems. Supplier data is duplicated across finance, procurement, and logistics. Locations have multiple representations depending on the source.
Drop a few agents into this environment, and they will perform wonderfully at first, because each one is given a curated set of systems to call. Add more agents and the cracks grow, as each one builds its own fragment of truth.
This dynamic has played out before. When business intelligence became self-serve, everyone started creating dashboards. Productivity soared, but reports failed to match. Now imagine that phenomenon not in static dashboards, but in AI agents that can take action. With agents, data inconsistency produces real business consequences, not just debates among departments.
Companies that build unified context and robust governance can deploy thousands of agents with confidence, knowing they'll work together coherently and comply with business rules. Companies that skip this foundational work will watch their agents produce contradictory results, violate policies, and ultimately erode trust faster than they create value.
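A toy example of the "fragments of truth" problem described above (not a depiction of any vendor's actual matching logic): three systems each hold a slightly different record for the same supplier, and even a crude normalization shows how much reconciliation work agents would otherwise redo, inconsistently, on their own.

```python
# Toy illustration of conflicting supplier records across systems and a
# deliberately naive merge key. Records and rules are invented for this sketch.

finance     = {"supplier": "ACME Corp.",       "address": "12 Main St, Springfield"}
procurement = {"supplier": "Acme Corporation", "address": "12 Main Street, Springfield"}
logistics   = {"supplier": "ACME",             "address": "Springfield warehouse #3"}

def crude_key(record: dict) -> str:
    """Naive normalization: lowercase, strip punctuation and common suffixes."""
    name = record["supplier"].lower().replace(".", "").replace(",", "")
    for suffix in (" corporation", " corp", " inc"):
        name = name.removesuffix(suffix)
    return name.strip()

records = [finance, procurement, logistics]
keys = {crude_key(r) for r in records}

# With real data, naive rules like this both over- and under-merge, which is why
# the article argues for a unified, governed context shared by every agent
# rather than per-agent, per-system reconciliation.
print(keys)  # {'acme'} here, but only because the toy data is friendly
```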
Leverage agentic AI without the chaos
The question for enterprises centers on organizational readiness. Will your company prepare the data foundation needed to make agent transformation work? Or will you spend years debugging agents, one issue at a time, forever chasing problems that originate in infrastructure you never built?
Autonomous agents are already transforming how work gets done. But the enterprise will only experience the upside if those systems operate from the same truth. This ensures that when agents reason, plan, and act, they do so based on accurate, consistent, and up-to-date information.
The companies generating value from AI today have built on fit-for-purpose data foundations. They recognized early that in an agentic world, data functions as essential infrastructure. A solid data foundation is what turns experimentation into dependable operations.
At Reltio, the focus is on building that foundation. The Reltio data management platform unifies core data from across the enterprise, giving every agent immediate access to the same business context. This unified approach enables enterprises to move faster, act smarter, and unlock the full value of AI.
Agents will define the future of the enterprise. Context intelligence will determine who leads it.
For leaders navigating this next wave of transformation, see Reltio’s practical guide:
Unlocking Agentic AI: A Business Playbook for Data Readiness. Get your copy now to learn how real-time context becomes the decisive advantage in the age of intelligence.