Designing digital resilience in the agentic AI era

Content summary:
[Tech Frontier] How can enterprises build "digital resilience" in the era of agentic AI?
As artificial intelligence shifts from assistive tool to autonomous decision-maker, enterprises face a new round of digital challenges. Technology leaders point out that building an intelligent data architecture has become key to unlocking the potential of agentic AI and fortifying enterprise-wide digital resilience.
Digital resilience, the ability to prevent, withstand, and quickly recover from digital disruptions, is facing a brand-new test with the rise of agentic AI. These systems, capable of autonomously planning, reasoning, and executing tasks, promise a leap in efficiency, but their speed and autonomy also amplify the damage done by data inconsistencies, security gaps, and similar problems.
Although global AI investment is projected to reach $1.5 trillion in 2025, nearly half of business leaders lack confidence in their ability to maintain service continuity during unexpected events. Kamal Hathi, senior vice president at Splunk, a Cisco company, stresses that traditional large language models trained on human text data can no longer meet agentic AI's requirements for security, resilience, and constant availability.
Machine data is becoming the strategic core of the agentic AI era. Real-time streams of machine-generated data, such as server logs and device metrics, serve as "the heartbeat of the modern enterprise." Hathi notes that few enterprises today achieve sufficient machine data integration, which not only limits AI use cases but can also produce erroneous outputs. Just as early natural language processing models were hampered by semantic ambiguity, agentic AI that lacks grounding in machine data faces a similar risk of misfiring.
To meet this challenge, enterprises need a new data architecture: one that weaves data resources scattered across security, IT, business operations, and other functions into a unified fabric. Such an architecture breaks down data silos and enables real-time risk awareness, while a federated design preserves data security. Notably, the ability to handle unstructured machine data (such as system logs and security events) is becoming a key marker distinguishing traditional platforms from AI-ready ones.
In practice, AI itself has become a capable assistant in building this data architecture. AI tools can automatically identify relationships across heterogeneous data, correct errors, and classify content intelligently. Agentic AI systems can further help humans quickly spot anomalies in unstructured data streams, covering the blind spots of manual analysis.
Hathi particularly emphasizes human-machine collaboration: clear guardrails and human oversight are key to trustworthy AI adoption; AI can enhance human decision-making, but humans ultimately steer. This continuously evolving human-machine partnership is what takes digital resilience from merely withstanding disruption to self-optimization.
(Compiled from custom content produced for MIT Technology Review)
English source:
Sponsored
Designing digital resilience in the agentic AI era
As AI shifts from leveraging information provided by humans to making decisions on their behalf, tech leaders must weave an intelligent data fabric to unlock the full potential of agentic AI while shoring up enterprise-wide resilience.
In partnership with Cisco
Digital resilience—the ability to prevent, withstand, and recover from digital disruptions—has long been a strategic priority for enterprises. With the rise of agentic AI, the urgency for robust resilience is greater than ever.
Agentic AI represents a new generation of autonomous systems capable of proactive planning, reasoning, and executing tasks with minimal human intervention. As these systems shift from experimental pilots to core elements of business operations, they offer new opportunities but also introduce new challenges when it comes to ensuring digital resilience. That’s because the autonomy, speed, and scale at which agentic AI operates can amplify the impact of even minor data inconsistencies, fragmentation, or security gaps.
While global investment in AI is projected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organization’s ability to maintain service continuity, security, and cost control during unexpected events. This lack of confidence, coupled with the profound complexity introduced by agentic AI’s autonomous decision-making and interaction with critical infrastructure, requires a reimagining of digital resilience.
Organizations are turning to the concept of a data fabric—an integrated architecture that connects and governs information across all business layers. By breaking down silos and enabling real-time access to enterprise-wide data, a data fabric can empower both human teams and agentic AI systems to sense risks, prevent problems before they occur, recover quickly when they do, and sustain operations.
Machine data: A cornerstone of agentic AI and digital resilience
Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agentic AI demands deep insight into an organization’s machine data: the logs, metrics, and other telemetry generated by devices, servers, systems, and applications.
To put agentic AI to use in driving digital resilience, it must have seamless, real-time access to this data flow. Without comprehensive integration of machine data, organizations risk limiting AI capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, emphasizes, agentic AI systems rely on machine data to understand context, simulate outcomes, and adapt continuously. This makes machine data oversight a cornerstone of digital resilience.
“We often describe machine data as the heartbeat of the modern enterprise,” says Hathi. “Agentic AI systems are powered by this vital pulse, requiring real-time access to information. It’s essential that these intelligent agents operate directly on the intricate flow of machine data and that AI itself is trained using the very same data stream.”
Few organizations are currently achieving the level of machine data integration required to fully enable agentic systems. This not only narrows the scope of possible use cases for agentic AI, but, worse, it can also result in data anomalies and errors in outputs or actions. Natural language processing (NLP) models designed prior to the development of generative pre-trained transformers (GPTs) were plagued by linguistic ambiguities, biases, and inconsistencies. Similar misfires could occur with agentic AI if organizations rush ahead without providing models with a foundational fluency in machine data.
For many companies, keeping up with the dizzying pace at which AI is progressing has been a major challenge. “In some ways, the speed of this innovation is starting to hurt us, because it creates risks we’re not ready for,” says Hathi. “The trouble is that with agentic AI's evolution, relying on traditional LLMs trained on human text, audio, video, or print data doesn't work when you need your system to be secure, resilient, and always available.”
Designing a data fabric for resilience
To address these shortcomings and build digital resilience, technology leaders should pivot to what Hathi describes as a data fabric design, better suited to the demands of agentic AI. This involves weaving together fragmented assets from across security, IT, business operations, and the network to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management.
“Once you have a single view, you can do all these things that are autonomous and agentic,” says Hathi. “You have far fewer blind spots. Decision-making goes much faster. And the unknown is no longer a source of fear because you have a holistic system that's able to absorb these shocks and disruption without losing continuity,” he adds.
To create this unified system, data teams must first break down departmental silos in how data is shared, says Hathi. Then, they must implement a federated data architecture—a decentralized system where autonomous data sources work together as a single unit without physically merging—to create a unified data source while maintaining governance and security. And finally, teams must upgrade data platforms to ensure this newly unified view is actionable for agentic AI.
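The three steps above can be illustrated with a minimal sketch of a federated query layer: autonomous sources answer queries in place, and a thin federation layer unifies the results without physically merging the data. All class, source, and field names below are hypothetical assumptions for illustration; real deployments use dedicated data-virtualization or data-fabric platforms rather than hand-rolled classes.

```python
class DataSource:
    """An autonomous data source (e.g., security, IT, or business ops)."""

    def __init__(self, name, records):
        self.name = name
        self._records = records  # data stays local to the source

    def query(self, predicate):
        # Each source evaluates the query itself; only matching
        # results leave the source, never the full dataset.
        return [r for r in self._records if predicate(r)]


class FederatedFabric:
    """Routes one query to every source and unifies the answers."""

    def __init__(self, sources):
        self.sources = sources

    def query(self, predicate):
        unified = []
        for src in self.sources:
            for record in src.query(predicate):
                # Tag provenance so governance and auditing remain possible.
                unified.append({**record, "_source": src.name})
        return unified


security = DataSource("security", [{"event": "login_failed", "severity": 3}])
it_ops = DataSource("it_ops", [{"event": "disk_full", "severity": 2}])
fabric = FederatedFabric([security, it_ops])

# One query yields a single unified view across both silos.
high = fabric.query(lambda r: r["severity"] >= 2)
```

The design choice mirrors the text: sources stay decentralized and self-governing, while the fabric provides the "single view" that agentic systems consume.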
During this transition, teams may face technical limitations if they rely on traditional platforms modeled on structured data—that is, mostly quantitative information such as customer records or financial transactions that can be organized in a predefined format (often in tables) that is easy to query. Instead, companies need a platform that can also manage streams of unstructured data such as system logs, security events, and application traces, which lack uniformity and are often qualitative rather than quantitative. Analyzing, organizing, and extracting insights from these kinds of data requires more advanced methods enabled by AI.
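The structured/unstructured distinction can be made concrete with a small sketch: a structured record maps directly onto predefined, queryable fields, while an unstructured log line must first be parsed before it can be analyzed. The log format and regular expression below are hypothetical assumptions, not any platform's standard.

```python
import re

# Structured data: already in a predefined, queryable format.
transaction = {"account": "A-1001", "amount": 250.0, "currency": "USD"}
is_large = transaction["amount"] > 100  # trivial to query

# Unstructured data: a raw system log line has no fixed schema, so
# fields must be extracted before analysis. This format is a made-up
# example for illustration.
log_line = "2025-03-04T12:00:01Z ERROR auth-service: token expired for user=42"

pattern = re.compile(
    r"(?P<ts>\S+)\s+(?P<level>\w+)\s+(?P<service>[\w-]+):\s+(?P<message>.*)"
)
match = pattern.match(log_line)
event = match.groupdict() if match else {"message": log_line}
```

Hand-written patterns like this break as soon as log formats drift, which is why the article argues that extracting insight from such streams at scale requires AI-enabled methods rather than fixed rules.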
Harnessing AI as a collaborator
AI itself can be a powerful tool in creating the data fabric that enables AI systems. AI-powered tools can, for example, quickly identify relationships between disparate data—both structured and unstructured—automatically merging them into one source of truth. They can detect and correct errors and employ NLP to tag and categorize data to make it easier to find and use.
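A toy sketch of such AI-assisted curation follows, using keyword rules as a deliberately simple stand-in for an NLP model; the categories and keywords are assumptions chosen for illustration, not part of any product.

```python
# Stand-in for NLP-based tagging: categorize free-text records so they
# are easier to find and use. Real data-fabric tooling would apply
# trained language models rather than keyword lists.
CATEGORY_KEYWORDS = {
    "security": ["unauthorized", "breach", "malware", "login"],
    "performance": ["latency", "timeout", "slow", "cpu"],
    "storage": ["disk", "quota", "volume"],
}


def tag_record(text):
    """Return every category whose keywords appear in the text."""
    text_lower = text.lower()
    tags = [
        cat
        for cat, words in CATEGORY_KEYWORDS.items()
        if any(word in text_lower for word in words)
    ]
    return tags or ["uncategorized"]


tags = tag_record("Repeated unauthorized login attempts; high latency on auth node")
```

A single record can carry multiple tags, which is what lets a unified source of truth serve security and operations teams from the same data.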
Agentic AI systems can also be used to augment human capabilities in detecting and deciphering anomalies in an enterprise’s unstructured data streams. These are often beyond human capacity to spot or interpret at speed, leading to missed threats or delays. But agentic AI systems, designed to perceive, reason, and act autonomously, can plug the gap, delivering higher levels of digital resilience to an enterprise.
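As a toy illustration of this kind of augmentation, the sketch below flags anomalies in a metric stream with a rolling-statistics rule. The window size, threshold, and latency values are arbitrary assumptions; production agentic systems would use far richer learned models.

```python
from statistics import mean, stdev


def flag_anomalies(stream, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the rolling mean of the previous `window` points.
    A deliberately simple stand-in for an AI-driven detector."""
    anomalies = []
    for i in range(window, len(stream)):
        recent = stream[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(stream[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies


# Steady latency readings (ms) with one sudden spike at index 8 —
# the kind of deviation easy to miss in a high-volume stream.
latencies = [100, 102, 99, 101, 100, 98, 101, 100, 450, 101]
spikes = flag_anomalies(latencies)
```

A human reviewing thousands of such streams would struggle to catch every spike at speed; an agentic system running a detector like this continuously is what closes that gap.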
“Digital resilience is about more than withstanding disruptions,” says Hathi. “It's about evolving and growing over time. AI agents can work with massive amounts of data and continuously learn from humans who provide safety and oversight. This is a true self-optimizing system.”
Humans in the loop
Despite its potential, agentic AI should be positioned as assistive intelligence. Without proper oversight, AI agents could introduce application failures or security risks.
Clearly defined guardrails and maintaining humans in the loop is “key to trustworthy and practical use of AI,” Hathi says. “AI can enhance human decision-making, but ultimately, humans are in the driver's seat.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.