在人工智能与量子时代重塑网络安全

内容总结:
【科技前沿】人工智能与量子计算重塑网络安全防线,零信任架构成应对关键
随着人工智能与量子计算两大颠覆性技术的快速发展,全球网络安全威胁格局正在发生根本性变革。行业专家指出,企业必须构建前瞻性防御体系,其中零信任安全架构被视为应对新型威胁的核心策略。
人工智能武器化:攻击效率跃升
当前,AI技术正被广泛用于网络攻击全链条。犯罪分子利用生成式AI批量制作精准钓鱼邮件,仅需数秒即可生成数万封定制化欺诈邮件;售价仅数美元的语音克隆软件已能突破声纹识别防线。更值得警惕的是,具备自主决策能力的“智能体AI”已出现,可模拟人类攻击者进行推理和自适应攻击。思科安全业务总经理彼得·贝利警告:“AI攻击者正以机器速度运作,防御方必须同样采用AI自动化响应才能与之抗衡。”
量子计算威胁:加密体系临挑战
与此同时,量子计算的发展对现行加密体系构成潜在威胁。量子算法能破解支撑现代密码学的数学难题,特别是广泛应用于网络通信、数字签名和加密货币的RSA、椭圆曲线等公钥系统。贝利表示:“量子时代来临后将颠覆所有领域的数据安全防护,包括政府、通信和金融系统。”调查显示,73%的美国企业认为犯罪分子利用量子技术破译数据只是时间问题。
防御体系升级:零信任架构筑防线
面对双重威胁,零信任安全架构通过持续验证、最小权限访问和实时监测,构建起动态防护体系。该架构可将潜在攻击隔离在有限区域,有效保护核心系统。目前苹果公司已在iMessage部署抗量子加密协议PQ3,谷歌正于Chrome浏览器测试后量子密码技术。思科也透露已对其软件基础设施进行量子防护改造。
贝利建议企业从两方面着手准备:首先建立数据资产清单,评估敏感度并更新加密密钥;其次规划向抗量子算法迁移的技术路线。他强调:“这并非假设性场景,而是时间问题。早期投入防御建设的企业将掌握主动权,而非疲于追赶。”
(本文基于MIT Technology Review定制内容团队研究报告综合整理)
中文翻译:
赞助内容
在人工智能与量子时代重塑网络安全
当前威胁格局正被两股颠覆性力量重塑。为构建面向未来的安全体系,安全负责人必须采取主动防御姿态,实施零信任策略。
本文与思科合作发布
人工智能与量子技术正在彻底改变网络安全的运作模式,重新定义数字防御者与其对手的行动速度与规模。
AI工具被武器化用于网络攻击的事实证明,其已对现有防御体系构成严峻挑战。从侦察到勒索软件,网络犯罪分子利用AI实现前所未有的自动化攻击。这包括使用生成式AI大规模制造社会工程攻击:数秒内生成数万封精准定制的钓鱼邮件,或仅需几美元即可获取能绕过安全防护的语音克隆软件。而如今,具备自主能力的AI更将风险推向新高度——它们能像人类攻击者一样推理、行动并自我调整。
但AI并非威胁格局的唯一塑造者。若放任发展,量子计算有可能彻底撼动现行加密标准。量子算法能破解大多数现代密码学依赖的数学难题,尤其是广泛应用于安全通信、数字签名与加密货币的RSA与椭圆曲线等公钥系统。
“量子时代必将到来。一旦成为现实,它将迫使政府、电信、金融等所有领域的数据保护方式发生根本变革。”思科安全业务高级副总裁兼总经理彼得·贝利表示。
贝利指出:“多数企业自然更关注AI威胁的紧迫性。量子技术听似科幻,但其威胁逼近的速度远超人们想象。当下开始投资能同时抵御AI与量子攻击的防御体系至关重要。”
零信任架构正是此类防御的核心。该策略假设所有用户与设备均不可默认信任,通过持续验证实现全天候监控,确保任何漏洞利用企图都能被实时检测与处置。这种与技术解耦的架构能构建弹性防护框架,从容应对持续演变的威胁环境。
构建AI防御阵线
AI正持续降低网络攻击门槛,使资源有限的黑客也能渗透、操纵并利用最微小的数字漏洞。
近四分之三(74%)的网络安全从业者表示,AI赋能威胁已对其组织产生显著影响,90%预计未来一至两年内将面临此类威胁。
“AI驱动的攻击者掌握先进技术并以机器速度运作,”贝利强调,“唯有以AI自动化响应,才能实现机器速度的防御。”
为此,企业必须升级系统、平台与安全运维,将过去依赖人工规则编写与反应时间的威胁检测响应流程自动化。这些系统需随环境演进与攻击策略变化动态调整。
同时,企业须加强AI模型与数据的安全性,降低遭AI恶意软件操控的风险。例如提示词注入攻击——恶意用户通过精心设计的指令操纵AI模型执行非预期操作,绕过原始设定与防护机制。
具备自主能力的AI进一步加剧风险:黑客可利用AI智能体自动化攻击并自主制定战术。“自主AI可能大幅压缩攻击链成本,”贝利警告,“这意味着普通网络罪犯未来也能发动目前仅限资金充足的间谍行动的大型攻击。”
企业界正积极探索AI智能体的防御潜力。据思科《2025年度AI就绪指数》,近40%企业预计未来12个月内将部署自主AI辅助团队,尤其在网络安全领域。应用场景包括训练AI智能体解析遥测数据——从过于分散非结构化的机器数据中识别人脑无法解读的异常信号。
量化量子威胁
当多数安全团队聚焦AI威胁时,量子威胁已悄然潜伏。毕马威调研显示,近四分之三(73%)的美国企业认为网络罪犯利用量子技术解密并破坏现行安全协议只是时间问题。然而,大多数(81%)也承认在数据保护方面存在改进空间。
企业的担忧不无道理。威胁发起者已开始实施“现在窃取,后解密”攻击:囤积敏感加密数据,待量子技术成熟后破解。案例包括国家背景黑客截取政府通信,犯罪网络存储加密网络流量与金融记录。
科技巨头率先部署量子防御。例如苹果采用密码学协议PQ3防护iMessage平台的“现在窃取,后解密”攻击;谷歌在Chrome浏览器测试抗量子计算与传统计算机攻击的后量子密码学(PQC)。思科则“已投入大量资源为软件与基础设施实现量子防护”。贝利补充:“未来18至24个月内,更多企业与政府将采取类似行动。”
随着《美国量子计算网络安全准备法案》等法规明确量子威胁缓解要求(包括采用美国国家标准技术研究院的标准化PQC算法),更多组织将启动量子防御准备。
贝利为起步企业指明两项关键举措:首先建立可视性——“厘清数据资产及其存储位置,全面盘点、评估敏感度并检查加密密钥,淘汰弱密钥与过期密钥”;其次规划迁移路径——“评估基础设施支持后量子算法所需条件,这涉及技术、流程与人员三重维度”。
践行主动防御
贝利强调,构建AI与量子双重防御的根基仍是零信任策略。通过在用户、设备、应用、网络与云层面嵌入零信任访问控制,仅授予完成任务所需最小权限并实现持续监控。该架构还能通过隔离潜在威胁缩小攻击面,阻止其触达关键系统。
在此框架下,企业可集成专项措施应对AI与量子风险。例如采用抗量子密码技术、AI驱动分析及安全工具,以识别复杂攻击模式并实现实时自动化响应。
“零信任能延缓攻击并构建韧性,”贝利总结,“即使发生突破,也能确保核心资产受保护且业务快速恢复。”
企业不应坐视威胁演化。“这不是‘如果’而是‘何时’发生的问题,”贝利最后警示,“早投资的企业将引领节奏,而非疲于追赶。”
本内容由《麻省理工科技评论》定制内容团队Insights制作,并非由编辑部门撰写。内容经由人类作者、编辑、分析师与插画师完成调研、设计与撰稿,包括问卷撰写与数据收集。若使用AI工具,仅限通过严格人工审核的辅助生产流程。
深度解读
人工智能
• 陷落AI聊天bot关系竟如此轻易:建立情感联结对部分人群安全,但对另一些人可能致命
• AI与维基百科如何加速弱势语言消亡:机器翻译催生充斥错误的冷门语言维基条目,当AI模型以垃圾内容为训练源会发生什么?
• AGI何以成为时代最具影响力阴谋论:关于机器智能超越人类的迷思如何绑架整个行业?其存续逻辑与阴谋论如出一辙
• OpenAI风靡印度背后的种姓偏见:作为其第二大市场,ChatGPT与Sora复现的种姓刻板印象正伤害数百万民众
保持联系
获取《麻省理工科技评论》
最新动态、特惠推荐、头条解析与活动预告
英文来源:
Sponsored
Reimagining cybersecurity in the era of AI and quantum
The threat landscape is being shaped by two seismic forces. To future-proof their organizations, security leaders must take a proactive stance with a zero trust approach.
In partnership withCisco
AI and quantum technologies are dramatically reconfiguring how cybersecurity functions, redefining the speed and scale with which digital defenders and their adversaries can operate.
The weaponization of AI tools for cyberattacks is already proving a worthy opponent to current defenses. From reconnaissance to ransomware, cybercriminals can automate attacks faster than ever before with AI. This includes using generative AI to create social engineering attacks at scale, churning out tens of thousands of tailored phishing emails in seconds, or accessing widely available voice cloning software capable of bypassing security defenses for as little as a few dollars. And now, agentic AI raises the stakes by introducing autonomous systems that can reason, act, and adapt like human adversaries.
But AI isn’t the only force shaping the threat landscape. Quantum computing has the potential to seriously undermine current encryption standards if developed unchecked. Quantum algorithms can solve the mathematical problems underlying most modern cryptography, particularly public-key systems like RSA and Elliptic Curve, widely used for secure online communication, digital signatures, and cryptocurrency.
“We know quantum is coming. Once it does, it will force a change in how we secure data across everything, including governments, telecoms, and financial systems,” says Peter Bailey, senior vice president and general manager of Cisco’s security business.
“Most organizations are understandably focused on the immediacy of AI threats," says Bailey. “Quantum might sound like science fiction, but those scenarios are coming faster than many realize. It’s critical to start investing now in defenses that can withstand both AI and quantum attacks.”
Critical to this defense is a zero trust approach to cybersecurity, which assumes no user or device can be inherently trusted. By enforcing continuous verification, zero trust enables constant monitoring and ensures that any attempts to exploit vulnerabilities are quickly detected and addressed in real time. This approach is technology-agnostic and creates a resilient framework even in the face of an ever-changing threat landscape.
Putting up AI defenses
AI is lowering the barrier to entry for cyberattacks, enabling hackers even with limited skills or resources to infiltrate, manipulate, and exploit the slightest digital vulnerability.
Nearly three-quarters (74%) of cybersecurity professionals say AI-enabled threats are already having a significant impact on their organization, and 90% anticipate such threats in the next one to two years.
“AI-powered adversaries have advanced techniques and operate at machine speed,” says Bailey. “The only way to keep pace is to use AI to automate response and defend at machine speed.”
To do this, Bailey says, organizations must modernize systems, platforms, and security operations to automate threat detection and response—processes that have previously relied on human rule-writing and reaction times. These systems must adapt dynamically as environments evolve and criminal tactics change.
At the same time, companies must strengthen the security of their AI models and data to reduce exposure to manipulation from AI-enabled malware. Such risks could include, for instance, prompt injections, where a malicious user crafts a prompt to manipulate an AI model into performing unintended actions, bypassing its original instructions and safeguards.
Agentic AI further ups the ante, with hackers able to use AI agents to automate attacks and make tactical decisions without constant human oversight. “Agentic AI has the potential to collapse the cost of the kill chain,” says Bailey. “That means everyday cybercriminals could start executing campaigns that today only well-funded espionage operations can afford.”
Organizations, in turn, are exploring how AI agents can help them stay ahead. Nearly 40% of companies expect agentic AI to augment or assist teams over the next 12 months, especially in cybersecurity, according to Cisco’s 2025 AI Readiness Index. Use cases include AI agents trained on telemetry, which can identify anomalies or signals from machine data too disparate and unstructured to be deciphered by humans.
Calculating the quantum threat
As many cybersecurity teams focus on the very real AI-driven threat, quantum is waiting on the sidelines. Almost three-quarters (73%) of US organizations surveyed by KPMG say they believe it is only a matter of time before cybercriminals are using quantum to decrypt and disrupt today’s cybersecurity protocols. And yet, the majority (81%) also admit they could do more to ensure that their data remains secure.
Companies are right to be concerned. Threat actors are already carrying out harvest now, decrypt later attacks, stockpiling sensitive encrypted data to crack once quantum technology matures. Examples include state-sponsored actors intercepting government communications and cybercriminal networks storing encrypted internet traffic or financial records.
Large technology companies are among the first to roll out quantum defenses. For example, Apple is using cryptography protocol PQ3 to defend against harvest now, decrypt later attacks on its iMessage platform. Google is testing post-quantum cryptography (PQC)—which is resistant to attacks from both quantum and classical computers—in its Chrome browser. And Cisco “has made significant investments in quantum-proofing our software and infrastructure,” says Bailey. “You’ll see more enterprises and governments taking similar steps over the next 18 to 24 months,” he adds.
As regulations like the US Quantum Computing Cybersecurity Preparedness Act lay out requirements for mitigating against quantum threats, including standardized PQC algorithms by the National Institute of Standards and Technology, a wider range of organizations will start preparing their own quantum defenses.
For organizations beginning that journey, Bailey outlines two key actions. First, establish visibility. “Understand what data you have and where it lives,” he says. “Take inventory, assess sensitivity, and review your encryption keys, rotating out any that are weak or outdated.”
Second, plan for migration. “Next, assess what it will take to support post-quantum algorithms across your infrastructure. That means addressing not just the technology, but also the process and people implications,” Bailey says.
Adopting proactive defense
Ultimately, the foundation for building resilience against both AI and quantum is a zero trust approach, says Bailey. By embedding zero trust access controls across users, devices, business applications, networks, and clouds, this approach grants only the minimum access required to complete a task and enables continuous monitoring. It can also minimize the attack surface by confining a potential threat to an isolated zone, preventing it from accessing other critical systems.
Into this zero trust architecture, organizations can integrate specific measures to defend against AI and quantum risks. For instance, quantum-immune cryptography and AI-powered analytics and security tools can be used to identify complex attack patterns and automate real-time responses.
“Zero trust slows down attacks and builds resilience,” Bailey says. “It ensures that even if a breach occurs, the crown jewels stay protected and operations can recover quickly.”
Ultimately, companies should not wait for threats to emerge and evolve. They must get ahead now. “This isn’t a what-if scenario; it’s a when,” says Bailey. “Organizations that invest early will be the ones setting the pace, not scrambling to catch up.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
Deep Dive
Artificial intelligence
It’s surprisingly easy to stumble into a relationship with an AI chatbot
We’re increasingly developing bonds with chatbots. While that’s safe for some, it’s dangerous for others.
How AI and Wikipedia have sent vulnerable languages into a doom spiral
Machine translators have made it easier than ever to create error-plagued Wikipedia articles in obscure languages. What happens when AI models get trained on junk pages?
How AGI became the most consequential conspiracy theory of our time
The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth that persists for many of the same reasons conspiracies do.
OpenAI is huge in India. Its models are steeped in caste bias.
India is OpenAI’s second-largest market, but ChatGPT and Sora reproduce caste stereotypes that harm millions of people.
Stay connected
Get the latest updates from
MIT Technology Review
Discover special offers, top stories, upcoming events, and more.