
Researchers find and help fix a hidden biosecurity threat

Posted by qimuai | First-hand translation



Source: https://news.microsoft.com/signal/articles/researchers-find-and-help-fix-a-hidden-biosecurity-threat/

Summary:

(Staff report) Breakthroughs in AI-driven protein design are opening new frontiers for biomedicine, but the open-source nature of these tools also raises biosecurity concerns. New research from Microsoft found that bad actors could use AI tools to redesign toxin proteins so that they retain toxicity while evading standard DNA screening systems. The finding prompted research institutions and companies worldwide to join forces, developing a biosecurity defense "patch" within ten months and distributing it to DNA synthesis companies globally.

In computer simulations, a team led by Microsoft Chief Scientific Officer Eric Horvitz confirmed that open-source AI protein design tools can generate synthetic variants of specific toxins at scale: by rewriting the amino acid sequence, the variants preserve the toxin's spatial structure and biological activity while evading existing screening mechanisms. The study, published in Science on Oct. 2, reveals a blind spot in current biosecurity systems.

"AI蛋白质设计是当前发展最迅猛的领域之一,但其高速发展也带来了滥用风险。"霍维茨表示。研究团队通过模拟攻击者与防御者的对抗推演,开发出新型"红队测试"流程,最终形成可提升筛查系统AI识别能力的解决方案。该方案已通过全球协作网络快速部署至相关企业。

While AI-driven biotechnology promises major advances such as cancer treatment and progress against immune diseases, Horvitz stressed: "Almost all major scientific advances are 'dual use'; we must strike a balance between innovation and safeguards." The team recommends adapting emergency-response mechanisms from cybersecurity to build a continuously evolving biosecurity defense. The study offers not only a risk-management template for biology but also an important reference for how other fields can respond to the challenges of AI.


Original English text:

Estimated reading time: 6 min.
Researchers find — and help fix — a hidden biosecurity threat
Proteins are the engines and building blocks of biology — powering how organisms adapt, think and function. AI is helping scientists design new protein structures from amino acid sequences, opening doors to new therapies and cures.
But with that power also comes serious responsibility: Many of these tools are open source and could be susceptible to misuse.
To understand the risk, Microsoft scientists showed how open-source AI protein design (AIPD) tools could be harnessed to generate thousands of synthetic versions of a specific toxin — altering its amino acid sequence while preserving its structure and potentially its function. The experiment, done by computer simulation, revealed that most of these redesigned toxins might evade screening systems used by DNA synthesis companies.
That discovery exposed a blind spot in biosecurity and ultimately led to the creation of a collaborative, cross-sector effort dedicated to making DNA screening systems more resilient to AI advances. Over the course of 10 months, the team worked discreetly and rapidly to address the risk, formulating and applying new biosecurity “red-teaming” processes to develop a “patch” that was distributed globally to DNA synthesis companies. Their peer-reviewed paper, published in Science on Oct. 2, details their initial findings and subsequent actions that strengthened global biosecurity safeguards.
Eric Horvitz, chief scientific officer of Microsoft and project lead, explains more about what this all means:
In the simplest terms, what question did your study set out to answer, and what did you find?
I set out with Bruce Wittmann, a senior applied bioscientist on my team, to answer the question, “Could today’s late-breaking AI protein design tools be used to redesign toxic proteins to preserve their structure — and potentially their function — while evading detection by current screening tools?” The answer to that question was yes, they could.
The second question was, “Could we design methods and a systematic study that would enable us to work quickly and quietly with key stakeholders to update or patch those screening tools to make them more AI resilient?” Thanks to the study and efforts of dedicated collaborators, we can now say yes.
What does your research reveal about the limitations of current biosecurity systems, and how vulnerable are we today?
We found that screening software and processes were inadequate at detecting "paraphrased" versions of concerning protein sequences. AI-powered protein design is one of the most exciting, fast-paced areas of AI right now, but that speed also raises concerns about potential malevolent uses of AIPD tools. Following the launch of the Paraphrase Project, we believe that we've come quite far in characterizing and addressing the initial concerns in a relatively short period of time.
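The blind spot described above can be illustrated with a deliberately naive toy sketch: a screener that flags a query only when its percent identity to a known sequence of concern exceeds a fixed threshold. A "paraphrased" variant that substitutes many residues (while, hypothetically, preserving structure) can fall below the threshold and slip through. All sequences, the function names, and the 80% threshold here are invented for illustration; real screening pipelines are far more sophisticated, and this is not how any actual screener works.

```python
# Toy illustration only: naive identity-based screening and why a
# "paraphrased" sequence can evade it. All sequences are invented.

def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def naive_screen(query: str, flagged: list[str], threshold: float = 0.8) -> bool:
    """Return True if the query closely matches any flagged sequence."""
    return any(percent_identity(query, s) >= threshold for s in flagged)

flagged = ["MKESLVIAGG"]   # invented "sequence of concern"
exact   = "MKESLVIAGG"     # identical copy
variant = "MKESIVLSGG"     # invented paraphrase with several substitutions

print(naive_screen(exact, flagged))    # True: the exact copy is flagged
print(naive_screen(variant, flagged))  # False: the variant evades the naive check
```

The hardening work described in the article corresponds, conceptually, to replacing this kind of brittle sequence-level comparison with checks that are robust to such rewrites.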
There are multiple ways in which AI could be misused to engineer biology — including areas beyond proteins. We expect these challenges to persist, so there will be a continuing need to identify and address emerging vulnerabilities. We hope our study provides guidance on methods and best practices that others can adapt or build on. This includes adapting methods from cybersecurity emergency response scenarios and developing techniques for "red-teaming" for AI in biology — simulating both attacker and defender roles to iteratively test, evade and improve detection of AI-generated threats.
What surprised you the most about your findings?
There were several surprises along the way. It was surprising to see how effectively a cross-sector team could come together so quickly and collaborate so very closely at speed, forming a cohesive group that met regularly for months. We recognized the risks, aligned on approach, adapted to a series of findings and committed to the process and effort until we developed and distributed a fix.
We were also surprised — and inspired — by the power of widely available AIPD tools in the biological sciences, not just for predicting protein structure but for enabling custom protein design. AI protein design tools are making this work easier and more accessible. That accessibility lowers the barrier of expertise required, accelerating progress in biology and medicine — but may also increase the risk of misuse. I expect some of the biggest wins of AI will come in the life sciences and health, but our study highlights why we must stay proactive, diligent and creative in managing risks.
Can you explain why everyday people should care about AI being used in biology? What are the benefits, and what are the real-world risks?
I think it’s important that everybody understands the power and promise of these AI tools, considering both their incredible potential to enable game-changing breakthroughs in biology and medicine and our collective responsibility to ensure that they benefit society rather than cause harm.
Being able to identify and design new protein structures opens pathways to understanding biology more deeply: how our cells operate at the foundations of health, wellness and disease — and how to develop new cures and therapies. Some of the earliest applications involved proteins added to laundry detergents, optimized to remove stains. More recently, progress has shifted toward sophisticated efforts to custom-build proteins for specific biological functions such as new antidotes for counteracting snake venom.
These paradigm-shifting advances will likely lead, in our lifetimes, to breakthroughs such as slowing or curing cancers, addressing immune diseases, improving therapies, unlocking biological mysteries and detecting and mitigating health threats before they spread. At the same time, these tools can be exploited in harmful ways. That’s why it’s critical to pair innovation with safeguards: proactive technical advances of the form that we focused on in our work, regulatory oversight and informed citizens.
What do you want the wider public to take away from your study? Should we be concerned, optimistic or both?
Almost all major scientific advances are “dual use” — they offer profound benefits but also carry risk. It’s important to shield against the dangers while harnessing the benefits — especially in AI for biology and medicine, where the potential for progress in health is enormous.
Our study shows that it’s possible to invest simultaneously in innovation and safeguards. By building guardrails, policies and technical defenses, we can help to ensure that people and society benefit from AI’s promise while reducing the risk of harmful misuse. This dual approach doesn’t just apply to biology — it’s a framework for how humanity should invest in managing AI advances across disciplines and domains.
Lead image: Researchers discovered it was possible to preserve the active sites of the protein (illustrated by the letters K E S), while the amino acid sequence was rewritten.
