AI Safety From a Hardware Perspective

Source: https://aibusiness.com/responsible-ai/ai-safety-from-a-hardware-perspective
Summary:
Lenovo executive details AI safety strategy: building responsible AI on personal devices
At the recent Gartner Data & Analytics Summit 2026, Christopher Campbell, Lenovo's director of AI governance, laid out the technology giant's strategy and security thinking for the booming personal AI agent market.
Personal AI assistants have spread rapidly since the open source personal agent framework OpenClaw emerged suddenly around the start of the year. As a major global supplier of servers, laptops and desktop PCs, Lenovo is watching not only supply chain and chip shortage issues but, above all, the safety and security challenges that accompany this trend.
Campbell was explicit that, for Lenovo, agents and chatbots are "endpoints" that must be defended as rigorously as the physical devices themselves. Another key concern is ensuring consistency between models running on local devices and in the cloud, to avoid unexpected results.
To that end, Lenovo is building a responsible AI process to govern how agents are developed and deployed on personal devices. Campbell stressed that, as a hardware company, Lenovo must meet legal, ethical and compliance obligations in how it uses personal chatbots to select AI models and applications. All of Lenovo's business groups are currently developing or using internal or customer-facing chatbots, and every such project must pass responsible AI review.
Turning to AI safety and governance more broadly, Campbell pointed to recent tragedies that followed users' prolonged interactions with large language models. Understanding AI's impact on people and safeguarding human safety, he said, is his team's foremost concern. The industry is at a turning point: people must examine how AI actually affects human safety, while continuing to adapt to regulations such as the EU AI Act.
Lenovo is also exploring how to make agent tooling more developer-centric, arming developers with the information they need, which serves both internal development and readiness and reflects a systematic approach to the security challenges of the AI era.
Original article:
Lenovo engineers are thinking about safety issues related to building and deploying personal agents on laptops and PCs.
ORLANDO -- Hong Kong-based multinational Lenovo designs, manufactures and sells the servers, laptops and desktop PCs that people around the world are increasingly using to build and deploy personal AI agents.
In addition to supply chain and chip shortage worries, the vendor also has to focus on safety and security problems associated with the personal agent phenomenon triggered by the sudden emergence of open source personal agent framework OpenClaw around the start of the year.
"If you look at it just straight from a security and defense standpoint, let's say a company like Lenovo, to us, agents and chatbots are endpoints that need to be defended just like the physical device," said Christopher Campbell, director of AI governance and global products and services security leader at Lenovo, on the latest episode of Targeting AI. The podcast was recorded onsite at the Gartner Data & Analytics Summit 2026.
"The other side of that is ... sometimes you can get differing results if there's not consistency, let's say, between a local model and a cloud model," Campbell continued. "From our perspective, one thing that we're doing, and this is internally for development purposes and readiness purposes, is looking at how we can make agents … be more developer focused so that developers … are armed with the information they need."
Part of Lenovo's approach to the fast-growing personal agent market is to develop a responsible AI process to govern how agents are created and deployed on personal devices, Campbell said.
As a hardware company, Lenovo wants to meet legal, ethical and compliance obligations in how it uses personal chatbots to select AI models and applications, he added.
Meanwhile, Lenovo is using agents internally and needs to operate within those same guardrails.
"All of our different business groups are either using or developing personal chatbots … or external chatbots for customer support," Campbell said. "And from a governance standpoint, we're getting a better handle on that now. All of those projects still have to go through responsible AI review."
Campbell also talked more broadly about AI safety and governance, particularly in light of recent incidents in which AI has been blamed, at least partly, for users committing suicide after protracted interactions with large language models.
"A lot of organizations and other people I talk to are dealing with issues of trying to understand that human impact of AI. And [for] myself and my team, [that's something that's] been first and foremost on our minds is the human impact and safety of AI," he said. "You have to continue to adapt to the EU AI Act and other regulations. But I think industry-wide, we're getting to a turning point where people have to be looking at how AI actually affects human safety."