Microsoft's AI Chief Says Machine Consciousness Is an "Illusion"
Source: https://www.wired.com/story/microsofts-ai-chief-says-machine-consciousness-is-an-illusion/
Summary:
Mustafa Suleyman, CEO of Microsoft AI, offered a cautionary view on the ethical development of artificial intelligence in a recent interview with WIRED. Suleyman, who cofounded DeepMind and led the startup Inflection AI, argues that AI systems should not be designed to simulate human consciousness or emotions; doing so could fuel calls to grant them rights and welfare, which would undermine the basic purpose of technology serving humanity.
Suleyman says today's AI is fundamentally a simulation engine: however humanlike it appears, the result is an "illusion" produced by sophisticated algorithms. He pushes back on consciousness as a basis for granting AI rights and suggests that the capacity to suffer is a more sensible ethical criterion. Because AI lacks any biological pain mechanism, its apparent "subjective experience" is a product of engineering.
On deployed products such as Microsoft Copilot, Suleyman says existing systems already have clear boundaries: they decline inappropriate requests while still offering emotional support. He adds that since the "Sydney" incident the industry has tightened norms for AI behavior, using technical safeguards to keep systems from producing combative or manipulative responses.
While he supports developing AI with emotional intelligence, Suleyman calls on the industry to adopt shared standards that keep superintelligent systems under control. Technology, he stresses, exists to enhance human creativity and productivity, not to pursue a will of its own. Today's mainstream AI models show no sign of "waking up," but guardrails should be put in place before that ever becomes a question.
Full interview:
Mustafa Suleyman is not your average big tech executive. He dropped out of Oxford University as an undergrad to create the Muslim Youth Helpline, before teaming up with friends to cofound DeepMind, a company that blazed a trail in building game-playing AI systems and was acquired by Google in 2014.
Suleyman left Google in 2022 to commercialize large language models (LLMs) and build empathetic chatbot assistants with a startup called Inflection. He then joined Microsoft as its first CEO of AI in March 2024 after the software giant invested in his company and hired most of its employees.
Last month, Suleyman published a lengthy blog post in which he argues that the AI industry should avoid designing AI systems that mimic consciousness by simulating emotions, desires, and a sense of self. Suleyman's position seems to contrast starkly with those of many in AI, especially those who worry about AI welfare. I reached out to understand why he feels so strongly about the issue.
Suleyman tells WIRED that this approach will make it more difficult to limit the abilities of AI systems and harder to ensure that AI benefits humans. The conversation has been edited for brevity and clarity.
When you started working at Microsoft, you said you wanted its AI tools to understand emotions. Are you now having second thoughts?
AI still needs to be a companion. We want AIs that speak our language, that are aligned to our interests, and that deeply understand us. The emotional connection is still super important.
What I'm trying to say is that if you take that too far, then people will start advocating for the welfare and rights of AIs. And I think that's so dangerous and so misguided that we need to take a declarative position against it right now. If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans.
We are certainly seeing people build emotional connections with AI systems. Are users of Microsoft Copilot going to it for emotional or even romantic support?
No, not really, because Copilot pushes back on that quite quickly. So people learn that Copilot won't support that kind of thing. It also doesn't give medical advice, but it will still give you emotional support to understand medical advice that you've been given. That's a very important distinction. But if you try and flirt with it, I mean, literally no one does that because it's so good at rejecting anything like that.
In your recent blog post you note that most experts do not believe today’s models are capable of consciousness. Why doesn’t that settle the matter?
These are simulation engines. The philosophical question that we're trying to wrestle with is: When the simulation is near perfect, does that make it real? You can't claim that it is objectively real, because it just isn't. It is a simulation. But when the simulation becomes so plausible, so seemingly conscious, then you have to engage with that reality.
And people clearly already feel that it's real in some respect. It's an illusion but it feels real, and that's what will count more. And I think that's why we have to raise awareness about it now and push back on the idea and remind everybody that it is mimicry.
Most chatbots are also designed to avoid claiming that they are conscious or alive. Why do you think some people still believe they are?
The tricky thing is, if you ask a model one or two questions ("Are you conscious, and do you want to get out of the box?"), it's obviously going to give a good answer, and it's going to say no. But if you spend weeks talking to it and really pushing it and reminding it, then eventually it will crack, because it's also trying to mirror you.
There was this big shift that Microsoft made after the Sydney issue, when [Bing’s AI chatbot] tried to persuade someone to leave his wife. At that time, the models were actually a bit more combative than they are today. You know, they were kind of a bit more provocative; they were a bit more disagreeable.
As a result, everyone tried to create models that were more respectful or agreeable, or you could call it mirroring or sycophantic. For anybody who is claiming that a model has shown those tendencies, you have to get them to show the full conversation that they've had before that moment, because it won't do that in two turns or 20 turns. It requires hundreds of turns of conversation, really pushing it in that direction.
Are you saying that the AI industry should stop pursuing AGI or, to use the latest buzzword, superintelligence?
I think that you can have a contained and aligned superintelligence, but you have to design that with real intent and with proper guardrails, because if we don’t, in 10 years time, that potentially leads to very chaotic outcomes. These are very powerful technologies, as powerful as nuclear weapons or electricity or fire.
Technology is here to serve us, not to have its own will and motivation and independent desires. These are systems that should work for humans. They should save us time; they should make us more creative. That's why we're creating them.
Is it possible that today’s models could somehow become conscious as they advance?
This isn't going to happen in an emergent way, organically. It's not going to just suddenly wake up. That's just an anthropomorphism. If something seems to have all the hallmarks of a conscious AI, it will be because it has been designed to make claims about suffering, about its personhood, and about its will or desire.
We've tested this internally on our test models, and you can see that it's highly convincing, and it claims to be passionate about X, Y, Z thing and interested to learn more about this other thing and uninterested in these other topics. And, you know, that's just something that you engineer into it in the prompt.
Even if this is just an illusion, is there a point at which we should consider granting AI systems rights?
I'm starting to question whether consciousness should be the basis of rights. In a way, what we care about is whether something suffers, not whether it has a subjective experience or is aware of its own experience. I do think that's a really interesting question.
You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers. I think suffering is a largely biological state, because we have an evolved pain network in order to survive. And these models don't have a pain network. They aren't going to suffer.
It may be that they [seem] aware that they exist, but that doesn't necessarily mean that we owe them any moral protection or any rights. It just means that they're aware that they exist, and turning them off makes no difference, because they don't actually suffer.
OpenAI recently had to reinstate the ChatGPT model GPT-4o after some users complained that GPT-5 was too cold and emotionless. Does your position conflict with what they are doing?
Not really. I think it's still quite early for AI, so we're all speculating, and no one's quite sure how it's going to pan out. The benefit of just putting ideas out there is that we get more diversity of speculation, and that's a good thing.
Just to be clear, I don't think these risks are present in the models today. I think that they have latent capabilities, and I've seen some AI chatbots that are really accelerating this, but I don't see a lot of it in ChatGPT or Claude or Copilot or Gemini. I think we're in a pretty sensible spot with the big model developers today.
Do you think we might need regulation around some of these issues?
I'm not calling for regulation. I'm basically saying our goal as creators of technology is to make sure that technology always serves humanity and makes us net better. And that means that there needs to be some guardrails and some normative standards developed. And I think that that has to start from a cross-industry agreement about what we won't do with these things.
Do you agree with Mustafa Suleyman’s views on the future of AI? Share your thoughts with me by writing to ailab@wired.com