Creating psychological safety in the AI era

Source: https://www.technologyreview.com/2025/12/16/1125899/creating-psychological-safety-in-the-ai-era/
Summary:
MIT report finds that the key to successful enterprise AI adoption is building a culture of "psychological safety"
As artificial intelligence sweeps through enterprises worldwide, a new survey conducted by MIT Technology Review Insights in partnership with Infosys suggests that the biggest obstacle to deploying AI may not be technical bottlenecks but employees' fear. The report argues that a "psychologically safe" workplace, one that tolerates failed experiments and encourages candid expression, has become central to whether enterprises succeed in their AI transformations.
The survey polled 500 business leaders. Although 83% of executives agree that a culture of psychological safety measurably improves the success of AI initiatives, a gap remains between rhetoric and reality: more than one in five respondents (22%) admitted they have hesitated to lead an AI project for fear of being blamed if it failed. This points to a common disconnect inside organizations, which publicly encourage innovation while a fear of blame lingers in the underlying culture.
Infosys chief technology officer Rafee Tarafdar calls psychological safety "mandatory in this new era of AI": the technology is evolving so fast that companies must experiment, some of those attempts will inevitably fail, and a safety net is needed.
Key findings of the report include:
- Culture-first companies succeed more often: organizations that embrace experimentation and report high psychological safety are more likely to succeed with AI projects, and 84% of leaders have observed a connection between psychological safety and tangible AI outcomes.
- Human obstacles outweigh technical ones: psychological and cultural barriers are proving to be greater obstacles to enterprise AI adoption than technological challenges.
- The foundations are still unsteady: only 39% of leaders rate their organization's psychological safety as "very high," while nearly half (48%) rate it as "moderate." Many enterprises, in other words, are pursuing AI on cultural foundations that are not yet fully stable.
Experts note that genuine psychological safety is not something human resources can deliver on its own; it must be systematically embedded in collaboration processes and management systems. In the early days of AI exploration, before mature best practices exist, giving employees the freedom to challenge assumptions and raise questions without fear of repercussions may be the first prerequisite for unlocking AI's value.
Original article (English source):
Sponsored
Creating psychological safety in the AI era
Trust in AI begins when leaders admit what they do not know, address fears, and help people adapt.
In partnership with Infosys Topaz
Rolling out enterprise-grade AI means climbing two steep cliffs at once. First, understanding and implementing the tech itself. And second, creating the cultural conditions where employees can maximize its value. While the technical hurdles are significant, the human element can be even more consequential; fear and ambiguity can stall the momentum of even the most promising initiatives.
Psychological safety—feeling free to express opinions and take calculated risks without worrying about career repercussions—is essential for successful AI adoption. In psychologically safe workspaces, employees are empowered to challenge assumptions and raise concerns about new tools without fear of reprisal. This is nothing short of a necessity when introducing a nascent and profoundly powerful technology that still lacks established best practices.
“Psychological safety is mandatory in this new era of AI,” says Rafee Tarafdar, executive vice president and chief technology officer at Infosys. “The tech itself is evolving so fast—companies have to experiment, and some things will fail. There needs to be a safety net.”
To gauge how psychological safety influences success with enterprise-level AI, MIT Technology Review Insights conducted a survey of 500 business leaders. The findings reveal high self-reported levels of psychological safety, but also suggest that fear still has a foothold. Anecdotally, industry experts highlight a reason for the disconnect between rhetoric and reality: while organizations may publicly promote a "safe to experiment" message, deeper cultural undercurrents can counteract that intent.
Building psychological safety requires a coordinated, systems-level approach, and human resources (HR) alone cannot deliver such transformation. Instead, enterprises must deeply embed psychological safety into their collaboration processes.
Key findings for this report include:
- Companies with experiment-friendly cultures have greater success with AI projects. The majority of executives surveyed (83%) believe a company culture that prioritizes psychological safety measurably improves the success of AI initiatives. Four in five leaders agree that organizations fostering such safety are more successful at adopting AI, and 84% have observed connections between psychological safety and tangible AI outcomes.
- Psychological barriers are proving to be greater obstacles to enterprise AI adoption than technological challenges. Encouragingly, nearly three-quarters (73%) of respondents indicated they feel safe to provide honest feedback and express opinions freely in their workplace. Still, a significant share (22%) admit they’ve hesitated to lead an AI project because they might be blamed if it misfires.
- Achieving psychological safety is a moving target for many organizations. Fewer than half of leaders (39%) rate their organization's current level of psychological safety as "very high." Another 48% report a "moderate" degree of it. This may mean that some enterprises are pursuing AI adoption on cultural foundations that are not yet fully stable.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.