
The State of AI: A vision of the world in 2030




Source: https://www.technologyreview.com/2025/12/08/1128922/the-state-of-ai-a-vision-of-the-world-in-2030/

Summary:

The AI World in 2030: Accelerating Technology and Looming Inequality

The "State of AI" series, a collaboration between the Financial Times and MIT Technology Review, has reached its final installment. Senior technology journalists from both publications discuss what the AI world will look like in 2030, mapping the technology's likely trajectory and the risk of deepening social divides.

Technology trajectory: hype and skepticism coexist
Opinions on generative AI's near-term impact diverge sharply. The AI Futures Project, led by a former OpenAI researcher, predicts that AI's impact within the next decade will exceed that of the Industrial Revolution. Princeton University researchers take a more conservative view, arguing that breakthroughs at the technological frontier do not translate directly into economic and social change, because adoption always "moves at human speed."

Although ChatGPT launched three years ago, its ability to replace lawyers, software developers, and other professionals remains unclear, and new releases no longer deliver the capability leaps they once did. The experts note that as progress in the core technology slows, competition among AI companies will shift toward application-level innovation, while high-end models become cheaper and more accessible. Non-LLM approaches such as reinforcement learning and world models may also make a comeback.

Social impact: the technology gap may widen
FT technology correspondent Tim Bradshaw predicts that AI will transform the world rapidly before 2030, but the uneven distribution of those gains will create a divide between "AI haves and have-nots." He expects the AI bubble to burst before the end of the decade, wiping out large numbers of application developers, and foundation model companies may face a shakeout as well. As capital demands returns, the price of AI services is likely to rise, leaving many companies and individual users priced out.

This divide will show up on several levels: the capability gap between paying and free users will widen; physical AI applications such as robotaxis and home humanoid robots may become luxuries for the wealthy; and the global North-South gap will deepen. A Microsoft report finds that AI is the fastest-adopted technology in history, yet large parts of the world still lack the electricity and internet access needed to use it.

Future landscape: innovation hubs may shift
The writers note that Silicon Valley's current AI boom leaves little incentive to develop efficient models or radically different chips, which may push the next wave of innovation outside the US, to places such as China and India. The spread of open-source models may hold prices down to some degree, but the political contest over technological leadership and the distribution of benefits is likely to continue well into the next decade.

Despite the uncertainty about where the technology is headed, there is one point of consensus: the real challenge of AI lies not only in technical breakthroughs but in ensuring that all of humanity shares in its benefits.


Full article:

The State of AI: A vision of the world in 2030
Senior AI editor Will Douglas Heaven talks with Tim Bradshaw, FT global tech correspondent, about what our world will look like in the next five years.
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. You can read the rest of the series here.
In this final edition, MIT Technology Review’s senior AI editor Will Douglas Heaven talks with Tim Bradshaw, FT global tech correspondent, about where AI will go next, and what our world will look like in the next five years.
(As part of this series, join MIT Technology Review’s editor in chief, Mat Honan, and editor at large, David Rotman, for an exclusive conversation with Financial Times columnist Richard Waters on how AI is reshaping the global economy. Live on Tuesday, December 9 at 1:00 p.m. ET. This is a subscriber-only event and you can sign up here.)
Will Douglas Heaven writes:
Every time I’m asked what’s coming next, I get a Luke Haines song stuck in my head: “Please don’t ask me about the future / I am not a fortune teller.” But here goes. What will things be like in 2030? My answer: same but different.
There are huge gulfs of opinion when it comes to predicting the near-future impacts of generative AI. In one camp we have the AI Futures Project, a small donation-funded research outfit led by former OpenAI researcher Daniel Kokotajlo. The nonprofit made a big splash back in April with AI 2027, a speculative account of what the world will look like two years from now.
The story follows the runaway advances of an AI firm called OpenBrain (any similarities are coincidental, etc.) all the way to a choose-your-own-adventure-style boom or doom ending. Kokotajlo and his coauthors make no bones about their expectation that in the next decade the impact of AI will exceed that of the Industrial Revolution—a 150-year period of economic and social upheaval so great that we still live in the world it wrought.
At the other end of the scale we have team Normal Technology: Arvind Narayanan and Sayash Kapoor, a pair of Princeton University researchers and coauthors of the book AI Snake Oil, who push back not only on most of AI 2027’s predictions but, more important, on its foundational worldview. That’s not how technology works, they argue.
Advances at the cutting edge may come thick and fast, but change across the wider economy, and society as a whole, moves at human speed. Widespread adoption of new technologies can be slow; acceptance slower. AI will be no different.
What should we make of these extremes? ChatGPT came out three years ago last month, but it’s still not clear just how good the latest versions of this tech are at replacing lawyers or software developers or (gulp) journalists. And new updates no longer bring the step changes in capability that they once did.
And yet this radical technology is so new it would be foolish to write it off so soon. Just think: Nobody even knows exactly how this technology works—let alone what it’s really for.
As the rate of advance in the core technology slows down, applications of that tech will become the main differentiator between AI firms. (Witness the new browser wars and the chatbot pick-and-mix already on the market.) At the same time, high-end models are becoming cheaper to run and more accessible. Expect this to be where most of the action is: New ways to use existing models will keep them fresh and distract people waiting in line for what comes next.
Meanwhile, progress continues beyond LLMs. (Don’t forget—there was AI before ChatGPT, and there will be AI after it too.) Technologies such as reinforcement learning—the powerhouse behind AlphaGo, DeepMind’s board-game-playing AI that beat a Go grand master in 2016—is set to make a comeback. There’s also a lot of buzz around world models, a type of generative AI with a stronger grip on how the physical world fits together than LLMs display.
Ultimately, I agree with team Normal Technology that rapid technological advances do not translate to economic or societal ones straight away. There’s just too much messy human stuff in the middle.
But Tim, over to you. I’m curious to hear what your tea leaves are saying.
Tim Bradshaw responds:
Will, I am more confident than you that the world will look quite different in 2030. In five years’ time, I expect the AI revolution to have proceeded apace. But who gets to benefit from those gains will create a world of AI haves and have-nots.
It seems inevitable that the AI bubble will burst sometime before the end of the decade. Whether a venture capital funding shakeout comes in six months or two years (I feel the current frenzy still has some way to run), swathes of AI app developers will disappear overnight. Some will see their work absorbed by the models upon which they depend. Others will learn the hard way that you can’t sell services that cost $1 for 50 cents without a firehose of VC funding.
How many of the foundation model companies survive is harder to call, but it already seems clear that OpenAI’s chain of interdependencies within Silicon Valley make it too big to fail. Still, a funding reckoning will force it to ratchet up pricing for its services.
When OpenAI was created in 2015, it pledged to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” That seems increasingly untenable. Sooner or later, the investors who bought in at a $500 billion price tag will push for returns. Those data centers won’t pay for themselves. By that point, many companies and individuals will have come to depend on ChatGPT or other AI services for their everyday workflows. Those able to pay will reap the productivity benefits, scooping up the excess computing power as others are priced out of the market.
Being able to layer several AI services on top of each other will provide a compounding effect. One example I heard on a recent trip to San Francisco: Ironing out the kinks in vibe coding is simply a matter of taking several passes at the same problem and then running a few more AI agents to look for bugs and security issues. That sounds incredibly GPU-intensive, implying that making AI really deliver on the current productivity promise will require customers to pay far more than most do today.
The same holds true in physical AI. I fully expect robotaxis to be commonplace in every major city by the end of the decade, and I even expect to see humanoid robots in many homes. But while Waymo’s Uber-like prices in San Francisco and the kinds of low-cost robots produced by China’s Unitree give the impression today that these will soon be affordable for all, the compute cost involved in making them useful and ubiquitous seems destined to turn them into luxuries for the well-off, at least in the near term.
The rest of us, meanwhile, will be left with an internet full of slop and unable to afford AI tools that actually work.
Perhaps some breakthrough in computational efficiency will avert this fate. But the current AI boom means Silicon Valley’s AI companies lack the incentives to make leaner models or experiment with radically different kinds of chips. That only raises the likelihood that the next wave of AI innovation will come from outside the US, be that China, India, or somewhere even farther afield.
Silicon Valley’s AI boom will surely end before 2030, but the race for global influence over the technology’s development—and the political arguments about how its benefits are distributed—seem set to continue well into the next decade.
Will replies:
I am with you that the cost of this technology is going to lead to a world of haves and have-nots. Even today, $200+ a month buys power users of ChatGPT or Gemini a very different experience from that of people on the free tier. That capability gap is certain to increase as model makers seek to recoup costs.
We’re going to see massive global disparities too. In the Global North, adoption has been off the charts. A recent report from Microsoft’s AI Economy Institute notes that AI is the fastest-spreading technology in human history: “In less than three years, more than 1.2 billion people have used AI tools, a rate of adoption faster than the internet, the personal computer, or even the smartphone.” And yet AI is useless without ready access to electricity and the internet; swathes of the world still have neither.
I still remain skeptical that we will see anything like the revolution that many insiders promise (and investors pray for) by 2030. When Microsoft talks about adoption here, it’s counting casual users rather than measuring long-term technological diffusion, which takes time. Meanwhile, casual users get bored and move on.
How about this: If I live with a domestic robot in five years’ time, you can send your laundry to my house in a robotaxi any day of the week.
JK! As if I could afford one.
Further reading
What is AI? It sounds like a stupid question, but it’s one that’s never been more urgent. In this deep dive, Will unpacks decades of spin and speculation to get to the heart of our collective technodream.
AGI—the idea that machines will be as smart as humans—has hijacked an entire industry (and possibly the US economy). For MIT Technology Review’s recent New Conspiracy Age package, Will takes a provocative look at how AGI is like a conspiracy.
The FT examined the economics of self-driving cars this summer, asking who will foot the multi-billion-dollar bill to buy enough robotaxis to serve a big city like London or New York.
A plausible counter-argument to Tim’s thesis on AI inequalities is that freely available open-source (or more accurately, “open weight”) models will keep pulling down prices. The US may want frontier models to be built on US chips but it is already losing the global south to Chinese software.
