
The human work behind humanoid robots is being quietly hidden.

Published by qimuai | Reads: 2 | First-hand translation



Source: https://www.technologyreview.com/2026/02/23/1133508/the-human-work-behind-humanoid-robots-is-being-hidden/

Summary:

Nvidia CEO Jensen Huang recently declared that the world is entering the era of "physical AI," in which artificial intelligence evolves beyond language models into machines that can act in the physical world. The vision has been fueled by a steady stream of demo videos of humanoid robots putting away dishes and assembling cars, but the human labor and data-training machinery behind them remains largely unknown to the public.

Robot training is already creating new forms of labor. To generate training data, workers wear VR headsets and exoskeletons and repeat mechanical motions: one worker in Shanghai opened and closed a microwave door hundreds of times a day to train the robot beside him, and the North American robotics company Figure plans to partner with Brookfield, an investment firm that manages 100,000 residential units, to collect household-environment data at scale. Meanwhile, tele-operation has become the hidden scaffolding of humanoid deployment: when Neo, the $20,000 home humanoid from startup 1X, runs into a task it cannot handle, operators in California will take over remotely to unload the dishwasher or iron clothes. Customers consent to the handoff, but the arrangement could evolve into a new way of shifting labor costs around the globe through robots.

The industry's lack of transparency leads the public to overestimate what robots can do. Tesla's marketing of its driver-assistance software as "Autopilot," which a Miami jury found contributed to the death of a 22-year-old woman, is a cautionary precedent: if robotics companies keep concealing the human role in training and tele-operation, the public may mistake hidden human labor for machine intelligence. Just as AI content moderation depends on workers in low-wage countries to sift harmful material, physical AI is likewise built on a great deal of invisible human work. Without open discussion and oversight, this technological shift risks deepening hidden exploitation in the labor market.


English source:

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
In January, Nvidia’s Jensen Huang, the head of the world’s most valuable company, proclaimed that we are entering the era of physical AI, when artificial intelligence will move beyond language and chatbots into physically capable machines. (He also said the same thing the year before, by the way.)
The implication—fueled by new demonstrations of humanoid robots putting away dishes or assembling cars—is that mimicking human limbs with single-purpose robot arms is the old way of automation. The new way is to replicate the way humans think, learn, and adapt while they work. The problem is that the lack of transparency about the human labor involved in training and operating such robots leaves the public both misunderstanding what robots can actually do and failing to see the strange new forms of work forming around them.
Consider how, in the AI era, robots often learn from humans who demonstrate how to do a chore. Creating this data at scale is now leading to Black Mirror–esque scenarios. A worker in Shanghai, for example, recently spent a week wearing a virtual-reality headset and an exoskeleton while opening and closing the door of a microwave hundreds of times a day to train the robot next to him, Rest of World reported. In North America, the robotics company Figure appears to be planning something similar: It announced in September it would partner with the investment firm Brookfield, which manages 100,000 residential units, to capture “massive amounts” of real-world data “across a variety of household environments.” (Figure did not respond to questions about this effort.)
Just as our words became training data for large language models, our movements are now poised to follow the same path. Except this future might leave humans with an even worse deal, and it’s already beginning. The roboticist Aaron Prather told me about recent work with a delivery company that had its workers wear movement-tracking sensors as they moved boxes; the data collected will be used to train robots. The effort to build humanoids will likely require manual laborers to act as data collectors at massive scale. “It’s going to be weird,” Prather says. “No doubts about it.”
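The pattern these two paragraphs describe, in which a person repeats a motion while sensors log it and the log later becomes supervised training data, is what roboticists call learning from demonstration, or behavior cloning. A minimal sketch of the data-collection side, assuming a toy "microwave door" world; all function names and numbers here are illustrative, not details from Figure, 1X, or the article:

```python
# Toy sketch of demonstration-data collection for behavior cloning:
# a demonstrator's inputs are logged as (observation, action) pairs
# that a robot policy can later be trained to imitate.

def demonstrator_action(angle):
    """Simulated human input (in a real rig: VR headset / exoskeleton)."""
    # Push the door open until fully open, mirroring the repetitive
    # motion described in the article.
    return 10.0 if angle < 90.0 else 0.0

def collect_demo(n_steps):
    """Record one demonstration trajectory for later supervised training."""
    angle = 0.0                    # door angle in degrees (0 = closed)
    trajectory = []
    for _ in range(n_steps):
        action = demonstrator_action(angle)
        trajectory.append((angle, action))      # log (observation, action)
        angle = max(0.0, min(90.0, angle + action))  # apply the action
    return trajectory

demo = collect_demo(10)
print(demo[0])    # first logged pair: door closed, open command
print(demo[-1])   # last logged pair: door fully open, demonstrator idle
```

A policy is then fit to map each logged observation to the demonstrator's action, which is why companies need so many repetitions across so many environments.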
Or consider tele-operation. Though the endgame in robotics is a machine that can complete a task on its own, robotics companies employ people to operate their robots remotely. Neo, a $20,000 humanoid robot from the startup 1X, is set to ship to homes this year, but the company’s founder, Bernt Øivind Børnich, told me recently that he’s not committed to any prescribed level of autonomy. If a robot gets stuck, or if the customer wants it to do a tricky task, a tele-operator from the company’s headquarters in Palo Alto, California, will pilot it, looking through its cameras to iron clothes or unload the dishwasher.
This isn’t inherently harmful—1X gets customer consent before switching into tele-operation mode—but privacy as we know it will not exist in a world where tele-operators are doing chores in your house through a robot. And if home humanoids are not genuinely autonomous, the arrangement is better understood as a form of wage arbitrage that re-creates the dynamics of gig work while, for the first time, allowing physical tasks to be performed wherever labor is cheapest.
We’ve been down similar roads before. Carrying out “AI-driven” content moderation on social media platforms or assembling training data for AI companies often requires workers in low-wage countries to view disturbing content. And despite claims that AI will soon enough train on its outputs and learn on its own, even the best models require an awful lot of human feedback to work as desired.
These human workforces do not mean that AI is just vaporware. But when they remain invisible, the public consistently overestimates the machines’ actual capabilities.
That’s great for investors and hype, but it has consequences for everyone. When Tesla marketed its driver-assistance software as “Autopilot,” for example, it inflated public expectations about what the system could safely do—a distortion a Miami jury recently found contributed to a crash that killed a 22-year-old woman (Tesla was ordered to pay $240 million in damages).
The same will be true for humanoid robots. If Huang is right, and physical AI is coming for our workplaces, homes, and public spaces, then the way we describe and scrutinize such technology matters. Yet robotics companies remain as opaque about training and tele-operation as AI firms are about their training data. If that does not change, we risk mistaking concealed human labor for machine intelligence—and seeing far more autonomy than truly exists.
Deep Dive
Artificial intelligence
A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions
Backlash against ICE is fueling a broader movement against AI companies’ ties to President Trump.
Moltbook was peak AI theater
The viral social network for bots reveals more about our own current mania for AI than it does about the future of agents.
Meet the new biologists treating LLMs like aliens
By studying large language models as if they were living things instead of computer programs, scientists are discovering some of their secrets for the first time.
What’s next for AI in 2026
Our AI writers make their big bets for the coming year—here are five hot trends to watch.
