为何人形机器人仍难精于细节?

内容来源:https://www.quantamagazine.org/why-do-humanoid-robots-still-struggle-with-the-small-stuff-20260313/
内容总结:
人形机器人为何仍在“细节”上举步维艰?
十年前,人形机器人步履蹒跚、频频摔倒的场景还历历在目。如今,随着特斯拉Optimus等产品高调登场,行业似乎已迈入新时代。然而,当被问及旗下最先进的机器人能否可靠应对任意楼梯或房门时,波士顿动力前研究员斯科特·昆德斯玛与Agility Robotics的乔纳森·赫斯特均坦言:“尚未完全解决”。
尽管进步显著——深度强化学习让运动更流畅,新型“本体感知”电驱动装置提升了灵活性与抗冲击能力,多模态AI模型使机器人能理解指令并规划多步骤任务——但机器人仍难以像人类一样自如地处理精细物理交互。
核心瓶颈:对“力”的掌控不足
麻省理工学院研究员普尔基特·阿格拉瓦尔指出,实现真正类人的“多用途移动操作”能力,关键在于掌握对“力”的精确控制。人类能凭触觉微妙调节施力,如轻握鸡蛋或拧开瓶盖,而当前机器人主要依赖位置控制,缺乏对力的直接感知与调节。
虽然通过硬件改进(如高透明度电机间接实现力感)和慢速操作可部分规避问题,但这限制了机器人的速度与适应性。专家共识是,仅靠基于位置的控制无法实现真正通用、灵巧的机器人。“力必须成为一等公民,”昆德斯玛强调。
破局之路:软硬件协同进化
业界正从多角度寻求突破:阿格拉瓦尔研究将力控制融入强化学习;谷歌DeepMind的卡罗莱娜·帕拉达认为需为AI模型引入更多力感知数据;麻省理工学院的拉斯·泰德雷克倡导大规模数据收集与预训练;而现代机器人学教科书作者弗兰克·帕克则主张重构AI架构,使物理基础成为可学习的核心要素。
尽管路径不同,但领域共识是科学范式已变:我们正利用AI让机器人先“学会”行走与操作,而非完全理解其原理后再构建。泰德拉克将此比作电磁学发展的早期阶段——“我们仍处在‘伏打电堆’时期”。
结论:曙光已现,道阻且长
人形机器人已取得革命性进展,骨骼强健,智能初显。然而,要真正融入日常生活,自如应对门、楼梯乃至更复杂的物理交互,仍需在力控制与通用智能层面实现关键突破。这条路依然艰难,但基础已然夯实。
中文翻译:
为何人形机器人仍难精于细微之处?
引言
上次我探讨人形机器人技术时,最先进的成果还带着奥威尔式的荒诞感——借用《动物庄园》的隐喻,可谓"四腿优,两腿劣"。那是2015年。波士顿动力的首款四足机器人"Spot"在YouTube引发轰动,它稳健攀爬楼梯,被猛踢后仍能恢复平衡。当时同样流行的还有:人形机器人不断摔倒的集锦。看着那些踉跄的金属龙虾状机器,我甚至比对Spot更感同情。双足行走实在太难了。
转眼至今。人形机器人显然已进步到如此程度——特斯拉甚至暂停部分电动车产线为Optimus人形机器人让路,初创公司也一本正经地预售仿生管家。抛开炒作,我实在好奇:这个领域是否在我未曾留意时发生了范式变革?当然,"人工智能"确实突飞猛进(特指ChatGPT问世后的进展)。这点我自然没有忽视。但我完全不明白这与机器人不再摔倒有何关联。
为探明现状,我联系了刚离开波士顿动力的斯科特·昆德斯玛与敏捷机器人公司的乔纳森·赫斯特。两位科学家都亲历过机器人频繁摔倒的时代。如今这些双足机器人奇迹理应能轻松爬楼梯、开门而不费吹灰之力——这些正是它们十年前出了名束手无策的难题。我分别询问两位研究者:你们的旗舰机器人(地球上最受认可的两款人形机器人——波士顿动力的Atlas或敏捷公司的Digit)能应对任意楼梯或门廊吗?
"尚未达到稳定可靠。"赫斯特坦言。
"我认为问题并未完全解决。"昆德斯玛表示。
别误会:我并非相信某个套着袜套的机械僵尸即将接管家务。但楼梯和门?现在可是2026年。为何人形机器人仍如此……艰难?
快速、廉价且基本可控
平心而论,范式变革确实发生了。准确说是三次。
首先,基于高速GPU芯片的深度学习技术,极大增强了计算机视觉与强化学习能力,使机器人感知环境与交互的速度及精细度发生质的飞跃。接着在2016年,驱动方式(机器人学术语指"让部件运动")迎来革命:笨重的液压机构被更小巧的"本体感知"电机取代,赋予腿足机器人动物般的敏捷性。最近则是大语言模型的兴起。将聊天机器人技术适配于机器人后,它们竟能自主规划并执行多步骤任务,例如给苹果去核或清空洗碗机(至少在演示中如此)。
这些进步造就了天壤之别:2015年DARPA机器人挑战赛中荣获亚军、笨重迟滞的初代Atlas"奔跑者",与近期展示霹雳舞技、自主将不规则物品在货箱间转移(同时应对持冰球棍人类的干扰)的轻盈流畅新版Atlas,简直判若两物。
那种流畅步态便源自深度强化学习。以往机器人专家需用各种手工算法协调每个动作,通过方程模拟(简化版)机器人物理特性。如今他们通过运行无数数字仿真,训练神经网络充当"全身控制器"。这一过程让网络习得将环境反馈转化为行动的"策略"。
"我们运用强化学习构建处理身体协调、避障、平衡等所有环节的策略。"昆德斯玛解释道。例如,不再需要将机器人腿部建模为线性倒立摆。"那种方法已被淘汰。"
此策略得益于麻省理工学院金尚培在猎豹系列机器人中首创的本体感知驱动器。"强化学习已存在多年,人们早先尝试过。"金尚培指出,"但若使用传统电机,每当机器人在现实世界无法完美执行策略——或遭遇障碍干扰——就会损坏。"
金尚培的驱动器通过可控"柔顺性"(即弹性形变能力)解决了此难题。过去十年间,这类器件成本降低且更易获取。"强化学习解决了大量双足运动问题,但硬件才是关键赋能者。"他强调。
如果说强化学习与柔顺驱动是人形机器人学的厚礼,多模态AI则为其系上缎带。2023年,谷歌DeepMind推出"视觉-语言-动作"模型,可接收视频与自然语言,并输出运动指令。
"如果你说'我渴了',它能理解你可能想喝水,并生成机器人需执行的步骤:寻找物品,以特定方式拾取。"谷歌DeepMind机器人部门负责人卡罗莱娜·帕拉达表示,"这在三年前还需手工编码完成。"VLA模型一举将此前割裂的机器人感知、规划与控制方法,整合为通用流程。
强健的实体躯干?已具备。可泛化的智能?初具雏形。那为何人形机器人仍未在科学意义上被"攻克"——至少在原理层面?
愿力与你同在
麻省理工学院"不可思议AI实验室"研究机器人学习的普尔基特·阿格拉瓦尔,上月接受采访时给出了答案:"要让机器人像人类般工作,我认为必须掌握物理学。"
他指的并非广义相对论或量子引力等宇宙奥秘,也非当前令杨立昆等顶尖AI研究者兴奋的虚拟"世界模型"。阿格拉瓦尔谈论的是高中理科生就该熟悉的领域:力与惯性。
毕竟,人形设计的核心目标在于实现金尚培所称的"多用途移动操控"——既能抵达几乎所有地点(包括上下楼梯、穿门越户),又能处理绝大多数物品(从卸货盘到拧灯泡),且在此过程中不伤及他人。简言之,就是人类日常所为。"若要以人类速度完成这些,关键在于控制力。"阿格拉瓦尔指出,"力控在传统机器人学中早有研究,但在现代机器学习领域尚未普及。"
力控原理其实简单。想象机械臂在白板上书写——需避免压碎马克笔尖。四十多年前机器人专家就已掌握实现方法:将机械臂编程为仿佛装有虚拟弹簧与减震器。"可以让指向白板方向的弹簧非常柔软,沿板面方向则更刚硬。"昆德斯玛解释道,"这样机器人在精确书写字母笔画时,能保持恰当的笔压。"他进一步说明,这种反馈可由机器人关节内置的力传感器驱动,但传统方法需要大量关于机器人、环境与任务的知识才能奏效。
这种力控方式对执行特定任务的工业机器人效果显著,甚至助力了人形机器人运动。但其难以泛化。金尚培的本体感知电机(亦称准直驱驱动器)简化了难题。它们不仅设计成能承受意外冲击而无损,还具备高度"透明性"——电机能将电流按比例转化为力(反之亦然),误差极小。本质上,电机自身成了力传感器,用昆德斯玛的话说,这意味着"可通过取消专用力传感器来降低机器人成本与复杂度"。
随着强化学习取代手动编程成为人形运动控制主流,"经典"力控并未被遗忘,而是以某种方式被抽象化并委托给硬件与AI共同承担。
"从AI视角看,你无需刻意考虑力控。"赫斯特说,"更像是需要准直驱电机来逼近必要的力调节,然后将神经网络置入仿真环境迭代百万次——之后就能部署到机器人上获得惊艳表现。"
这些神经网络学习的是控制机器人身体部位位置的泛化策略。力调节通常在仿真训练中间接实现,有时作为从视频或人类输入中学习时的副产品。
但这些方法并未明确教授力的物理原理——至少目前如此。"实现智能力控所需的许多信号并未体现在视频与人类示范数据中。"昆德斯玛指出。DeepMind的帕拉达承认,VLA模型基本只学习在特定定义姿态间移动——但这种方法已走得很远。"在不借助任何其他感知的情况下,仅凭此就能走多远,连我们自己都感到惊讶。"她说。
但终究有限度。只要机器人身体相对人类仍显僵硬沉重,"它们惯性大,柔顺性不足。"阿格拉瓦尔解释道。这意味着若无力控,它们在复杂环境中执行精细任务时将举步维艰。"若要触碰易碎物,微小误差就会导致恶果。"试想普通鸡蛋与实心钢蛋:其中一个需要更谨慎拾取。
绕开这一问题的一种办法——许多令人印象深刻的系统在依赖位置精度之余都在使用——就是放慢速度。"想象用汽车挪动椅子。"阿格拉瓦尔比喻,"缓慢移动时,我能精准控制自身位置,从而掌控椅子移动轨迹——力的问题便消失了。"这解释了为何Atlas抓取汽车零件时如糖浆般迟缓,而仅接触地面时却能如体操选手般滑行。
"若说每个有用的操控任务都绝对需要力控,未免言过其实。"昆德斯玛坦言。但他与赫斯特、帕拉达都坦然承认,巧妙的力控变通方案无法赋予机器人管家所需的通用移动灵巧性。帕拉达指出,即便当今经强化学习优化的VLA智能机器人拥有"互联网规模"的位置数据训练,"很可能仍需额外工作。人类拧瓶盖时能感知阻力。"而人形机器人大多仍不能,这意味着它们尚未掌握物理学——至少未以人类方式掌握:我们凭借进化赋予的极其复杂的肌肉骨骼与神经系统,在终生与环境的互动中习得了这种能力。
这正是当今人形机器人连门廊楼梯都未完全"攻克"的主因。特定楼梯或门?或许可以。但所有楼梯门廊及万物?"绝不存在仅靠基于位置的控制就能实现真正实用的自主人形机器人的世界。"昆德斯玛断言,"力必须作为首要考量。"
变得更智能?还是推倒重来?
那么从科学角度,我们如何突破壁垒?多数受访专家认为,需要硬件与软件进步的新融合。用于提升数据采集的触觉传感器,兼具高功率、柔顺性、透明性与低惯性的机械手,都将大有裨益。但无人认为需要真正的材料突破(如用人造肌肉替代电机)。
"现有硬件已非常出色,若归咎于此只是找借口。"另一位麻省理工学院资深机器人专家拉斯·泰德雷克告诉我,"若将人脑通过远程操控接入现有硬件,表现将极其强大。"关键在于找到更智能的控制方式。
被问及实现路径时,众人见解各异。阿格拉瓦尔研究如何通过让机器人在仿真中学习柔顺行为(而非在刚性定义位置间移动),将力控与强化学习结合。泰德雷克关于"大行为模型"(VLA的近亲)的研究催生了苹果去核机器人演示,他近期在《科学·机器人学》上主张采用ChatGPT式的"大规模数据收集与大型预训练模型"范式。撰写了该领域标准教科书《现代机器人学》的弗兰克·帕克则认为,当前AI方法需推倒重来,替换为能在基础层面学习物理原理(如力与加速度)的新范式。"VLA架构完全错误。"他直言,"我相信那条路注定失败。"
在所有对话中,最触动我的并非关于传感器类型、数据或AI架构能否"攻克"人形机器人的争论,而是该领域科学精神的转变。赫斯特(我们初次交谈时他刚将敏捷机器人公司从俄勒冈州立大学实验室剥离)精准概括了这种变化。
"我记得麻省理工学院腿足实验室前主任、DARPA机器人挑战赛项目经理吉尔·普拉特曾担忧:我们最终会在真正理解原理前,就用强化学习和AI让机器人行走奔跑。"赫斯特回忆,"而在很多方面,我们正在这样做。"
泰德雷克认同此观点,但指出人类在未牢固掌握基础时实现科技飞跃早有先例。"回顾电磁学发展:伏打阶段人们还在用青蛙腿做实验。"他说,"接着法拉第进行了精准实验,最终麦克斯韦给出控制方程。我认为我们正处于伏打阶段。"
那么人形机器人何时能被攻克?
"机器人仍不成熟,需要时间。但基础架构良好。两者皆真。"泰德雷克总结,"而且这依然很难。"
英文来源:
Why Do Humanoid Robots Still Struggle With the Small Stuff?
Introduction
The last time I covered the science of humanoid robots, the state of the art looked downright Orwellian — by which I mean, “four legs good, two legs bad.” It was 2015. Boston Dynamics’ first “Spot” quadruped had taken YouTube by storm, confidently trotting up stairs and recovering from vicious kicks. Also popular at the time: humanoids falling down. Constantly. I felt sorrier for those tottering metal lobsters than I ever did for Spot. Bipedal locomotion is hard.
Cut to now. Humanoids have apparently become so advanced that Tesla is mothballing some electric car models to make way for its Optimus humanoid robot, and start-ups are preselling android butlers with a straight face. Hype aside, I was genuinely curious: Did a paradigm shift happen in the field when I wasn’t looking? Sure, “AI” happened (that is, in the post-ChatGPT sense). I certainly hadn’t overlooked that. But I had no idea what it possibly had to do with robots not falling down anymore.
For a reality check, I called Scott Kuindersma, who recently left Boston Dynamics after many years there, and Jonathan Hurst of Agility Robotics. Both scientists had been present and involved during the robot-faceplant days. Surely today’s robotic bipedal marvels can ascend a few stairs and open a door without breaking a nonexistent sweat, something they famously struggled with a decade ago. I asked each researcher: Can your flagship robot — Boston Dynamics’ Atlas or Agility’s Digit, two of the most credible and pedigreed humanoids on Earth — handle any set of stairs or doorway?
“Not reliably,” Hurst said.
“I don’t think it’s totally solved,” Kuindersma said.
Don’t get me wrong: I don’t believe that some sock-faced robot zombie is close to taking over my household chores. But stairs and doors? It’s 2026. Why are humanoids still this … hard?
Fast, Cheap, and Mostly Under Control
To be fair, a paradigm shift did happen. Three, actually.
First, deep learning — neural networks running on fast GPU chips — turbocharged computer vision and reinforcement learning, which radically improved the speed and sophistication with which robots could perceive and interact with their environments. Then in 2016, a revolution in actuation (roboticist-speak for “making parts move”) began: Heavy hydraulic mechanisms were replaced by smaller, “proprioceptive” electric motors that gave legged robots animal-like nimbleness. Most recently came the large language models. Adapting chatbot technology for robots, it turns out, lets them autonomously plan and perform multistep tasks, such as coring an apple or emptying a dishwasher (in demos, at least).
These advances created the night-and-day difference between “Running Man,” the hulking, halting version of Atlas that won second place in 2015’s DARPA Robotics Challenge, and the svelte, smooth Atlas recently shown breakdancing and autonomously moving irregular items from one bin to another (while dealing with interference from a hockey stick–wielding human).
That fluid gait, for example, comes from deep reinforcement learning. Roboticists once coordinated each movement with various hand-engineered algorithms, using equations to model the (simplified) physics of the robot. Now they train neural networks to act as “whole-body controllers” by running countless digital simulations of the humanoid. This process teaches the network a “policy” for how to translate feedback from its environment into actions.
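The "policy" described above is, at bottom, just a function from sensor readings to motor targets. Here is a deliberately tiny Python sketch of that shape — the dimensions, architecture, and untrained random weights are illustrative stand-ins, not anything from Atlas or Digit:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyPolicy:
    """Toy stand-in for a learned whole-body controller: maps an
    observation vector (joint angles, velocities, IMU readings, ...)
    to normalized joint-position targets. In practice the weights are
    trained by reinforcement learning over countless simulated rollouts."""
    def __init__(self, obs_dim, act_dim, hidden=32):
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, act_dim))

    def act(self, obs):
        h = np.tanh(obs @ self.w1)    # hidden layer
        return np.tanh(h @ self.w2)   # joint targets squashed into [-1, 1]

# One inner step of a rollout (the physics simulator itself is omitted):
policy = TinyPolicy(obs_dim=48, act_dim=23)  # e.g. 23 actuated joints
obs = np.zeros(48)                           # placeholder sensor reading
action = policy.act(obs)
print(action.shape)                          # (23,)
```

The point is only the interface: training reshapes the weights, but deployment is nothing more than calling this function at high frequency.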
“We use reinforcement learning to build a policy that’s handling the body coordination, collision avoidance, balance, all that stuff,” Kuindersma said. There’s no longer any need to model a robot’s leg as a linear inverted pendulum, for example. “That’s just gone by the wayside,” he said.
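For readers curious what "gone by the wayside" actually replaced: the linear inverted pendulum treats the robot as a point mass held at constant height, and its tipping motion has a closed-form solution. A minimal sketch, with illustrative numbers:

```python
import math

# Classic linear inverted pendulum (LIP) model: a center of mass held at
# constant height z obeys x''(t) = (g / z) * x(t), so small lean angles
# grow exponentially unless a footstep intervenes.
G, Z = 9.81, 0.9           # gravity (m/s^2), CoM height (m), illustrative
OMEGA = math.sqrt(G / Z)   # rate constant of the fall

def com_position(x0, v0, t):
    """Analytic LIP solution: CoM offset after t seconds, starting from
    offset x0 and velocity v0."""
    return x0 * math.cosh(OMEGA * t) + (v0 / OMEGA) * math.sinh(OMEGA * t)

# Start 5 cm off-balance and at rest: the lean grows on its own.
print(round(com_position(0.05, 0.0, 0.3), 3))
```

Footstep planners built on this model worked, but only as well as the approximation held; the learned whole-body policies dispense with it entirely.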
This strategy was aided by the proprioceptive actuators pioneered by Sangbae Kim of the Massachusetts Institute of Technology in his Cheetah series of robots. “Reinforcement learning has existed for a long time, you know. People tried it before,” Kim said. “But if you use conventional [motors], the robot just breaks” every time it fails to perfectly execute a policy in the real world — or encounters an obstacle or disturbance.
Kim’s actuators got around the problem with controllable “compliance,” or flexible springiness. Over the past decade, they’ve gotten cheaper and more widely accessible. “Reinforcement learning solved a lot of the [bipedal] locomotion problem, but the hardware was the enabler,” Kim said.
If reinforcement learning and compliant actuation were gifts to humanoid robotics, multimodal AI put a bow on it. In 2023, Google DeepMind introduced “vision-language-action” (VLA) models, which can take in video and natural language and produce movement commands as outputs.
“If you say ‘I’m thirsty,’ it knows you probably want to drink, and it can [generate] the steps that [the robot] needs to take: Go find a thing, and then pick it up in this way,” said Carolina Parada, head of robotics at Google DeepMind. “This is something that, before three years ago, you would have to go hard-code.” In a stroke, VLAs united previously disparate approaches to robotic perception, planning, and control into one general-purpose pipeline.
Robust embodiment, check. Generalizable intelligence, check. (A start, anyway.) So why don’t they add up to humanoids being scientifically “solved” — at least in principle?
May the Force Be With You
Pulkit Agrawal, who studies robot learning at the appropriately named Improbable AI Lab at MIT, had an answer when I reached him there last month. “To have robots which work like humans,” he said, “I think we have to master physics.”
He wasn’t referring to cosmic matters like general relativity or quantum gravity, nor to the virtual “world models” that currently excite leading AI researchers such as Yann LeCun. Instead, Agrawal is talking about mastering something a high school science student ought to be familiar with: force and inertia.
[Image courtesy of 1X; Tesla]
The whole point of the humanoid form factor, after all, is to deliver what Kim calls “multipurpose mobile manipulation,” or the ability to move almost anywhere (including on stairs and through doors) and handle almost anything (from unloading pallets to screwing in light bulbs), without hurting anyone in the process. In short, what we do every day. “These things are about [controlling] forces, if you want to do them at speeds of a human,” Agrawal said. “Force control has been a thing in classical [robotics]. But in modern machine learning land, it’s not been that widespread.”
Force control is simple in principle. Picture a robot arm drawing on a whiteboard — without smashing the tip of the marker. Roboticists have known how to make this happen for more than 40 years: They program the arm to behave as if it has an imaginary spring and shock absorber attached to it. “One can make the spring really soft in the direction pointing into the whiteboard, and stiffer along the surface of the whiteboard,” Kuindersma said. “That way the robot maintains the right pressure with the marker while precisely writing the lines and curves of the letters.” This feedback can be driven by force sensors built into the robot’s joints, but the catch is that the classical approaches require a lot of knowledge about the robot, environment, and task in order to work, he further explained.
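The imaginary spring and shock absorber Kuindersma describes is classical impedance control. A minimal two-axis sketch — the stiffness and damping numbers are made up, chosen only to show the soft-into-the-board, stiff-along-it asymmetry:

```python
import numpy as np

# Virtual spring-damper ("impedance") law: F = K (x_d - x) - B v
# Axis 0 points into the whiteboard, axis 1 runs along its surface.
K = np.diag([50.0, 2000.0])   # stiffness: soft into the board, stiff along it (N/m)
B = np.diag([10.0, 80.0])     # damping (N*s/m)

def impedance_force(x, v, x_d):
    """Commanded end-effector force at position x, velocity v, target x_d."""
    return K @ (x_d - x) - B @ v

x   = np.array([0.01, 0.00])  # pressed 1 cm past the target, into the board
v   = np.array([0.00, 0.10])  # sliding along the surface at 10 cm/s
x_d = np.array([0.00, 0.00])  # desired contact point

F = impedance_force(x, v, x_d)
print(F)  # gentle push-back along the marker axis, firm correction along the board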
That approach to controlling force works great for industrial robots with specific tasks to perform, and it even helped with humanoid locomotion. But it was impossible to generalize. Kim’s proprioceptive electric actuators, also called quasi-direct drive actuators, simplified things. Not only were they designed to absorb unexpected impacts without damage, they were also very “transparent,” which meant that the motor converted electrical current into a proportional amount of force (and vice versa) with relatively little error. In essence, the motor itself became a force sensor, which meant “you can remove cost and complexity from your robot by eliminating dedicated force sensors,” Kuindersma said.
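That current-to-force proportionality fits in a couple of lines. The torque constant and gear ratio below are hypothetical, not the specs of any real Cheetah actuator:

```python
# In a "transparent" quasi-direct-drive actuator, output torque tracks motor
# current closely enough that the motor doubles as its own torque sensor.
K_T   = 0.105  # torque constant, N*m per amp (hypothetical motor)
RATIO = 6.0    # low gear ratio, characteristic of quasi-direct drive

def estimated_output_torque(current_amps):
    """Joint torque inferred from measured current alone (no load cell)."""
    return K_T * current_amps * RATIO

def estimated_load_current(external_torque):
    """The same relation inverted: an external load shows up as current."""
    return external_torque / (K_T * RATIO)

print(round(estimated_output_torque(10.0), 2))   # torque at 10 A
print(round(estimated_load_current(6.3), 1))     # current implied by that load
```

A high gear ratio would break this trick: friction and reflected inertia in the gearbox swamp the current signal, which is why transparency and low gearing go together.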
As reinforcement learning eclipsed manual programming as a way of controlling humanoid movement, “classic” force control was not forgotten. It just got abstracted and delegated, in a way, to both hardware and AI.
“From an AI point of view, it’s not like you have to be thinking about force control,” Hurst said. “It’s more like you kind of know that you need a quasi-direct drive motor to get close [to the force regulation necessary], then put [the neural network] in simulation and iterate a million times — and then you can put it on the robot and get cool behaviors.”
Those neural networks are learning generalized policies that control the positions of a robot’s body parts. Force regulation often happens only indirectly in simulation training, or sometimes as a side effect when learned from video or human input.
But those methods don’t explicitly teach the physics of force — at least, not yet. “A lot of the signals that are required for doing intelligent force control are not present in [video and human demonstration] data,” Kuindersma said. DeepMind’s Parada acknowledged that the VLA models basically just learn to move between specifically defined poses — and this approach goes a long way. “We’ve been surprised ourselves at how far you can push it, without any other sensing,” she said.
But only so far. As long as robot bodies remain relatively stiff and heavy compared to ours, “they have high inertia, and they’re not [as] compliant,” Agrawal said, which means that without force control, they will struggle with precision tasks in complicated environments. “If you’re going to touch delicate objects and you have small errors, bad things are going to happen.” Picture a regular egg and another made of solid steel: One of them needs to be picked up much more carefully.
One way to get around this problem, used by many impressive systems alongside positional accuracy, is just to go slow. Imagine trying to move a chair with your car, Agrawal said: “If I go slowly, I can be precise on how I move [my position], and then I can control where the chair goes, so the [force] problem goes away.” That’s part of why Atlas moves like molasses while grasping auto parts but glides like a gymnast when it’s not touching anything except the floor.
“It would be an overstatement to say that force control is absolutely required in every useful manipulation task — that’s just not true,” Kuindersma said. But he, Hurst, and Parada all readily grant that clever force workarounds won’t deliver the all-purpose mobile dexterity our robot butlers need. Even if today’s VLA-brained bots, refined by reinforcement learning, had “an internet-sized” amount of positional data to train on, “it’s very likely you [would] have to do some additional work,” Parada said. “Humans feel the forces that are working against you when you’re trying to open a bottle.” Humanoids, for the most part, still don’t, which means they have not mastered physics — at least not in the way we have, from a lifetime of interacting with our environments through the extraordinarily complex musculoskeletal and nervous systems gifted to us by evolution.
That’s a big reason why even doors and stairs aren’t fully “solved” for present-day humanoids. These stairs, that door? Probably. But all stairs and doors, plus everything else? “There’s no world in which there are actually useful, autonomous [humanoid] robots that are only doing position-based control,” Kuindersma said. “Force as a first-class citizen is absolutely required.”
Get Smart (or Start Over)?
So how do we get over the wall, scientifically speaking? Most of the experts I asked suspect that it will take a new blend of hardware and software advances. Tactile sensors for better data collection and robot hands that combine high power, compliance, and transparency with low inertia would accomplish a lot, and nobody believes that true material breakthroughs (like replacing motors with artificial muscles) will be necessary.
“The hardware is exceptional, and if you’re blaming [it], you’re making excuses,” said Russ Tedrake, another longtime MIT roboticist I spoke to. “If you put a human brain through the hardware we have today — by teleoperating it, for instance — it’s incredibly capable.” Finding more intelligent ways to control it is key.
When asked how to achieve that, everyone had a different answer. Agrawal is studying how to combine force control with reinforcement learning by having humanoids learn compliant behaviors in simulation, instead of moving between rigidly defined positions. Tedrake, whose work on “large behavior models” (a cousin of VLAs) produced the apple-coring robot demo, recently argued in Science Robotics for a ChatGPT-style regime of “large-scale data collection and large pretrained models.” Frank Park, who wrote the book on modern robotics — literally, the textbook titled Modern Robotics — believes that current AI approaches should be torn down to the studs and replaced with ones that make physics fundamentals (such as force and acceleration) learnable at a foundational level. “The VLA architecture is just all wrong,” he told me. “I believe that approach is doomed to fail.”
In all these conversations, what struck me most wasn’t the debates about which kinds of sensors, data, or AI architecture could “solve” humanoid robotics. Rather, it was the sense that the scientific ethos of the field had changed. Hurst, who had just spun Agility Robotics out of his Oregon State University lab when we first spoke, put a fine point on it.
“I remember Gill Pratt, who was the director of the MIT Leg Lab and then the program manager for the DARPA Robotics Challenge, saying that his big worry was that we’d end up using reinforcement learning and AI to make robots walk and run before we ever actually understood how it works,” he said. “And in a lot of ways, we’re kind of doing that.”
Tedrake agreed but said that it’s hardly the first time we’ve taken scientific and engineering leaps without a firm grip on the fundamentals. “If you look at electricity and magnetism, there was the Volta stage where you’re sticking electrodes in frogs,” he said. “And then we had Faraday, who did exactly the right experiments, and then eventually we had Maxwell tell us the governing equations. I think we’re in the Volta stage.”
So when will humanoids be solved?
“Robots are still bad, and it will take time. But the bones are good. Both are true,” Tedrake said. “And it’s still hard.”