自组装实现自动化,逆转“生命游戏”规则
内容来源:https://www.quantamagazine.org/self-assembly-gets-automated-in-reverse-of-game-of-life-20250910/
内容总结:
谷歌科学家开发新型“神经细胞自动机”,为人工智能与生物自组织机制架起桥梁
苏黎世谷歌研究院科学家亚历山大·莫德文采夫近日展示了其团队开发的革命性计算模型——神经细胞自动机(NCA)。这项研究突破性地将上世纪70年代经典的"生命游戏"细胞自动机与深度学习技术相结合,实现了从目标图案自动反推生成规则的全新范式。
在演示过程中,由像素构成的虚拟蝴蝶不仅能够自主生长,更展现出惊人的再生能力:当一侧翅膀被破坏后,系统能自动修复损伤,其过程类似蝾螈断肢再生。这种自组织特性并非预先编程,而是通过神经网络自主学习产生。
与传统细胞自动机不同,NCA采用"逆向工程"思维:先设定目标图案,再由神经网络通过反复迭代自主推导出实现该图案的微观规则。这种方法使科学家能够进行"复杂性工程",即通过设计基础构建模块,让系统自组织成目标形态,类似于通过设计特定砖块让其在震动中自主组装成大教堂。
该技术展现出多重应用前景:在医疗领域,或可启发人体器官再生研究;在计算领域,为完全分布式计算系统提供新范式;在机器人领域,能指导群体机器人实现有机协同。研究表明,由于缺乏长程连接,NCA系统无法记忆具体像素排列,反而迫使它学习抽象规则,这种特性使其在需要推理能力的任务中表现优于传统神经网络。
目前全球多个研究团队正推进NCA应用研究,包括矩阵运算、手写识别、机器人集群控制等领域。这项研究标志着自组织、生命体与计算科学在分离数十年后正在重新走向融合,为理解生物形态发生和开发新型计算架构开辟了新路径。
中文翻译:
《生命游戏》逆转:自组装实现自动化
亚历山大·莫德文采夫向我展示了他屏幕上的两簇像素。它们脉动着、生长着,最终绽放成帝王斑蝶。当两只蝴蝶逐渐长大并相互碰撞时,其中一只翅膀枯萎受损。但就在濒临消亡之际,这只残缺的蝴蝶完成了一个后空翻般的动作,像蝾螈再生断肢般长出了新翅膀。
这位苏黎世谷歌研究院的科学家并未刻意编程让虚拟蝴蝶再生——这一切是自发发生的。他说,这正是灵光闪现的第一个信号。他的项目建立在细胞自动机数十年的研究传统之上:这种棋盘格般的微型计算世界由最简规则支配。最著名的"生命游戏"自1970年推广以来,吸引了无数计算机科学家、生物学家和物理学家,他们将其视为物理学基础法则如何孕育自然世界多样性的隐喻。
2020年,莫德文采夫通过创建神经细胞自动机(NCA)将其带入深度学习时代。他的方法不再从规则出发观察结果,而是从目标图案出发反推生成规则。"我想逆转这个过程:先明确目标再寻找实现途径",他说。这种逆向思维实现了物理学家斯蒂芬·沃尔夫勒姆1986年提出的"复杂性工程"——编程设计系统基础模块,使其自组装成任意目标形态。"设想建造大教堂时不需要设计整体建筑,只需设计砖块。什么样的砖块能在充分震荡后自主构建成大教堂?"
这种看似神奇的砖块在生物学中比比皆是。椋鸟群飞或蚁群活动都呈现整体协调性,科学家已用简单规则解释这种集体行为。同样,人体细胞通过相互协作构建完整生命体。NCA正是对此过程的模拟,只不过它从集体行为出发自动推导规则。
这项技术潜力无限。若生物学家能破译虚拟蝴蝶的翅膀再生机制,医生或能引导人体再生肢体。对善于从生物获取灵感的工程师而言,NCA为创建完全分布式计算机提供了新模型。在某些方面,NCA解决问题的先天能力可能优于神经网络。
莫德文采夫1985年出生于乌拉尔山脉东麓的米阿斯市。他在苏联时期的IBM兼容机上自学编程,模拟行星动力学、气体扩散和蚁群行为。"在计算机中创建微型宇宙并操控模拟现实的想法始终令我着迷",他说。
2014年他加入谷歌苏黎世实验室时,正值基于多层神经网络的新图像识别技术席卷行业。尽管功能强大,这些系统(至今仍)令人不安地难以捉摸。“我意识到必须弄清其运作机理”,他说。他提出的“深度梦境”技术能放大神经网络在图像中识别出的模式;由普通照片幻化出狗鼻、鱼鳞与鹦鹉羽毛的迷幻图像一度充斥互联网,莫德文采夫也因此声名鹊起。
塔夫茨大学发育生物学家迈克尔·莱文联系他时提出:既然神经网络难以解读,生物机体同样复杂,深度梦境技术能否助其破译?这封邮件重新点燃了莫德文采夫对自然模拟的热情,尤其是细胞自动机领域。
莫德文采夫与莱文及两位谷歌研究员的核心创新在于用神经网络定义细胞自动机的物理规则。在"生命游戏"中,网格细胞非生即死,随时间推移生死更迭。而新系统中,神经网络根据细胞及周边状态预测其演变,同类网络原本用于图像分类,在此则用于细胞状态判定,且无需人工设定规则——神经网络能在训练过程中自主学习。
训练从单个“活细胞”开始:用网络反复更新细胞状态数十至数千次,将结果与目标图案比对,再调整参数并循环优化。若存在能生成该图案的规则,此过程终能将其揭示。
参数调整可采用反向传播(现代深度学习的主流技术)或遗传算法(模拟达尔文进化的传统技术)。反向传播速度快得多,但要求状态转换平滑,为此莫德文采夫借鉴了谷歌东京实验室伯特·陈在2010年代中期提出的方案,改造了传统细胞自动机:将二元细胞状态改为0到1之间的连续值,细胞不再非生即死,而是始终处于两者之间。
莫德文采夫发现必须为细胞添加"隐藏"变量(不指示生死状态但引导发育),并采用随机更新时序避免图案失真。最终他构建了含8000个参数的庞大网络——尽管模拟显示"生命游戏"仅需25个参数,但深度学习常需超大规模网络,因为学习任务比执行任务更复杂。额外参数意味着更强能力,使莫德文采夫的创造物展现出超越"生命游戏"的丰富行为。
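上述设计(连续状态、隐藏通道、随机更新时序)可以用下面这段 Python 极简示意。其中通道数、代替神经网络的线性映射 W 以及触发概率均为说明性假设,并非原论文的具体架构:

```python
import numpy as np

CHANNELS = 8  # 通道 0 表示可见的“存活度”,其余为隐藏变量
rng = np.random.default_rng(0)

# 用一个线性映射代替训练好的小型神经网络:
# 输入为细胞 3x3 邻域的全部通道,输出为该细胞的状态增量。
W = rng.normal(0, 0.01, size=(9 * CHANNELS, CHANNELS))

def nca_step(state: np.ndarray, fire_rate: float = 0.5) -> np.ndarray:
    """state: (H, W, CHANNELS) 的连续细胞状态网格,取值在 0 到 1 之间。"""
    h, w, c = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)))
    # 收集每个细胞的 3x3 邻域,拼成一个特征向量
    feats = np.stack(
        [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)],
        axis=-2,
    ).reshape(h, w, 9 * c)
    delta = feats @ W  # 每个细胞的“网络”输出
    # 随机更新时序:每个细胞仅以一定概率触发,避免整齐划一的演化
    mask = rng.random((h, w, 1)) < fire_rate
    # 连续状态使转换平滑,便于反向传播
    return np.clip(state + delta * mask, 0.0, 1.0)

# 从单个“活细胞”出发迭代若干步
grid = np.zeros((16, 16, CHANNELS))
grid[8, 8, 0] = 1.0
for _ in range(10):
    grid = nca_step(grid)
```

真实系统中 W 由训练得到;这里仅演示状态如何在局部邻域信息驱动下逐步演化。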
2020年介绍NCA的论文附带的程序能生成绿色蜥蜴图像。用鼠标划伤蜥蜴身体后,缺损像素会自主修复。这种创生与再生能力令生物学家着迷。巴塞罗那进化生物学研究所的里卡德·索列表示:"NCA具有惊人的再生潜力。"
蝴蝶与蜥蜴图像并非真实生物模拟,而是动物形态的彩色细胞图案。但莱文等人认为其捕捉了形态发生(生物细胞自组织成组织与躯体的过程)的关键特征。细胞自动机中的每个细胞仅响应周边细胞,而非遵循总体规划部署——这与活细胞特性一致。既然细胞能自组织,自然也能自重组。
莫德文采夫发现再生有时会自主发生:生成蜥蜴的规则同样能修复严重损伤。有时则需专门训练再生能力:刻意破坏图案后调整规则直至系统恢复。冗余性是实现鲁棒性的途径之一——为增强眼部稳定性,系统可能增生备用眼睛,“最终长出三只眼睛”。
哥本哈根IT大学计算机科学家塞巴斯蒂安·里西探究NCA再生能力的本质,发现随机更新等不可预测性设计迫使系统发展出应对机制,使其能从容应对机体损伤。自然物种同样遵循此理:"生物系统因适应嘈杂环境而具备强鲁棒性"。
2024年,里西、莱文与物理学家本·哈特尔通过添加记忆功能研究噪声如何增强鲁棒性。实验表明:在噪声训练中,神经网络能发展抗干扰能力;切换目标图案时,因掌握可转移技能(如画线),网络学习速度远超需推倒重来的记忆方法。简言之,抗噪声系统更具灵活性。研究者认为其装置是自然进化的缩影——基因组不直接规定机体形态,而是指定形态生成机制,使物种能通过能力重配快速适应新环境。
研究计算与自然进化的人工智能专家肯·斯坦利提醒:尽管强大,NCA仍是生物的不完美模型。与机器学习不同,自然进化没有特定目标。"进化并非先获得理想鱼类模板再编码实现",因此NCA的启示未必适用于自然界。
NCA在机体再生中展现的问题解决能力,使其可能成为新型计算模型。自动机形成的视觉图案本质上是按算法处理的数值,在适当条件下,细胞自动机与其他计算机同样具有普适性。
约翰·冯·诺依曼1940年代提出的标准计算机模型是中央处理器与内存的结合,按序执行指令。神经网络则将计算与存储分布于数千至数十亿个并行互联单元。细胞自动机的分布式程度更为彻底——每个细胞仅连接邻近单元,缺乏冯·诺依曼架构与神经网络中的长程连接。(莫德文采夫的神经细胞自动机在每个细胞嵌入小型神经网络,但细胞间仍仅限邻近通信。)
谷歌技术与社会部门首席技术官布莱斯·阿杰拉·阿卡斯指出:长程连接是主要耗能源,若细胞自动机能替代其他系统将显著节能。圣塔菲研究所的梅拉妮·米切尔表示:"为这种系统编程需要建立相关抽象概念(正如编程语言对冯·诺依曼计算的作用),但我们尚未掌握大规模分布式并行计算的实现方法。"
神经网络本身不被编程,而是通过训练获得功能。1990年代米切尔等人已展示细胞自动机可通过遗传算法训练执行特定计算操作(如多数判决:若多数细胞死亡则余者皆亡,若多数存活则亡者复苏)。细胞在无法纵览全局的情况下(仅知邻近细胞生死数量),自发发展出新计算范式:生死细胞区域扩张收缩,最终优势状态覆盖整个自动机。"它们创造了堪称算法的有趣方案",米切尔说。
虽未深入发展该理念,但莫德文采夫的系统重燃了细胞自动机编程的热情。2020年他与同事创建了识别手写数字的NCA(经典机器学习测试):在自动机内书写数字,细胞逐渐变色直至统一,完成识别。2025年,伦敦帝国学院的加布里埃尔·贝纳等人基于软件工程师彼得·惠登的未发表成果,创建了矩阵乘法等数学运算算法。“肉眼可见其学会了真实矩阵乘法”,贝纳说。
挪威奥斯特福德大学学院教授斯蒂法诺·尼凯莱最近采用NCA解决"抽象与推理语料库"(衡量通用智能的机器学习基准)中的问题。这些类似智商测试的问题要求从成对线图发现转换规则并应用至新例。神经网络因惯于记忆像素排列而非提取规则表现糟糕,而细胞自动机因无法整体记忆图像(缺乏长程连接),必须通过生长过程匹配线图,从而自动识别规则处理新案例。"这迫使系统不记忆答案而学习解决方案的生成过程",尼凯莱说。
其他研究者开始用NCA编程机器人群。佛蒙特大学机器人专家乔什·邦加德认为NCA能设计出紧密协作的机器人,使集群升华为有机整体。"想象昆虫或细胞组成的蠕动球体——它们不断爬行重组,这才是真正的多细胞特性。虽然早期阶段,但这可能是机器人的发展方向。"
为此,哈特尔、莱文与物理学家安德烈亚斯·泽特尔训练了虚拟机器人——模拟池塘中的一串珠子——像蝌蚪般游动。“这是让它们游泳的超强鲁棒架构”,哈特尔说。
对莫德文采夫而言,生物、计算机与机器人的跨界延续了1940年代计算技术早期的传统——当时冯·诺依曼等先驱自由借鉴生物灵感。"对那些人而言,自组织、生命与计算的关系显而易见。这些领域曾一度分化,如今正在重新融合。"
英文来源:
Self-Assembly Gets Automated in Reverse of ‘Game of Life’
Introduction
Alexander Mordvintsev showed me two clumps of pixels on his screen. They pulsed, grew and blossomed into monarch butterflies. As the two butterflies grew, they smashed into each other, and one got the worst of it; its wing withered away. But just as it seemed like a goner, the mutilated butterfly did a kind of backflip and grew a new wing like a salamander regrowing a lost leg.
Mordvintsev, a research scientist at Google Research in Zurich, had not deliberately bred his virtual butterflies to regenerate lost body parts; it happened spontaneously. That was his first inkling, he said, that he was onto something. His project built on a decades-old tradition of creating cellular automata: miniature, chessboard-like computational worlds governed by bare-bones rules. The most famous, the Game of Life, first popularized in 1970, has captivated generations of computer scientists, biologists and physicists, who see it as a metaphor for how a few basic laws of physics can give rise to the vast diversity of the natural world.
In 2020, Mordvintsev brought this into the era of deep learning by creating neural cellular automata, or NCAs. Instead of starting with rules and applying them to see what happened, his approach started with a desired pattern and figured out what simple rules would produce it. “I wanted to reverse this process: to say that here is my objective,” he said. With this inversion, he has made it possible to do “complexity engineering,” as the physicist and cellular-automata researcher Stephen Wolfram proposed in 1986 — namely, to program the building blocks of a system so that they will self-assemble into whatever form you want. “Imagine you want to build a cathedral, but you don’t design a cathedral,” Mordvintsev said. “You design a brick. What shape should your brick be that, if you take a lot of them and shake them long enough, they build a cathedral for you?”
Such a brick sounds almost magical, but biology is replete with examples of basically that. A starling murmuration or ant colony acts as a coherent whole, and scientists have postulated simple rules that, if each bird or ant follows them, explain the collective behavior. Similarly, the cells of your body play off one another to shape themselves into a single organism. NCAs are a model for that process, except that they start with the collective behavior and automatically arrive at the rules.
The possibilities this presents are potentially boundless. If biologists can figure out how Mordvintsev’s butterfly can so ingeniously regenerate a wing, maybe doctors can coax our bodies to regrow a lost limb. For engineers, who often find inspiration in biology, these NCAs are a potential new model for creating fully distributed computers that perform a task without central coordination. In some ways, NCAs may be innately better at problem-solving than neural networks.
Life’s Dreams
Mordvintsev was born in 1985 and grew up in the Russian city of Miass, on the eastern flanks of the Ural Mountains. He taught himself to code on a Soviet-era IBM PC clone by writing simulations of planetary dynamics, gas diffusion and ant colonies. “The idea that you can create a tiny universe inside your computer and then let it run, and have this simulated reality where you have full control, always fascinated me,” he said.
He landed a job at Google’s lab in Zurich in 2014, just as a new image-recognition technology based on multilayer, or “deep,” neural networks was sweeping the tech industry. For all their power, these systems were (and arguably still are) troublingly inscrutable. “I realized that, OK, I need to figure out how it works,” he said.
He came up with “deep dreaming,” a process that takes whatever patterns a neural network discerns in an image, then exaggerates them for effect. For a while, the phantasmagoria that resulted — ordinary photos turned into a psychedelic trip of dog snouts, fish scales and parrot feathers — filled the internet. Mordvintsev became an instant software celebrity.
Among the many scientists who reached out to him was Michael Levin of Tufts University, a leading developmental biologist. If neural networks are inscrutable, so are biological organisms, and Levin was curious whether something like deep dreaming might help to make sense of them, too. Levin’s email reawakened Mordvintsev’s fascination with simulating nature, especially with cellular automata.
The core innovation made by Mordvintsev, Levin and two other Google researchers, Ettore Randazzo and Eyvind Niklasson, was to use a neural network to define the physics of the cellular automaton. In the Game of Life (or just “Life” as it’s commonly called), each cell in the grid is either alive or dead and, at each tick of the simulation clock, either spawns, dies or stays as is. The rules for how each cell behaves appear as a list of conditions: “If a cell has more than three neighbors, it dies,” for example. In Mordvintsev’s system, the neural network takes over that function. Based on the current condition of a cell and its neighbors, the network tells you what will happen to that cell. The same type of network is used to classify an image as, say, a dog or cat, but here it classifies the state of cells. Moreover, you don’t need to specify the rules yourself; the neural network can learn them during the training process.
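The "list of conditions" style of Life's physics is short enough to write out directly. This sketch counts each cell's eight neighbors with NumPy array shifts (on a wrap-around grid, a common implementation choice) and applies the birth and survival conditions; this fixed rule table is exactly what the neural network replaces in an NCA:

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One tick of Conway's Game of Life on a toroidal grid of 0/1 cells."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: a dead cell with exactly 3 live neighbors comes alive.
    # Survival: a live cell with 2 or 3 live neighbors stays alive.
    # Everything else dies (under- or over-population).
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

# A "blinker": three live cells in a row oscillate with period 2.
blinker = np.zeros((5, 5), dtype=np.int64)
blinker[2, 1:4] = 1
after_one = life_step(blinker)          # the row flips to a column
after_two = life_step(after_one)        # and back to the original row
```

Note how every cell's fate depends only on its immediate neighborhood, with no global coordination.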
To start training, you seed the automaton with a single “live” cell. Then you use the network to update the cells over and over again for dozens to thousands of times. You compare the resulting pattern to the desired one. The first time you do this, the result will look nothing like what you intended. So you adjust the neural network’s parameters, rerun the network to see whether it does any better now, make further adjustments, and repeat. If rules exist that can generate the pattern, this procedure should eventually find them.
The adjustments can be made using either backpropagation, the technique that powers most modern deep learning, or a genetic algorithm, an older technique that mimics Darwinian evolution. Backpropagation is much faster, but it doesn’t work in every situation, and it required Mordvintsev to adapt the traditional design of cellular automata. Cell states in Life are binary — dead or alive — and transitions from one state to the other are abrupt jumps, whereas backpropagation demands that all transitions be smooth. So he adopted an approach developed by, among others, Bert Chan at Google’s Tokyo lab in the mid-2010s. Mordvintsev made the cell states continuous values, anything from 0 to 1, so they are never strictly dead or alive, but always somewhere in between.
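The training loop described above can be sketched in miniature. This toy uses mutate-and-keep-the-best search, a bare-bones stand-in for the genetic-algorithm option (real NCAs are mostly trained with backpropagation); the 1D automaton, the three-parameter rule, and the target pattern are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0])  # pattern to grow
STEPS = 8                                                # update ticks per rollout

def rollout(w: np.ndarray) -> np.ndarray:
    """Grow a pattern from a single seeded cell under rule parameters w."""
    state = np.zeros(len(TARGET))
    state[len(TARGET) // 2] = 1.0                        # one "live" cell
    for _ in range(STEPS):
        left, right = np.roll(state, 1), np.roll(state, -1)
        # Each cell updates from its own state and its two neighbors.
        state = np.clip(w[0] * left + w[1] * state + w[2] * right, 0.0, 1.0)
    return state

def loss(w: np.ndarray) -> float:
    """How far the grown pattern is from the desired one."""
    return float(np.mean((rollout(w) - TARGET) ** 2))

# Mutate the parameters; keep a change only if the grown pattern improves.
best_w = rng.normal(0, 0.1, size=3)
init_loss = best_loss = loss(best_w)
for _ in range(500):
    cand = best_w + rng.normal(0, 0.05, size=3)
    if (cand_loss := loss(cand)) < best_loss:
        best_w, best_loss = cand, cand_loss
```

The structure mirrors the text: seed, update repeatedly, compare to the target, adjust, repeat.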
Mordvintsev also found that he had to endow each cell with “hidden” variables, which do not indicate whether that cell is alive or dead, or what type of cell it is, but nonetheless guide its development. “If you don’t do that, it just doesn’t work,” he said. In addition, he noted that if all the cells updated at the same time, as in Life, the resulting patterns lacked the organic quality he was seeking. “It looked very unnatural,” he said. So he began to update at random intervals.
Finally, he made his neural network fairly beefy — 8,000 parameters. On the face of it, that seems perplexing. A direct translation of Life into a neural network would require just 25 parameters, according to simulations done in 2020 by Jacob Springer, who is now a doctoral student at Carnegie Mellon University, and Garrett Kenyon of Los Alamos National Laboratory. But deep learning practitioners often have to supersize their networks, because learning to perform a task is harder than actually performing it.
Moreover, extra parameters mean extra capability. Although Life can generate immensely rich behaviors, Mordvintsev’s monsters reached another level entirely.
Fixer Upper
The paper that introduced NCAs to the world in 2020 included an applet that generated the image of a green lizard. If you swept your mouse through the lizard’s body, you left a trail of erased pixels, but the animal pattern soon rebuilt itself. The power of NCAs not just to create patterns, but to re-create them if they got damaged, entranced biologists. “NCAs have an amazing potential for regeneration,” said Ricard Solé of the Institute of Evolutionary Biology in Barcelona, who was not directly involved in the work.
The butterfly and lizard images are not realistic animal simulations; they do not have hearts, nerves or muscles. They are simply colorful patterns of cells in the shape of an animal. But Levin and others said they do capture key aspects of morphogenesis, the process whereby biological cells form themselves into tissues and bodies. Each cell in a cellular automaton responds only to its neighbors; it does not fall into place under the direction of a master blueprint. Broadly, the same is true of living cells. And if cells can self-organize, it stands to reason that they can self-reorganize.
Sometimes, Mordvintsev found, regeneration came for free. If the rules shaped single pixels into a lizard, they also shaped a lizard with a big gash through it into an intact animal again. Other times, he expressly trained his network to regenerate. He deliberately damaged a pattern and tweaked the rules until the system was able to recover. Redundancy was one way to achieve robustness. For example, if trained to guard against damage to the animal’s eyes, a system might grow backup copies. “It couldn’t make eyes stable enough, so they started proliferating — like, you had three eyes,” he said.
Sebastian Risi, a computer scientist at the IT University of Copenhagen, has sought to understand what exactly gives NCAs their regenerative powers. One factor, he said, is the unpredictability that Mordvintsev built into the automaton through features such as random update intervals. This unpredictability forces the system to develop mechanisms to cope with whatever life throws at it, so it will take the loss of a body part in stride. A similar principle holds for natural species. “Biological systems are so robust because the substrate they work on is so noisy,” Risi said.
Last year, Risi, Levin and Ben Hartl, a physicist at Tufts and the Vienna University of Technology, used NCAs to investigate how noise leads to robustness. They added one feature to the usual NCA architecture: a memory. This system could reproduce a desired pattern either by adjusting the network parameters or by storing it pixel-by-pixel in its memory. The researchers trained it under various conditions to see which method it adopted.
If all the system had to do was reproduce a pattern, it opted for memorization; fussing with the neural network would have been overkill. But when the researchers added noise to the training process, the network came into play, since it could develop ways to resist noise. And when the researchers switched the target pattern, the network was able to learn it much more rapidly because it had developed transferable skills such as drawing lines, whereas the memorization approach had to start from scratch. In short, systems that are resilient to noise are more flexible in general.
The researchers argued that their setup is a model for natural evolution. The genome does not prescribe the shape of an organism directly; instead, it specifies a mechanism that generates the shape. That enables species to adapt more quickly to new situations, since they can repurpose existing capabilities. “This can tremendously speed up an evolutionary process,” Hartl said.
Ken Stanley, an artificial intelligence researcher at Lila Sciences who has studied computational and natural evolution, cautioned that NCAs, powerful though they are, are still an imperfect model for biology. Unlike machine learning, natural evolution does not work toward a specific goal. “It’s not like there was an ideal form of a fish or something which was somehow shown to evolution, and then it figured out how to encode a fish,” he noted. So the lessons from NCAs may not carry over to nature.
Auto Code
In regenerating lost body parts, NCAs demonstrate a kind of problem-solving capability, and Mordvintsev argues that they could be a new model for computation in general. Automata may form visual patterns, but their cell states are ultimately just numerical values processed according to an algorithm. Under the right conditions, a cellular automaton is as fully general as any other type of computer.
The standard model of a computer, developed by John von Neumann in the 1940s, is a central processing unit combined with memory; it executes a series of instructions one after another. Neural networks are a second architecture that distributes computation and memory storage over thousands to billions of interconnected units operating in parallel. Cellular automata are like that, but even more radically distributed. Each cell is linked only to its neighbors, lacking the long-range connections that are found in both the von Neumann and the neural network architectures. (Mordvintsev’s neural cellular automata incorporate a smallish neural network into each cell, but cells still communicate only with their neighbors.)
Long-range connections are a major power drain, so if a cellular automaton could do the job of those other systems, it would save energy. “A kind of computer that looks like an NCA instead would be a vastly more efficient kind of computer,” said Blaise Agüera y Arcas, the chief technology officer of the Technology and Society division at Google.
But how do you write code for such a system? “What you really need to do is come up with [relevant] abstractions, which is what programming languages do for von Neumann–style computation,” said Melanie Mitchell of the Santa Fe Institute. “But we don’t really know how to do that for these massively distributed parallel computations.”
A neural network is not programmed per se. The network acquires its function through a training process. In the 1990s Mitchell, Jim Crutchfield of the University of California, Davis, and Peter Hraber at the Santa Fe Institute showed how cellular automata could do the same. Using a genetic algorithm, they trained automata to perform a particular computational operation, the majority operation: If a majority of the cells are dead, the rest should die too, and if the majority are alive, all the dead cells should come back to life. The cells had to do this without any way to see the big picture. Each could tell how many of its neighbors were alive and how many were dead, but it couldn’t see beyond that. During training, the system spontaneously developed a new computational paradigm. Regions of dead or living cells enlarged or contracted, so that whichever predominated eventually took over the entire automaton. “They came up with a really interesting algorithm, if you want to call it an algorithm,” Mitchell said.
She and her co-authors didn’t develop these ideas further, but Mordvintsev’s system has reinvigorated the programming of cellular automata. In 2020 he and his colleagues created an NCA that read handwritten digits, a classic machine learning test case. If you draw a digit within the automaton, the cells gradually change in color until they all have the same color, identifying the digit. This year, Gabriel Béna of Imperial College London and his co-authors, building on unpublished work by the software engineer Peter Whidden, created algorithms for matrix multiplication and other mathematical operations. “You can see by eye that it’s learned to do actual matrix multiplication,” Béna said.
Stefano Nichele, a professor at Østfold University College in Norway who specializes in unconventional computer architectures, and his co-authors recently adapted NCAs to solve problems from the Abstraction and Reasoning Corpus, a machine learning benchmark aimed at measuring progress toward general intelligence. These problems look like a classic IQ test. Many consist of pairs of line drawings; you have to figure out how the first drawing is transformed into the second and then apply that rule to a new example. For instance, the first might be a short diagonal line and the second a longer diagonal line, so the rule is to extend the line.
Neural networks typically do horribly, because they are apt to memorize the arrangement of pixels rather than extract the rule. A cellular automaton can’t memorize because, lacking long-range connections, it can’t take in the whole image at once. In the above example, it can’t see that one line is longer than the other. The only way it can relate them is to go through a process of growing the first line to match the second. So it automatically discerns a rule, and that enables it to handle new examples. “You are forcing it not to memorize that answer, but to learn a process to develop the solution,” Nichele said.
Other researchers are starting to use NCAs to program robot swarms. Robot collectives were envisioned by science fiction writers such as Stanisław Lem in the 1960s and started to become reality in the ’90s. Josh Bongard, a robotics researcher at the University of Vermont, said NCAs could design robots that work so closely together that they cease to be a mere swarm and become a unified organism. “You imagine, like, a writhing ball of insects or bugs or cells,” he said. “They’re crawling over each other and remodeling all the time. That’s what multicellularity is really like. And it seems — I mean, it’s still early days — but it seems like that might be a good way to go for robotics.”
To that end, Hartl, Levin and Andreas Zöttl, a physicist at the University of Vienna, have trained virtual robots — a string of beads in a simulated pond — to wriggle like a tadpole. “This is a super-robust architecture for letting them swim,” Hartl said.
For Mordvintsev, the crossover between biology, computers and robots continues a tradition dating to the early days of computing in the 1940s, when von Neumann and other pioneers freely borrowed ideas from living things. “To these people, the relation between self-organization, life and computing was obvious,” he said. “Those things somehow diverged, and now they are being reunified.”