
It’s not a metaphor to call DNA a “program” — that’s literally the case.
Alan Turing and John von Neumann saw it early: the logic of life and the logic of code can be one and the same thing.
Of course, there are significant differences between biological computing and the kind of digital computing performed by a personal computer or smartphone. DNA is subtle and multi-layered, including phenomena such as epigenetics and gene-proximity effects.
Cellular DNA is also nowhere near the whole story. Our bodies contain (and continually exchange) countless bacteria and viruses, each with its own code.
Biological computing is "massively parallel," decentralized, and noisy. Our cells contain about 300 quintillion ribosomes, all working at the same time.
Each of these exquisitely complex floating protein factories is, in fact, a small computer, albeit a stochastic one, meaning its behavior is not entirely predictable.
The movements of articulated components, the capture and release of smaller molecules, and the manipulation of chemical bonds are all individually random, reversible, and imprecise, driven one way and the other by constant thermal buffeting.
Only a statistical asymmetry favors one direction over the other: clever origami-like movements tend to "lock in" certain steps, so that the next step becomes more likely to occur.
This is very different from how "logic gates" work in a computer: basic components that transform binary inputs into outputs according to fixed rules. They are irreversible and engineered to be 99.99% reliable and reproducible.
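The contrast can be sketched in a few lines of Python. This is an illustrative toy, not a model of any real molecular machine: a logic gate applies a fixed, irreversible rule every time, while a thermally driven step is individually random, and only a slight statistical bias produces net forward motion.

```python
import random

def nand(a, b):
    """A deterministic logic gate: fixed rule, same output every time."""
    return 0 if (a and b) else 1

def ratchet_step(position, bias=0.55):
    """One thermally driven step: random and reversible, but with a
    slight statistical asymmetry (bias > 0.5) favoring forward motion."""
    return position + 1 if random.random() < bias else position - 1

# The gate is perfectly reproducible: the same inputs always give the
# same output. The ratchet, by contrast, only drifts forward on average.
position = 0
for _ in range(100_000):
    position = ratchet_step(position)
print(position)  # individual steps are random; the aggregate drifts forward
```

Each individual `ratchet_step` is as likely to go backward as a coin flip is to land tails, yet over many steps the small bias dominates, which is the sense in which only a statistical asymmetry favors one direction.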
Parallelism is also increasingly fundamental to today's computing. Modern artificial intelligence (AI), for example, depends on both massive parallelism and randomness: the parallelized "stochastic gradient descent" (SGD) algorithm used to train most current neural networks, the "temperature" setting used in chatbots to introduce a degree of randomness into their outputs, and the parallelism of the graphics processing units (GPUs) that power most AI in data centers.
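Temperature, for instance, is just a number that rescales a model's output scores before sampling. A minimal sketch, using made-up scores rather than any particular chatbot's API:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Divide scores by the temperature, convert to probabilities with
    a softmax, then draw one index at random according to those odds.
    Low temperature -> nearly deterministic; high -> nearly uniform."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cold = [sample_with_temperature(logits, 0.1) for _ in range(100)]
hot = [sample_with_temperature(logits, 10.0) for _ in range(100)]
print(len(set(cold)), len(set(hot)))  # cold is near-deterministic, hot is varied
```

At low temperature the top-scoring token wins almost every time; at high temperature the choice approaches a coin toss among all candidates, which is why raising a chatbot's temperature makes its replies more varied.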
Traditional digital computing, based on the centralized, sequential execution of instructions, was a product of technological constraints. The first computers needed to perform long calculations using as few parts as possible.
Originally, these parts were expensive and unstable vacuum tubes that were prone to burning out and required frequent hand replacement.
The natural design, then, was a minimal "Central Processing Unit" (CPU) operating on sequences of bits shuttled back and forth from an external memory, an arrangement that became known as the "von Neumann architecture".
However, Turing and von Neumann were both aware that computation could be done by other means. Turing, toward the end of his life, explored how biological patterns, such as leopard spots, could arise from simple chemical rules, in a field he called morphogenesis.
Turing’s model of morphogenesis was a biologically inspired form of massively parallel, distributed computing. The same was true of his earlier concept of an "unorganized machine", a randomly connected neural network inspired by an infant’s brain.
These were visions of what computing could be without a central processor, and of what it actually is in living systems.
Von Neumann also began exploring massively parallel approaches to computing as early as the 1940s. In discussions with the Polish mathematician Stanisław Ulam at Los Alamos, he conceived the idea of "cellular automata": grids of simple, pixel-like computational units, all obeying the same rule and changing their states simultaneously, communicating only with their immediate neighbors.
Von Neumann went so far as to draw, on paper, the key components of a self-reproducing cellular automaton, including a horizontal "tape" of cells containing instructions and blocks of cellular "circuitry" to read, copy, and execute them.
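Von Neumann's self-reproducing automaton is far too elaborate to reproduce here, but the basic mechanics he used, a grid of identical cells all updating simultaneously from their immediate neighbors under one shared rule, can be sketched with a one-dimensional elementary automaton (Rule 110 is chosen purely as an illustration):

```python
def step(cells, rule=110):
    """One synchronous update of a 1D cellular automaton: every cell
    applies the same rule to itself and its two immediate neighbors
    (wrapping around at the edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((rule >> index) & 1)              # look up that bit of the rule
    return out

# A single live cell, evolved for a few steps, already produces structure.
cells = [0] * 15
cells[7] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Note that every cell runs the identical rule and sees only its nearest neighbors, yet a global pattern emerges: the same principle, scaled up to 29 states per cell, underlies von Neumann's design.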
Designing a cellular automaton is much harder than ordinary programming, because each cell or "pixel" is simultaneously changing its own state and its environment. Add randomness and subtle feedback effects, as in biology, and it becomes even harder to reason about, "program", or "debug".
However, Turing and von Neumann understood something fundamental: computing does not require a central processor, logic gates, binary arithmetic, or sequential programs. There are infinitely many forms of computing and, crucially, they are all equivalent. This insight is one of the greatest achievements of theoretical computer science.
This “platform independence” or “multiple realizability” means that any computer can emulate any other. However, if the computers are different models, emulation may be glacially slow. For this reason, von Neumann’s self-reproducing cellular automaton was never physically constructed.
The first successful emulation of von Neumann’s self-reproducing automaton, demonstrated in 1994, could not have happened much earlier. A serial computer requires enormous processing capacity to cycle through the automaton’s 6,329 cells over the 63 billion time steps needed to complete its reproductive cycle.
On the screen, it worked as advertised: a pixelated, two-dimensional Rube Goldberg machine, crouched over an instruction tape of 145,315 cells extending to its right, pumping information out of the tape and reaching out with a "writing arm" to slowly print a working clone of itself just above and to the right of the original.
Likewise, it is inefficient for a serial computer to emulate a parallel neural network, heir to Turing’s "unorganized machine". Consequently, running large neural networks, like the Transformer-based ones behind chatbots, has only recently become practical, thanks to continued progress in the miniaturization, speed, and parallelism of digital computers.
In 2020, Alex Mordvintsev combined modern neural networks, Turing’s morphogenesis, and von Neumann’s cellular automata into the "neural cellular automaton" (NCA), replacing the simple per-pixel rule of a classical cellular automaton with a neural network.
This network, which can sense and affect a few values representing local morphogen concentrations, can be trained to "grow" any desired pattern or image, not just zebra stripes or leopard spots.
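The core NCA idea can be sketched as follows. This is a rough illustration, not Mordvintsev's actual architecture: the tiny two-layer per-cell network here has random, untrained weights, and the grid's channels stand in for local morphogen concentrations.

```python
import numpy as np

def nca_step(grid, w1, b1, w2):
    """One neural-cellular-automaton update: every cell feeds its 3x3
    neighborhood's channel values through the SAME tiny two-layer
    network and adds the result to its own state (a residual update)."""
    h, w, channels = grid.shape
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)))  # zero "morphogen" outside the grid
    new = np.empty_like(grid)
    for i in range(h):
        for j in range(w):
            neighborhood = padded[i:i + 3, j:j + 3, :].ravel()  # 9 * channels inputs
            hidden = np.maximum(0, neighborhood @ w1 + b1)      # ReLU layer
            new[i, j] = grid[i, j] + hidden @ w2                # residual update
    return new

# Untrained demo: an 8x8 grid with 4 channels per cell and one shared network.
rng = np.random.default_rng(0)
channels = 4
grid = np.zeros((8, 8, channels))
grid[4, 4, :] = 1.0                          # a single "seed" cell
w1 = rng.normal(0, 0.1, (9 * channels, 16))
b1 = np.zeros(16)
w2 = rng.normal(0, 0.1, (16, channels))
for _ in range(3):
    grid = nca_step(grid, w1, b1, w2)
print(np.count_nonzero(grid.sum(axis=-1)))   # activity spreads outward from the seed
```

In a trained NCA, `w1`, `b1`, and `w2` would be optimized by gradient descent so that repeated applications of `nca_step` grow the target image; here they are random, so the update merely shows how activity propagates one neighbor at a time from the seed.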
Real cells do not literally have neural networks inside them, but they do execute highly evolved, nonlinear, purposeful "programs" to decide what actions to take in the world, given an external stimulus and an internal state.
NCAs offer a general way of modeling the range of possible behaviors of cells whose actions do not involve movement, but only changes of state (here, represented as color) and the absorption or release of chemicals.
According to Blaise Agüera y Arcas, the first NCA Alex showed him was a lizard emoji, which could regenerate not only its tail but also its limbs and head. It was a powerful demonstration of how complex multicellular life can "think locally" but "act globally," even when each cell (or pixel) is running the same program, just as each of our cells is running the same DNA.
Simulations like these show how computing can produce realistic behavior at various scales. Building on von Neumann’s designs and extending to modern neural cellular automata, they offer a glimpse into the computational foundations of living systems.