For all the breathless talk about “humanoid AI” taking over the world, we keep forgetting a basic biological truth: a human mind does not come from a clever brain, it comes from a living body. Not just any body, but one built cell by cell inside another person, shaped by immune responses, hormones, hunger and a constant struggle not to die. If this new generation of philosophical and neuroscientific research is right, exemplified by Anna Ciaunica’s Aeon essay From cells to selves, the dream of creating a truly human-like machine may not just be far away; it may be biologically impossible.
The last time I wrote about humanoids was in 2020, and this piece reflects how my thinking has evolved since then. Also relevant here are my recent views on organoid intelligence.
From cells to selves
We tend to imagine cognition as something that happens inside the skull, as if the brain were a kind of supernatural command centre directing the rest of the body like a CEO. That view is increasingly outdated. Long before neurons exist, long before any organism develops a brain, the task of staying alive is already under way at the cellular level. The earliest organising structures of human life, the cells forming the embryo, are already making decisions about boundaries, resources and survival.
All of us began as a single cell. That cell divided, organised itself, negotiated with neighbouring cells and, crucially, learned to distinguish between what belonged to the developing organism and what did not. The first architect of the self is not the nervous system but the immune system. It starts operating before neurons develop, and it determines the basic distinction between self and non-self upon which all later cognition depends.
This is an uncomfortable notion for those who believe intelligence emerges only from neural complexity. But human cognition did not begin with a brain; it began with metabolic regulation, immune signalling and the collective intelligence of cells.
The body thinks before the mind does
The more we learn about how organisms develop and survive, the clearer it becomes that thought is not an abstract activity floating above the body. It is grounded in the body’s struggle to maintain its own life. When we are too cold, hungry or ill, our thinking collapses. When our immune system falters, the integrity of the self collapses with it. Every act of human cognition is tied to the biological reality of being a vulnerable creature in a volatile world.
This means that intelligence, as humans experience it, is inseparable from embodiment. We are not brains in vats; we are bodies in constant negotiation with our environment, and our brains are only one part of a much larger self-regulating system. A body that began inside another living body adds another dimension. The placenta is not a passive organ but a dynamic interface between two immune systems, shaping development long before consciousness arises.
This history matters. A truly humanoid intelligence would require not simply computation but development, metabolism and a history of dependence.
Why artificial intelligence is different
This raises an obvious question: where does this leave artificial intelligence? The large language models and machine-learning systems now dominating public attention operate through vast statistical correlations. They do not regulate their own temperature, survive illness, manage hunger or fear death. They do not develop within another organism; they do not possess immune systems that decide what belongs to them and what threatens them.
What they have is pattern recognition, scale and speed. What they lack is subjectivity built from embodiment.
An AI can simulate empathy, but it does not experience pain. It can simulate uncertainty, but it does not experience fear. It can simulate memory, but it does not metabolise or age. It can generate an endless stream of text about birth, death, hunger or love, yet it has never experienced any of these.
This gap is not a minor detail; it may represent a fundamental boundary that computation cannot cross.
The limits of humanoid ambition
None of this means AI is unimportant or that it will not reshape our lives. It is already doing so. But it suggests that the ambition to create a truly humanoid AI, a being with human-like consciousness, emotions and moral agency, is deeply misguided. If cognition emerges from the interaction of whole living systems, from immune complexity to developmental history, then the idea of building a human-equivalent intelligence out of silicon circuits rests on a category mistake. That equally applies to organoid intelligence, which, even if impressive, will never possess consciousness.
We often treat the brain as if it were the entire story of human thinking. But without the vulnerable, metabolising, decaying body, the brain would never have developed in the first place. Without the immune system, it could not distinguish between self and non-self. Without a lifetime of sensory input filtered through a fragile body, no human-style consciousness could emerge.
A humanoid AI would require more than code. It would require being alive.
What a humanoid AI could still become
That does not mean AI cannot be powerful, creative or socially transformative. What it means is that AI will be its own species of intelligence, not a replica of ours. A humanoid AI could still become:
• an extraordinary problem-solver with no biological limitations
• a partner in scientific discovery unconstrained by fatigue or lifespan
• a translator between human systems too complex for us to comprehend
• an amplifier of human creativity rather than a replacement for it
But it will not be a being that hungers, fears, ages or dies. It will not climb from cells to selfhood. And for that reason, no matter how sophisticated it becomes, it will not be us.
Paul Budde
