AI is the New Slime Mold
How mindless systems outperform human genius, and why that terrifies us
Last year, a Waymo autonomous vehicle collided with a cyclist who emerged from behind a large truck at a four-way stop. The car braked the instant the cyclist came into view, but the truck had hidden them from its sensors until it was too late to stop. The cyclist escaped with minor injuries. The car was praised for its "quick thinking." But was it thinking at all, or just crunching numbers?
Anthropomorphism in Evolution and AI
When we attribute agency to artificial intelligence, a parallel from the natural world is instructive. In evolutionary biology, we routinely use language that implies intentionality, saying that natural selection "favors" certain traits or "selects for" specific characteristics. But this is convenient shorthand. Evolution is not a conscious process; it is an interplay of genetic variation, environmental pressures, and random events.
In the same vein, when we describe AI systems as "learning" or "making decisions," it’s easy to forget that these systems don’t possess understanding or intent. What they do is execute highly complex algorithms and statistical computations designed by humans to process data and produce outputs. These processes may mimic human-like behaviors, but they lack the awareness or purpose that we often unconsciously project onto them.
Recognizing this shared tendency to anthropomorphize both evolution and AI reveals something about human thinking: we naturally assign agency to systems that seem complex to us, even when no actual intent exists. This habit can lead us to misunderstand the true nature of these systems, whether it's the blind mechanics of evolution or the programmed functionality of artificial intelligence. It's a cognitive bias we should be mindful of. These phenomena may be remarkable, but they are fundamentally different from human agency.
The Slime Mold Principle—Why AI Doesn’t Need to "Understand" Anything
In 2021, DeepMind’s AlphaFold cracked the protein-folding problem, a biological mystery that had stumped scientists for decades. But here’s what nobody wants to admit: the AI didn’t "solve" anything in the way a human would. It simply found patterns—just as evolution "discovered" flight four separate times without ever intending to.
There’s an uncomfortable truth lurking beneath our AI debates: intelligence doesn’t require understanding. Nature proved this first.
Nature’s Dumb Genius
Consider the box jellyfish. Its 24 eyes see nothing. No brain processes the images. Yet it navigates mangrove swamps with eerie precision through pure reflex. This is evolution’s signature move: creating sophisticated behaviors through brute-force iteration, no comprehension required.
Now look at your smartphone’s autocorrect. It predicts your next word not by "knowing" language, but by tracking statistical likelihoods, the linguistic equivalent of a jellyfish avoiding roots. When we call this "AI," we’re committing the same category error early biologists did when they described plants "seeking" sunlight.
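To see how little "knowing" is involved, here is a minimal sketch of that statistical trick in Python: a bigram counter that suggests the next word purely from how often word pairs occurred together in its training text. The tiny corpus and word choices are invented for illustration; a real keyboard model is trained on far more data, but the principle is the same.

```python
from collections import Counter, defaultdict

# A toy stand-in corpus; a real autocorrect model sees vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Suggest the most frequent follower: no grammar, no meaning, just counts.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", simply because "the cat" occurred most often
```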
The AI-Evolution Feedback Loop
The parallels grow stranger the closer you look. In 2020, Google researchers unveiled an AI system that produced a chip floorplan in under six hours, matching or beating layouts human engineers had spent months on. The AI didn’t understand semiconductor physics. It simply generated, scored, and tweaked candidate layouts until one worked, mimicking natural selection’s trial and error.
This reveals the core illusion: we describe both evolution and AI in agential terms ("it figured out protein shapes," "nature designed wings") when in reality, they’re just optimization processes playing out in high-dimensional spaces. The "intelligence" is entirely emergent: a byproduct of simple rules executed at scale.
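As a concrete picture of "simple rules executed at scale", here is a toy mutate-and-test loop: a random hill climber that converges on a target bit pattern without any representation of what the pattern is for. The target, mutation rate, and iteration budget are arbitrary choices for this sketch, not anything from Google's actual chip-design pipeline.

```python
import random

# The "spec" the process blindly climbs toward; it never represents what the spec means.
TARGET = [1] * 64

def fitness(design):
    # The only feedback the process gets: how many positions match the target.
    return sum(d == t for d, t in zip(design, TARGET))

def mutate(design, rate=0.02):
    # Blind variation: flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in design]

# Start from random noise and keep whichever variant scores at least as well.
current = [random.randint(0, 1) for _ in range(len(TARGET))]
for step in range(20_000):
    candidate = mutate(current)
    if fitness(candidate) >= fitness(current):
        current = candidate
    if fitness(current) == len(TARGET):
        print(f"Matched the full target after {step} iterations, with no model of why.")
        break
```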
Why This Makes Us So Uncomfortable
Harvard psychologist Steven Pinker once noted that humans resist Darwinian explanations because we crave "design narratives." The same applies to AI. When ChatGPT strings together a coherent essay, we imagine some ghost in the machine—not the reality of next-token prediction operating at scale.
But does AI require actual comprehension? Probably not:
AI systems now detect certain cancers in medical images as accurately as expert radiologists, despite having zero conceptual grasp of anatomy
Algorithmic trading dominates stock markets while "understanding" nothing of economics
Large language models pass legal bar exams without any model of justice
This isn’t just academic. As we deploy these systems in high-stakes domains, we face a pivotal choice: do we keep pretending AI has human-like understanding, or build new frameworks for mindless competence?
Embracing the Post-Understanding World
The lesson from 3.8 billion years of evolution is clear: you don’t need a blueprint to build a wing, and you don’t need comprehension to solve complex problems. As AI systems grow more capable, we’ll have to abandon our romantic notions of intelligence and confront what really matters: reliable outcomes over imagined understanding.
The next time an AI wows you, remember the box jellyfish. Brilliance doesn’t require a mind, just the right feedback loops.
I dunno. I think I draw the opposite conclusion.
You are no doubt familiar with Kahneman's System One and System Two thinking. Our System Two requires consciousness, rationality, intelligence and all those things that make human thinking special. But we rarely use any of that. System One doesn't require understanding or intelligence either, and yet it can do simple arithmetic, catch cricket balls and recognise faces. System One does most of our thinking automatically in the background.
I think the way AI solves all these problems is a lot like how we use our System One. Not identical, of course: we have neurons where an LLM has transformers. But, functionally, it does the same kind of stuff.
We use System Two for the special stuff, but less often than you might think. AI will have a System Two soon, too, and then we will call it intelligent.
Intriguing and thoughtful prose. Thank you for sharing.