
On Intelligence: human brains vs LLMs
On Intelligence by Jeff Hawkins was published in 2004, and I read it shortly after. At the time, it resonated deeply. Hawkins' description of the brain as a highly efficient pattern-matching and prediction machine aligned closely with my own experience learning natural languages. The idea that intelligence emerges from prediction felt intuitively right, long before today's AI boom.
Revisiting in the Age of LLMs
Coming back to the book now, in the era of large language models, is fascinating. Hawkins was highly critical of the symbolic, rules-based AI dominant at the time, arguing instead for data-driven learning grounded in prediction and memory. On that front, he was clearly onto something.
Because I had read this book years ago, I needed little convincing when LLMs emerged that systems built on large-scale pattern matching and prediction could exhibit real intelligence, even if intelligence and consciousness are not the same thing.
Where the Book Falls Short
Where the book feels weaker in hindsight is Hawkins' insistence that true AGI can only be achieved by closely replicating the brain's exact mechanisms. The brain is the product of a messy evolutionary process, a local maximum of form and function, not necessarily the only or best one.
Just as planes don't need to flap their wings to fly, intelligent systems may not need to mirror biology to succeed.
Verdict: 3/5
For anyone interested in cognition, a very interesting exploration of how the brain works, but one that, in retrospect, gets quite a lot wrong. Notably, Hawkins still holds many of these views today, despite the progress of modern AI.
If you're curious about theories of intelligence and how we got to where we are today, On Intelligence is worth a read; just keep your critical-thinking hat on.
I applied Hawkins' prediction-based view of cognition to language learning when building Babblo. For more on what actually counts as intelligence, see Opus 4.5 is smarter than my dog.

