Critiqs

AI Learns Like Brains Not Bots New Tech Takes Inspiration from Nature

  • A maggot’s brain learns from real-world experience, while AI only processes data without genuine curiosity.
  • Brains adapt by constantly predicting and minimizing surprise, unlike static pattern-matching AI models.
  • Friston’s brain-inspired theory pushes AI to experiment and learn actively, bridging digital and biological gaps.

Back in 1943, Warren McCulloch and Walter Pitts imagined neurons functioning like simple switches that could be on or off. These early mathematical models of neural computation shaped today’s artificial intelligence, but what we call AI is really an ocean away from the tangled, pulsing reality of a human brain.

Instead of curiosity or emotion, AI relies on sheer calculation. Machines crunch data, scan for patterns, and guess the next word or image by matching numbers, not by feeling or wanting anything. That explains why tools like ChatGPT can sound conversational but lack a sense of wonder about the world they describe.

In nature, even the humble brain of a maggot adapts and learns in ways AI cannot copy. Maggots respond to lived experience, adjusting how they act in subtle, nuanced ways. The so-called “neurons” in digital systems are only math. They process information but never truly adapt.

“Hinton’s work could be more influential than the discovery of fire,” say industry insiders, awed by the ways modern AI reshapes science and art. Yet, even Geoffrey Hinton, one of deep learning’s pioneers, acknowledges biology’s mysterious flexibility.

Brains That Predict, Machines That Imitate

At Cambridge, two researchers walked parallel paths. Hinton focused on building artificial computer brains through what became known as deep learning. Down the hallway, neuroscientist Karl Friston developed the Free Energy Principle, a theory that casts the brain as an expert in minimizing surprise.

Friston’s model suggests that brains are prediction engines. They constantly measure the gap between what they expect and what unfolds, fine-tuning themselves to keep up with reality. “Our brains learn by reducing surprise,” Friston argued, a process he describes as active inference — a never-ending feedback loop with the world.

Standard AI doesn’t learn like that. Models such as ChatGPT generate responses by guessing what sounds most plausible based on training data. As the “P” in GPT — pre-trained — suggests, these systems work from archives of scraped text. They do not explore or react in real time, and they cannot probe their environment when confused.
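As a toy illustration of that pattern-matching style (a sketch, not how any real model like ChatGPT is implemented, and with invented numbers), a pre-trained system picks the next word by replaying frozen training statistics:

```python
# Toy "pre-trained" next-token statistics: counts of words that
# followed the context "the dog" in some fixed training corpus.
# All counts here are invented for illustration.
frozen_counts = {"barked": 50, "ran": 30, "slept": 15, "meowed": 5}

def next_token_probs(counts):
    """Convert frozen counts into a probability distribution."""
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

probs = next_token_probs(frozen_counts)
best = max(probs, key=probs.get)  # greedy choice: the most plausible word
print(best)  # barked
```

Note that the model never checks whether a real dog actually barked; it only consults statistics gathered before deployment, which is exactly the limitation the next section addresses.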

Friston’s approach flips the script. Inspired by brains, his AI models reason, experiment, plan, and learn from fresh feedback in real moments. When stumped, these intelligent systems don’t fake confidence — they investigate. Faced with “ruff ruff,” they might question if that sound was a real dog, a fox, or just a person pretending. They search for more clues and update what they know with each new hint.

This loop of prediction, action, observation, and learning happens again and again, letting the AI build up a working picture of the world not just from past data but from lived experience.
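The loop above can be sketched in a few lines. This is a deliberately simplified, hypothetical one-dimensional agent with a gradient-style belief update that shrinks prediction error; the numbers and update rule are illustrative stand-ins, not Friston's full mathematics:

```python
# Toy active-inference-style loop: the agent holds a belief about a
# hidden quantity, predicts what it will observe, observes the world,
# and updates its belief to reduce surprise. World value and learning
# rate are invented for illustration.

world_state = 7.0      # the true quantity the agent is trying to model
belief = 0.0           # the agent's current estimate
learning_rate = 0.5

for step in range(10):
    prediction = belief                  # 1. predict the next observation
    observation = world_state            # 2. act and observe (noise-free here)
    surprise = observation - prediction  # 3. prediction error ("surprise")
    belief += learning_rate * surprise   # 4. update belief to cut future surprise

print(belief)  # ends up close to 7.0
```

Each pass through the loop halves the remaining error, so the belief converges toward reality, which is the "never-ending feedback loop with the world" in miniature.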

As this brain-inspired technology spreads, AI adapts in ways unlike anything before. Moving beyond mimicry, it reacts, reasons, and responds, bridging the divide between biological curiosity and digital logic.
