Walk into any university seminar or drop into a heated online thread today and you’ll still hear the skeptical refrain: for all the fanfare, artificial intelligence just doesn’t truly think.
A year ago, it was easy to see why that argument flourished: the technology stumbled on fairly basic challenges even as it churned out eerily convincing prose.
Some of the biggest names in tech promised that with more resources, bigger neural nets, and relentless iteration, these systems would one day cross the threshold into real intelligence. Meanwhile, the systems themselves often ran on a precarious mix of patchwork fixes and manual labor behind the curtain.
Yet here we are in 2025, and the same old doubts are echoing louder than ever. You’ll hear respected linguists and philosophers argue that today’s AI is nothing more than a clever trickster, repeating what it hears with no understanding beneath the surface.
When Apple recently published a paper on AI's supposed intellectual shortcomings, it quickly grabbed headlines and millions of social shares. The takeaway many people latched onto boiled down to one damning claim: even the best language models hit a wall with tough reasoning problems and just cannot plan or adapt the way humans do.
The authors wrote, “These models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold.”
What These Arguments Miss About AI’s Impact
Dig into the details of their test, though, and a different picture emerges. The AI struggled not because it couldn't solve tricky puzzles, but because it failed to present answers in the very narrow format the researchers demanded. Prompt it to write computer code that solves the same challenge, and it breezes through.
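For a sense of scale, the puzzle behind that collapsing performance is the classic Tower of Hanoi. Writing out every move by hand grows exponentially with the number of disks, yet the program that generates the complete move list is only a few lines. Here is a minimal sketch in Python of the kind of solver a model can produce when asked for code rather than a move-by-move transcript (the disk count and peg names are illustrative):

```python
def hanoi(n, source, target, spare, moves):
    """Append the moves that shift n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the smaller disks out of the way
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks on top of it

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))  # 2**10 - 1 = 1023 moves for ten disks
```

The move count, 2^n - 1, is also why transcribing the solution verbatim collapses past a certain disk count: ten disks already take 1,023 moves, and every added disk doubles the list, while the code that generates it stays the same size.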
It raises an uncomfortable question: are we measuring intelligence, or simply uncovering odd quirks in how we probe these systems? If people were judged solely on their ability to multiply massive numbers in their head, most of us would seem incapable — not because we lack general reasoning, but because nobody’s brain was built for that.
The real world doesn’t usually care about philosophical debates over the nature of intelligence. People want to know whether these machines can handle real work, and whether their careers could be swept away before society is ready.
Many jobs that once felt untouchable are suddenly vulnerable. Entry-level positions in fields from law to journalism are drying up as employers experiment with what AI can do. The job market for graduates feels dauntingly uncertain.
“I don’t think I’ll get anywhere if I argue that I shouldn’t be replaced because these systems can’t solve a sufficiently complicated Towers of Hanoi puzzle — which, guess what, I can’t do either,” one journalist vented after testing the current capabilities for themselves.
What continues to drive the dismissive tone is fear, plain and simple. Dismissing today’s AI as nothing spectacular lets us believe our work will remain beyond its grasp, even in the face of mounting automation.
Yet Harry Law, a Cambridge specialist in AI philosophy, laid it out simply: “Whether or not they are simulating thinking has no bearing on whether or not the machines are capable of rearranging the world for better or worse.”
So the question isn’t whether AI is a perfect thinker. The question is, what can it actually do when the dust settles — and who might be left out of work because of it? The answers, right now, are changing fast, no matter how many old arguments get recycled, as shown by recent discussions around the rise of artificial intelligence job market disruption and media like the latest AI podcast series and analysis on shrinking entry-level tech roles.