Not long ago, artificial general intelligence was the only thing anyone in tech wanted to talk about.
Claims of soon-to-arrive machines as smart as or smarter than humans flooded Silicon Valley, with leaders like OpenAI’s Sam Altman declaring AGI within reach as recently as this year.
OpenAI’s enthusiasm echoed everywhere from its internal lingo — sales teams jokingly called themselves “AGI Sherpas” — to Microsoft’s 2023 research paper touting the “sparks of AGI” in GPT-4.
Elon Musk launched xAI in 2023 with AGI as its mission, suggesting timelines as aggressive as 2025 or 2026.
Other big names joined the chorus. Demis Hassabis from DeepMind told reporters the world was “on the cusp” of AGI, while Mark Zuckerberg insisted Meta would pursue “full general intelligence” to revolutionize everything from social media to virtual assistants.
Dario Amodei of Anthropic distanced himself from the term but predicted that “powerful AI” could arrive by 2027, warning in the same breath that it could bring either utopia or catastrophe.
Even Eric Schmidt, the former Google CEO, told audiences this spring that AGI’s arrival was just a few years away.
Silicon Valley’s Change in Mood
A few short months later, the mood has shifted sharply. Pronouncements about imminent machine intelligence have given way to a more restrained realism.
In a CNBC interview this summer, Altman downplayed AGI itself, calling it “not a super-useful term.” Eric Schmidt, once bullish on AGI, wrote in the New York Times that the fixation distracts from building truly useful technology.
Prominent voices, including AI researcher Andrew Ng and White House AI advisor David Sacks, have dismissed AGI as “overhyped.”
Why the sudden change? The answer might be as simple as the technology hitting a wall. The launch of OpenAI’s GPT-5 arrived with less excitement than expected, a reminder of how daunting the leap to AGI remains. Ben Goertzel, who helped popularize the term AGI, pointed out that GPT-5, though impressive, shows no sign of the broad, human-level intelligence the term originally described.
Part of the confusion traces back to AGI itself being an ill-defined concept. Early definitions envisioned a machine that could learn any cognitive task a human can, but even what counts as a “competent human” is fuzzy. OpenAI’s charter shifted the bar again, describing AGI as highly autonomous systems that outperform humans at most economically valuable work.
Company ambitions and financial stakes grew along with the hype. OpenAI and Microsoft wrote AGI’s arrival directly into their partnership: their contract reportedly curtails Microsoft’s access to OpenAI’s technology once AGI is achieved. Microsoft has invested billions but is said to be pressing to renegotiate those terms as faith in fast AGI wavers.
A growing chorus now says the pragmatism is overdue. “Very healthy” is how Shay Boloor, market strategist at Futurum Equities, put it, noting that investors reward follow-through rather than dream-chasing.
Entrepreneurs and scientists are pivoting from the all-purpose AGI dream to specialized models built for individual industries. Some, like Daniel Saks of Landbase, believe that domain-specific AI, not a single all-knowing machine, will ultimately redefine the sector.
Yet AGI rhetoric hasn’t vanished entirely. Executives at Anthropic and DeepMind still describe themselves as “AGI-pilled,” though what that means is hotly debated — for some, it signals belief in an imminent breakthrough; for others, it is shorthand for the idea that AI will simply keep getting better. What is obvious now is a new hesitance to make concrete predictions.
Not everyone is reassured by the lowered expectations or the ambiguous language. Steven Adler, a former OpenAI researcher, warned, “We shouldn’t lose sight that some AI companies are explicitly aiming to build systems smarter than any human. AI isn’t there yet, but whatever you call this, it’s dangerous and demands real seriousness.”
Critics like Max Tegmark argue that the new modesty serves mostly as a shield from regulation. He said, “It’s smarter for them to just talk about AGI in private with their investors,” warning that the rebranding effort is as much political as scientific.
In this fast-evolving landscape, AGI’s meaning may be lost, but the debate — and the risks — are still very real.