Google DeepMind is once again making a bold statement in the global race toward artificial general intelligence (AGI). Demis Hassabis, co-founder and CEO of the Alphabet-owned AI powerhouse, has called for the industry to push scaling efforts “to the maximum” if it hopes to reach truly human-level AI.
Speaking at Axios’ AI+ Summit in San Francisco, Hassabis emphasized that expanding today’s AI models—especially in terms of computational power—remains one of the most crucial strategies for eventually achieving AGI. DeepMind’s recent launch of Gemini 3, which has already drawn significant praise, reflects that very direction.
“The scaling of the current systems, we must push that to the maximum, because at the minimum, it will be a key component of the final AGI system. It could be the entirety of the AGI system,” Hassabis said, underlining the importance of maximizing compute and data.
AGI, often described as the holy grail of artificial intelligence, refers to systems capable of human-like reasoning, learning and problem-solving. While no company has yet achieved it, the pursuit has fueled a global race, driving massive investments into cloud infrastructure, high-performance data centers, and advanced chips designed specifically for AI workloads.
Hassabis believes that the rapid advances of the past few years support the idea that scaling laws, the empirical pattern by which models improve predictably as data, parameters and compute increase, may be the most reliable path to AGI. However, he also acknowledged that pure scaling may not be enough, noting that “one or two” key innovations will likely still be required.
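The scaling-law idea has a concrete empirical form. As a rough illustration, the Python sketch below uses the pretraining-loss fit published in DeepMind's own 2022 Chinchilla paper (Hoffmann et al.); the coefficients are that paper's fitted values, offered only to show the shape of the curve, not as a statement about Gemini-class models.

```python
# Illustrative Chinchilla-style scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N^alpha + B / D^beta
# where N is the parameter count and D is the number of training tokens.
# The coefficients below are the paper's published fit, used here purely
# for illustration -- not DeepMind's current internal numbers.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Scaling parameters and data together keeps lowering the predicted loss,
# but with diminishing returns -- the crux of the scaling debate.
for scale in (1, 4, 16, 64):
    n, d = scale * 70e9, scale * 1.4e12  # starting from roughly Chinchilla scale
    print(f"{scale:>3}x: predicted loss = {predicted_loss(n, d):.3f}")
```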
Still, expanding large-scale AI systems comes with its own set of hurdles. The amount of publicly available training data is finite, and the energy demands of next-generation compute clusters are pushing companies to rethink efficiency and sustainability. The cost of building and maintaining such infrastructure also continues to rise sharply.
At the same time, not everyone in the AI field agrees with the “scale-is-everything” philosophy. Yann LeCun, former Chief AI Scientist at Meta and a pioneering figure in deep learning, has been one of the most vocal critics of the current scaling race. According to LeCun, the belief that increasing compute and data will naturally lead to intelligence is misguided.
“Most interesting problems scale extremely badly. You cannot just assume that more data and more compute means smarter AI,” he said during a talk at the National University of Singapore earlier this year.
LeCun is now building his own venture focused on developing “world models,” AI systems that learn through spatial, physical and contextual understanding rather than relying mainly on language-based training. These models aim to replicate aspects of human learning, such as memory, reasoning and long-horizon planning, charting an entirely different route to future AI breakthroughs.
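To make the contrast concrete, here is a deliberately toy sketch of the predict-then-plan loop that world-model approaches revolve around. It is a hypothetical illustration built from random linear maps, not LeCun's actual architecture: an encoder maps an observation to a latent state, a dynamics model imagines how that state evolves under an action, and a planner picks the action whose imagined outcome best matches a goal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "world model" -- entirely hypothetical, just to illustrate
# the predict-then-plan loop, not any real JEPA-style architecture.
W_enc = rng.normal(size=(4, 8))  # encoder: observation -> latent state
W_dyn = rng.normal(size=(4, 4))  # dynamics: latent state -> next latent
W_act = rng.normal(size=(4, 2))  # how an action perturbs the latent

def encode(obs: np.ndarray) -> np.ndarray:
    """Map a raw observation into the model's latent state."""
    return W_enc @ obs

def predict(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Imagine the next latent state after taking `action`."""
    return W_dyn @ state + W_act @ action

def plan(state, candidate_actions, goal):
    """Pick the action whose imagined outcome lands closest to the goal."""
    return min(candidate_actions,
               key=lambda a: np.linalg.norm(predict(state, a) - goal))

obs = rng.normal(size=8)                       # current observation
goal = np.zeros(4)                             # desired latent state
actions = [rng.normal(size=2) for _ in range(16)]
print("chosen action:", plan(encode(obs), actions, goal))
```

The key difference from pure scaling: learning happens against an internal model of how the world responds to actions, rather than against ever-larger corpora of text.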
The tension between these two schools of thought—scaling versus rethinking AI architecture—captures one of the most important debates in today’s tech landscape. As companies pour billions into ever-bigger models, the sector faces a pivotal question: keep pushing the limits or rethink the fundamentals?
For now, Hassabis is clear that DeepMind will continue leaning into scale, betting that its approach will bring the world one step closer to AGI.
