DeepMind’s Kohli Advocates Caution Over Haste in AI Development at HTLS


The mission of DeepMind continues to be to build artificial intelligence (AI) responsibly for the benefit of humanity, said Pushmeet Kohli, Vice President of Science and Strategic Initiatives at Google DeepMind, speaking at the Hindustan Times Leadership Summit 2025.

Pushmeet Kohli, Vice President of Science and Strategic Initiatives, Google DeepMind. (HT PHOTO)

He said that DeepMind, which has now completed 15 years, believes that AI can push the boundaries of human knowledge. Approaching AI development as a “scientific problem” has been key to DeepMind’s persistence at the intersection of science and AI, at a time when many frontier AI companies are experimenting with similar research. “The organisation has science embedded in its DNA,” Kohli said.

“That we do through science, by making progress in many scientific areas, and we have been fortunate to have been able to show the potential of AI in problems such as protein structure prediction with AlphaFold. Our focus is on these problems where AI can have a transformational impact,” said Kohli. He insisted that such improvements cannot be incremental, but must “transform the way society does something.”

AlphaFold is an AI program developed by DeepMind that predicts the three-dimensional structure of proteins — applications include engineering crops that are more resilient to a warming climate, and understanding a key protein behind heart disease.

“As a case study, before AlphaFold was released by Google DeepMind, it used to take almost five years sometimes to figure out the structure of a single protein. And these proteins are the building blocks of life. Everything, from drug discovery to designing new enzymes to dealing with pollution, essentially comes down to proteins. And yet, we did not know the structure of these proteins. It would have taken a huge amount of effort and work to uncover this knowledge,” Kohli said.

Large versus narrow focus

Conventional wisdom suggests that large language models (LLMs) are getting better and must therefore be applied to ever more problems. This contrasts with a narrower approach, in which DeepMind tailors an AI model to what a particular domain requires.

Kohli insisted that DeepMind’s core focus is on pushing the boundaries and developing the most powerful AI models.

“The questions we ask are, what are the most powerful and most competent models? Essentially, the way to measure intelligence is to find out how quickly a model is able to accomplish a task. We have to develop models that are ever more competent in solving harder problems and in a more general way,” he said.

Kohli said less data and less supervision will be the keys to success. “In a way, LLMs are a natural progression of that longer-term focus that we have. We are working on everything from fundamental breakthroughs that push the efficiency of AI models to special models such as AlphaFold,” he said.

In 2016, DeepMind’s AI program AlphaGo competed against Lee Sedol, a legendary South Korean player of the ancient board game Go. AlphaGo won the match, a major milestone for AI at the time, since Go is a significantly more complex game than chess for computers to master.

Kohli pointed to Google’s Gemini family of models, which he said have shown competence across a variety of tasks, not just answering questions in an everyday chatbot scenario. This week, Google rolled out its latest, most capable model, Gemini 3 Deep Think, to Ultra subscribers. Last month, Google released the Gemini 3 and Gemini 3 Pro models — Gemini 3 Pro within the Gemini app, and Gemini 3 in AI Mode in Search for complex reasoning.

“If you look at the spectrum, yes, we are working on specialised models, but we are also improving general models,” said Kohli. “At the end of the day, it’s all about the problem and how you solve the most impactful problems.”

Scientists trusting AI

Responding to a question on whether we have reached a stage where scientists can trust the outputs of AI in their work, Kohli said AI still makes mistakes, and that a key challenge is figuring out when it is failing. “If you are a biologist, you’d say that while it is extremely accurate, it still makes mistakes sometimes. You wouldn’t want to spend many years of your life thinking it is correct, only to later find it is not.”

He said that when AlphaFold was unsure about a prediction, it would “raise its hand and say I might have been uncertain about this, so don’t trust it that much.”

Kohli said that to tackle the problem of modern LLMs hallucinating, DeepMind is building tools that can identify such instances and warn users. He framed the larger ambition as a set of open questions: “Can we solve the energy problem using AI? Can we discover new types of materials?”

Future vision

Kohli said that with fundamental advances already made, there will be much more emphasis on structural biology. He added that key to this will be the democratisation of the technology, so that more and more users can derive its benefits. Acceleration of science, and agentic systems that extend AI capabilities to more tasks, will be key themes for the next year, he said.

“The implications in healthcare, in drug discovery, are some things we will see really accelerating,” he said, adding that there will be special emphasis on countries such as India, which he believes has a lot of scope for leveraging AI in healthcare.

Kohli said 180,000 researchers and students in India are using AlphaFold. He said he was surprised by the number of people in the country studying protein structures and cures for diseases. “It also shows the expansive research ecosystem in the country.”


