Competition between Google and Nvidia in the artificial intelligence (AI) chip ecosystem briefly unsettled global markets last month, but for India’s AI ambitions the impact is marginal, at least for now.
When reports in late November said that Meta Platforms (the owner of Facebook, Instagram and WhatsApp) might use Google’s in-house Tensor Processing Units (TPUs) for its upcoming data centres, Nvidia’s stock fell nearly 3 per cent. The reports also reignited the debate over whether Nvidia, the world’s chip leader, is finally facing credible competition. Nvidia was quick to respond, publicly welcoming Google’s progress while asserting that its own chips remain “a generation ahead of the industry”.
Government officials and industry executives in India say Google’s growing influence in AI hardware does not materially change the country’s AI road map, which relies on chips made by Nvidia, AMD and Intel. Within that mix, Nvidia chips have been the preferred choice for training large language models (LLMs), AI models trained on vast text data to generate human-like language.
The choice of chips matters for India. Graphics Processing Units (GPUs) are specialised chips originally designed for graphics and image processing; they have become essential for the high-demand computation required by modern AI workloads. TPUs are Google’s task-specific ASICs (application-specific integrated circuits). They excel at AI computation, especially inferencing, the process of using a trained AI model to make a prediction on new data.
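To make the training/inference distinction concrete, the sketch below is a minimal, purely illustrative Python example using JAX (the framework Google pairs with its TPUs, though this toy runs on any hardware). The model, data and learning rate are hypothetical, not from the article: training repeatedly computes gradients and updates weights, while inference is a single forward pass through fixed weights, which is why the two workloads reward different chip designs.

```python
# Minimal sketch (hypothetical model and data) contrasting training,
# which loops over gradient updates, with inference, a single forward pass.
import jax
import jax.numpy as jnp

def predict(w, x):
    # Forward pass: this is all that inference requires.
    return jnp.dot(x, w)

def loss(w, x, y):
    # Mean squared error between predictions and targets.
    return jnp.mean((predict(w, x) - y) ** 2)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (64, 8))                   # toy training inputs
y = jax.random.normal(jax.random.PRNGKey(1), (64,))   # toy targets
w = jnp.zeros(8)                                      # initial weights

# Training: compute gradients and update weights over many steps.
grad_fn = jax.jit(jax.grad(loss))
for _ in range(100):
    w = w - 0.01 * grad_fn(w, x, y)

# Inference: apply the trained, now-fixed weights to unseen data.
x_new = jax.random.normal(jax.random.PRNGKey(2), (1, 8))
print(predict(w, x_new))
```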
“Originally, TPUs were used most often for inferencing, but of late Google has developed versions of them for large-scale training and pre-training of LLMs. In fact, the biggest news to come out recently was that Gemini 3, the GPT-5-beating model released by Google, was built entirely on TPUs. The entire thing: pre-training, training, inferencing, the works,” said Phil Fersht, founder and chief analyst at HFS Research, a global business research consultancy.
“Google and its TPUs are being seen as the first credible competition to Nvidia’s GPUs, which is reflected in Google’s stock rocketing up and Nvidia’s getting depressed,” he said.
“Nvidia has had more than 80 per cent of the market, and any competition to it is good. AMD’s Radeon and Instinct chips and Amazon’s Trainiums are also getting better and better. The fat margin that Nvidia has enjoyed so far is too lucrative for competitors not to target. Competition is obviously good for everybody, so India and everyone else should stand to gain from lower prices and more variety. Having said that, Nvidia is still best in class, so it will take some time,” said Fersht.
“Though Google’s TPUs will have different capabilities from Nvidia’s GPUs, a crowded market always helps bring prices down,” said an Indian government official, who did not want to be named.
Google’s latest TPUs have moved the market from speculation to proof, according to Greyhound Research, a technology advisory firm. Previously seen only as a Google curiosity, TPUs now match or exceed the performance of Nvidia’s current and next-generation GPUs in large-scale AI workloads. This is especially true for training and high-volume inference, where TPUs can deliver higher sustained throughput by overcoming communication and power limits.
Sunil Gupta, cofounder and chief executive officer of Yotta Data Services, said that while Google’s TPUs may be better at inferencing, there is practically no competition for Nvidia when it comes to chips for initial model training.
Almost all AI training in India uses Nvidia chips, and even the hyperscalers rely on them for generative AI (GenAI) models. The new Blackwell chips are four times more powerful than the H100s, drastically reducing the cost and energy needed for AI compute, he said, referring to Nvidia’s products.
“For the enterprise buyer, the consequence is a decisive shift from single-vendor thinking to portfolio thinking. Training frontier models, rapid experimentation and complex research workflows will continue to lean heavily on Nvidia because no other ecosystem yet matches its breadth and depth for low-level programmability,” said Sanchit Gogia, chief analyst, founder and CEO of Greyhound Research.
“At the same time, once a model has proven itself and is destined to serve customers at scale, boards and CFOs are increasingly unwilling to pay a permanent Nvidia premium if a TPU or cloud-specific accelerator can deliver the same business outcome at 30-50 per cent lower cost and with materially lower power draw,” he said.
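To see why that premium draws boardroom attention, here is a rough back-of-the-envelope sketch in Python. All figures (fleet size, hourly rates) are hypothetical, not taken from the article; it simply shows how a 30-50 per cent gap in cost per accelerator-hour compounds over a year of round-the-clock inference.

```python
# Back-of-the-envelope sketch: all figures below are hypothetical.
HOURS_PER_YEAR = 24 * 365
FLEET_SIZE = 1_000        # accelerators serving a deployed model
GPU_RATE = 2.50           # assumed $/hour for a premium GPU

for discount in (0.30, 0.50):
    alt_rate = GPU_RATE * (1 - discount)          # cheaper accelerator
    gpu_cost = FLEET_SIZE * HOURS_PER_YEAR * GPU_RATE
    alt_cost = FLEET_SIZE * HOURS_PER_YEAR * alt_rate
    print(f"{discount:.0%} cheaper per hour: "
          f"${gpu_cost - alt_cost:,.0f} saved per year")
```

At these assumed rates, the gap works out to roughly $6.6 million to $11 million a year for a 1,000-accelerator fleet, the scale of saving that makes finance chiefs willing to tolerate a second vendor.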
Under the government’s IndiaAI Mission, companies are using advanced chips from Nvidia, AMD and Intel, as the immediate priority is building foundational models that reflect the country’s linguistic, cultural and political complexities.
India has boosted its AI infrastructure to 34,333 GPUs, almost twice the count in the initial August 2024 tender. This shared cloud computing base provides crucial training and inference capacity for startups and enterprises developing indigenous foundational models and AI solutions.
“The price per GPU per hour discovered under the IndiaAI Mission is among the lowest. We expect these prices to come down further as the demand in India increases. Companies that make these chips can bet on long-term prospects from the country,” said the government official.
Gupta believes the competition will soon shift to chips that are better at inferencing. Besides Google’s TPUs, chips from other companies, such as Cerebras and SambaNova, are reportedly almost five times better than Nvidia’s Blackwell offering at inferencing.
Over the past three years, established technology companies such as Microsoft, Meta and Amazon, along with newer entrants such as Anthropic, xAI (maker of Grok) and Perplexity, have launched a host of new LLMs and GenAI models. These LLMs have improved reasoning capabilities, having been trained extensively on vast, varied datasets using robust transformer architectures.
It is not just companies: countries such as China and India are also in the AI race, offering state incentives to encourage companies and startups to develop and launch indigenous LLMs.
In March 2024, the government launched the ₹10,372 crore IndiaAI Mission to both support the development of indigenous LLMs and fund companies procuring GPUs to train advanced GenAI models.
Nvidia has been the single largest beneficiary of this international AI arms race, having supplied the chips that powered early LLMs and GenAI models.
Meta’s confidence in Google’s TPU capabilities is also likely to help Indian companies, as the cost of AI chips may fall within a few months, according to government officials.
“The real future definitely lies in inferencing, everywhere and also in India, once AI adoption happens at scale and becomes cost-effective. Once the focus is on cost, there will be pressure on Nvidia. That is when companies like Cerebras and AMD will come in,” said Gupta. Over time, he added, training for AI models in countries like India will still be done on Nvidia GPUs, but inference, which will be the larger market, will run on other chips.
Yotta is investing an additional $1.5 billion to buy 8,000 more GPUs from Nvidia for use by the central government as part of the IndiaAI Mission. Backed by the Hiranandani family of real estate barons, the company has deployed almost all of the first tranche of 8,000 GPUs, which went to AI startups such as Sarvam AI and Soket AI that are building sovereign LLMs.
The second tranche of 8,000 GPUs has been ordered and should be put to use by December or early next year, Gupta said.
