How Google is quietly planning to take on Nvidia

The Mountain View giant is rapidly expanding its AI hardware ecosystem, investing in custom silicon and forging strategic partnerships with leading companies to challenge Nvidia’s dominance in the semiconductor space.

AI chip partnerships Google has secured in recent months.
Aishwarya Panda
Apr 20, 2026 | Updated Apr 20, 2026, 6:28 PM IST

Google is reportedly partnering with semiconductor company Marvell Technology to develop new chips for running artificial intelligence (AI) models more efficiently. The Mountain View giant is rapidly expanding its AI hardware ecosystem, investing in custom silicon and forging strategic partnerships with leading companies.

But what’s the core idea behind Google’s AI chip strategy? Google is betting on its custom chip, the Tensor Processing Unit (TPU), which is built to support AI systems. By designing these chips in-house, it can fine-tune them to work seamlessly with its own software and services. Through partnerships, Google can also scale production and supply of TPUs to handle growing AI workloads more efficiently while reducing reliance on external vendors.

As Google strengthens its ability to run large-scale AI workloads across its cloud and consumer services, the move could challenge Nvidia’s dominance in the semiconductor space. Here are the key AI chip partnerships Google has secured in recent months.

How Mark Zuckerberg’s Meta is helping Google

In February 2026, social media giant Meta Platforms signed a multibillion-dollar deal with Google to gain access to TPU-powered AI infrastructure. However, this is a commercial supply agreement rather than a chip design collaboration. The partnership makes Meta a customer, allowing it to leverage Google’s TPU capacity to power its own AI workloads.

The deal will help Google generate revenue from its chip infrastructure while strengthening the commercial credibility of its TPUs as a viable alternative to Nvidia.

Understanding the role of MediaTek in the game

Around March 2026, Google was reported to be partnering with MediaTek to jointly develop a next-generation TPU-class AI chip for data centres. Mass production is expected this year, with fabrication likely to be handled by TSMC.

The deal has not been publicly confirmed. Even with this anticipated partnership, Google continues to work with Broadcom on AI chips, and the move could further reduce its reliance on Nvidia.

What happens to the Broadcom TPU deal?

While its ties with MediaTek remain under wraps, Google has renewed its long-term agreement with Broadcom to develop and supply custom AI chips, including future generations of TPUs and next-generation AI racks through 2031.

The deal strengthens Google’s AI compute capacity to run large-scale models such as Gemini. It is expected to become operational starting in 2027. Broadcom will handle full-stack ASIC work for Google’s TPUs, including power management and advanced packaging, while Google retains core architectural control.

Broadcom will also provide networking and other components used in next-generation AI racks, giving Google a stable, integrated silicon and networking stack for large-scale AI training.

Will Intel’s AI infrastructure give the project a boost?

Google is also expanding its long-term AI infrastructure partnership with Intel. Under the renewed deal, Google Cloud will deploy Intel’s Xeon 6 series AI-optimised CPUs and Infrastructure Processing Units (IPUs) across its data centres.

Intel said the partnership will span multiple generations of Xeon processors to improve performance, energy efficiency and total cost of ownership across Google’s global infrastructure.

The companies will also co-develop custom ASIC-based IPUs to offload networking, storage and security functions from host CPUs.

Marvell AI chip deal: a fallback option

Google is also reported to be partnering with Marvell to develop two new AI chips: a memory processing unit designed to work alongside Google’s TPUs, and a new TPU built specifically for running AI models.

These chips are expected to support Google’s growing workload requirements, particularly rising AI inference demand, along with training-related tasks.

Banking on Anthropic’s Claude

Anthropic is one of Google’s most significant compute partners. The AI startup has signed a deal with Google Cloud to access multiple gigawatts of TPU capacity to run its Claude models and “serve extraordinary demand from customers worldwide.”

This marks Anthropic’s second major deal with Google Cloud, building on increased TPU capacity announced last October. The partnership will help Anthropic meet growing demand for its services.

For Google, the deal provides a large, predictable customer base, a new revenue stream, and further justification to scale its AI infrastructure across data centres, chips and system optimisation.

These partnerships not only strengthen Google’s chip-making capabilities but also help reduce costs and increase independence from external vendors. Together, they signal a broader shift from general-purpose hardware to tightly integrated AI infrastructure.

For Unparalleled coverage of India's Businesses and Economy – Subscribe to Business Today Magazine
