Intel and Google are doubling down on AI CPUs with an expanded partnership
Intel expands its decades-old partnership with Google to reclaim its reputation in a market that is currently dominated by NVIDIA.

- Apr 10, 2026
- Updated Apr 10, 2026, 11:03 AM IST
Intel Corporation has announced an expansion of its long-standing partnership with Google Cloud, with a sharper focus on AI infrastructure. The partnership will centre on the development of custom Infrastructure Processing Units (IPUs) and the integration of Intel’s Xeon 6 processors into Google’s data centres.
The partnership comes at a critical time, as Intel is fighting to reclaim its reputation in a market currently dominated by NVIDIA. The company is now placing greater emphasis on "heterogeneous" AI systems, which rely not only on specialised accelerators but also on CPUs and IPUs acting as the "brains" that manage massive AI workloads.
Also read: Intel teams up with Elon Musk for Terafab AI chip project: Everything you need to know
Intel and Google AI CPU deal
According to Intel’s press release, the deal will expand Google Cloud’s use of Intel’s Xeon 6 processors in its data centres, where they will power cloud instances such as C4 and N4. These systems are designed to handle AI workloads ranging from training large AI models and running fast inference to everyday computing tasks.
Furthermore, Intel and Google will work closely to build custom ASIC-based Infrastructure Processing Units (IPUs) to support networking, storage management, and security. These chips will offload infrastructure tasks that were traditionally handled by CPUs.
"CPUs and infrastructure acceleration remain a cornerstone of AI systems—from training orchestration to inference and deployment,” said Amin Vahdat, SVP & Chief Technologist, AI Infrastructure, Google.
“AI is reshaping how infrastructure is built and scaled,” said Lip-Bu Tan, CEO of Intel. “Scaling AI requires more than accelerators; it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”
Also read: Elon Musk’s TeraFab: Inside the $25 billion bet to build chips, robots and orbital AI
Elon Musk and Intel's partnership, and a bigger manufacturing bet
As Intel strengthens its cloud ties, the company has also signed a blockbuster $25 billion deal with Elon Musk to build the "Terafab," a massive chip manufacturing facility in Texas. The facility plans to produce 2-nanometer chips at scale, targeting an output of 100,000 wafers per month, and is expected to supply computing capacity for xAI, Tesla’s autonomous driving, and SpaceX.
For Unparalleled coverage of India's Businesses and Economy – Subscribe to Business Today Magazine
