Google recently announced an expanded collaboration with Intel to increase the deployment of Xeon processors in its cloud AI data centers. This move reflects the rapidly growing demand for CPUs in AI infrastructure, prompting Google to secure supply chains early to avoid future chip shortages that could hinder AI investments.
The development confirms that CPUs are gaining significant market traction within AI data centers, opening new business avenues for ASIC vendors.
In recent years, cloud service providers (CSPs) and ASIC manufacturers have largely focused resources on AI accelerator chips, while only a few leaders like Google and AWS have simultaneously pursued custom CPU designs.
Over the past six months, CPU demand has surged because of the critical roles CPUs play in AI agent workloads, underscoring how essential they remain across the industry. Concerns about supply constraints amid sudden demand spikes have intensified, driving broader strategic planning across the semiconductor ecosystem.
Chipmakers such as Nvidia have highlighted the capabilities of their Vera CPU, while Arm broke with its traditional licensing model by launching AI-focused CPUs, a clear response to customer demand.
Industry insiders note that beyond the established x86 giants Intel and Advanced Micro Devices (AMD), Arm-based CPUs are increasingly sought after by CSPs as alternative solutions. Nvidia's and Arm's products, along with Google's and AWS's custom CPUs, all leverage the Arm architecture.
This trend suggests rapid growth in CPU-related ASIC development driven by both Arm and RISC-V ecosystems. Meanwhile, China's market is witnessing a surge in x86 chip projects aimed at reducing reliance on US technology, further expanding design activity.
Regardless, the rising demand for custom CPUs represents a crucial opportunity for ASIC companies.
Semiconductor supply chain sources add that startups introducing standard AI CPU products may also find viable entry points into the market.
Given the industry's near-term preference for diversified suppliers, and with AI agent applications leaning toward inference at the edge, customers may forgo pricier offerings from the major players.
Article translated by Charlene Chen and edited by Jack Wu