Intel's Xeon 6 processors have been selected as the host CPU for Nvidia's DGX Rubin NVL8 system — a move announced at GTC 2026 that gives concrete form to the two companies' strategic alliance. The partnership's significance shows in both the timing of the product rollout and the revenue opportunities it unlocks.
Intel said Xeon was chosen for the DGX Rubin NVL8 based on system-level advantages, including support for high-speed memory, balanced performance across workloads, lower total cost of ownership, and a mature enterprise software ecosystem. Enhanced PCIe and I/O capabilities further support stable operation in high-bandwidth, low-latency environments.
DGX Rubin NVL8 is Nvidia's next-generation flagship AI server platform, designed for emerging workloads such as agentic AI and real-time inference.
The host CPU takes center stage
Jeff McVeigh, corporate vice president and general manager of Intel's Data Center Strategic Programs, said AI is shifting from large-scale training toward pervasive real-time inference, driven by the rise of agentic AI applications.

Jeff McVeigh, CVP & GM of Intel's Data Center Strategic Programs. Credit: HPCwire
He said Xeon 6 delivers advantages in performance, energy efficiency, and x86 ecosystem compatibility — enabling customers to scale inference workloads more effectively. The host CPU now plays a central role in GPU-accelerated systems, directly influencing orchestration efficiency, memory access speed, model security, and overall throughput.
Building on a proven foundation
The collaboration builds on established x86-based cooperation between Intel and Nvidia: Xeon 6776P processors were previously adopted in the DGX B300 Blackwell platform, reinforcing Intel's standing in AI server infrastructure.
As inference workloads grow more demanding, the need for strong single-thread performance and high memory bandwidth continues to rise.
The Rubin platform shares the same architectural foundation as DGX B300, ensuring continuity across the Blackwell and Rubin generations. Intel said this continuity allows performance gains and system-level experience to carry over into next-generation AI deployments, supporting broader rollout across data center, cloud, and edge environments.
Enabling heterogeneous inference
Intel said Xeon 6 supports Nvidia Dynamo, enabling heterogeneous inference between CPUs and future GPUs, high reliability in mission-critical environments, and efficient scheduling within GPU-accelerated systems.
Article translated by Levi Li and edited by Jerry Chen