Unveiling ASUS liquid-cooled AI infrastructure through the Nano4 AI supercomputer project at NCHC

With global oil prices continuing to surge amid ongoing war, the global market faces formidable turbulence in the 2026 world economy. Major Taiwanese electronics manufacturers and supply chains are pivoting to comprehensively review their AI strategies, adopting a particularly aggressive stance in the strategic deployment of cloud server systems and AI infrastructure in pursuit of an operational breakthrough in a market whose outlook remains far from optimistic.

ASUS is unveiling its complete Professional Services for Sovereign AI. Since its first project in 2018, which produced Taiwan's first AI supercomputer, Taiwania 2, ASUS has been actively developing sovereign AI infrastructure and providing customized AI solutions to clients. Delivering trusted AI with total flexibility, from rack-scale AI factories and desktop AI supercomputing to edge AI and enterprise AI deployments, the company is redefining sovereign AI solutions in a way that reinforces its position as a global leader in AI-driven digital transformation.

On April 1, 2026, ASUS joins the DIGITIMES online seminar titled "Leading the Way in Sovereign AI: A Look at Next-Generation AI Computing and Storage Architecture from Supercomputing Performance". The event opens with the construction and practical application of next-generation supercomputers at the National Center for High-Performance Computing (NCHC) of the National Applied Research Laboratories. This collaborative project shows ASUS powering NCHC to build Taiwan's AI supercomputer and the next generation of high-performance computing (HPC) systems.

As AI and HPC workloads push compute density and power consumption beyond the capabilities of traditional air cooling, the flagship ASUS AI POD systems redefine the rack-scale architecture, storage design, and liquid cooling required for massive AI workloads. This momentum continues to maximize computing power density through the NVIDIA HGX™ and Blackwell platforms, offering insight into solving heat-dissipation bottlenecks with advanced liquid-cooling technology to meet the performance and energy-efficiency requirements of high-computing environments.

Practical experience building Taiwan's leading high-performance computing infrastructure

The first session features a keynote by Nobel Hsia, ASUS Deputy Manager of Product Planning, titled "Leading the Sovereign AI Wave: From the National Center for High-performance Computing (NCHC) projects to ASUS's Forward-Looking Infrastructure and Storage Solutions". ASUS's sovereign AI offering targets the opportunities emerging as countries strive to secure data sovereignty and security through in-house data processing and computing capabilities, and to drive the transformation of scientific research and industry. ASUS has partnered with NVIDIA to build massive AI supercomputer systems under an "ALL IN on AI" strategy. Through the maximum computing power density provided by the NVIDIA HGX™ and Blackwell platforms, ASUS has deployed a complete portfolio ranging from rack-scale AI factories to edge and enterprise deployments. With a proven track record of supplying AI infrastructure at the B300 and GB300 levels, ASUS has become a trusted partner providing complete end-to-end solutions.

It is worth noting that ASUS has ten years of experience collaborating with the NCHC on supercomputer projects. Its sovereign AI expertise is proven by several successful national-level AI deployments, including flagship supercomputing initiatives such as Taiwania 2 and Forerunner 1, cementing its leadership in high-performance computing and AI systems. Through ASUS Professional Services, the collaboration covers everything from design and deployment to operation.

The latest Nano4 (Crystal 26) project built a next-generation AI supercomputer on an NVIDIA HGX™ H200 cluster server system, capable of handling complex large language models (LLMs), deep learning, and advanced HPC workloads. In this new project, ASUS built Taiwan's first AI supercomputer based on the NVIDIA GB200 NVL72 system architecture with direct liquid-cooling technology. The engineering team played a key role in server delivery, large-scale computing architecture planning, deployment, and optimization, laying the foundation for ASUS to expand its overseas AI deployments and demonstrating its ability to build computing power from the national to the international level.

Opening the Agentic AI Frontier: ASUS Supports the NVIDIA Vera Rubin Platform and Infrastructure

In response to the advent of the new-generation NVIDIA Vera Rubin platform and its accompanying infrastructure architecture, ASUS has developed the next-generation ASUS AI POD, highlighting its proficiency in liquid-cooled AI solutions. Spanning rack-scale AI factories and data-center servers to desktop workstations and edge AI devices, the new design delivers end-to-end AI computing power and infrastructure, specifically targeting trillion-parameter models and million-token contexts while maximizing efficiency across power, memory, and compute.

Hsia began by introducing the ASUS flagship XA VR721-E3 architecture. The system supports the NVIDIA Vera Rubin NVL72 platform and is purpose-built for large-scale AI model inference and training, delivering massive performance for large-scale AI factories while also accommodating the specific workloads required for agentic AI. Designed as a 100% liquid-cooled, rack-scale system, it features a thermal design power (TDP) of up to 227 kW, fully satisfying the immense performance demands of computing AI models with trillions of parameters.
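
To put the 227 kW figure in perspective, a simple energy balance (Q = ṁ·c_p·ΔT) indicates the coolant flow such a rack implies. The sketch below is illustrative only: the water-based coolant properties and the 10 K inlet-to-outlet temperature rise are assumptions, not ASUS specifications.

```python
# Back-of-the-envelope coolant flow for a 227 kW liquid-cooled rack.
# Assumptions (not from the article): water-based coolant with
# c_p ~= 4186 J/(kg*K) and a 10 K inlet-to-outlet temperature rise.

RACK_POWER_W = 227_000   # 227 kW rack TDP cited for the rack-scale system
SPECIFIC_HEAT = 4186     # J/(kg*K), water
DELTA_T = 10.0           # K, assumed coolant temperature rise

mass_flow = RACK_POWER_W / (SPECIFIC_HEAT * DELTA_T)  # kg/s
liters_per_minute = mass_flow * 60                    # ~1 kg of water ~= 1 L

print(f"Required coolant flow: {mass_flow:.1f} kg/s (~{liters_per_minute:.0f} L/min)")
# -> roughly 5.4 kg/s, or about 325 L/min of coolant through the rack
```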

Furthermore, to address rigorous enterprise-grade data-center demands, ASUS has simultaneously launched the XA NR series. These products support the NVIDIA HGX™ Rubin NVL8 architecture, featuring eight Rubin GPUs interconnected via sixth-generation NVLink, with each GPU delivering up to 800 GB/s of bandwidth. To facilitate a seamless and cost-effective transition to liquid cooling, ASUS offers two distinct solutions: the XA NR1I-E12L, an innovative hybrid-cooled option, and the XA NR1I-E12LR, a 100% liquid-cooled system.
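
The per-GPU figure also suggests the scale of the NVL8 interconnect as a whole. A quick calculation, treating the domain's aggregate bandwidth as the simple sum of per-GPU injection bandwidth (a simplification for illustration, not a vendor specification):

```python
# Aggregate NVLink bandwidth across an 8-GPU NVL8 domain.
# Only the 800 GB/s per-GPU figure comes from the article; summing
# per-GPU injection bandwidth is a simplification for illustration.

GPUS_PER_NODE = 8
PER_GPU_BW_GBPS = 800  # GB/s, the sixth-generation NVLink figure cited above

aggregate_gbps = GPUS_PER_NODE * PER_GPU_BW_GBPS
print(f"Aggregate NVLink bandwidth: {aggregate_gbps} GB/s ({aggregate_gbps / 1000:.1f} TB/s)")
# -> 6400 GB/s, i.e. 6.4 TB/s across the eight Rubin GPUs
```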

To support these powerful systems and democratize AI development, ASUS has also established a robust data ecosystem by partnering with NVIDIA-Certified storage providers. These storage solutions incorporate technologies such as JBOD, DPDK, and object storage, delivering scalable, resilient capacity for memory-intensive AI applications alongside integrated storage and operational management.
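
Object storage in ecosystems like this is commonly consumed through an S3-compatible API. The sketch below shows that pattern in generic terms; the endpoint, bucket name, and credentials are placeholders, not an ASUS or partner interface.

```python
# Minimal sketch: storing and retrieving a training artifact in an
# S3-compatible object store. Endpoint, bucket, and credentials are
# placeholders for illustration only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                       # placeholder credential
    aws_secret_access_key="SECRET_KEY",                   # placeholder credential
)

# Upload a model checkpoint, then read its metadata back to verify.
s3.upload_file("checkpoint.pt", "ai-datasets", "llm/checkpoint.pt")
response = s3.get_object(Bucket="ai-datasets", Key="llm/checkpoint.pt")
print(f"Stored object size: {response['ContentLength']} bytes")
```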

On the software front, ASUS provides a suite of one-stop platforms. Leveraging the ASUS Infrastructure Deployment Center (AIDC), ASUS automates the setup process, including ASUS Control Center (ACC) and BMC configuration, accelerating time-to-market for critical research resources. To address the full spectrum of requirements for system construction, deployment, and operations, ASUS provides expert consultation and a broad portfolio of tailor-made AI solutions while catering to the comprehensive lifecycle-management needs of the entire data center.
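
AIDC's internals are not public, but at the protocol level, fleet-scale BMC automation of this kind is typically built on the DMTF Redfish REST API. The following is a generic Redfish sketch of a per-node provisioning step; the BMC address and credentials are placeholders, and this is not an AIDC or ACC interface.

```python
# Generic DMTF Redfish sketch: enumerate systems behind a BMC and
# power on any node that is still off before OS deployment.
# The BMC address and credentials are placeholders, not AIDC specifics.
import requests

BMC = "https://10.0.0.50"      # placeholder BMC address
AUTH = ("admin", "password")   # placeholder credentials

# List the computer systems this BMC manages.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems["Members"]:
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system["Id"], system.get("PowerState"))
    if system.get("PowerState") == "Off":
        # Standard Redfish reset action to power the node on.
        requests.post(
            f"{BMC}{member['@odata.id']}/Actions/ComputerSystem.Reset",
            json={"ResetType": "On"}, auth=AUTH, verify=False,
        )
```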

The WEKA data platform is redefining AI storage economics

To meet the elastic data-storage requirements of AI workloads across diverse usage scenarios, the ASUS AI POD system incorporates a comprehensive storage solution featuring a high-speed, all-flash NVMe SSD architecture. ASUS has collaborated with ecosystem partners to develop a next-generation unified storage system with high-reliability storage servers and professional network validation. WEKA, the AI storage company, is one of those partners, providing high-performance, software-defined storage for GPU-accelerated AI and HPC environments that combines low latency with unified data management.

The second presentation of the session is delivered by Ray Wu, Senior Consultant for the Asia-Pacific region at WEKA, titled "Ultimate Data Empowerment: How WEKA Helps NCHC Build an AI-Accelerated Computing Storage Architecture". To address NCHC's requirements for extreme performance and energy efficiency, Wu highlighted WEKA's unified data management solution, which provides rapid scalability, emphasizes flexibility and intelligent adaptive capabilities, and effectively addresses the demands of a wide array of usage scenarios.

To address the requirements of diverse application environments in NCHC's Nano4, such as Kubernetes and Slurm, WEKA serves as the high-performance storage foundation through its efficient storage platform. Based on the NVIDIA AI Data Platform reference architecture, the solution employs an end-to-end system to accelerate data-processing performance for HPC applications, helping NCHC optimize the deployment of high-efficiency AI computing and dramatically reducing the time required for AI application deployment and development from months to mere minutes.
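
In a Kubernetes environment, a shared parallel file system such as WEKA is typically consumed through a CSI driver and a StorageClass. The sketch below uses the official Kubernetes Python client to request shared storage; the "weka-fs" StorageClass name and the capacity are placeholders, not NCHC's actual configuration.

```python
# Minimal sketch: requesting shared storage from a CSI-backed
# StorageClass via the official Kubernetes Python client.
# The "weka-fs" class name and 10Ti size are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],   # shared across many training pods
        storage_class_name="weka-fs",     # placeholder StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "10Ti"}),
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PersistentVolumeClaim 'training-data' created")
```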

The ASUS-WEKA storage solution demonstrates NCHC's technical excellence in leveraging its ecosystem and integrated services to rapidly deliver the required low-latency, scalable, high-throughput capabilities and swiftly support the next generation of agent-based AI applications. WEKA solutions empower enterprises to move from experimentation to full-scale operations, making AI applications economically viable and maximizing performance across a wide spectrum of fields, from next-generation AI agent systems to AI-enabled healthcare.

In this event, ASUS showcases fully liquid-cooled AI infrastructure providing the critical thermal management required for the next-generation NVIDIA Vera Rubin NVL72 platform. By efficiently dissipating heat from high-performance CPUs, GPUs, and accelerator-dense racks, ASUS significantly reduces energy consumption and supports unprecedented rack density, enabling enterprises and cloud service providers to build high-performance, energy-efficient, large-scale AI clusters with dramatically reduced PUE and TCO.
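
PUE (power usage effectiveness) is the ratio of total facility power to IT equipment power, so values closer to 1.0 mean less energy spent on overhead such as cooling. The comparison below is purely illustrative: the overhead figures are assumptions, not ASUS or NCHC measurements.

```python
# Illustrative PUE comparison. The cooling-overhead figures are
# assumptions for demonstration, not ASUS or NCHC measurements.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_load_kw

IT_LOAD_KW = 1000.0  # assumed IT (compute) load

air_cooled = pue(IT_LOAD_KW + 500.0, IT_LOAD_KW)     # assumed 500 kW overhead
liquid_cooled = pue(IT_LOAD_KW + 150.0, IT_LOAD_KW)  # assumed 150 kW overhead

print(f"Air-cooled PUE:    {air_cooled:.2f}")    # 1.50
print(f"Liquid-cooled PUE: {liquid_cooled:.2f}") # 1.15
```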

【White paper download】Empowering Scalable AI Reasoning with ASUS AI POD featuring

【Webinar on-demand】Bridging NCHC Success to NVIDIA Vera Rubin Architecture