
ASUS and NVIDIA Launch Next-Gen AI POD Infrastructure at Computex 2025

by Chelsea Spears

At Computex 2025 in Taipei, ASUS made waves by announcing its latest AI POD infrastructure in collaboration with NVIDIA. These next-gen systems are designed for enterprises ready to scale AI adoption, harness high-performance computing (HPC), and deploy autonomous AI agents.


Purpose-Built for the Future of AI and HPC

ASUS’s new AI PODs are developed with NVIDIA Enterprise AI Factory reference architectures. The goal: to streamline deployment, boost performance, and simplify management. They’re ready for demanding workloads like:

  • Generative AI and large language models (LLMs)
  • Real-time agentic AI decision-making
  • Simulation and 3D visualization
  • Scalable inference and training

Grace Blackwell, HGX, and MGX — Certified and Scalable

These NVIDIA-Certified Systems come in multiple configurations to suit any enterprise data center:

High-Density Rack Systems with GB200 and GB300

  • Liquid-cooled option: 576 GPUs across 8 racks
  • Air-cooled option: 72 GPUs per rack
  • Includes NVIDIA Quantum InfiniBand or Spectrum-X Ethernet for low-latency, high-bandwidth performance

These configurations are optimized for massive AI model training, simulation, and inference, with built-in flexibility for different cooling methods.
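The scaling math behind the liquid-cooled option is straightforward; a quick sanity check (values taken from the figures above):

```python
# Liquid-cooled AI POD scale: 72 GPUs per rack across 8 racks.
GPUS_PER_RACK = 72
RACKS = 8

total_gpus = GPUS_PER_RACK * RACKS
print(total_gpus)  # 576
```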


MGX and HGX Architectures for Complex Workloads

ASUS also offers MGX-compliant rack designs built around its ESC8000 series:

  • Dual Intel Xeon 6 CPUs
  • NVIDIA RTX PRO 6000 Blackwell GPUs
  • NVIDIA ConnectX-8 SuperNIC with up to 800Gb/s throughput

Perfect for immersive workloads, 3D tasks, and AI-powered engineering applications.
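To put that 800Gb/s figure in perspective, here is a rough back-of-the-envelope calculation of transfer time at the theoretical line rate. This is a simplified sketch: it ignores protocol overhead and assumes the link is fully saturated, and the 1 TB checkpoint size is an illustrative value, not from the announcement.

```python
def transfer_time_seconds(data_bytes: float, link_gbps: float) -> float:
    """Time to move data_bytes over a link running at its theoretical line rate."""
    bytes_per_second = link_gbps * 1e9 / 8  # convert Gb/s to bytes/s
    return data_bytes / bytes_per_second

# Moving a hypothetical 1 TB (10^12-byte) model checkpoint over an 800 Gb/s link:
print(transfer_time_seconds(1e12, 800))  # 10.0 seconds at line rate
```

Real-world throughput will be lower once RDMA/Ethernet framing and congestion come into play, but the order of magnitude shows why this class of NIC matters for multi-node training.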

For Advanced AI Training and Tuning: HGX Systems

  • ASUS XA NB3I-E12 and ESC NB8-E11
  • Powered by NVIDIA HGX B300 and HGX B200
  • Supports both air and liquid cooling
  • Optimized for LLM fine-tuning and large-scale inference

Full-Stack Integration for Agentic AI and Real-Time Simulation

ASUS’s infrastructure supports the full AI lifecycle — from training to real-time deployment — with seamless integration into:

  • NVIDIA AI Enterprise
  • NVIDIA Omniverse
  • ASUS RS501A-E12-RS12U and VS320D series for certified networking and storage

Add to that:

  • SLURM-based workload scheduling
  • NVIDIA UFM for Quantum InfiniBand management
  • WEKA Parallel File System for lightning-fast throughput
  • ASUS ProGuard SAN Storage for secure, enterprise-grade data handling
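For the SLURM-based scheduling mentioned above, a multi-node GPU training job would typically be submitted with a batch script along these lines. This is a generic sketch, not ASUS-specific: the job name, node counts, and `train.py` entry point are all illustrative placeholders.

```shell
#!/bin/bash
#SBATCH --job-name=llm-finetune      # hypothetical job name
#SBATCH --nodes=2                    # two HGX nodes
#SBATCH --gpus-per-node=8            # 8 GPUs per node (e.g. an HGX B200 system)
#SBATCH --ntasks-per-node=8          # one task per GPU
#SBATCH --time=04:00:00              # wall-clock limit

# Launch one training process per GPU; train.py is a placeholder script.
srun python train.py
```

Submitted with `sbatch job.sh`, SLURM handles placement across the racks while UFM and the WEKA file system (listed above) cover fabric management and storage throughput.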

Simplified Management with ASUS Tools

To help enterprises manage, orchestrate, and scale their AI infrastructure, ASUS provides:

  • ASUS Control Center (Data Center Edition)
  • ASUS Infrastructure Deployment Center (AIDC)

These tools help reduce operational complexity, while L11 (rack-level) and L12 (data-center-level) validation ensures reliable deployment across large enterprise environments.


ASUS Sets the Standard for Enterprise AI Infrastructure

With this launch, ASUS is reinforcing its position as a key player in enterprise AI and HPC. Their AI POD ecosystem delivers:

  • Extreme scalability
  • Future-ready GPU architecture
  • Air and liquid cooling flexibility
  • Seamless full-stack integration
  • Reduced time-to-deployment

For companies diving into AI — whether it's LLMs, simulations, or real-time agentic systems — ASUS is delivering the infrastructure to make it happen.


Final Thoughts

ASUS and NVIDIA’s partnership marks a major milestone in enterprise AI deployment. Their next-gen AI PODs are more than just hardware — they’re full ecosystems designed to meet the growing demands of AI, now and in the future.
