From Rack Integration to AI and Cloud Systems: MSI Debuts Full-Spectrum Server Portfolio at COMPUTEX 2025
TAIPEI, May 20, 2025 /PRNewswire/ — MSI, a global leader in high-performance server solutions, returns to COMPUTEX 2025 (Booth #J0506) with its most comprehensive lineup yet. Showcasing rack-level integration, modular cloud infrastructure, AI-optimized GPU systems, and enterprise server platforms, MSI presents fully integrated EIA, OCP ORv3, and NVIDIA MGX racks, DC-MHS-based Core Compute servers, and the new NVIDIA DGX Station. Together, these systems underscore MSI’s growing capability to deliver deployment-ready, workload-tuned infrastructure across hyperscale, cloud, and enterprise environments.
“The future of data infrastructure is modular, open, and workload-optimized,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions. “At COMPUTEX 2025, we’re showing how MSI is evolving into a full-stack server provider, delivering integrated platforms that help our customers scale AI, cloud, and enterprise deployments with greater efficiency and flexibility.”
Full-Rack Integration from Cloud to AI Data Centers
MSI demonstrates its rack-level integration expertise with fully configured EIA 19″, OCP ORv3 21″, and NVIDIA MGX-based AI racks, engineered to power modern infrastructure from cloud-native compute to AI-optimized deployments. Pre-integrated and thermally optimized, each rack is deployment-ready and tuned for specific workloads. Together, they highlight MSI’s capability to deliver complete, workload-optimized infrastructure from design to deployment.
- The EIA rack delivers dense compute for private cloud and virtualization environments, integrating core infrastructure in a standard 19″ format.
- The OCP ORv3 rack features a 21″ open chassis, enabling higher compute and storage density, efficient 48V power delivery, and OpenBMC-compatible management, ideal for hyperscale and software-defined data centers. A minimal management-query sketch follows this list.
- The enterprise AI rack with NVIDIA MGX, built on the NVIDIA Enterprise Reference Architecture, enables scalable GPU infrastructure for AI and HPC. Featuring modular units and high-throughput networking powered by NVIDIA Spectrum™-X, it supports multi-node scalable unit deployments optimized for large-scale training, inference, and hybrid workloads.
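OpenBMC-compatible management, as referenced for the ORv3 rack above, typically exposes the DMTF Redfish REST API. Below is a minimal sketch of enumerating a rack's nodes over Redfish; the BMC address and credentials are hypothetical placeholders, not MSI defaults.

```python
# Minimal sketch: enumerating the nodes behind an OpenBMC-compatible BMC via
# the DMTF Redfish REST API. The BMC address and credentials are hypothetical
# placeholders; verify=False skips TLS checks and is for illustration only.
import requests

BMC = "https://10.0.0.42"      # hypothetical BMC address
AUTH = ("admin", "password")   # hypothetical credentials

# The Redfish service root always lives at /redfish/v1.
root = requests.get(f"{BMC}/redfish/v1", auth=AUTH, verify=False).json()

# Walk the Systems collection and report each node's model and power state.
systems = requests.get(BMC + root["Systems"]["@odata.id"], auth=AUTH, verify=False).json()
for member in systems["Members"]:
    node = requests.get(BMC + member["@odata.id"], auth=AUTH, verify=False).json()
    print(node.get("Name"), node.get("Model"), node.get("PowerState"))
```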
Core Compute and Open Compute Servers for Modular Cloud Infrastructure
MSI expands its Core Compute lineup with six DC-MHS servers powered by AMD EPYC 9005 Series and Intel Xeon 6 processors in 2U4N and 2U2N configurations. Designed for scalable cloud deployments, the portfolio includes high-density nodes with liquid or air cooling and compact systems optimized for power and space efficiency. With support for OCP DC-SCM, PCIe 5.0, and DDR5 DRAM, these servers enable modular, cross-platform integration and simplified management across private, hybrid, and edge cloud environments.
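As a rough illustration of what 2U4N density means at rack scale, the sketch below computes node and core counts for a rack of such systems; the usable rack height and per-node core count are assumptions for illustration, not MSI specifications.

```python
# Back-of-the-envelope sketch: node and core density for 2U4N Core Compute
# systems in a standard EIA rack. Usable rack height and per-node core count
# are illustrative assumptions, not MSI specifications.
RACK_U = 42            # assumed usable rack units (less after switches/PDUs)
CHASSIS_U = 2          # each Core Compute chassis is 2U
NODES_PER_CHASSIS = 4  # 2U4N configuration
CORES_PER_NODE = 128   # assumed; depends on the EPYC 9005 / Xeon 6 SKU chosen

chassis = RACK_U // CHASSIS_U
nodes = chassis * NODES_PER_CHASSIS
print(f"{chassis} chassis -> {nodes} nodes -> {nodes * CORES_PER_NODE} cores per rack")
# 21 chassis -> 84 nodes -> 10752 cores per rack
```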
To further enhance Open Compute deployment flexibility, MSI introduces the CD281-S4051-X2, a 2OU 2-Node ORv3 Open Compute server based on DC-MHS architecture. Optimized for hyperscale cloud infrastructure, it supports a single AMD EPYC 9005 processor per node, offers high storage density with twelve E3.S NVMe bays per node, and integrates efficient 48V power delivery and OpenBMC-compatible management, making it ideal for software-defined and power-conscious cloud environments.
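For a sense of that storage density, a quick weights-and-measures sketch, assuming a common E3.S drive capacity rather than an MSI-specified one:

```python
# Quick sizing sketch for the CD281-S4051-X2: raw NVMe capacity per 2OU
# chassis, given twelve E3.S bays per node across two nodes. The per-drive
# capacity is an assumed commodity E3.S SSD size, not an MSI specification.
NODES = 2
BAYS_PER_NODE = 12
DRIVE_TB = 7.68  # assumed E3.S drive capacity in TB

drives = NODES * BAYS_PER_NODE
print(f"{drives} drives -> {drives * DRIVE_TB:.2f} TB raw per 2OU chassis")
# 24 drives -> 184.32 TB raw per 2OU chassis
```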
AMD EPYC 9005 Series Processor-Based Platform for Dense Virtualization and Scale-Out Workloads
- CD270-S4051-X4 (Liquid Cooling)
A liquid-cooled 2U 4-Node server supporting up to 500W TDP. Each node features 12 DDR5 DIMM slots and 2 U.2 NVMe drive bays, ideal for dense compute in thermally constrained cloud deployments.
- CD270-S4051-X4 (Air Cooling)
This air-cooled 2U 4-Node system supports up to 400W TDP and delivers energy-efficient compute, with 12 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Designed for virtualization, container hosting, and private cloud clusters.
- CD270-S4051-X2
A 2U 2-Node server optimized for space efficiency and compute density. Each node includes 12 DDR5 DIMM slots and 6 U.2 NVMe bays, making it suitable for general-purpose virtualization and edge cloud nodes.
Intel Xeon 6 Processor-Based Platform for Containerized and General-Purpose Cloud Services
- CD270-S3061-X4
A 2U 4-Node Intel Xeon 6700/6500 server supporting 16 DDR5 DIMM slots and 3 U.2 NVMe bays per node. Ideal for containerized services and mixed cloud workloads requiring balanced compute density.
- CD270-S3061-X2
This compact 2U 2-Node Intel Xeon 6700/6500 system features 16 DDR5 DIMM slots and 6 U.2 NVMe bays per node, delivering strong compute and storage capabilities for core infrastructure and scalable cloud services.
- CD270-S3071-X2
A 2U 2-Node Intel Xeon 6900 system designed for I/O-heavy workloads, with 12 DDR5 DIMM slots and 6 U.2 bays per node. Suitable for storage-centric and data-intensive cloud applications.
AI Platforms with NVIDIA MGX and DGX Station
MSI presents a comprehensive lineup of AI-ready platforms, including NVIDIA MGX-based servers and the DGX Station built on NVIDIA Grace and Blackwell architecture. The MGX lineup spans 4U and 2U form factors optimized for high-density AI training and inference, while the DGX Station delivers datacenter-class performance in a desktop chassis for on-premises model development and edge AI deployment.
AI Platforms with NVIDIA MGX
- CG480-S5063 (Intel) / CG480-S6053 (AMD)
The 4U MGX GPU server is available in two CPU configurations: the CG480-S5063 with dual Intel Xeon 6700/6500 processors and the CG480-S6053 with dual AMD EPYC 9005 Series processors, offering flexibility across CPU ecosystems. Both systems support up to 8 FHFL dual-width PCIe 5.0 GPUs in air-cooled datacenter environments, making them ideal for deep learning training, generative AI, and high-throughput inferencing. The Intel-based CG480-S5063 features 32 DDR5 DIMM slots and supports up to 20 front E1.S NVMe bays, ideal for memory- and I/O-intensive deep learning pipelines, including large-scale LLM workloads, NVIDIA OVX™, and digital twin simulations. A minimal multi-GPU training sketch follows this list.
- CG290-S3063
A compact 2U MGX server powered by a single Intel Xeon 6700/6500 processor, supporting 16 DDR5 DIMM slots and 4 FHFL dual-width GPU slots. Designed for edge inferencing and lightweight AI training, it suits space-constrained deployments where inference latency and power efficiency are key.
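To illustrate the data-parallel training these up-to-8-GPU MGX systems target, here is a minimal sketch using PyTorch DistributedDataParallel; the model, data, and hyperparameters are toy placeholders, not a tuned workload.

```python
# Minimal sketch of data-parallel training across the up-to-8 GPUs in a
# CG480-class MGX system, using PyTorch DistributedDataParallel. Model, data,
# and hyperparameters are toy placeholders. Launch one process per GPU, e.g.:
#   torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")              # NCCL backend for GPU collectives
rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(rank)

model = DDP(torch.nn.Linear(1024, 1024).cuda(rank), device_ids=[rank])
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):                       # toy training loop
    x = torch.randn(32, 1024, device=rank)
    loss = model(x).square().mean()          # placeholder loss
    opt.zero_grad()
    loss.backward()                          # DDP all-reduces gradients here
    opt.step()
    if rank == 0:
        print(step, loss.item())

dist.destroy_process_group()
```

DDP overlaps gradient synchronization with the backward pass, one reason dense multi-GPU nodes like these suit data-parallel training.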
DGX Station
The CT60-S8060 is a high-performance AI station built on the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, delivering up to 20 PFLOPS of AI performance and 784GB of unified memory. It also features the NVIDIA ConnectX-8 SuperNIC, enabling up to 800Gb/s networking for high-speed data transfer and multi-node scaling. Designed for on-prem model training and inferencing, the system supports multi-user workloads and can operate as a standalone AI workstation or a centralized compute resource for R&D teams.
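As a rough guide to what 784GB of unified memory means for model capacity, the weights-only sizing sketch below estimates parameter counts at several precisions; it ignores KV cache, activations, and runtime overhead, so the figures are illustrative, not NVIDIA or MSI guidance.

```python
# Rough sizing sketch: how many model parameters fit in the DGX Station's
# 784GB of unified memory, counting weights only. KV cache, activations, and
# runtime overhead are ignored, so practical limits are lower.
UNIFIED_MEMORY_GB = 784
BYTES_PER_PARAM = {"FP16": 2, "FP8": 1, "FP4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    params_billions = UNIFIED_MEMORY_GB / nbytes  # GB / (bytes/param) = 1e9 params
    print(f"{precision}: ~{params_billions:.0f}B parameters (weights only)")
```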