At the OCP Global Summit 2024, MSI said the Nvidia MGX platform's flexible architecture enables it to deliver purpose-built solutions optimised for AI, HPC, and LLM workloads.
Built on this platform, MSI's AI server solutions provide exceptional scalability, efficiency, and enhanced GPU density, critical factors in meeting the growing computational demands of AI workloads.
Danny Hsu, General Manager of MSI's Enterprise Platform Solutions, said the Nvidia partnership strengthens MSI's position in the AI and HPC markets, helping customers reach new levels of efficiency and power optimisation in their data centres, especially for demanding LLM and AI workloads.
MSI introduced two new Nvidia MGX-based AI servers: the CG480-S5063 and the CG290-S8023. The CG480-S5063, a 4U Nvidia MGX-based AI server featuring dual Intel Xeon 6 processors, 8 full-height, full-length (FHFL) dual-width GPU slots, and 32 DDR5 DIMM slots, offers massive memory bandwidth and compute power, ensuring scalability for AI, LLM, and data analytics workloads.
The CG290-S8023, a 2U Nvidia MGX-based AI server powered by the Nvidia GB200 NVL2 platform, combines two Nvidia Grace CPUs and two Nvidia Blackwell GPUs with 1,344GB of on-package memory.
MSI claims this delivers exceptional performance for AI inference, training, and LLM processing, optimising data throughput and minimising latency for efficient, high-density AI applications and future-proof AI infrastructure in data centres.