AMD expanded its portfolio of 4th Gen EPYC CPUs today with the introduction of not only three new EPYC 97X4 Series processors, but also three new EPYC 9004 Series processors that feature 3D V-Cache. The former offers up to 128 Zen 4 cores, while the latter offers up to 96 Zen 4 cores and is complemented by up to 1,152 MB of L3 cache. AMD also announced the Instinct MI300X accelerator today in an apparent bid to compete against NVIDIA in the generative AI space.
“In an era of workload optimized compute, our new CPUs are pushing the boundaries of what is possible in the data center, delivering new levels of performance, efficiency, and scalability,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. “We closely align our product roadmap to our customers’ unique environments, and each offering in the 4th Gen AMD EPYC family of processors is tailored to deliver compelling and leadership performance in general purpose, cloud native or technical computing workloads.”
“Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers,” said AMD Chair and CEO Dr. Lisa Su. “AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware.”
4th Gen AMD EPYC 97X4 Series Processors
4th Gen AMD EPYC Processors with AMD 3D V-Cache Technology
From an AMD press release:
Advancing Cloud Native Computing (EPYC 97X4 Series)
Cloud native workloads are a fast-growing class of applications designed with cloud architecture in mind and are developed, deployed and updated rapidly. The AMD EPYC 97X4 processors, with up to 128 cores, deliver up to 3.7x throughput performance for key cloud native workloads compared to Ampere. Additionally, 4th Gen AMD EPYC processors with “Zen 4c” cores provide customers up to 2.7x better energy efficiency and support up to 3x more containers per server to drive cloud native applications at the greatest scale.
At the “Data Center and AI Technology Premiere,” AMD was joined by Meta, who discussed how these processors are well suited to mainstay Meta applications such as Instagram and WhatsApp; how Meta is seeing impressive performance gains with 4th Gen AMD EPYC 97X4 processors compared to 3rd Gen AMD EPYC across various workloads, along with substantial TCO improvements; and how AMD and Meta optimized the EPYC CPUs for Meta’s power-efficiency and compute-density requirements.
Exceptional Technical Computing Performance (EPYC 9004 Series with 3D V-Cache)
Technical computing enables faster design iterations and more robust simulations to help businesses design new and compelling products. 4th Gen AMD EPYC processors with AMD 3D V-Cache technology further extend the AMD EPYC 9004 Series of processors to deliver the world’s best x86 CPU for technical computing workloads such as computational fluid dynamics (CFD), finite element analysis (FEA), electronic design automation (EDA) and structural analysis. With up to 96 “Zen 4” cores and an industry-leading 1GB+ of L3 cache, 4th Gen AMD EPYC processors with AMD 3D V-Cache can significantly speed up product development by delivering up to double the design jobs per day in Ansys CFX.
On stage at the “Data Center and AI Technology Premiere,” Microsoft announced the general availability of Azure HBv4 and HX instances, powered by 4th Gen AMD EPYC processors with AMD 3D V-Cache. Optimized for the most demanding HPC applications, the newest instances deliver performance gains of up to 5x when compared to the previous generation HBv3 and scale to hundreds of thousands of CPU cores.
From an AMD press release:
AMD AI Platform – The Pervasive AI Vision
Today, AMD unveiled a series of announcements showcasing its AI Platform strategy, giving customers a cloud-to-edge-to-endpoint portfolio of hardware products, with deep industry software collaboration, to develop scalable and pervasive AI solutions.
- Introducing the World’s Most Advanced Accelerator for Generative AI. AMD revealed new details of the AMD Instinct MI300 Series accelerator family, including the introduction of the AMD Instinct MI300X accelerator, the world’s most advanced accelerator for generative AI. The MI300X is based on the next-gen AMD CDNA 3 accelerator architecture and supports up to 192 GB of HBM3 memory to provide the compute and memory efficiency needed for large language model training and inference for generative AI workloads. With the large memory of AMD Instinct MI300X, customers can now fit large language models such as Falcon-40B, a 40-billion-parameter model, on a single MI300X accelerator. AMD also introduced the AMD Instinct Platform, which brings together eight MI300X accelerators into an industry-standard design for the ultimate solution for AI inference and training. The MI300X is sampling to key customers starting in Q3. AMD also announced that the AMD Instinct MI300A, the world’s first APU accelerator for HPC and AI workloads, is now sampling to customers.
- Bringing an Open, Proven and Ready AI Software Platform to Market. AMD showcased the ROCm software ecosystem for data center accelerators, highlighting the readiness and collaborations with industry leaders to bring together an open AI software ecosystem. PyTorch discussed the work between AMD and the PyTorch Foundation to fully upstream the ROCm software stack, providing immediate “day zero” support for PyTorch 2.0 with ROCm release 5.4.2 on all AMD Instinct accelerators. This integration empowers developers with an extensive array of AI models powered by PyTorch that are compatible and ready to use “out of the box” on AMD accelerators. Hugging Face, the leading open platform for AI builders, announced that it will optimize thousands of Hugging Face models on AMD platforms, from AMD Instinct accelerators to AMD Ryzen and AMD EPYC processors, AMD Radeon GPUs and Versal and Alveo adaptive processors.
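A back-of-the-envelope calculation (illustrative only, not an AMD figure) shows why a 40-billion-parameter model such as Falcon-40B can fit on a single MI300X: at half precision (fp16/bf16), the weights alone occupy roughly 80 GB, well within the accelerator's 192 GB of HBM3.

```python
# Rough weight-memory estimate for a 40B-parameter model at fp16.
# This is an illustrative sketch, not an official sizing guide; real
# deployments also need memory for the KV cache and activations.
params = 40e9            # 40 billion parameters
bytes_per_param = 2      # fp16/bf16 uses 2 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights vs. 192 GB of HBM3")  # ~80 GB
```

The same arithmetic explains why such models typically need two or more 80 GB-class accelerators without the MI300X's larger memory pool.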
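The "out of the box" claim rests on the fact that ROCm builds of PyTorch expose the HIP backend through the familiar `torch.cuda` API, so unmodified CUDA-style PyTorch code runs on AMD Instinct accelerators. A minimal sketch (assuming a PyTorch install; on a CPU-only machine it simply falls back to CPU):

```python
import torch

# On a ROCm build of PyTorch, torch.cuda.* routes to HIP/ROCm, so this
# device-selection idiom works unchanged on AMD Instinct GPUs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(512, 512).to(device)   # toy model for illustration
x = torch.randn(8, 512, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([8, 512])
```

This is why upstreaming ROCm support into PyTorch gives "day zero" compatibility: existing model code needs no porting, only a ROCm-enabled wheel.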