24 Zen 4 CPU cores, 146 billion transistors, 128GB HBM3, up to 8x faster than MI250X

AMD has just confirmed specifications for the Instinct MI300 ‘CDNA 3’ accelerator, which combines Zen 4 CPU cores in a 5nm 3D chiplet package.

AMD Instinct MI300 ‘CDNA 3’ specs: 5nm chiplet design, 146 billion transistors, 24 Zen 4 CPU cores, 128GB HBM3

The latest unveiled specifications of the AMD Instinct MI300 accelerator confirm that this APU will be a beast of a chip design. The package will include several 5nm 3D-stacked chiplets, combining to house 146 billion transistors. Those transistors span various IP blocks, memory interfaces, interconnects, and much more. The CDNA 3 architecture is the core DNA of the Instinct MI300, but the APU also comes with a total of 24 Zen 4 data center CPU cores and 128 GB of next-generation HBM3 memory running across an 8192-bit wide bus.
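The 8192-bit figure alone only pins down the interface width; peak memory bandwidth also depends on the per-pin data rate, which is not confirmed here. The short C++ sketch below simply does that arithmetic with an assumed HBM3-class pin speed, so the resulting number is illustrative rather than an official spec.

```cpp
// Rough peak-bandwidth arithmetic for an 8192-bit HBM3 interface.
// The per-pin data rate is an assumption (HBM3 parts are specified up to
// roughly 6.4 Gbps per pin); AMD has not confirmed the MI300's exact speed.
#include <cstdio>

int main() {
    const double bus_width_bits    = 8192.0;  // confirmed: 8192-bit wide bus
    const double pin_rate_gbps     = 6.4;     // assumed per-pin rate, Gbps
    const double peak_gbytes_per_s = bus_width_bits * pin_rate_gbps / 8.0;

    std::printf("Theoretical peak bandwidth: %.1f GB/s (~%.2f TB/s)\n",
                peak_gbytes_per_s, peak_gbytes_per_s / 1000.0);
    return 0;
}
```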

During AMD Financial Analyst Day 2022, the company confirmed that the MI300 will be a multi-chip, multi-IP Instinct accelerator that not only features next-generation CDNA 3 GPU cores but is also equipped with next-generation Zen 4 CPU cores.

To deliver more than 2 exaflops of double-precision processing power, the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE teamed up with AMD to design El Capitan, which is projected to be the world’s fastest supercomputer with delivery expected in early 2023. El Capitan will take advantage of next-generation AMD products that build on the custom processor design used in Frontier.

  • Codenamed “Genoa”, the next generation of AMD EPYC processors will feature “Zen 4” processor cores and support next-generation memory and I/O subsystems for AI and HPC workloads.
  • Next-generation AMD Instinct-based GPUs optimized for HPC and AI workloads will use next-generation high-bandwidth memory for optimal deep learning performance.

This design will excel at analyzing AI and machine learning data to create models that are faster, more accurate, and able to measure the uncertainty in their predictions.

via AMD

AMD will use the 5nm process node for its Instinct MI300 ‘CDNA 3’ GPUs. The chip will be equipped with next-generation Infinity Cache and the fourth-generation Infinity architecture, which enables support for the CXL 3.0 ecosystem. The Instinct MI300 accelerator will feature a unified memory APU architecture and new math formats, allowing for a 5x performance-per-watt uplift over CDNA 2, which is huge. AMD also projects more than 8x the AI performance of the CDNA 2-based Instinct MI250X accelerator. The CDNA 3 GPU’s unified memory APU architecture (UMAA) will connect the CPU and GPU to a shared HBM memory package, eliminating redundant memory copies while lowering the total cost of ownership.
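The practical upshot of a unified memory architecture is that CPU and GPU code can work on a single allocation instead of mirrored host/device buffers linked by explicit copies. The HIP sketch below is only an illustration of that idea using managed memory on current hardware, not MI300-specific code; the kernel, sizes, and values are arbitrary. On a unified-HBM APU the same data would simply live in one memory pool, so the staging copies a discrete GPU normally needs go away.

```cpp
// Minimal sketch: one allocation visible to both CPU and GPU via HIP managed
// memory. On discrete GPUs the runtime migrates pages behind the scenes; on a
// unified-memory APU the data would reside in a single shared HBM pool.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;

    // Single allocation shared by host and device -- no mirror buffers,
    // no explicit hipMemcpy staging step.
    hipMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // GPU uses the same pointer
    hipDeviceSynchronize();

    std::printf("data[0] = %.1f\n", data[0]);        // CPU reads the result in place
    hipFree(data);
    return 0;
}
```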


The AMD Instinct MI300 APU accelerators are expected to be available by the end of 2023, which is around the same time as the deployment of the aforementioned El Capitan supercomputer.
