During the event, AMD shared new specifics on its upcoming Zen 2 processor core architecture and detailed its revolutionary chiplet-based x86 CPU design. Furthermore, it launched the 7nm AMD Radeon Instinct MI60 graphics accelerator. It also provided the first public demonstration of its next-generation 7nm EPYC server processor codenamed “Rome”.
Amazon Web Services (AWS) also joined AMD at the event. Together, they announced the availability of AMD EPYC processor-powered offerings across three of AWS's popular instance families on the Amazon Elastic Compute Cloud (EC2).
“The multi-year investments we have made in our datacenter hardware and software roadmaps are driving growing adoption of our CPUs and GPUs across cloud, enterprise and HPC customers,” said Lisa Su, president and CEO of AMD. “We are well positioned to accelerate our momentum as we introduce a broad, powerful portfolio of datacenter CPUs and GPUs featuring 7nm process technology over the coming quarters.”
AMD Compute Architecture Updates
AMD for the first time detailed its upcoming high-performance Zen 2 x86 processor core, the result of a revolutionary modular design methodology that uses an enhanced version of the AMD Infinity Fabric interconnect to link separate pieces of silicon (chiplets) within a single processor package.
The multi-chip processor uses 7nm process technology for the Zen 2 CPU cores, which benefit most from the advanced node, while leveraging a mature 14nm process for the input/output portion of the chip. The result is much higher performance, with more CPU cores at the same power, and more cost-effective manufacturing than traditional monolithic chip designs.
Combining this design methodology with the benefits of TSMC's leading-edge 7nm process technology, Zen 2 delivers significant generational improvements in performance, power consumption and density, which can help reduce datacenter operating costs, cooling requirements and carbon footprint. Other key generational advances over the Zen core include:
- Improved execution pipeline that feeds the compute engines more efficiently.
- Front-end advances. Improved branch predictor, better instruction pre-fetching, re-optimized instruction cache and larger op cache.
- Floating point enhancements. Doubled floating-point width to 256-bit, doubled load/store bandwidth, increased dispatch/retire bandwidth and maintained high throughput for all modes.
- Advanced security features. Hardware-enhanced Spectre mitigations that harden software mitigations into the design, and increased flexibility of memory encryption.
Multiple 7nm-based AMD products, including AMD EPYC CPUs and AMD Radeon Instinct GPUs, are now in development. Additionally, the company shared that its follow-on 7nm+-based Zen 3 and Zen 4 x86 core architectures are on track.
AMD EPYC Server CPU Updates
Matt Garman, vice president of compute services at AWS, announced the availability of AMD EPYC processor-based instances on Amazon EC2. Part of AWS’s popular instance families, the new AMD EPYC processor-powered offerings feature industry-leading core density and memory bandwidth.
AMD also disclosed new details and delivered performance previews of its next-generation EPYC processors codenamed “Rome”:
- Processor enhancements including up to 64 Zen 2 cores, increased instructions-per-cycle and leadership compute, I/O and memory bandwidth.
- Platform enhancements including a PCIe 4.0-capable x86 server processor, doubling the bandwidth of each lane relative to PCIe 3.0 to improve datacenter accelerator performance.
- Double the compute performance per socket.
- Socket compatibility with today’s AMD EPYC server platforms.
“Rome” is sampling with customers now and is expected to be the world’s first high-performance x86 7nm CPU.
AMD Datacenter Graphics Updates
AMD launched the first 7nm GPUs and the first hardware-virtualized GPUs: the AMD Radeon Instinct MI60 and MI50. Based on the high-performance, flexible “Vega” architecture, these new graphics cards are designed specifically for machine learning and artificial intelligence (AI) workloads, delivering higher levels of floating-point performance, greater efficiencies and new features for datacenter deployments.
In addition to new hardware announcements, AMD also announced ROCm 2.0, a new version of its open software platform for accelerated computing that includes new math libraries, broader software framework support, and optimized deep learning operations.
ROCm 2.0 has also been upstreamed for Linux kernel distributions. Designed for scale, ROCm allows customers to deploy high-performance, energy-efficient heterogeneous computing systems in an open environment.