PCIe 5.0: Accelerating Data Movement in the Cloud

By Matt Jones

General Manager

Rambus

May 27, 2021

The PCI Express (PCIe) specification remained at the 3.0 generation for nearly seven years (2010 to 2017), with lanes running at 8 Gigatransfers per second (GT/s).

During this time, computing and networking bandwidth requirements continued their rapid rise. Toward the end of this span, PCIe was increasingly becoming a bottleneck to higher system performance.

With data processing and bandwidth demands continuing to accelerate, the industry has taken a much more active approach to advancing the performance of PCIe to ensure it keeps pace with the rest of the platform. The PCI-SIG has committed to a two-year cadence for upgrading the standard. PCIe 4.0, introduced in 2017, doubled lane speeds to 16 GT/s and made its debut in mainstream servers in late 2019 with the introduction of the AMD EPYC™ 7002 (Rome) processor.

Yet the demand for greater bandwidth driven by AI/ML, high-performance computing (HPC) and other data center workloads is insatiable. Networking in the cloud is moving from 100 Gigabit Ethernet (GbE) to 400 GbE. So, while PCIe 4.0 has only recently hit the market, it is already insufficient to support these faster networking speeds. As such, in early 2022, we’ll see a transition of new server architectures to the next generation of the PCIe standard.

Enter PCIe 5.0. The PCIe 5.0 standard offers another step up in performance to increase bandwidth and minimize latency of communication in the data center and at the edge. It scales data rates up to 32 GT/s, another doubling in lane speed over the previous generation. This allows for further advancement in applications with high-performance workloads such as genomics, AI/ML training, video transcoding, and streaming games, all of which are increasing in sophistication and demanding ever-more parallel processing.

Because of the high bandwidth required, the enterprise and cloud data centers are expected to be early adopters of PCIe 5.0. However, given an increasing number of low-latency and time-sensitive applications, PCIe 5.0’s adoption at the edge will quickly follow. A typical hyperscale data center can help illustrate where the interfaces will be deployed.

There are three main elements to a hyperscale data center: networking, compute, and storage. These are organized in the typical cloud design known as the leaf-spine architecture. At the base of this architecture are racks of servers, and these racks are combined to form clusters. The core computation happens in these servers. As workloads become more sophisticated, parallelism increases, driving increased east-west traffic (intra-data center traffic).

Additionally, applications now span multiple servers within a rack or multiple racks. Top of rack (ToR) switches are responsible for data traffic exchange between the servers within a rack. Connecting these are the leaf switches that enable data traffic between racks within a cluster.

One layer up, a spine switch enables traffic to flow between clusters within the data center. At the front panel of the ToR switch are the Ethernet QSFP ports connecting it to the servers within the rack. At the other end of each Ethernet connection from the ToR switch, the server has a network interface card (NIC).

400 GbE is a bidirectional link providing 400 Gigabits per second (Gb/s) of bandwidth in each direction. That translates to 800 Gb/s, or 100 Gigabytes per second (GB/s), of aggregate bandwidth. PCIe 5.0 is also bidirectional, and typically instantiated as a x16 interface. That translates to 32 GT/s, times two for duplex, times 16 lanes, divided by 8 bits per byte, or (32 x 2 x 16)/8 = 128 GB/s of raw aggregate bandwidth. This is sufficient to support a 400 GbE NIC running at full speed, while the 16 GT/s data rate of a PCIe 4.0 implementation would not be.
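This back-of-the-envelope arithmetic can also be expressed as a short calculation. The sketch below (in Python, with illustrative helper names of our own choosing) uses raw lane rates and ignores 128b/130b encoding and protocol overhead, so the figures are upper bounds rather than measured throughput.

    # Rough aggregate-bandwidth arithmetic for the comparison above.
    # Raw lane rates only; 128b/130b encoding and protocol overhead are ignored.
    def pcie_aggregate_gbytes_per_s(gt_per_s: float, lanes: int) -> float:
        # GT/s per lane x 2 directions x lane count, converted from gigabits to gigabytes
        return gt_per_s * 2 * lanes / 8

    ethernet_400g = 400 * 2 / 8                        # 400 Gb/s each way -> 100 GB/s
    pcie5_x16 = pcie_aggregate_gbytes_per_s(32, 16)    # 128 GB/s, enough for 400 GbE
    pcie4_x16 = pcie_aggregate_gbytes_per_s(16, 16)    # 64 GB/s, falls short

    print(f"400 GbE aggregate: {ethernet_400g:.0f} GB/s")
    print(f"PCIe 5.0 x16:      {pcie5_x16:.0f} GB/s")
    print(f"PCIe 4.0 x16:      {pcie4_x16:.0f} GB/s")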

PCIe 5.0 also drives the performance needed for very fast video storage access between the CPU and the NVMe SSD controllers. From a storage perspective, video is scaling to higher and higher resolutions, which means the interface between the controllers and the CPU must become faster and faster. The U.2 form factor uses a x4 interface, which at PCIe 5.0 speeds translates to 32 GB/s of aggregate bandwidth.
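The same illustrative helper covers the x4 storage case, again using raw lane rates with encoding and protocol overhead ignored:

    pcie5_x4 = pcie_aggregate_gbytes_per_s(32, 4)   # 32 GB/s for a PCIe 5.0 x4 U.2 drive
    pcie4_x4 = pcie_aggregate_gbytes_per_s(16, 4)   # 16 GB/s on PCIe 4.0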

This voracious demand for bandwidth never ends. More bandwidth enables advancements in workloads, making new applications possible, which in turn demand more bandwidth in a never-ending virtuous cycle. PCIe 5.0 represents the latest generation of a system interface standard that is becoming as ubiquitous for connecting the chips inside a computing device as Ethernet is for connecting the devices themselves. With the PCI Express standard now on a two-year upgrade cadence, PCIe 5.0 will be an important part of the journey to ever higher levels of computing performance.


Matt Jones is the General Manager for the IP Cores Business Unit at Rambus. He is responsible for development and growth of the company’s interface IP products, driving memory and interconnect architectural innovation in Data Center and Edge Connectivity applications. Before joining Rambus, Matt held various product line management and marketing positions for microprocessor, connectivity and power management products over a twenty-four-year career at IDT, later acquired by Renesas. Matt holds a Bachelor of Science in Electrical Engineering and a Bachelor of Arts in Economics from Stanford University.