Chris and Martin talk about the features and benefits of CXL, or Compute Express Link, with CXL Consortium Chair Jim Pappas.


This week, Chris and Martin discuss the emerging CXL technology with CXL Consortium Chair Jim Pappas. CXL, or Compute Express Link, is an open interconnect standard that provides improved connectivity between processors and accelerator devices over the PCIe 5.0 bus. With CXL, central processors, GPUs and other co-processors can share memory and work more closely together, with improved latency and throughput.

CXL effectively standardises the interface for accelerator cards and gives them access to a global memory namespace. This is achieved with three new protocols, CXL.io, CXL.cache and CXL.mem, as Jim explains in this recording. Through a feature of PCIe called Alternate Protocol Mode, PCIe and CXL devices can coexist on the same physical bus. Over time, we can expect to see new features such as memory pooling and connectivity of Optane persistent memory to non-Intel CPUs.
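
To make the "global memory namespace" idea a little more concrete: on recent Linux kernels, memory attached via CXL.mem is typically surfaced to software as a CPU-less NUMA node, so ordinary NUMA-aware code can allocate from it with plain load/store semantics. The sketch below is a minimal, hedged illustration using libnuma; the node number (1) is an assumption and is platform-specific, so check /sys/devices/system/node on a real system.

/* Hedged sketch: allocate a buffer from a CXL-attached memory expander
 * that the platform exposes as a CPU-less NUMA node. CXL_NODE = 1 is an
 * assumption for illustration only.
 * Build with: gcc cxl_mem_sketch.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 1          /* assumed NUMA node backed by CXL.mem */
#define BUF_SIZE (64 << 20) /* 64 MiB */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available on this system\n");
        return 1;
    }

    /* Bind the allocation to the (assumed) CXL-backed node. */
    void *buf = numa_alloc_onnode(BUF_SIZE, CXL_NODE);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* Use it like any other memory: plain loads and stores, no DMA setup. */
    memset(buf, 0, BUF_SIZE);
    printf("Allocated %d MiB on NUMA node %d\n", BUF_SIZE >> 20, CXL_NODE);

    numa_free(buf, BUF_SIZE);
    return 0;
}

The point of the sketch is that CXL-attached memory needs no special programming model: it simply appears as another (slower) tier of system memory that existing NUMA tooling can place data on.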

The CXL technology is truly exciting and will radically change the way we think of the traditional processor, memory and storage hierarchy. Learn more at https://www.computeexpresslink.org/.

Elapsed time: 00:46:13

Timeline

00:00:00 – Intros
00:01:45 – What is CXL, or Compute Express Link?
00:02:40 – Alternate Protocol Mode enables CXL and PCIe together
00:03:20 – CXL will deliver more efficient accelerator/CPU interfaces
00:04:50 – Memory on an I/O bus is very hard to achieve
00:06:20 – Memory connectivity is costly and complex
00:07:10 – What is cache coherence?
00:09:50 – What will CXL look like to the end user?
00:12:15 – How does CXL compare to NVMe or RDMA?
00:15:55 – CXL has CXL.io, CXL.cache and CXL.mem
00:17:00 – How are accelerators implemented today?
00:22:00 – Is CXL standardising accelerators the same way NVMe standardised drives?
00:24:45 – How big is the CXL Consortium?
00:29:30 – Look out for CXL introduction with PCIe 5.0 in 2022
00:30:45 – Intel's next Xeon CPU will have CXL support
00:33:40 – CXL offers device sharing and memory pooling
00:34:50 – The first time we talk about dynamic memory…
00:35:55 – Hardware is seeing a renaissance
00:38:30 – And pooling again – dynamic memory across VMs and machines
00:40:15 – How do we combine persistent memory with CXL?
00:42:00 – CXL could open Optane to non-Intel platforms
00:43:50 – What's coming next? Is there a CXL 3.0?
00:45:30 – Wrap Up

Related Podcasts & Blogs

#208 – NVIDIA GPUDirect Storage – More Than a Good Marketing Message?
#204 – Liqid Composable Disaggregated Infrastructure
#195 – Fungible Data Processing Units
#190 – NVIDIA BlueField SmartNICs and DPUs
VAST Data has Vast Ambitions
Intel Under Pressure as NVIDIA Announces Grace CPU

Copyright (c) 2016-2021 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #cxl1.


The post #217 – Introduction to CXL with Jim Pappas appeared first on Storage Unpacked.