![PyTorch Developer Podcast artwork](https://is3-ssl.mzstatic.com/image/thumb/Podcasts115/v4/5d/4e/01/5d4e0127-9482-b8e3-6f59-59eaf50a21d9/mza_11274935194810526674.jpg/100x100bb.jpg)
Backend extensibility
PyTorch Developer Podcast
English - May 14, 2021 13:00 - 15 minutes - 14 MB - ★★★★★ - 35 ratings
Technology · deep learning · machine learning · pytorch
Previous Episode: The road to structured kernels
Next Episode: The life and death of Variable
What's the current state of backend extensibility? How did PyTorch evolve from a CPU- and CUDA-only framework into one that also supports AMD ROCm and XLA? What problems come with adding an out-of-tree backend, and what work is in progress to make it better?
Further reading:
- Script for HIPifying PyTorch's source when enabling ROCm: https://github.com/pytorch/pytorch/blob/master/tools/amd_build/build_amd.py
- PyTorch/XLA: https://github.com/pytorch/xla/
- Brian Hirsh's spec on what out-of-tree backend codegen looks like: https://github.com/pytorch/xla/issues/2871
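
For a rough sense of what the HIPify step in the first link does (this is an illustrative sketch, not PyTorch's actual script): the core idea is a mechanical source-to-source rename of CUDA identifiers to their HIP equivalents, so the same kernel source can build against ROCm. The mapping below is a tiny hand-picked subset of the real one:

```python
import re

# Illustrative subset of the CUDA -> HIP identifier mapping.
# The real tooling covers the full CUDA runtime and library APIs.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cudaStream_t": "hipStream_t",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

_PATTERN = re.compile("|".join(re.escape(k) for k in CUDA_TO_HIP))

def hipify(source: str) -> str:
    """Rewrite known CUDA identifiers in `source` to their HIP equivalents."""
    return _PATTERN.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

cuda_snippet = "cudaMalloc(&ptr, n); cudaDeviceSynchronize(); cudaFree(ptr);"
print(hipify(cuda_snippet))
```

Because the translation is largely one-to-one renames like these, PyTorch can run it over its CUDA sources at build time rather than maintaining a separate ROCm copy of every kernel.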