How new operators are authored
PyTorch Developer Podcast
English - May 18, 2021 13:00 - 15 minutes - 14.2 MB - ★★★★★ - 35 ratings
Tags: Technology, deep learning, machine learning, pytorch
Previous Episode: The life and death of Variable
Next Episode: History and constraints of Tensor
What's the general process by which a new operator is added to PyTorch? Why is this actually something of a rare occurrence? How do you integrate an operator with the rest of PyTorch's systems so it can be run end-to-end? What should I expect when writing CPU and CUDA kernels? What tools are available to make the job easier? How can I debug my kernels, and how do I test them?
Further reading.
- The README for the native/ directory, where all kernels get put: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/README.md
- A high-level overview of how TensorIterator works: https://labs.quansight.org/blog/2020/04/pytorch-tensoriterator-internals/
- Where OpInfos live: https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_methods_invocations.py