![PyTorch Developer Podcast artwork](https://is3-ssl.mzstatic.com/image/thumb/Podcasts115/v4/5d/4e/01/5d4e0127-9482-b8e3-6f59-59eaf50a21d9/mza_11274935194810526674.jpg/100x100bb.jpg)
TensorIterator
PyTorch Developer Podcast
English - June 01, 2021 13:00 - 17 minutes - 16.3 MB - ★★★★★ - 35 ratings
Technology · deep learning · machine learning · pytorch
Previous Episode: native_functions.yaml
Next Episode: __torch_function__
You walk into the whiteboard room to do a technical interview. The interviewer looks you straight in the eye and says, "OK, can you show me how to add the elements of two lists together?" Confused, you write down a simple for loop that iterates through each element and adds them together. Your interviewer rubs his hands together evilly and cackles, "OK, let's make it more complicated."
What does TensorIterator do? Why the heck is TensorIterator so complicated? What's going on with broadcasting? Type promotion? Overlap checks? Layout? Dimension coalescing? Parallelization? Vectorization?
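A taste of what TensorIterator has to handle behind a single `+`: the sketch below illustrates broadcasting, type promotion, and overlap checking as they appear from the Python API. These are documented PyTorch behaviors, not TensorIterator internals; the exact error message in the overlap case may vary between versions.

```python
import torch

# Broadcasting: a (3, 1) tensor and a (4,) tensor combine elementwise
# by virtually expanding each to (3, 4); no data is copied.
a = torch.zeros(3, 1)
b = torch.zeros(4)
print((a + b).shape)  # torch.Size([3, 4])

# Type promotion: mixing integer and floating tensors promotes the
# result to the floating dtype.
i = torch.tensor([1, 2, 3])        # int64
f = torch.tensor([1.0, 2.0, 3.0])  # float32
print((i + f).dtype)  # torch.float32

# Overlap checks: writing in place to a tensor whose elements alias the
# same memory location is rejected.
t = torch.zeros(1).expand(3)  # three views of one storage element
try:
    t.add_(1)
except RuntimeError as e:
    print("rejected:", type(e).__name__)
```

Every elementwise kernel in PyTorch gets these behaviors (plus layout handling, dimension coalescing, parallelization, and vectorization) by going through TensorIterator rather than reimplementing them.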
Further reading.
- PyTorch TensorIterator internals: https://labs.quansight.org/blog/2020/04/pytorch-tensoriterator-internals/
- Why is TensorIterator so slow: https://dev-discuss.pytorch.org/t/comparing-the-performance-of-0-4-1-and-master/136
- Broadcasting: https://pytorch.org/docs/stable/notes/broadcasting.html
- Type promotion: https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc