![PyTorch Developer Podcast artwork](https://is3-ssl.mzstatic.com/image/thumb/Podcasts115/v4/5d/4e/01/5d4e0127-9482-b8e3-6f59-59eaf50a21d9/mza_11274935194810526674.jpg/100x100bb.jpg)
Intro to distributed
PyTorch Developer Podcast
English - July 08, 2021 13:00 - 15 minutes - 14.4 MB - ★★★★★ - 35 ratings
Tags: Technology, deep learning, machine learning, pytorch
Previous Episode: Double backwards
Next Episode: API design via lexical and dynamic scoping
Today, Shen Li (mrshenli) joins me to talk about distributed computation in PyTorch. What is distributed? What kinds of things go into making distributed work in PyTorch? What's up with all of the optimizations people want to do here?
Further reading.
- PyTorch distributed overview: https://pytorch.org/tutorials/beginner/dist_overview.html
- Distributed data parallel: https://pytorch.org/docs/stable/notes/ddp.html
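As a companion to the DDP notes linked above, here is a minimal sketch (not from the episode itself) of what a DistributedDataParallel training step can look like. The toy `nn.Linear` model, the `gloo` backend, the hard-coded master address/port, and the two-process `mp.spawn` launch are illustrative assumptions, not anything the episode prescribes.

```python
# Minimal DistributedDataParallel sketch: each spawned process joins a
# process group, wraps the same model in DDP, and gradients are averaged
# across processes during backward().
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # gloo backend keeps this runnable on CPU-only machines.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = nn.Linear(10, 10)          # toy model
    ddp_model = DDP(model)             # gradients are all-reduced in backward
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    outputs = ddp_model(torch.randn(20, 10))
    loss = outputs.sum()
    loss.backward()                    # gradient synchronization happens here
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```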