Reference counting
PyTorch Developer Podcast
English - July 20, 2021 13:00 - 15 minutes - 13.9 MB - ★★★★★ - 35 ratings
Tags: Technology, deep learning, machine learning, pytorch
Reference counting is a common memory management technique in C++, but PyTorch does its reference counting in a slightly idiosyncratic way using intrusive_ptr. We'll talk about why intrusive_ptr exists, why refcount bumps are slow in C++ (but not in Python), what's up with const Tensor& everywhere, why that const is a lie, and how TensorRef lets you create a const Tensor& from a TensorImpl* without needing to bump the reference count.
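To make the mechanics concrete, here is a minimal C++ sketch of the two ideas discussed in the episode: an intrusive refcount that lives inside TensorImpl itself (so, unlike std::shared_ptr, an owning handle can be rebuilt from a raw pointer), and a TensorRef-style borrow that hands out a const Tensor& with no refcount traffic at all. The class bodies are simplified assumptions for illustration, not PyTorch's actual c10::intrusive_ptr or TensorRef code.

```cpp
#include <atomic>
#include <cstdio>

// The refcount lives inside the object itself ("intrusive"), so an
// owning handle can be reconstructed from a raw TensorImpl* at any
// time -- unlike std::shared_ptr, whose count lives in a separate
// control block.
struct TensorImpl {
  std::atomic<int> refcount{1};
  float value = 0.0f;  // stand-in for real tensor metadata
};

class Tensor {
  friend class TensorRef;

 public:
  // Owning constructor: adopts an existing +1 on the refcount.
  explicit Tensor(TensorImpl* impl) : impl_(impl) {}

  // Copying is the "refcount bump": an atomic read-modify-write.
  // Atomic RMWs are what make refcount traffic expensive in
  // multithreaded C++; CPython's bumps are plain non-atomic
  // increments done under the GIL.
  Tensor(const Tensor& other) : impl_(other.impl_) {
    impl_->refcount.fetch_add(1, std::memory_order_relaxed);
  }

  ~Tensor() {
    if (impl_ != nullptr &&
        impl_->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1) {
      delete impl_;
    }
  }

  Tensor& operator=(const Tensor&) = delete;  // keep the sketch small

 private:
  Tensor() : impl_(nullptr) {}  // used by TensorRef via friendship
  TensorImpl* impl_;
};

// A TensorRef-style borrow: hand out a const Tensor& built from a raw
// TensorImpl* with no bump on construction and no decrement on
// destruction. Safe only while the caller guarantees the TensorImpl
// outlives the TensorRef.
class TensorRef {
 public:
  explicit TensorRef(TensorImpl* impl) { ref_.impl_ = impl; }
  ~TensorRef() { ref_.impl_ = nullptr; }  // detach before ~Tensor runs
  const Tensor& operator*() const { return ref_; }

 private:
  Tensor ref_;
};

int main() {
  TensorImpl* impl = new TensorImpl;  // refcount == 1
  Tensor owner(impl);                 // adopts the +1; no extra bump
  {
    TensorRef borrowed(impl);         // no bump on construction
    const Tensor& t = *borrowed;      // a const Tensor& for free
    (void)t;
    std::printf("refcount while borrowed: %d\n",
                impl->refcount.load());  // still 1
  }                                   // and no decrement on destruction
  return 0;                           // owner's destructor frees impl
}
```

The whole trick is in ~TensorRef: it nulls out the handle before the wrapped Tensor's destructor runs, so neither an increment nor a decrement ever touches the atomic. It also shows why the const in const Tensor& is shallow: it only prevents rebinding the handle, not mutating the TensorImpl behind it.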
Further reading.
- Why you shouldn't feel bad about passing tensor by reference: https://dev-discuss.pytorch.org/t/we-shouldnt-feel-bad-about-passing-tensor-by-reference/85
- Const correctness in PyTorch: https://github.com/zdevito/ATen/issues/27
- TensorRef RFC: https://github.com/pytorch/rfcs/pull/16