Previous Episode: Memory layout

Reference counting is a common memory management technique in C++, but PyTorch does its reference counting in a slightly idiosyncratic way using intrusive_ptr. We'll talk about why intrusive_ptr exists, why refcount bumps are slow in C++ (but not in Python), what's up with const Tensor& everywhere, why the const is a lie, and how TensorRef lets you create a const Tensor& from a TensorImpl* without bumping the reference count.
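
To make the mechanics concrete, here is a minimal C++ sketch. This is not PyTorch's actual intrusive_ptr or TensorRef (the names IntrusivePtr, Ref, DontBump, and unsafe_release here are hypothetical simplifications); it just shows the shape of the idea: a refcount that lives inside the object, a smart pointer whose copy does the atomic bump, and a borrowing wrapper that never touches the count.

#include <atomic>
#include <utility>

// Simplified stand-in for PyTorch's TensorImpl: the refcount lives
// *inside* the object (that's what makes the scheme "intrusive").
struct TensorImpl {
  std::atomic<int> refcount_{0};
  // ... sizes, strides, dtype, storage would live here ...
};

struct DontBump {};  // tag: construct a borrow without touching the count

class IntrusivePtr {
 public:
  explicit IntrusivePtr(TensorImpl* impl) : impl_(impl) { retain(); }
  IntrusivePtr(TensorImpl* impl, DontBump) : impl_(impl) {}  // non-owning borrow
  IntrusivePtr(const IntrusivePtr& other) : impl_(other.impl_) { retain(); }
  IntrusivePtr(IntrusivePtr&& other) noexcept
      : impl_(std::exchange(other.impl_, nullptr)) {}
  ~IntrusivePtr() { release(); }

  TensorImpl* get() const { return impl_; }
  // Forget the pointer without decrementing; used to unwind a borrow.
  TensorImpl* unsafe_release() { return std::exchange(impl_, nullptr); }

 private:
  void retain() {
    if (impl_) {
      // The "refcount bump": an atomic increment. Under contention it
      // bounces the cache line between cores, which is why copies are
      // slow in C++. CPython's bumps are plain non-atomic increments
      // (the GIL serializes them), so they are cheap by comparison.
      impl_->refcount_.fetch_add(1, std::memory_order_relaxed);
    }
  }
  void release() {
    if (impl_ &&
        impl_->refcount_.fetch_sub(1, std::memory_order_acq_rel) == 1) {
      delete impl_;
    }
  }
  TensorImpl* impl_;
};

// In the spirit of TensorRef: wrap a raw TensorImpl* so it can be handed
// around as a const reference, with no bump on construction and no
// decrement on destruction. Only safe while some real owner keeps the
// object alive.
class Ref {
 public:
  explicit Ref(TensorImpl* impl) : borrowed_(impl, DontBump{}) {}
  ~Ref() { borrowed_.unsafe_release(); }  // never incremented, so never decrement
  const IntrusivePtr& operator*() const { return borrowed_; }

 private:
  IntrusivePtr borrowed_;
};

int main() {
  IntrusivePtr owner(new TensorImpl());  // refcount: 1
  Ref borrowed(owner.get());             // still 1: borrowing is free
  const IntrusivePtr& view = *borrowed;  // usable where a const ref is expected
  (void)view;
}  // borrowed unwinds without decrementing; owner drops the count and frees

The real intrusive_ptr and TensorRef are fancier than this, but the key property is the same: because the count lives inside the object, a raw TensorImpl* is always enough to reconstruct an owning pointer, and a borrow is enough to fake one.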

Further reading.

Why you shouldn't feel bad about passing tensor by reference https://dev-discuss.pytorch.org/t/we-shouldnt-feel-bad-about-passing-tensor-by-reference/85
Const correctness in PyTorch https://github.com/zdevito/ATen/issues/27
TensorRef RFC https://github.com/pytorch/rfcs/pull/16