Latest PyTorch Podcast Episodes

TORCH_TRACE and tlparse

PyTorch Developer Podcast - April 29, 2024 00:01 - 15 minutes ★★★★★ - 35 ratings
TORCH_TRACE and tlparse are a structured log and a log parser for PyTorch 2. Together they give useful information about what code was compiled and what the intermediate build products look like.
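
A quick usage sketch. The env var and CLI names are from the episode; setting the variable via os.environ before importing torch is an assumption (the documented route is setting TORCH_TRACE in the shell):

```python
# Shell usage:
#   TORCH_TRACE=/tmp/my_trace python train.py
#   pip install tlparse
#   tlparse /tmp/my_trace   # point tlparse at the dumped log for an HTML report
import os
os.environ["TORCH_TRACE"] = "/tmp/my_trace"  # set before torch logs anything

import torch

@torch.compile
def f(x):
    return x * 2

f(torch.randn(4))  # compilation events are recorded to the trace location
```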

Higher order operators

PyTorch Developer Podcast - April 21, 2024 19:28 - 17 minutes ★★★★★ - 35 ratings
Higher order operators are a special form of operators in torch.ops which have relaxed input argument requirements: in particular, they can accept any form of argument, including Python callables. Their name comes from their most common use case, which is to represent higher order functions ...
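
For a concrete taste, torch.cond is backed by one such higher order operator. A minimal sketch, assuming a PT2 build that exposes torch.cond at the top level:

```python
import torch

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

@torch.compile(fullgraph=True)
def f(x):
    # Unlike ordinary torch.ops operators, torch.cond accepts
    # Python callables as arguments.
    return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

print(f(torch.randn(4)))
```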

Inductor - Post-grad FX passes

PyTorch Developer Podcast - April 12, 2024 07:00 - 24 minutes ★★★★★ - 35 ratings
The post-grad FX passes in Inductor run after AOTAutograd has functionalized and normalized the input program into separate forward/backward graphs. As such, they generally can assume that the graph in question is functionalized, except for some mutations to inputs at the end of the graph. At the...

CUDA graph trees

PyTorch Developer Podcast - March 24, 2024 07:00 - 20 minutes ★★★★★ - 35 ratings
CUDA graph trees are the internal implementation of CUDA graphs used in PT2 when you say mode="reduce-overhead". Their primary innovation is that they allow the reuse of memory across multiple CUDA graphs, as long as they form a tree structure of potential paths you can go down with the CUDA grap...
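
Enabling them is just the compile mode mentioned above; a minimal sketch (requires a CUDA-capable GPU):

```python
import torch

model = torch.nn.Linear(64, 64).cuda()
# "reduce-overhead" turns on CUDA graphs, implemented internally
# via CUDA graph trees so memory can be reused across graphs.
fast = torch.compile(model, mode="reduce-overhead")

x = torch.randn(32, 64, device="cuda")
for _ in range(3):  # the first few calls warm up and record the graph
    out = fast(x)
print(out.shape)
```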

Min-cut partitioner

PyTorch Developer Podcast - March 17, 2024 07:00 - 15 minutes ★★★★★ - 35 ratings
The min-cut partitioner makes decisions about what to save for backwards when splitting the forward and backwards graph from the joint graph traced by AOTAutograd. Crucially, it doesn't actually do a "split"; instead, it is deciding how much of the joint graph should be used for backwards. I also...

AOTInductor

PyTorch Developer Podcast - March 02, 2024 08:00 - 17 minutes ★★★★★ - 35 ratings
AOTInductor is a feature in PyTorch that lets you export an inference model into a self-contained dynamic library, which can subsequently be loaded and used to run optimized inference. It is aimed primarily at CUDA and CPU inference applications, for situations where your model is exported once to be e...
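
A rough sketch of the flow. These entry points were private and have moved between releases, so torch._export.aot_compile / aot_load here are assumptions about the API of that era:

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

example = (torch.randn(8),)
# Compile the model ahead of time into a self-contained shared library...
so_path = torch._export.aot_compile(M(), example)
# ...then load it back (loading also works from C++ via the AOTI runner).
runner = torch._export.aot_load(so_path, device="cpu")
print(runner(torch.randn(8)))
```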

Tensor subclasses and PT2

PyTorch Developer Podcast - February 24, 2024 08:00 - 13 minutes ★★★★★ - 35 ratings
Tensor subclasses allow you to extend PyTorch with new types of tensors without having to write any C++. They have been used to implement DTensor, FP8, Nested Jagged Tensor and Complex Tensor. Recent work by Brian Hirsh means that we can compile tensor subclasses in PT2, eliminating their ove...
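
A minimal sketch of the no-C++ extension point (a toy logging subclass, not any of the production subclasses named above):

```python
import torch

class LoggingTensor(torch.Tensor):
    # Every torch API call involving this subclass funnels through here.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print(f"intercepted: {func.__name__}")
        return super().__torch_function__(func, types, args, kwargs)

x = torch.randn(3).as_subclass(LoggingTensor)
y = x + 1  # prints the name of each intercepted operation
```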

Compiled autograd

PyTorch Developer Podcast - February 19, 2024 08:00 - 18 minutes ★★★★★ - 35 ratings
Compiled autograd is an extension to PT2 that permits compiling the entirety of a backward() call in PyTorch. This allows us to fuse accumulate grad nodes as well as trace through arbitrarily complicated Python backward hooks. Compiled autograd is an important part of our plans for compiled DDP/F...
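
A sketch of how one could try it at the time, assuming the private torch._dynamo.compiled_autograd.enable context manager:

```python
import torch
from torch._dynamo import compiled_autograd

def compiler_fn(gm):
    # gm is the FX graph of the entire backward() call,
    # accumulate-grad nodes and hooks included.
    return torch.compile(gm, backend="inductor")

model = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)

with compiled_autograd.enable(compiler_fn):
    model(x).sum().backward()
print(model.weight.grad.shape)
```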

PT2 extension points

PyTorch Developer Podcast - February 05, 2024 09:00 - 15 minutes ★★★★★ - 35 ratings
We discuss some extension points for customizing PT2 behavior across Dynamo, AOTAutograd and Inductor.

Inductor - Define-by-run IR

PyTorch Developer Podcast - January 24, 2024 08:00 - 12 minutes ★★★★★ - 35 ratings
Define-by-run IR is how Inductor defines the internal compute of a pointwise/reduction operation. It is characterized by a function that calls a number of functions in the 'ops' namespace, where these ops can be overridden by different handlers depending on what kind of semantic analysis you need...
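
As an illustrative analogy only (plain Python, not Inductor's actual classes), here is the define-by-run pattern: the same inner function yields different analyses depending on which ops handler interprets it:

```python
class StringOps:
    # Handler that pretty-prints the computation instead of running it.
    def load(self, name, index):
        return f"{name}[{index}]"
    def add(self, a, b):
        return f"({a} + {b})"

class EvalOps:
    # Handler that actually evaluates against concrete buffers.
    def __init__(self, bufs):
        self.bufs = bufs
    def load(self, name, index):
        return self.bufs[name][index]
    def add(self, a, b):
        return a + b

def inner_fn(ops, index):
    # The "IR" is just this function; running it defines the computation.
    return ops.add(ops.load("x", index), ops.load("y", index))

print(inner_fn(StringOps(), 0))                    # (x[0] + y[0])
print(inner_fn(EvalOps({"x": [1], "y": [2]}), 0))  # 3
```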

Unsigned integers

PyTorch Developer Podcast - January 17, 2024 14:00 - 13 minutes ★★★★★ - 35 ratings
Traditionally, unsigned integer support in PyTorch was not great; we only supported uint8. Recently, we added support for uint16, uint32 and uint64. Bare bones functionality works, but I'm entreating the community to help us build out the rest. In particular, for most operations, we plan to use PT2...
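
A taste of the bare-bones support (a sketch; which eager ops work beyond creation and conversion varies):

```python
import torch

x = torch.tensor([1, 2, 3], dtype=torch.uint16)
print(x.dtype)                # torch.uint16
print(x.to(torch.int64) + 1)  # converting to a signed dtype works
```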

Inductor - IR

PyTorch Developer Podcast - January 16, 2024 09:00 - 18 minutes ★★★★★ - 35 ratings
Inductor IR is an intermediate representation that lives between ATen FX graphs and the final Triton code generated by Inductor. It was designed to faithfully represent PyTorch semantics and accordingly models views, mutation and striding. When you write a lowering from ATen operators to Inducto...

Dynamo - VariableTracker

PyTorch Developer Podcast - January 12, 2024 17:40 - 15 minutes ★★★★★ - 35 ratings
I talk about VariableTracker in Dynamo. VariableTracker is Dynamo's representation of Python values. I talk about some recent changes, namely eager guards and mutable VT. I also tell you how to find the functionality you care about in VariableTracker (https://docs.google.com/document/d/1XDPNK3iNNSh...

Unbacked SymInts

PyTorch Developer Podcast - February 21, 2023 08:00 - 21 minutes ★★★★★ - 35 ratings
This podcast goes over the basics of unbacked SymInts. You might want to listen to this one before listening to https://pytorch-dev-podcast.simplecast.com/episodes/zero-one-specialization Some questions we answer (h/t Gregory Chanan): Are unbacked symints only for export? Because oth...
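
A minimal sketch of where an unbacked SymInt shows up, assuming the capture_scalar_outputs config and the torch._check_is_size helper:

```python
import torch

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(fullgraph=True)
def f(x):
    n = x.max().item()       # n is an unbacked SymInt: no concrete value
    torch._check_is_size(n)  # assert n is a valid size so it can be used
    return torch.zeros(n)

print(f(torch.tensor([2, 5])).shape)  # torch.Size([5])
```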

Zero-one specialization

PyTorch Developer Podcast - February 20, 2023 08:00 - 21 minutes ★★★★★ - 35 ratings
Mikey Dagistes joins me to ask some questions about the recent composability sync https://www.youtube.com/watch?v=NJV7YFbtoR4 where we discussed 0/1 specialization and its implications on export in PT2. What's the fuss all about? What do I need to understand about PT2 to understand why 0/...
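
The effect is easy to observe (a sketch, assuming sizes 0 and 1 still specialize even under dynamic shapes):

```python
import torch

@torch.compile(dynamic=True)
def f(x):
    return x * 2

f(torch.randn(5))  # compiled once with a symbolic size
f(torch.randn(7))  # the same dynamic graph is reused
f(torch.randn(1))  # size 1 is specialized, forcing a recompile
```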

torchdynamo

PyTorch Developer Podcast - December 06, 2022 08:00 - 25 minutes ★★★★★ - 35 ratings
What is torchdynamo? From a bird's eye view, what exactly does it do? What are some important things to know about it? How does it differ from other graph capture mechanisms? For more reading, check out https://docs.google.com/document/d/13K03JN4gkbr40UMiW4nbZYtsw8NngQwrTRnL3knetGM/edit#
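
A quick way to see what torchdynamo captures: plug in a custom backend that prints the FX graph (a minimal sketch):

```python
import torch

def inspect_backend(gm, example_inputs):
    gm.print_readable()  # show the FX graph Dynamo captured
    return gm.forward    # then just run it unoptimized

@torch.compile(backend=inspect_backend)
def f(x):
    return torch.sin(x) + torch.cos(x)

f(torch.randn(3))
```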

PyTorch 2.0

PyTorch Developer Podcast - December 04, 2022 08:00 - 17 minutes ★★★★★ - 35 ratings
Soumith's keynote on PT2.0: https://youtu.be/vbtGZL7IrAw?t=1037 PT2 Manifesto: https://docs.google.com/document/d/1tlgPcR2YmC3PcQuYDPUORFmEaBPQEmo8dsh4eUjnlyI/edit#  PT2 Architecture: https://docs.google.com/document/d/1wpv8D2iwGkKjWyKof9gFdTf8ISszKbq1tsMVm-3hSuU/edit#

History of functorch

PyTorch Developer Podcast - November 07, 2022 23:14 - 19 minutes ★★★★★ - 35 ratings
Join me with Richard Zou to talk about the history of functorch. What was the thought process behind the creation of functorch? How did it get started? JAX’s API and model is fairly different from PyTorch’s, how did we validate that it would work in PyTorch? Where did functorch go after the earl...

Learning rate schedulers

PyTorch Developer Podcast - June 13, 2022 16:00 - 19 minutes ★★★★★ - 35 ratings
What’s a learning rate? Why might you want to schedule it? How does the LR scheduler API in PyTorch work? What the heck is up with the formula implementation? Why is everything terrible?
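
For reference, the basic API shape discussed in the episode:

```python
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# Halve the learning rate every 10 epochs.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

for epoch in range(30):
    # ... training steps for this epoch would go here ...
    opt.step()
    sched.step()            # the scheduler steps after the optimizer
print(sched.get_last_lr())  # [0.0125] = 0.1 * 0.5**3
```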

Weak references

PyTorch Developer Podcast - June 06, 2022 16:00 - 16 minutes ★★★★★ - 35 ratings
What are they good for? (Caches. Private fields.) C++ side support, how it’s implemented / release resources. Python side support, how it’s implemented. Weak ref tensor hazard due to resurrection. Downsides of weak references in C++. Scott Wolchok’s release resources optimization. Other episode...
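
The Python-side behavior in a nutshell (a minimal sketch):

```python
import weakref
import torch

x = torch.randn(3)
r = weakref.ref(x)
print(r() is x)  # True: the referent is alive, so the ref resolves
del x
print(r())       # None: the weak reference did not keep the tensor alive
```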

Strides

PyTorch Developer Podcast - May 30, 2022 16:00 - 20 minutes ★★★★★ - 35 ratings
Mike Ruberry has an RFC about stride-agnostic operator semantics (https://github.com/pytorch/pytorch/issues/78050), so let's talk about strides. What are they? How are they used to implement views and memory format? How do you handle them properly when writing kernels? In what sense are strides ...
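
A quick refresher on what strides look like in practice:

```python
import torch

x = torch.arange(12).reshape(3, 4)
print(x.stride())                    # (4, 1): contiguous row-major layout
t = x.t()                            # transpose is a view: same storage
print(t.stride())                    # (1, 4): only the strides changed
print(t.is_contiguous())             # False
print(x.data_ptr() == t.data_ptr())  # True: the memory is shared
```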

AOTAutograd

PyTorch Developer Podcast - May 09, 2022 16:00 - 19 minutes ★★★★★ - 35 ratings
AOTAutograd is a cool new feature in functorch for capturing both forward and backward traces of PyTorch operators, letting you run them through a compiler and then drop the compiled kernels back into a normal PyTorch eager program. Today, Horace joins me to tell me how it works, what it is good...
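
A minimal sketch using the functorch entry point of the era: aot_function with "compilers" that just print the captured traces:

```python
import torch
from functorch.compile import aot_function

def fn(x):
    return x.sin().sum()

def print_compiler(gm, example_inputs):
    gm.print_readable()  # inspect the traced graph
    return gm            # a GraphModule is callable, so run it as-is

aot_fn = aot_function(fn, fw_compiler=print_compiler, bw_compiler=print_compiler)
x = torch.randn(4, requires_grad=True)
aot_fn(x).backward()     # prints the forward and backward traces
```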

Dispatcher questions with Sherlock

PyTorch Developer Podcast - May 02, 2022 16:00 - 18 minutes ★★★★★ - 35 ratings
Sherlock recently joined the PyTorch team, having previously worked on ONNX Runtime at Microsoft, and Sherlock’s going to ask me some questions about the dispatcher, and I’m going to answer them. We talked about the history of the dispatcher, how to override dispatching order, multiple dispatch,...

New CI

PyTorch Developer Podcast - April 25, 2022 16:00 - 16 minutes ★★★★★ - 35 ratings
PyTorch recently moved all of its CI from CircleCI to GitHub Actions. There were a lot of improvements in the process, making my old podcast about CI obsolete! Today, Eli Uriegas joins me to talk about why we moved to GitHub Actions, how the new CI system is put together, and what some cool feat...

Python exceptions

PyTorch Developer Podcast - April 17, 2022 16:00 - 14 minutes ★★★★★ - 35 ratings
C++ has exceptions, Python has exceptions. But they’re not the same thing! How do exceptions work in CPython, how do we translate exceptions from C++ to Python (hint: it’s different for direct bindings versus pybind11), and what do warnings (which we also translate from C++ to Python) have in co...
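
The translation is visible from plain Python (a sketch):

```python
import torch

try:
    # The shape mismatch is detected in the C++ core (a c10::Error),
    # which the binding layer translates into a Python RuntimeError.
    torch.randn(2, 3) @ torch.randn(2, 3)
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError
```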

Torch vs ATen APIs

PyTorch Developer Podcast - April 11, 2022 16:00 - 15 minutes ★★★★★ - 35 ratings
PyTorch’s torch API is the Python API everyone knows and loves, but there’s also another API, the ATen API, which most of PyTorch’s internal subsystems are built on. How to tell them apart? What implications do these have on our graph mode IR design? Also, a plug for PrimTorch, a new set of oper...
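
The two APIs side by side:

```python
import torch

x = torch.randn(3)
y1 = torch.sin(x)           # the public torch API
y2 = torch.ops.aten.sin(x)  # the ATen operator underneath,
                            # as it appears in graph-mode IR
print(torch.equal(y1, y2))  # True
```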

All about NVIDIA GPUs

PyTorch Developer Podcast - September 24, 2021 16:00 - 19 minutes ★★★★★ - 35 ratings
PyTorch is in the business of shipping numerical software that can run fast on your CUDA-enabled NVIDIA GPU, but it turns out there is a lot of heterogeneity in NVIDIA’s physical GPU offering, and when it comes to what is fast and what is slow, what specific GPU you have on hand matters quite a b...
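
You can interrogate which GPU you actually have from PyTorch itself:

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name)                   # which physical GPU this is
    print(props.major, props.minor)     # compute capability
    print(props.total_memory)           # device memory, in bytes
    print(props.multi_processor_count)  # number of SMs
```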

Tensor subclasses and Liskov substitution principle

PyTorch Developer Podcast - September 16, 2021 13:00 - 19 minutes ★★★★★ - 35 ratings
A lot of recent work going in PyTorch is all about adding new and interesting Tensor subclasses, and this all leads up to the question of, what exactly is OK to make a tensor subclass? One answer to this question comes from an old principle from Barbara Liskov called the Liskov substitution prin...

Half precision

PyTorch Developer Podcast - September 10, 2021 13:00 - 18 minutes ★★★★★ - 35 ratings
In this episode I talk about reduced precision floating point formats float16 (aka half precision) and bfloat16. I'll discuss what floating point numbers are, how these two formats vary, and some of the practical considerations that arise when you are working with numeric code in PyTorch that al...
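
The tradeoff between the two formats in a few lines: float16 keeps more mantissa bits, while bfloat16 keeps float32's exponent range:

```python
import torch

# float16: 10 stored mantissa bits, but a narrow exponent range
print(torch.tensor(2049.0, dtype=torch.float16))  # 2048.: the 1 is rounded away
print(torch.tensor(1e20, dtype=torch.float16))    # inf: max is ~65504
# bfloat16: only 7 stored mantissa bits, but float32's exponent range
print(torch.tensor(257.0, dtype=torch.bfloat16))  # 256.: the 1 is rounded away
print(torch.tensor(1e20, dtype=torch.bfloat16))   # finite, if imprecise
```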

DataLoader with multiple workers leaks memory

PyTorch Developer Podcast - September 01, 2021 13:00 - 16 minutes ★★★★★ - 35 ratings
Today I'm going to talk about a famous issue in PyTorch, DataLoader with num_workers > 0 causes memory leak (https://github.com/pytorch/pytorch/issues/13246). This bug is a good opportunity to talk about Dataset/DataLoader design in PyTorch, fork and copy-on-write memory in Linux and Python refe...
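
The standard mitigation discussed in the issue is to avoid per-element Python objects in the dataset; a sketch:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self):
        # A big Python list here would look like a leak: each worker's
        # refcount updates write to forked copy-on-write pages, slowly
        # duplicating the whole list in every worker process.
        #   self.data = list(range(10**7))
        # One tensor (or numpy array) keeps the pages shared:
        self.data = torch.arange(10**7)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        return self.data[i]

loader = DataLoader(MyDataset(), batch_size=64, num_workers=4)
```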
