Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
English - August 05, 2021 17:35 - 50 minutes
Today we’re joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.
Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure.
We also discuss the three kinds of parallelism Megatron provides when training models: tensor parallelism, pipeline parallelism, and data parallelism. Finally, we cover his work on Deep Learning Super Sampling (DLSS) and the role it plays in the present and future of game development via ray tracing.
The complete show notes for this episode can be found at twimlai.com/go/507.