Advancing Deep Learning with Custom-Built Accelerators- Intel® Chip Chat episode 674

English - November 12, 2019 19:00 - ★★★★★ - 10 ratings
Deep learning workloads have evolved considerably over the last few years. Today’s models are larger, deeper, and more complex than neural networks from even a few years ago, with an explosion in the number of parameters per model. The Intel Nervana Neural Network Processor for Training (NNP-T) is a purpose-built deep learning accelerator designed to speed up the training and deployment of distributed learning algorithms.

Carey Kloss is the VP and General Manager of the AI Training Products Group at Intel. In this interview, Kloss outlines the architecture and potential of the Intel Nervana NNP-T. He gets into major issues like memory and how the architecture was designed to avoid problems like becoming memory-bound, how the accelerator supports existing software frameworks like PaddlePaddle and TensorFlow, and what the NNP-T means for customers who want to keep an eye on power usage and lower their TCO.

To learn more about the Intel Nervana Neural Network Processor for Training, go to: https://www.intel.ai/nervana-nnp/

Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. © Intel Corporation