![NLP Highlights artwork](https://is1-ssl.mzstatic.com/image/thumb/Podcasts127/v4/a8/49/90/a849903a-65af-d8fc-07a7-c0d1bbf826a6/mza_4767231250788281707.jpg/100x100bb.jpg)
101 - The lottery ticket hypothesis, with Jonathan Frankle
NLP Highlights
English - January 14, 2020 17:52 - 41 minutes - 37.8 MB - ★★★★★ - 22 ratings - Science
Previous Episode: 100 - NLP Startups, with Oren Etzioni
In this episode, Jonathan Frankle describes the lottery ticket hypothesis, a popular explanation of why over-parameterization helps in training neural networks. We discuss pruning methods used to uncover subnetworks ("winning tickets") that were initialized in a particularly effective way. We also discuss patterns observed in pruned networks, the stability of networks pruned at different points in training, and transferring uncovered subnetworks across tasks, among other topics.
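The iterative magnitude pruning with rewinding that uncovers these winning tickets can be sketched roughly as follows. This is a toy NumPy sketch, not the paper's implementation: `train_fn` is a hypothetical stand-in for a full training run, and the pruning fraction per round is an illustrative choice.

```python
import numpy as np

def iterative_magnitude_prune(init_weights, train_fn, rounds=3, prune_frac=0.2):
    """Toy sketch of iterative magnitude pruning with rewinding.

    init_weights: 1-D array of initial weights.
    train_fn: maps (weights, mask) -> trained weights; a hypothetical
              stand-in for actually training the masked network.
    Each round trains, prunes the lowest-magnitude prune_frac of the
    surviving weights, then rewinds survivors to their initial values.
    """
    mask = np.ones_like(init_weights, dtype=bool)
    weights = init_weights.copy()
    for _ in range(rounds):
        trained = train_fn(weights, mask)
        # Rank surviving weights by trained magnitude; drop the smallest.
        surviving = np.flatnonzero(mask)
        k = int(len(surviving) * prune_frac)
        if k == 0:
            break
        order = np.argsort(np.abs(trained[surviving]))
        mask[surviving[order[:k]]] = False
        # Rewind: surviving weights return to their original initialization.
        weights = np.where(mask, init_weights, 0.0)
    return mask, weights

# Toy usage: "training" here just scales the weights.
rng = np.random.default_rng(0)
theta0 = rng.normal(size=100)
mask, ticket = iterative_magnitude_prune(theta0, lambda w, m: w * 2.0)
```

The key design point the episode touches on is the rewind step: the pruned subnetwork is reset to its original initialization (not re-randomized), which is what makes it a "winning ticket."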
The paper discussed in the episode, by Frankle and Carbin, ICLR 2019: https://arxiv.org/abs/1803.03635
Jonathan Frankle’s homepage: http://www.jfrankle.com/