![NLP Highlights artwork](https://is1-ssl.mzstatic.com/image/thumb/Podcasts127/v4/a8/49/90/a849903a-65af-d8fc-07a7-c0d1bbf826a6/mza_4767231250788281707.jpg/100x100bb.jpg)
51 - A Regularized Framework for Sparse and Structured Neural Attention, with Vlad Niculae
NLP Highlights
English - March 12, 2018 21:29 - 16 minutes - 15.2 MB - ★★★★★ - 22 ratings - Science
NIPS 2017 paper by Vlad Niculae and Mathieu Blondel.
Vlad comes on to tell us about his paper. Attention is often computed in neural networks using a softmax operator, which maps scalar outputs from a model into a probability distribution over latent variables. There are many cases where this is not optimal, however, such as when you really want to encourage a sparse attention over your inputs, or when you have additional structural biases that could inform the model. Vlad and Mathieu have developed a theoretical framework for analyzing the options in this space, and in this episode we talk about that framework, some concrete instantiations of attention mechanisms derived from it, and how well these work.
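To make the sparsity point concrete, here is a minimal NumPy sketch contrasting softmax with sparsemax (Martins & Astudillo, 2016), one of the sparse attention mappings that the paper's regularized framework generalizes. This is an illustrative implementation, not code from the paper: sparsemax is the Euclidean projection of the score vector onto the probability simplex, which can drive some attention weights to exactly zero, whereas softmax always assigns strictly positive weight everywhere.

```python
import numpy as np

def softmax(z):
    """Standard softmax: every input gets strictly positive probability."""
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of z
    onto the probability simplex; low-scoring inputs get exactly zero."""
    z_sorted = np.sort(z)[::-1]              # scores in decreasing order
    k = np.arange(1, len(z) + 1)
    # Support size: largest k with 1 + k * z_(k) > sum of top-k scores
    cond = 1 + k * z_sorted > np.cumsum(z_sorted)
    k_z = k[cond][-1]
    # Threshold tau so the kept coordinates sum to one
    tau = (np.cumsum(z_sorted)[k_z - 1] - 1) / k_z
    return np.maximum(z - tau, 0.0)

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))    # dense: all three entries positive
print(sparsemax(scores))  # sparse: [1.0, 0.0, 0.0] — all mass on the top score
```

With these scores, softmax spreads probability mass over all inputs, while sparsemax concentrates it entirely on the highest-scoring one; for closer scores (e.g. `[1.0, 0.9]`) sparsemax keeps both in its support. The paper's framework recovers both operators as special cases of a regularized max and extends them to structured penalties such as fusedmax.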