![NLP Highlights artwork](https://is1-ssl.mzstatic.com/image/thumb/Podcasts127/v4/a8/49/90/a849903a-65af-d8fc-07a7-c0d1bbf826a6/mza_4767231250788281707.jpg/100x100bb.jpg)
137 - Nearest Neighbor Language Modeling and Machine Translation, with Urvashi Khandelwal
NLP Highlights
English - January 13, 2023 - 35 minutes - 32.8 MB - ★★★★★ - 22 ratings - Science
We invited Urvashi Khandelwal, a research scientist at Google Brain, to talk about nearest neighbor language and machine translation models. These models interpolate a parametric (conditional) language model with a non-parametric distribution over the nearest neighbors retrieved from a datastore built from relevant data. Not only do these models outperform the corresponding purely parametric language models, they also have important implications for memorization and generalization in language models.
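The interpolation idea can be sketched in a few lines. This is a minimal, illustrative NumPy version assuming L2 distance over context representations, a softmax over negative distances for neighbor weights, and an interpolation coefficient `lam`; the function name and array shapes are hypothetical, and a real system would use an approximate-nearest-neighbor index rather than a brute-force search.

```python
import numpy as np

def knn_lm_prob(p_lm, keys, values, query, k=3, lam=0.25):
    """Interpolate a parametric LM distribution with a kNN distribution.

    p_lm:   (V,) next-token probabilities from the parametric LM
    keys:   (N, d) datastore keys (stored context representations)
    values: (N,) datastore values (the token id that followed each context)
    query:  (d,) representation of the current context
    """
    # L2 distance from the query to every stored context (brute force)
    dists = np.linalg.norm(keys - query, axis=1)
    # indices of the k nearest neighbors
    nn = np.argsort(dists)[:k]
    # softmax over negative distances gives neighbor weights
    w = np.exp(-dists[nn])
    w /= w.sum()
    # aggregate neighbor weights onto their target tokens
    p_knn = np.zeros_like(p_lm)
    for idx, weight in zip(nn, w):
        p_knn[values[idx]] += weight
    # mix the non-parametric and parametric distributions
    return lam * p_knn + (1 - lam) * p_lm
```

Tokens that follow contexts similar to the current one get their probability boosted, which is why the approach can help without retraining the underlying model.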
Urvashi's webpage: https://urvashik.github.io
Papers discussed:
1) Generalization through memorization: Nearest Neighbor Language Models (https://www.semanticscholar.org/paper/7be8c119dbe065c52125ee7716601751f3116844)
2) Nearest Neighbor Machine Translation (https://www.semanticscholar.org/paper/20d51f8e449b59c7e140f7a7eec9ab4d4d6f80ea)