Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
English - March 18, 2020 21:04 - 27 minutes - ★★★★★ - 323 ratings
Previous Episode: Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357
Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, how the BERT training process was adapted to incorporate additional visual information, and where this research leads from the perspective of integrating visual and language tasks.