![The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) artwork](https://is1-ssl.mzstatic.com/image/thumb/Podcasts113/v4/39/58/c6/3958c6ce-86e4-3b80-bfb9-840e1dfd7e4b/mza_491361902049110775.png/100x100bb.jpg)
More Language, Less Labeling with Kate Saenko - #580
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
English - June 27, 2022 16:30 - 47 minutes - ★★★★★ - 323 ratings
Today we continue our CVPR series joined by Kate Saenko, an associate professor at Boston University and a consulting professor for the MIT-IBM Watson AI Lab. In our conversation with Kate, we explore her research in multimodal learning, which she spoke about at the Multimodal Learning and Applications Workshop, one of a whopping six workshops she spoke at. We discuss the emergence of multimodal learning, the current research frontier, and Kate’s thoughts on the inherent bias in LLMs and how to deal with it. We also talk through some of the challenges that come up when building out applications, including the cost of labeling, and some of the methods she’s had success with. Finally, we discuss Kate’s perspective on the monopolization of computing resources for “foundational” models, and her paper *Unsupervised Domain Generalization by Learning a Bridge Across Domains*.
The complete show notes for this episode can be found at twimlai.com/go/580