![Talking Machines artwork](https://is1-ssl.mzstatic.com/image/thumb/Podcasts113/v4/7e/9a/87/7e9a874c-ebd6-1951-377d-7448f1d0aa06/mza_1508067935786634013.jpeg/100x100bb.jpg)
Strong AI and Autoencoders
Talking Machines
English - September 10, 2015 17:00 - 36 minutes - 33 MB - ★★★★★ - 140 ratings
Previous Episode: Active Learning and Machine Learning in Neuroscience
Next Episode: Data from Video Games and The Master Algorithm
In episode nineteen we chat with Hugo Larochelle about his work on unsupervised learning, the International Conference on Learning Representations (ICLR), and his teaching style. His YouTube courses are not to be missed, and his Twitter feed @Hugo_Larochelle is a great source for paper reviews. Ryan introduces us to autoencoders (for more, turn to the work of Richard Zemel), and we tackle the question of what stands in the way of strong AI. Talking Machines is beginning development of season two! We need your help! Donate now on Kickstarter.
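For listeners curious what an autoencoder looks like in code: the sketch below is not from the episode, and every size, name, and hyperparameter in it is an illustrative assumption. It trains a linear autoencoder with a 2-unit bottleneck, by plain gradient descent, to reconstruct 5-dimensional data that actually lies in a 2-D subspace, so near-perfect reconstruction is achievable.

```python
import numpy as np

# Illustrative sketch only: a linear autoencoder with a 2-unit bottleneck.
# All shapes and hyperparameters here are assumptions, not from the episode.

rng = np.random.default_rng(0)

# Toy data: 200 points in R^5 that lie exactly in a 2-D subspace.
Z = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 5))
X = Z @ A

# Encoder maps R^5 -> R^2 (the "code"); decoder maps back to R^5.
W_enc = rng.normal(scale=0.5, size=(5, 2))
W_dec = rng.normal(scale=0.5, size=(2, 5))

lr = 0.02
for _ in range(5000):
    H = X @ W_enc            # codes, shape (200, 2)
    Y = H @ W_dec            # reconstructions, shape (200, 5)
    G = (Y - X) / len(X)     # gradient of 0.5 * mean squared error w.r.t. Y
    W_dec -= lr * H.T @ G
    W_enc -= lr * X.T @ (G @ W_dec.T)

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(f"reconstruction MSE: {mse:.6f}")
```

In the linear case the bottleneck forces the network to find the same subspace PCA would; deep autoencoders with nonlinearities, as discussed in the episode, learn richer codes but follow the same encode-compress-decode pattern.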
See omnystudio.com/listener for privacy information.
Hosted on Acast. See acast.com/privacy for more information.