![Machine Learning Street Talk (MLST) artwork](https://is4-ssl.mzstatic.com/image/thumb/Podcasts123/v4/93/38/03/933803c5-1b89-d74b-7186-27ed93801348/mza_5670221076398063789.jpg/100x100bb.jpg)
#101 DR. WALID SABA - Extrapolation, Vagueness and Behaviorism
Machine Learning Street Talk (MLST)
English - February 10, 2023 17:18 - 1 hour - 135 MB - Technology
MLST Discord! https://discord.gg/aNPkGUQtc5
Patreon: https://www.patreon.com/mlst
YT: https://youtu.be/snUf_LIfQII
We had a discussion with Dr. Walid Saba about whether MLP neural networks can extrapolate outside of the training support, and what it even means to extrapolate in a vector space. We then discussed the concept of vagueness in cognitive science: for example, what does it mean to be "rich", or what counts as a "pile of sand"? Finally, we discussed behaviorism and the "reward is enough" hypothesis.
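One way to see what is at stake in the extrapolation debate is the spline-theory view referenced below: a ReLU network is piecewise affine, so far outside its training support it can only continue linearly. The toy network here is hypothetical (fixed, untrained weights, invented for illustration), a minimal sketch of that geometric point rather than anything from the episode:

```python
# Minimal sketch (pure Python, hypothetical toy network): a one-hidden-layer
# ReLU network is piecewise affine, so beyond its last "knot" it can only
# continue with a constant slope -- the spline-theory view of why
# extrapolation differs from interpolation.

def relu(x):
    return max(0.0, x)

def mlp(x, params):
    """Tiny 1-D MLP: y = sum(w2 * relu(w1*x + b)) + c."""
    w1, b, w2, c = params
    return sum(v * relu(u * x + t) for u, t, v in zip(w1, b, w2)) + c

# Arbitrary fixed weights (no training needed for the point being made).
# The knots (where each ReLU switches on/off) all lie in [-0.5, 0.5].
params = ([1.0, -1.0, 2.0], [0.5, 0.5, -1.0], [0.7, -0.3, 0.4], 0.1)

# Inside the knot region the function bends from piece to piece ...
inside = [mlp(x, params) for x in (-0.6, -0.3, 0.0, 0.3, 0.6)]

# ... but far outside all knots the slope is constant: pure linear
# extrapolation, regardless of what the data looked like.
s1 = mlp(101.0, params) - mlp(100.0, params)
s2 = mlp(201.0, params) - mlp(200.0, params)
print(abs(s1 - s2) < 1e-9)  # → True
```

Whatever curve such a network fits inside the support, every query beyond the outermost knot is answered by the same affine piece.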
References:
A Spline Theory of Deep Networks [Balestriero]
https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf
The animation we showed of the spline theory was created by Ahmed Imtiaz Humayun (https://twitter.com/imtiazprio) and we will be showing an interview with Imtiaz and Randall very soon!
[00:00:00] Intro
[00:00:58] Interpolation vs Extrapolation
[00:24:38] Type 1 / Type 2 generalisation and compositionality / Fodor / systematicity
[00:32:18] Keith's brain teaser
[00:36:53] Neural turing machines / discrete vs continuous / learnability
[00:49:56] Part 2: Vagueness
[01:06:50] Concepts grounded in language
[01:19:58] Behaviorism / thinking / reward is enough