MLST Discord! https://discord.gg/aNPkGUQtc5


Patreon: https://www.patreon.com/mlst


YT: https://youtu.be/snUf_LIfQII


We spoke with Dr. Walid Saba about whether MLP neural networks can extrapolate outside of their training support, and what it even means to extrapolate in a vector space. We then discussed the concept of vagueness in cognitive science: for example, what does it mean to be "rich", and what counts as a "pile of sand"? Finally, we discussed behaviourism and the "reward is enough" hypothesis.
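To make the interpolation/extrapolation question concrete: one formal definition discussed in the episode (due to Balestriero et al.) counts a test point as interpolation only if it lies in the convex hull of the training set. Below is a minimal sketch of that membership test as a feasibility linear program; the helper name `in_convex_hull` and the random data are illustrative assumptions, not code from any paper referenced here.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, X):
    """True iff x is a convex combination of the rows of X.

    Feasibility LP: find lam >= 0 with sum(lam) = 1 and X.T @ lam = x.
    Feasible => x interpolates the training set; infeasible => it is
    extrapolation under the convex-hull definition.
    """
    n = X.shape[0]
    A_eq = np.vstack([X.T, np.ones((1, n))])   # X.T @ lam = x and sum(lam) = 1
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * n, method="highs")
    return res.status == 0                     # 0 = feasible optimum found

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 50))          # 1000 samples in 50 dimensions
x_test = rng.normal(size=50)
# In high dimension a fresh sample almost never lands inside the hull,
# so by this definition the model is almost always "extrapolating".
print(in_convex_hull(x_test, X_train))         # almost surely False
```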


References:


A Spline Theory of Deep Learning [Balestriero & Baraniuk, ICML 2018]


https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf


The spline-theory animation we showed was created by Ahmed Imtiaz Humayun (https://twitter.com/imtiazprio); we will be publishing an interview with Imtiaz and Randall Balestriero very soon!
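The core claim of the spline theory is that a ReLU network computes a continuous piecewise-affine function: it partitions the input space into regions and applies a separate affine map on each one. Here is a small numerical sketch of that fact for a toy one-hidden-layer MLP on scalar inputs (an illustrative setup, not code from the paper): on a uniform grid, second finite differences vanish on the affine pieces and are nonzero only at the knots where a ReLU switches.

```python
import numpy as np

rng = np.random.default_rng(1)
w1, b1 = rng.normal(size=16), rng.normal(size=16)  # hidden layer, R -> R^16
w2, b2 = rng.normal(size=16), rng.normal()         # output layer, R^16 -> R

def mlp(xs):
    """One-hidden-layer ReLU MLP evaluated on a batch of scalar inputs."""
    h = np.maximum(np.outer(xs, w1) + b1, 0.0)     # (N, 16) ReLU activations
    return h @ w2 + b2                             # (N,) outputs

xs = np.linspace(-3.0, 3.0, 2001)                  # dense uniform grid
ys = mlp(xs)

# The second finite difference of an affine function on a uniform grid is 0,
# so nonzero values appear only at the knots where some ReLU flips state.
curv = np.abs(np.diff(ys, 2))
print("fraction of affine samples:", np.mean(curv < 1e-9))
print("knot samples detected:", int(np.sum(curv >= 1e-9)))  # ~1-2 per hidden unit
```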


[00:00:00] Intro


[00:00:58] Interpolation vs Extrapolation


[00:24:38] Type 1 vs Type 2 generalisation and compositionality / Fodor / systematicity


[00:32:18] Keith's brain teaser


[00:36:53] Neural Turing Machines / discrete vs continuous / learnability


[00:49:56] Part 2: Vagueness


[01:06:50] Concepts grounded in language


[01:19:58] Behaviourism / thinking / reward is enough
