
Machine Learning Street Talk (MLST)

156 episodes - English - Latest episode: 5 days ago

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).

Technology

Episodes

#103 - Prof. Edward Grefenstette - Language, Semantics, Philosophy

February 11, 2023 21:31 - 1 hour - 141 MB

Support us! https://www.patreon.com/mlst  MLST Discord: https://discord.gg/aNPkGUQtc5 YT: https://youtu.be/i9VPPmQn9HQ Edward Grefenstette is a Franco-American computer scientist who currently serves as Head of Machine Learning at Cohere and Honorary Professor at UCL. He has previously been a research scientist at Facebook AI Research and staff research scientist at DeepMind, and was also the CTO of Dark Blue Labs. Prior to his move to industry, Edward was a Fulford Junior Research Fellow...

#102 - Prof. MICHAEL LEVIN, Prof. IRINA RISH - Emergence, Intelligence, Transhumanism

February 11, 2023 01:45 - 55 minutes - 127 MB

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 YT: https://youtu.be/Vbi288CKgis Michael Levin is a Distinguished Professor in the Biology department at Tufts University, and the holder of the Vannevar Bush endowed Chair. He is the Director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. His research focuses on understanding the biophysical mechanisms of pattern regulation and harnessing endogenou...

#101 DR. WALID SABA - Extrapolation, Compositionality and Learnability

February 10, 2023 17:18 - 49 minutes - 67.6 MB

MLST Discord! https://discord.gg/aNPkGUQtc5 Patreon: https://www.patreon.com/mlst YT: https://youtu.be/snUf_LIfQII We had a discussion with Dr. Walid Saba about whether or not MLP neural networks can extrapolate outside of the training support, and what it means to extrapolate in a vector space. Then we discussed the concept of vagueness in cognitive science, for example, what does it mean to be "rich" or what is a "pile of sand"? Finally we discussed behaviourism and the reward is enough...

#101 DR. WALID SABA - Extrapolation, Vagueness and Behaviorism

February 10, 2023 17:18 - 1 hour - 135 MB

MLST Discord! https://discord.gg/aNPkGUQtc5 Patreon: https://www.patreon.com/mlst YT: https://youtu.be/snUf_LIfQII We had a discussion with Dr. Walid Saba about whether or not MLP neural networks can extrapolate outside of the training support, and what it means to extrapolate in a vector space. Then we discussed the concept of vagueness in cognitive science, for example, what does it mean to be "rich" or what is a "pile of sand"? Finally we discussed behaviourism and the reward is enough...

#100 Dr. PATRICK LEWIS (co:here) - Retrieval Augmented Generation

February 10, 2023 11:18 - 26 minutes - 48.5 MB

Dr. Patrick Lewis is a London-based AI and Natural Language Processing Research Scientist, working at co:here. Prior to this, Patrick worked as a research scientist at the Fundamental AI Research Lab (FAIR) at Meta AI. During his PhD, Patrick split his time between FAIR and University College London, working with Sebastian Riedel and Pontus Stenetorp.  Patrick’s research focuses on the intersection of information retrieval techniques (IR) and large language models (LLMs). He has done extens...

#99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism

February 05, 2023 20:53 - 1 hour - 137 MB

YT version (with references): https://www.youtube.com/watch?v=lxaTinmKxs0 Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power and control with known solutions, and that threat models should be driven by empirical work. The interaction between FTX and the Effective Altruism community has sparked a lot of discussion about the dangers of optimization, a...

[NO MUSIC] #98 - Prof. LUCIANO FLORIDI - ChatGPT, Singularitarians, Ethics, Philosophy of Information

February 03, 2023 11:08 - 1 hour - 152 MB

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 YT version: https://youtu.be/YLNGvvgq3eg We are living in an age of rapid technological advancement, and with this growth comes a digital divide. Professor Luciano Floridi of the Oxford Internet Institute / Oxford University believes that this divide not only affects our understanding of the implications of this new age, but also the organization of a fair society.  The Information Revolution has been tr...

#98 - Prof. LUCIANO FLORIDI - ChatGPT, Superintelligence, Ethics, Philosophy of Information

February 03, 2023 02:26 - 1 hour - 153 MB

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 YT version: https://youtu.be/YLNGvvgq3eg (If the music is annoying, skip to the main interview at 14:14) We are living in an age of rapid technological advancement, and with this growth comes a digital divide. Professor Luciano Floridi of the Oxford Internet Institute / Oxford University believes that this divide not only affects our understanding of the implications of this new age, but also the organization of a f...

#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

January 28, 2023 18:35 - 24 minutes - 57.2 MB

Research has shown that humans possess strong inductive biases which enable them to quickly learn and generalize. In order to instill these same useful human inductive biases into machines, Sreejan Kumar presented a paper at the NeurIPS conference which won an Outstanding Paper award. The paper is called Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines. This paper focuses on using a controlled stimulus space of two-dimensional...

#96 Prof. PEDRO DOMINGOS - There are no infinities, utility functions, neurosymbolic

December 30, 2022 12:18 - 2 hours - 232 MB

Pedro Domingos, Professor Emeritus of Computer Science and Engineering at the University of Washington, is renowned for his research in machine learning, particularly for his work on Markov logic networks that allow for uncertain inference. He is also the author of the acclaimed book "The Master Algorithm". Panel: Dr. Tim Scarfe TOC: [00:00:00] Introduction [00:01:34] Galactica / misinformation / gatekeeping [00:12:31] Is there a master algorithm? [00:16:29] Limits of our understanding...

#95 - Prof. IRINA RISH - AGI, Complex Systems, Transhumanism

December 26, 2022 19:29 - 39 minutes - 89.7 MB

Irina Rish is the Canada Excellence Research Chair in Autonomous AI. She holds an MSc and PhD in AI from the University of California, Irvine, as well as an MSc in Applied Mathematics from the Moscow Gubkin Institute. Her research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. In particular, she is exploring continual lifelong learning, optimization algorithms for deep neural networks, sparse modelling and probabilistic inference, dialog generation, biologically plausible ...

#94 - ALAN CHAN - AI Alignment and Governance #NEURIPS

December 26, 2022 13:39 - 13 minutes - 31.1 MB

Support us! https://www.patreon.com/mlst Alan Chan is a PhD student at Mila, the Montreal Institute for Learning Algorithms, supervised by Nicolas Le Roux. Before joining Mila, Alan was a Masters student at the Alberta Machine Intelligence Institute and the University of Alberta, where he worked with Martha White. Alan's expertise and research interests encompass value alignment and AI governance. He is currently exploring the measurement of harms from language models and the incentives tha...

#93 Prof. MURRAY SHANAHAN - Consciousness, Embodiment, Language Models

December 24, 2022 18:26 - 1 hour - 110 MB

Support us! https://www.patreon.com/mlst Professor Murray Shanahan is a renowned researcher on sophisticated cognition and its implications for artificial intelligence. His 2016 article ‘Conscious Exotica’ explores the Space of Possible Minds, a concept first proposed by philosopher Aaron Sloman in 1984, which includes all the different forms of minds from those of other animals to those of artificial intelligence. Shanahan rejects the idea of an impenetrable realm of subjective experience ...

#92 - SARA HOOKER - Fairness, Interpretability, Language Models

December 23, 2022 01:32 - 51 minutes - 118 MB

Support us! https://www.patreon.com/mlst Sara Hooker is an exceptionally talented and accomplished leader and research scientist in the field of machine learning. She is the founder of Cohere For AI, a non-profit research lab that seeks to solve complex machine learning problems. She is passionate about creating more points of entry into machine learning research and has dedicated her efforts to understanding how progress in this field can be translated into reliable and accessible machine ...

#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS

December 20, 2022 17:04 - 21 minutes - 48.6 MB

Support us! https://www.patreon.com/mlst Hattie Zhou, a PhD student at Université de Montréal and Mila, has set out to understand and explain the performance of modern neural networks, believing it a key factor in building better, more trusted models. Having previously worked as a data scientist at Uber, a private equity analyst at Radar Capital, and an economic consultant at Cornerstone Research, she has recently released a paper in collaboration with the Google Brain team, titled ‘Teachin...

(Music Removed) #90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]

December 19, 2022 11:10 - 53 minutes - 123 MB

Support us! https://www.patreon.com/mlst (On the main version we released, the music was a tiny bit too loud in places, and some pieces had percussion which was a bit distracting -- here is a version with all music removed so you have the option!) David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well...

#90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]

December 19, 2022 01:23 - 53 minutes - 123 MB

Support us! https://www.patreon.com/mlst David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates ar...

#88 Dr. WALID SABA - Why machines will never rule the world [UNPLUGGED]

December 16, 2022 02:23 - 1 hour - 113 MB

Support us! https://www.patreon.com/mlst Dr. Walid Saba recently reviewed the book Machines Will Never Rule The World, which argues that strong AI is impossible. He acknowledges the complexity of modeling mental processes and language, as well as interactive dialogues, and questions the authors' use of "never." Despite his skepticism, he is impressed with recent developments in large language models, though he questions the extent of their success. We then discussed the successes of cognit...

#86 - Prof. YANN LECUN and Dr. RANDALL BALESTRIERO - SSL, Data Augmentation, Reward isn't enough [NEURIPS2022]

December 11, 2022 00:41 - 30 minutes - 69.8 MB

Yann LeCun is a French computer scientist known for his pioneering work on convolutional neural networks, optical character recognition and computer vision. He is a Silver Professor at New York University and Vice President, Chief AI Scientist at Meta. Along with Yoshua Bengio and Geoffrey Hinton, he was awarded the 2018 Turing Award for their work on deep learning, earning them the nickname of the "Godfathers of Deep Learning". Dr. Randall Balestriero has been researching learnable signal ...

#85 Dr. Petar Veličković (Deepmind) - Categories, Graphs, Reasoning [NEURIPS22 UNPLUGGED]

December 08, 2022 23:45 - 36 minutes - 34.2 MB

Dr. Petar Veličković is a Staff Research Scientist at DeepMind who has firmly established himself as one of the most significant up-and-coming researchers in the deep learning space. He invented Graph Attention Networks in 2017 and has been a leading light in the field ever since, pioneering research in Graph Neural Networks, Geometric Deep Learning and Neural Algorithmic Reasoning. If you haven’t already, you should check out our video on the Geometric Deep Learning blueprint, featurin...

#84 LAURA RUIS - Large language models are not zero-shot communicators [NEURIPS UNPLUGGED]

December 06, 2022 17:36 - 27 minutes - 63.6 MB

In this NeurIPS interview, we speak with Laura Ruis about her research on the ability of language models to interpret language in context. She has designed a simple task to evaluate the performance of widely used state-of-the-art language models and has found that they struggle to make pragmatic inferences (implicatures). Tune in to learn more about her findings and what they mean for the future of conversational AI. Laura Ruis https://www.lauraruis.com/ https://twitter.com/LauraRuis BL...

#83 Dr. ANDREW LAMPINEN (Deepmind) - Natural Language, Symbols and Grounding [NEURIPS2022 UNPLUGGED]

December 04, 2022 07:51 - 20 minutes - 47.2 MB

First in our unplugged series, live from #NeurIPS2022. We discuss natural language understanding, symbol meaning and grounding, and Chomsky with Dr. Andrew Lampinen from DeepMind. We recorded a LOT of material from NeurIPS; keep an eye out for the uploads. YT version: https://youtu.be/46A-BcBbMnA References [Paul Cisek] Beyond the computer metaphor: Behaviour as interaction https://philpapers.org/rec/CISBTC Linguistic Competence (Chomsky reference) https://en.wikipedia.org/wiki/Lingui...

#82 - Dr. JOSCHA BACH - Digital Physics, DL and Consciousness [UNPLUGGED]

November 27, 2022 20:31 - 1 hour - 103 MB

AI Helps Ukraine - Charity Conference: a charity conference on AI to raise funds for medical and humanitarian aid for Ukraine. https://aihelpsukraine.cc/ YT version: https://youtu.be/LgwjcqhkOA4 Support us! https://www.patreon.com/mlst Dr. Joscha Bach (born 1973 in Weimar, Germany) is a German artificial intelligence researcher and cognitive scientist focusing on cognitive architectures, mental representation, emotion, social modelling, and multi-agent systems. http://bach.ai/ https:...

#81 JULIAN TOGELIUS, Prof. KEN STANLEY - AGI, Games, Diversity & Creativity [UNPLUGGED]

November 20, 2022 04:05 - 1 hour - 95.8 MB

Support us (and please rate on podcast app) https://www.patreon.com/mlst  In this show tonight with Prof. Julian Togelius (NYU) and Prof. Ken Stanley we discuss open-endedness, AGI, game AI and reinforcement learning.   [Prof Julian Togelius] https://engineering.nyu.edu/faculty/julian-togelius https://twitter.com/togelius [Prof Ken Stanley] https://www.cs.ucf.edu/~kstanley/ https://twitter.com/kenneth0stanley TOC: [00:00:00] Introduction [00:01:07] AI and computer games [00:12:23...

#80 AIDAN GOMEZ [CEO Cohere] - Language as Software

November 15, 2022 01:03 - 51 minutes - 71.2 MB

We had a conversation with Aidan Gomez, the CEO of language-based AI platform Cohere. Cohere is a startup which uses artificial intelligence to help users build the next generation of language-based applications. It's headquartered in Toronto. The company has raised $175 million in funding so far. Language may well become a key new substrate for software building, both in its representation and how we build the software. It may democratise software building so that more people can build sof...

#79 Consciousness and the Chinese Room [Special Edition] (CHOLLET, BISHOP, CHALMERS, BACH)

November 08, 2022 19:44 - 2 hours - 178 MB

This video is demonetised on music copyright so we would appreciate support on our Patreon! https://www.patreon.com/mlst  We would also appreciate it if you rated us on your podcast platform.  YT: https://youtu.be/_KVAzAzO5HU Panel: Dr. Tim Scarfe, Dr. Keith Duggar Guests: Prof. J. Mark Bishop, Francois Chollet, Prof. David Chalmers, Dr. Joscha Bach, Prof. Karl Friston, Alexander Mattick, Sam Roffey The Chinese Room Argument was first proposed by philosopher John Searle in 1980. It is a...

MLST #78 - Prof. NOAM CHOMSKY (Special Edition)

July 08, 2022 22:16 - 3 hours - 199 MB

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB In this special edition episode, we have a conversation with Prof. Noam Chomsky, the father of modern linguistics and the most important intellectual of the 20th century, whose career spans the better part of a century. We took the chance to ask him his thoughts not only on the progress of linguistics and cognitive science but also on the deepest enduring mysteries of science and philosophy as a whole...

#77 - Vitaliy Chiley (Cerebras)

June 16, 2022 14:27 - 1 hour - 92.8 MB

Vitaliy Chiley is a Machine Learning Research Engineer at the next-generation computing hardware company Cerebras Systems. We spoke about how DL workloads, including sparse workloads, can run faster on Cerebras hardware. [00:00:00] Housekeeping [00:01:08] Preamble [00:01:50] Vitaliy Chiley Introduction [00:03:11] Cerebras architecture [00:08:12] Memory management and FLOP utilisation [00:18:01] Centralised vs decentralised compute architecture [00:21:12] Sparsity [00:23:47] Does Spars...

#76 - LUKAS BIEWALD (Weights and Biases CEO)

June 09, 2022 00:02 - 57 minutes - 79.1 MB

Check out Weights and Biases here! https://wandb.me/MLST Lukas Biewald is an entrepreneur living in San Francisco. He was the founder and CEO of Figure Eight, an Internet company that collects training data for machine learning. In 2018, he founded Weights and Biases, a company that creates developer tools for machine learning. Recently WandB got a cash injection of 15 million dollars in its second funding round. Lukas has a bachelor's and master's in mathematics and computer science resp...

#75 - Emergence [Special Edition] with Dr. DANIELE GRATTAROLA

April 29, 2022 12:17 - 1 hour - 159 MB

An emergent behavior or emergent property can appear when a number of simple entities operate in an environment, forming more complex behaviours as a collective. If emergence happens over disparate size scales, then the reason is usually a causal relation across different scales. Weak emergence describes new properties arising in systems as a result of low-level interactions; these might be interactions between components of the system or between the components and their environment. In our ep...

#74 Dr. ANDREW LAMPINEN - Symbolic behaviour in AI [UNPLUGGED]

April 14, 2022 17:20 - 1 hour - 90.1 MB

Please note that in this interview Dr. Lampinen was expressing his personal opinions and they do not necessarily represent those of DeepMind.  Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB YT version: https://youtu.be/yPMtSXXn4OY  Dr. Andrew Lampinen is a Senior Research Scientist at DeepMind, and he thinks that symbols are subjective in the relativistic sense. Dr. Lampinen completed his PhD in Cognitive Psychology at Stanford University. His background is ...

#73 - YASAMAN RAZEGHI & Prof. SAMEER SINGH - NLP benchmarks

April 07, 2022 11:56 - 55 minutes - 76.8 MB

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB YT version: https://youtu.be/RzGaI7vXrkk This week we speak with Yasaman Razeghi and Prof. Sameer Singh from UC Irvine. Yasaman recently published a paper called Impact of Pretraining Term Frequencies on Few-Shot Reasoning where she demonstrated comprehensively that large language models only perform well on reasoning tasks because they memorise the dataset. For the first time she showed the accuracy was linearly...

#72 Prof. KEN STANLEY 2.0 - On Art and Subjectivity [UNPLUGGED]

March 29, 2022 21:31 - 1 hour - 116 MB

YT version: https://youtu.be/DxBZORM9F-8 Patreon: https://www.patreon.com/mlst  Discord: https://discord.gg/ESrGqhf5CB Prof. Ken Stanley argued in his book that our world has become saturated with objectives. The process of setting an objective, attempting to achieve it, and measuring progress along the way has become the primary route to achievement in our culture. He’s not saying that objectives are bad per se, especially if they’re modest, but he thinks that when goals are ambitious th...

#71 - ZAK JOST (Graph Neural Networks + Geometric DL) [UNPLUGGED]

March 25, 2022 18:10 - 1 hour - 86 MB

Special discount link for Zak's GNN course - https://bit.ly/3uqmYVq Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB YT version: https://youtu.be/jAGIuobLp60 (there are lots of helper graphics there, recommended if possible) Want to sponsor MLST!? Let us know on Linkedin / Twitter.  [00:00:00] Preamble [00:03:12] Geometric deep learning [00:10:04] Message passing [00:20:42] Top down vs bottom up [00:24:59] All NN architectures are different forms of informati...

#70 - LETITIA PARCALABESCU - Symbolics, Linguistics [UNPLUGGED]

March 19, 2022 14:24 - 1 hour - 108 MB

Today we are having a discussion with Letitia Parcalabescu from the AI Coffee Break YouTube channel! We discuss linguistics, symbolic AI and our respective YouTube channels. Make sure you subscribe to her channel! In the first 15 minutes Tim dissects the recent article from Gary Marcus, "Deep Learning Is Hitting a Wall". Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB YT: https://youtu.be/p2D2duT-R2E [00:00:00] Comments on Gary Marcus Article / Symbolic AI [00:...

#69 DR. THOMAS LUX - Interpolation of Sparse High-Dimensional Data

March 12, 2022 14:13 - 50 minutes - 69.6 MB

Today we are speaking with Dr. Thomas Lux, a research scientist at Meta in Silicon Valley. In some sense, all of supervised machine learning can be framed through the lens of geometry. All training data exists as points in Euclidean space, and we want to predict the value of a function at all those points. Neural networks appear to be the method of choice these days for many domains of prediction. In that light, we might ask ourselves: what makes neural networks better than classical techni...
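The geometric framing above (training data as points in Euclidean space, prediction as evaluating a function at new points) can be illustrated with a minimal nearest-neighbour sketch. This is our own toy example, not code from the episode; the data and function are made up:

```python
import math

def nearest_neighbor_predict(train_points, train_values, query):
    """Predict f(query) as the value stored at the closest training
    point under Euclidean distance: the simplest classical predictor
    over points in R^n."""
    best = min(range(len(train_points)),
               key=lambda i: math.dist(train_points[i], query))
    return train_values[best]

# Toy data: f(x, y) = x + y sampled at the corners of the unit square.
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
vals = [0.0, 1.0, 1.0, 2.0]
print(nearest_neighbor_predict(pts, vals, (0.9, 0.8)))  # nearest corner is (1, 1) -> 2.0
```

A neural network plays the same role here, but fits a smooth function over the whole space rather than snapping to the closest sample.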

#68 DR. WALID SABA 2.0 - Natural Language Understanding [UNPLUGGED]

March 07, 2022 13:25 - 1 hour - 94.6 MB

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/HNnAwSduud YT version: https://youtu.be/pMtk-iUaEuQ Dr. Walid Saba is an old-school polymath. He has a background in cognitive psychology, linguistics, philosophy, computer science and logic, and he is now a Senior Scientist at Sorcero. Walid is perhaps the most outspoken critic of BERTOLOGY, which is to say trying to solve the problem of natural language understanding with application of large statistical language model...

#67 Prof. KARL FRISTON 2.0

March 02, 2022 10:01 - 1 hour - 140 MB

We engage in a bit of epistemic foraging with Prof. Karl Friston! In this show, we discuss the free energy principle in detail, as well as emergence, cognition, consciousness and Karl's burden of knowledge! YT: https://youtu.be/xKQ-F2-o8uM Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/HNnAwSduud [00:00:00] Introduction to FEP/Friston [00:06:53] Cheers to Epistemic Foraging! [00:09:17] The Burden of Knowledge Across Disciplines [00:12:55] On-show introduction to Friston ...

#66 ALEXANDER MATTICK - [Unplugged / Community Edition]

February 28, 2022 08:09 - 50 minutes - 69.4 MB

We have a chat with Alexander Mattick aka ZickZack from Yannic's Discord community. Alex is one of the leading voices in that community and has an impressive technical depth. Don't forget MLST has now started its own Discord server too, come and join us! We are going to run regular events; our first big event is on Wednesday 9th, 1700-1900 UK time. Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/HNnAwSduud YT version: https://youtu.be/rGOOLC8cIO4 [00:00:00] Introduction t...

#65 Prof. PEDRO DOMINGOS [Unplugged]

February 26, 2022 00:27 - 1 hour - 121 MB

Note: there are no politics discussed in this show and please do not interpret this show as any kind of a political statement from us.  We have decided not to discuss politics on MLST anymore due to its divisive nature.  Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/HNnAwSduud [00:00:00] Intro [00:01:36] What we all need to understand about machine learning [00:06:05] The Master Algorithm Target Audience [00:09:50] Deeply Connected Algorithms seen from Divergent Fra...

#64 Prof. Gary Marcus 3.0

February 24, 2022 15:44 - 51 minutes - 71.1 MB

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/HNnAwSduud YT: https://www.youtube.com/watch?v=ZDY2nhkPZxw We have a chat with Prof. Gary Marcus about everything which is currently top of mind for him, including consciousness. [00:00:00] Gary intro [00:01:25] Slightly conscious [00:24:59] Abstract, compositional models [00:32:46] Spline theory of NNs [00:36:17] Self driving cars / algebraic reasoning  [00:39:43] Extrapolation [00:44:15] Scaling laws [00:49:50] Maximum like...

#063 - Prof. YOSHUA BENGIO - GFlowNets, Consciousness & Causality

February 22, 2022 00:07 - 1 hour - 128 MB

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST Patreon: https://www.patreon.com/mlst For Yoshua Bengio, GFlowNets are the most exciting thing on the horizon of Machine Learning today. He believes they can solve previously intractable problems and hold the key to unlocking machine abstract reasoning itself. This discussion explores the promise of GFlowNets and the personal journey Prof. Bengio traveled to reach them. Panel: Dr. Tim Scarfe ...

#062 - Dr. Guy Emerson - Linguistics, Distributional Semantics

February 03, 2022 12:41 - 1 hour - 164 MB

Dr. Guy Emerson is a computational linguist who obtained his Ph.D. from Cambridge University, where he is now a research fellow and lecturer. On the panel we also have myself, Dr. Tim Scarfe, as well as Dr. Keith Duggar and the venerable Dr. Walid Saba. We dive into distributional semantics, probability theory, fuzzy logic, grounding, vagueness and the grammar/cognition connection. The aim of distributional semantics is to design computational techniques that can automatically learn the meanings ...

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

January 04, 2022 12:59 - 3 hours - 183 MB

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST Patreon: https://www.patreon.com/mlst Yann LeCun thinks that it's specious to say neural network models are interpolating because in high dimensions, everything is extrapolation. Recently Dr. Randall Balestriero, Dr. Jerome Pesenti and Prof. Yann LeCun released their paper "Learning in High Dimension Always Amounts to Extrapolation". This discussion has completely changed how we think about neura...
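The paper's notion of interpolation (a new sample lying inside the convex hull of the training set) can be probed cheaply: the hull sits inside the axis-aligned bounding box of the data, so a point outside that box is certainly extrapolation. A small sketch of our own (illustrative, not code from the paper) shows that even this generous box criterion almost never holds as dimension grows:

```python
import random

def fraction_inside_bounding_box(dim, n_train=100, n_test=1000, seed=0):
    """Fraction of fresh uniform test points in [0, 1]^dim that fall
    inside the axis-aligned bounding box of a uniform training set.
    Since the convex hull is contained in this box, the result upper-
    bounds how often a new point is an interpolation in the hull sense."""
    rng = random.Random(seed)
    train = [[rng.random() for _ in range(dim)] for _ in range(n_train)]
    lo = [min(p[d] for p in train) for d in range(dim)]
    hi = [max(p[d] for p in train) for d in range(dim)]
    inside = 0
    for _ in range(n_test):
        q = [rng.random() for _ in range(dim)]
        if all(lo[d] <= q[d] <= hi[d] for d in range(dim)):
            inside += 1
    return inside / n_test

print(fraction_inside_bounding_box(2))    # most 2-D test points land inside
print(fraction_inside_bounding_box(500))  # almost none do in 500-D
```

Per coordinate, a fresh point lands between the min and max of 100 samples with probability 99/101, and (99/101)^500 is essentially zero, which is the intuition behind "everything is extrapolation" in high dimension.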

#60 Geometric Deep Learning Blueprint (Special Edition)

September 19, 2021 01:29 - 3 hours - 196 MB

Patreon: https://www.patreon.com/mlst The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation o...

#59 - Jeff Hawkins (Thousand Brains Theory)

September 03, 2021 18:09 - 2 hours - 213 MB

Patreon: https://www.patreon.com/mlst The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity’s greatest challenges. Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build a model of reality based on thousands of information streams originating from the sensors in our ...

#58 Dr. Ben Goertzel - Artificial General Intelligence

August 11, 2021 14:05 - 2 hours - 137 MB

The field of Artificial Intelligence was founded in the mid 1950s with the aim of constructing “thinking machines” - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look human but act and think with intelligence equal to and ultimately greater than that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots. Dr. Ben Goertzel is an artificial intelligence researcher, CEO and found...

#57 - Prof. Melanie Mitchell - Why AI is harder than we think

July 25, 2021 15:40 - 2 hours - 139 MB

Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.  Professor Melanie Mitchell th...

#56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

July 08, 2021 21:31 - 1 hour - 131 MB

It has been over three decades since the statistical revolution took AI by storm and over two decades since deep learning (DL) helped usher in the latest resurgence of artificial intelligence (AI). However, the disappointing progress in conversational agents, NLU, and self-driving cars has made it clear that progress has not lived up to the promise of these empirical and data-driven methods. DARPA has suggested that it is time for a third wave in AI, one that would be characterized by h...

#55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR).

June 21, 2021 01:21 - 1 hour - 88.3 MB

Dr. Ishan Misra is a Research Scientist at Facebook AI Research, where he works on Computer Vision and Machine Learning. His main research interest is reducing the need for human supervision, and indeed human knowledge, in visual learning systems. He finished his PhD at the Robotics Institute at Carnegie Mellon. He has done stints at Microsoft Research, INRIA and Yale. He earned his bachelor's in computer science, achieving the highest GPA in his cohort. Ishan is fast becoming a prolific sci...

Twitter Mentions

@mlstreettalk 18 Episodes
@npcollapse 6 Episodes
@kenneth0stanley 3 Episodes
@davidchalmers42 2 Episodes
@imtiazprio 2 Episodes
@irinarish 2 Episodes
@wielandbr 1 Episode
@seesharp 1 Episode
@aravsrinivas 1 Episode
@sarahookr 1 Episode
@markusnrabe 1 Episode
@msalvaris 1 Episode
@chai_research 1 Episode
@marksaroufim 1 Episode
@drmichaellevin 1 Episode
@mpshanahan 1 Episode
@schmidhuberai 1 Episode
@bertdv0 1 Episode
@riddhijp 1 Episode
@ecsquendor 1 Episode