We were delighted to be joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute! We chat about our understanding of artificial intelligence, human intelligence, and whether it's reasonable to expect that we'll be able to build sophisticated, human-like automated systems anytime soon.
Follow Melanie on Twitter @MelMitchell1 and check out her website: https://melaniemitchell.me/
We discuss:
- AI hype through the ages
- How do we know if machines understand?
- Winograd schemas and the "WinoGrande" challenge (see the example after this list)
- The importance of metaphor and analogies to intelligence
- The four fallacies in AI research:
  1. Narrow intelligence is on a continuum with general intelligence
  2. Easy things are easy and hard things are hard
  3. The lure of wishful mnemonics
  4. Intelligence is all in the brain
- Whether embodiment is necessary for true intelligence
- Douglas Hofstadter's views on AI
- Ray Kurzweil and the "singularity"
- The fact that Moore's law doesn't hold for software
- The difference between symbolic AI and machine learning
- What analogies have to teach us about human cognition
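For listeners unfamiliar with Winograd schemas, here's a minimal sketch using the classic trophy/suitcase pair. The data structure is just for illustration and isn't drawn from the WinoGrande dataset itself:

```python
# A Winograd schema: two sentences that differ by a single word,
# flipping which noun the pronoun "it" refers to. Answering
# correctly requires commonsense knowledge, not surface statistics.
schema = {
    "a": ("The trophy doesn't fit in the suitcase because it is too big.",
          "the trophy"),
    "b": ("The trophy doesn't fit in the suitcase because it is too small.",
          "the suitcase"),
}

for variant, (sentence, referent) in schema.items():
    print(f"{sentence}\n  -> 'it' refers to {referent}\n")
```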
Errata
- Ben mistakenly says that Eliezer Yudkowsky has bet that everyone will die by 2025. It's actually by 2030. You can find the details of the bet here: https://www.econlib.org/archives/2017/01/my_end-of-the-w.html.
References:
- NY Times reporting on Perceptrons (https://www.nytimes.com/1958/07/13/archives/electronic-brain-teaches-itself.html)
- The WinoGrande challenge paper (https://arxiv.org/abs/1907.10641)
- Why AI is harder than we think (https://arxiv.org/pdf/2104.12871.pdf)
- The Singularity is Near (https://smile.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889?sa-no-redirect=1), by Ray Kurzweil
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Check us out on YouTube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our discord server! DM us on Twitter or send us an email to get a supersecret link
Eliezer was more scared about AI than Douglas, so he wrote a blog post about it. Who wrote the blog post, Eliezer or Douglas? Tell us over at [email protected]. Special Guest: Melanie Mitchell.

Support Increments