Artificial Intelligence Podcast



Podcast Notes Key Takeaways:

- We will look back on GPT-4 as we do the early computers: buggy, slow, room for improvement, but nonetheless a monumental achievement

- Progress is a continuous exponential; very rarely is there "a single moment" where technology crosses the Rubicon

- OpenAI is building in public because they think it is important for the world to get access to this technology early and to shape the way it is going to be developed

- Better alignment techniques lead to better capabilities, and vice versa

- Ideally, the world comes together, discusses, and agrees on where it wants to draw the boundaries on GPT systems

- It is possible that GPT-4 is the most complex software object that humanity has produced, and it will be trivial in a couple of decades

- Maybe AGI is never achieved; maybe narrow AI just continues to make humans better

- There is some chance of an AGI destroying humanity, and it is important to acknowledge that chance, because if we do not discuss it and treat it as potentially real, then we will not put enough effort into solving it

- AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast

- It would be "crazy" not to be a little bit afraid of AGI

- Sam worries about the entities creating AGIs that do have incentives to capture unlimited value

- "I definitely grew up with Elon as a hero of mine. Despite him being a jerk on Twitter or whatever, I'm happy he exists in the world. But I wish he would do more to look at the hard work we're doing to get this stuff right." – Sam Altman

- These AI systems will make a lot of jobs go away, as is true for every technological revolution, and will also create many new types of jobs that we cannot yet imagine

- Over the next several decades, the two dominant changes will be the reduced cost of intelligence and energy

- "Listening to advice from other people should be approached with great caution." – Sam Altman

Read the full notes @ podcastnotes.org



Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies. Please support this podcast by checking out our sponsors:

- NetSuite: http://netsuite.com/lex to get a free product tour

- SimpliSafe: https://simplisafe.com/lex

- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free


EPISODE LINKS:

Sam's Twitter: https://twitter.com/sama

OpenAI's Twitter: https://twitter.com/OpenAI

OpenAI's Website: https://openai.com

GPT-4 Website: https://openai.com/research/gpt-4


PODCAST INFO:

Podcast website: https://lexfridman.com/podcast

Apple Podcasts: https://apple.co/2lwqZIr

Spotify: https://spoti.fi/2nEwCF8

RSS: https://lexfridman.com/feed/podcast/

YouTube Full Episodes: https://youtube.com/lexfridman

YouTube Clips: https://youtube.com/lexclips


SUPPORT & CONNECT:

- Check out the sponsors above; it's the best way to support this podcast

- Support on Patreon: https://www.patreon.com/lexfridman

- Twitter: https://twitter.com/lexfridman

- Instagram: https://www.instagram.com/lexfridman

- LinkedIn: https://www.linkedin.com/in/lexfridman

- Facebook: https://www.facebook.com/lexfridman

- Medium: https://medium.com/@lexfridman


OUTLINE:

Here are the timestamps for the episode. On some podcast players, you can click a timestamp to jump to that point in the episode.

(00:00) - Introduction

(08:41) - GPT-4

(20:06) - Political bias

(27:07) - AI safety

(47:47) - Neural network size

(51:40) - AGI

(1:13:09) - Fear

(1:15:18) - Competition

(1:17:38) - From non-profit to capped-profit

(1:20:58) - Power

(1:26:11) - Elon Musk

(1:34:37) - Political pressure

(1:52:51) - Truth and misinformation

(2:05:13) - Microsoft

(2:09:13) - SVB bank collapse

(2:14:04) - Anthropomorphism

(2:18:07) - Future applications

(2:21:59) - Advice for young people

(2:24:37) - Meaning of life
