![Bold Conjectures with Paras Chopra artwork](https://is4-ssl.mzstatic.com/image/thumb/Podcasts114/v4/3a/a9/ec/3aa9ec08-60fd-bbce-6fd3-4387e846b18c/mza_14520400395107988047.jpg/100x100bb.jpg)
#5 Connor Leahy - Artificial general intelligence is risky by default
Bold Conjectures with Paras Chopra
English - February 17, 2021 09:00 - 1 hour - 53.5 MB - ★★★★★ - 5 ratings
Tags: science, physics, consciousness, reality, economy, psychology, evolution
Should we worry about AI?
Connor Leahy is an AI researcher at EleutherAI, a grassroots collective of open-source AI researchers. Their current ambitious project is GPT-Neo, an effort to replicate the currently closed-access GPT-3 and make it available to everyone.
Connor is deeply interested in the dangers posed by AI systems that don’t share human values and goals. I talked to Connor about AI misalignment and why it poses a potential existential risk for humanity.
What we talk about
0:05 – Introductions
2:55 – AI risk is obvious once you understand it
3:40 – AI risk as a principal-agent problem
4:33 – Intelligence is a double-edged sword
7:52 – How would you define the alignment problem of AI?
9:10 – Orthogonality of intelligence and values
10:15 – Human values are complex
11:15 – AI alignment problem
11:30 – The alignment problem: how do you control a strong system using a weak system?
12:42 – Corporations are proto-AGI
14:32 – Collateral benefits of AI safety research
16:25 – Why is solving this problem urgent?
21:32 – We’re exponentially increasing AI model capacity
23:55 – Superintelligent AI as the LEAST surprising outcome
25:20 – Who will fund building a superintelligence?
26:28 – Goodhart’s law
29:19 – Definition of intelligence
33:00 – Unsolvable problems and superintelligence
34:35 – Upper limit of damage caused by superintelligence
38:25 – What if superintelligence has already arrived?
41:40 – Why can't we power off a superintelligence if it gets out of hand?
45:25 – Industry and academia are doing a terrible job at AI safety
51:25 – Should governments be regulating AI research?
55:55 – Should we shut down or slow AI research?
57:10 – Solutions for AGI safety
1:05:10 – The best case scenario
1:06:55 – Practical implementations of AI safety
1:12:00 – We can’t agree with each other on values, so how will an AGI agree with us?
1:14:00 – What is EleutherAI?