In this episode, we're joined by Nick Bostrom, professor at the University of Oxford and head of the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regard to AI safety and ethics. In our conversation, we discuss the risks associated with artificial general intelligence, advanced AI systems Nick refers to as superintelligence, openness in AI development, and more! The notes for this episode can be found at https://twimlai.com/talk/18