
Video games have become a favourite tool for AI researchers to test the abilities of their systems. In this episode, Hannah sits down to play StarCraft II - a challenging video game that requires players to control the onscreen action with as many as 800 clicks a minute. She is guided by Oriol Vinyals, an ex-professional StarCraft player and research scientist at DeepMind, who explains how the program AlphaStar learnt to play the game and beat a top professional player. Elsewhere, she explores systems that are learning to cooperate in a digital version of the playground favourite ‘Capture the Flag’.

If you have a question or feedback on the series, message us on Twitter (@DeepMind, using the hashtag #DMpodcast) or email us at [email protected].

Further reading

The Economist: Why AI researchers like video games
DeepMind blogs: Capture the Flag and AlphaStar
Professional StarCraft II player MaNa gives his impressions of AlphaStar and DeepMind
OpenAI's work on Dota 2
The New York Times: DeepMind can now beat us at multiplayer games, too
Royal Society: Machine Learning resources
DeepMind: The Inside Story of AlphaStar
Andrej Karpathy: Deep Reinforcement Learning: Pong from Pixels

Interviewees: Research scientists Max Jaderberg and Raia Hadsell; lead researchers David Silver and Oriol Vinyals; and Director of Research Koray Kavukcuoglu.

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind
