There’s a lot of doomsday hype around artificial intelligence in general, and the idea of so-called “killer robots” has been especially controversial. But when it comes to the ethics of these technologies, one can argue that robots actually could be more ethical than human operators. Humans can commit war crimes. They can deliberately kill innocent people or enemies who have surrendered. Humans get stressed and tired and bring any number of biases to the table. Robots, by contrast, just follow their code. Moreover, U.S. adversaries are deploying these technologies quickly, and the stakes are high if we don’t keep up. Rob and Jackie discuss these technologies—and the risks of sitting out the AI arms race—with Robert J. Marks, Distinguished Professor of Electrical and Computer Engineering at Baylor University and Director of the Walter Bradley Center for Natural and Artificial Intelligence.

Mentioned:

Robert J. Marks, The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI, e-book (Walter Bradley Center for Natural and Artificial Intelligence), https://mindmatters.ai/killer-robots/.

Forrest E. Morgan et al., Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World (RAND Corporation, 2020), https://www.rand.org/pubs/research_reports/RR3139-1.html.

The Defense Innovation Unit (DIU), https://www.diu.mil/.