AI is becoming a part of everything we do. From voice-activated smart homes to ad-targeting algorithms to increasingly smart cars, AI is woven into the fabric of daily life. But how do we make sure AI is built in a way that is user-friendly, unbiased, and ethically sound?

That's where user research comes in. 

Erin and JH chatted with Hana Nagel, a Service Designer at Element AI, about how she researches for AI, why inputs are just as important as outputs, and the ethics around improving AI through your data. 


Highlights

[2:53] Establishing the ethics around AI is a collaboration between private enterprise, governmental organizations, and the civic sector.

[4:53] The difficult part of researching for AI is assessing how people may feel about something they've never interacted with before.

[9:25] A big challenge for the AI industry as a whole: how comfortable are we with giving up our data in exchange for optimization?

[14:42] The system as a whole is responsible for AI outputs, not just the individuals who work on the AI.

[24:59] It is incredibly important to identify our own biases when building AI systems. This involves a lot of self-reflection to root out biases you may not know you have.

[32:42] In Hana's dream world, the work of creating and researching AI would be more widely shared among people with different expertise, to create something more reflective of many perspectives.


Mentioned in the episode

Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction by Madeleine Clare Elish