Interpreting complicated models is a hot topic. How can we trust and manage AI models that we can’t explain? In this episode, Janis Klaise, a data scientist with Seldon, joins us to talk about model interpretation and Seldon’s new open source project called Alibi. Janis also gives some of his thoughts on production ML/AI and how Seldon addresses related problems.
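For a flavour of the kind of explanations discussed in the episode, here is a minimal sketch using Alibi's anchor explainer on tabular data. The iris dataset and random forest classifier are placeholder choices of ours, and attribute names on the explanation object may differ between Alibi versions; see the Alibi docs for the current API.

```python
# A minimal sketch of Alibi's "anchor" explanations on tabular data.
# The dataset and model below are placeholder choices, not from the episode.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(n_estimators=50).fit(data.data, data.target)

# Alibi treats the model as a black box: it only needs a prediction function.
predict_fn = lambda x: clf.predict(x)

explainer = AnchorTabular(predict_fn, feature_names=data.feature_names)
explainer.fit(data.data)

# Which feature conditions "anchor" this prediction, i.e. keep it stable
# under perturbation of the remaining features?
explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)     # e.g. ['petal width (cm) <= 0.80']
print(explanation.precision)  # fraction of perturbed samples keeping the prediction
```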

Sponsors:

DigitalOcean – Check out DigitalOcean's Droplets with dedicated vCPU threads. Get started for free with a $50 credit. Learn more at do.co/changelog.
DataEngPodcast – A podcast about data engineering and modern data infrastructure.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.

Featuring:

Janis Klaise – Twitter, GitHub, LinkedIn
Chris Benson – Twitter, GitHub, LinkedIn, Website
Daniel Whitenack – Twitter, GitHub, Website

Show Notes:

Seldon
Seldon Core
Alibi

Books

“The Foundation Series” by Isaac Asimov
“Interpretable Machine Learning” by Christoph Molnar
