Multi-Armed Bandits
Linear Digressions
English - March 07, 2016 02:44 - 11 minutes - 15.8 MB - ★★★★★ - 350 ratings
Tags: Technology, data science, machine learning, linear digressions
Previous Episode: Experiments and Messy, Tricky Causality
Next Episode: Congress Bots and DeepDrumpf
Multi-armed bandits: how to take your randomized experiment and make it harder better faster stronger. Basically, a multi-armed bandit experiment allows you to optimize for both learning and making use of your knowledge at the same time. It's what the pros (like Google Analytics) use, and it's got a great name, so... winner!
Relevant link: https://support.google.com/analytics/answer/2844870?hl=en
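The "optimize for both learning and making use of your knowledge at the same time" idea is the classic exploration/exploitation trade-off. As a rough sketch (not the specific algorithm from the episode; Google Analytics' implementation is based on Thompson sampling, and the conversion rates below are invented for illustration), here is a minimal epsilon-greedy bandit: most of the time it sends traffic to the variant that looks best so far, and a small fraction of the time it explores the other arms to keep learning.

```python
import random

def epsilon_greedy_bandit(true_rates, epsilon=0.1, pulls=10_000, seed=0):
    """Simulate an epsilon-greedy multi-armed bandit.

    true_rates: hypothetical conversion rate of each arm (variant);
    these would be unknown in a real experiment. With probability
    epsilon we explore (pick a random arm); otherwise we exploit
    the arm with the best observed conversion rate so far.
    Returns the number of pulls each arm received.
    """
    rng = random.Random(seed)
    counts = [0] * len(true_rates)     # times each arm was pulled
    rewards = [0.0] * len(true_rates)  # total conversions per arm

    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))  # explore
        else:
            # exploit: highest observed mean; untried arms get priority
            arm = max(
                range(len(true_rates)),
                key=lambda a: rewards[a] / counts[a] if counts[a] else float("inf"),
            )
        # simulate a conversion from the chosen arm
        if rng.random() < true_rates[arm]:
            rewards[arm] += 1.0
        counts[arm] += 1

    return counts

# Three hypothetical page variants; the last one converts best.
counts = epsilon_greedy_bandit([0.05, 0.04, 0.08])
print(counts)
```

Unlike a fixed 50/50 A/B test, most of the simulated traffic ends up flowing to the winning arm while the experiment is still running, which is the "learning while earning" property the episode describes.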