Adversarial Examples, Protein Folding, and Shapley Values
Journal Club
English - April 22, 2020 23:07 - 45 minutes - 52.6 MB
George dives into his blog post experimenting with Scott Lundberg's SHAP library. By training an XGBoost model on a dataset about academic attainment and alcohol consumption, can we develop a global interpretation of the underlying relationships?
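SHAP attributions are grounded in the Shapley value from cooperative game theory. As background, here is a minimal stdlib-only sketch of the exact Shapley computation for a toy value function (the toy function and feature names are illustrative assumptions, not the SHAP library's optimized `TreeExplainer`, which avoids this exponential enumeration for tree models):

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Exact Shapley values: for each player i, average the marginal
    contribution v(S ∪ {i}) - v(S) over all subsets S, weighted by
    |S|! (n - |S| - 1)! / n!."""
    n = len(players)
    phi = {}
    for i in players:
        rest = [p for p in players if p != i]
        total = 0.0
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                S = frozenset(S)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy value function (hypothetical): feature "a" contributes 2, "b"
# contributes 1, plus an interaction bonus of 1 when both are present.
def v(S):
    val = 0.0
    if "a" in S:
        val += 2.0
    if "b" in S:
        val += 1.0
    if "a" in S and "b" in S:
        val += 1.0
    return val

phi = shapley(["a", "b"], v)
# Efficiency property: the attributions sum to v(all) - v(empty) = 4,
# with the interaction split evenly: phi["a"] = 2.5, phi["b"] = 1.5.
```

The cost grows as 2^n in the number of features, which is why practical tools like SHAP rely on model-specific shortcuts or sampling.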
Lan leads the discussion of the paper Adversarial Examples Are Not Bugs, They Are Features by Ilyas and colleagues. This paper proposes a new perspective on the adversarial susceptibility of machine learning models by teasing apart the 'robust' and the 'non-robust' features in a dataset. The authors summarize the key takeaway message as "Adversarial vulnerability is a direct result of the models' sensitivity to well-generalizing, 'non-robust' features in the data."
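For readers new to the topic, the vulnerability the paper analyzes can be illustrated with a minimal FGSM-style sketch on a hypothetical linear classifier (the weights and inputs below are made up for illustration; the paper itself works with deep networks on image datasets):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "trained" logistic classifier: weights chosen by hand.
w = [1.5, -2.0, 0.5]
b = 0.1

def predict(x):
    # Probability of the positive class.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(z):
    return (z > 0) - (z < 0)

def fgsm(x, y, eps):
    # For logistic loss, d(loss)/dx_i = (p - y) * w_i; the fast gradient
    # sign method perturbs each input coordinate by eps along that sign.
    p = predict(x)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

x = [0.4, -0.2, 0.3]          # confidently classified positive
x_adv = fgsm(x, y=1, eps=0.5)  # small per-coordinate perturbation
# predict(x) > 0.5, but predict(x_adv) < 0.5: the label flips.
```

The point is that a perturbation of bounded size per coordinate, aligned with the model's own gradient, suffices to flip the prediction; the paper's contribution is in attributing this to useful but non-robust features rather than to a modeling bug.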
Kyle discusses AlphaFold, DeepMind's protein structure prediction system.