George dives into his blog post experimenting with Scott Lundberg's SHAP library. By training an XGBoost model on a dataset about academic attainment and alcohol consumption, can we develop a global interpretation of the underlying relationships? (A minimal code sketch of this workflow appears at the end of these notes.)

Lan leads the discussion of the paper Adversarial Examples Are Not Bugs, They Are Features by Ilyas and colleagues. The paper proposes a new perspective on the adversarial susceptibility of machine learning models by teasing apart the 'robust' and 'non-robust' features in a dataset. The authors summarize the key takeaway as: "Adversarial vulnerability is a direct result of the models’ sensitivity to well-generalizing, ‘non-robust’ features in the data." (A small sketch illustrating adversarial examples also follows below.)

Last but not least, Kyle discusses AlphaFold!
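For anyone who wants to follow along with George's experiment, here is a minimal sketch of the SHAP-on-XGBoost workflow. The file name student.csv, the target column final_grade, and the regression setup are placeholder assumptions for illustration, not details taken from the blog post.

```python
import pandas as pd
import shap
import xgboost as xgb

# Hypothetical stand-in for the academic attainment /
# alcohol consumption dataset discussed in the post.
df = pd.read_csv("student.csv")
X = df.drop(columns=["final_grade"])  # assumed target column name
y = df["final_grade"]

# Train a gradient-boosted regressor on the data.
model = xgb.XGBRegressor(n_estimators=100, max_depth=4)
model.fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot aggregates per-example attributions into a
# global view of which features drive the model's predictions.
shap.summary_plot(shap_values, X)
```

The summary plot is what turns SHAP's per-prediction attributions into the kind of global interpretation the post is after: features are ranked by overall impact, with color showing how high or low feature values push predictions.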
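The robust/non-robust dataset constructions in Ilyas et al. are too involved to reproduce here, but as a concrete illustration of the adversarial vulnerability the paper explains, here is a sketch of the standard Fast Gradient Sign Method (Goodfellow et al., a different and simpler technique than the paper's own constructions), assuming a PyTorch image classifier with inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: perturb the input in the
    direction that most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A small step along the sign of the input gradient is often
    # enough to flip the model's prediction, even though the image
    # looks unchanged to a human: the model is reacting to the
    # brittle, 'non-robust' features the paper describes.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Here epsilon bounds the per-pixel perturbation; even values small enough to be imperceptible can change the predicted label, which is the phenomenon the paper attributes to well-generalizing but non-robust features.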