Guest: 

Nicholas Carlini, Research Scientist @ Google 

Topics:

What is your threat model for a large-scale AI system? How do you approach this problem?
How do you rank the attacks? How do you judge whether an attack is something to mitigate?
How do you separate realistic threats from theoretical ones?
Are there AI threats that were theoretical in 2020 but may become a daily occurrence in 2025?
What are the threat-derived lessons for securing AI?
Do we practice the same or different approaches for secure AI and reliable AI?
How does the relative lack of transparency in AI help (or hurt?) attackers and defenders?

Resources:

“Red Teaming AI Systems: The Path, the Prospect and the Perils” at RSA 2022
“Killed by AI Much? A Rise of Non-deterministic Security!”
Books on Adversarial ML