Guest:

Elie Bursztein, Cybersecurity Research Lead, Google DeepMind

Topics:

Given your experience, how afraid or nervous are you about criminals' use of GenAI (PoisonGPT, WormGPT, and the like)?

What can a top-tier state-sponsored threat actor do better with LLMs? Are there “extra scary” examples, real or hypothetical?

Do we really have to care about this “dangerous capabilities” stuff (CBRN, i.e., chemical, biological, radiological, and nuclear)? Really, really?

Why do you think AI favors the defenders? Is this a long-term or a short-term view?

What about vulnerability discovery? Some people are freaking out that LLMs will discover new zero-days; is this a real risk?

Resources:

“How Large Language Models Are Reshaping the Cybersecurity Landscape” RSA 2024 presentation by Elie (May 6 at 9:40 AM)

“Lessons Learned from Developing Secure AI Workflows” RSA 2024 presentation by Elie (May 8 at 2:25 PM)

EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents

EP40 2021: Phishing is Solved?

EP135 AI and Security: The Good, the Bad, and the Magical

EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC

EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It

PyRIT LLM red-teaming tool

Accelerating incident response using generative AI

Threat Actors are Interested in Generative AI, but Use Remains Limited

OpenAI’s Approach to Frontier Risk