Thursday 

Room 1 

13:30 - 14:30 

(UTC+01

Talk (60 min)

How to Break AI Systems (Before Someone Else Does)

AI systems are failing in production, and traditional security testing can't catch the problems that matter most. From prompt injection attacks that bypass filters to AI agents that turn helpful assistants into data theft tools, the threat landscape has grown far beyond simple chatbot vulnerabilities.

AI
Machine Learning
Security Tooling

The presentation covers why AI systems can't tell the difference between instructions and data, making them fundamentally different from traditional software. We'll show actual attack demonstrations including hidden prompts in documents, AI agent goal manipulation, and privacy violations that expose sensitive data.

You'll leave with practical methods for testing your own AI systems, understanding which attacks pose the biggest risks, and building defenses that actually work. All attendees will also get access to our AI red teaming practice platform with vulnerable AI applications, so you can continue developing your AI hacking skills after the talk.


Gary Lopez

Gary Lopez is a Senior Red Teamer on Microsoft's AI Red Team. In his current role, he collaborates with a diverse group of interdisciplinary experts, all dedicated to adopting an attacker's mindset to critically probe and test AI systems. Gary Lopez is the creator of Microsoft’s PyRIT (Python Risk Identification Toolkit), the team’s main red teaming automation tool. Prior to his tenure at Microsoft, Gary worked at Booz Allen Hamilton focusing on cybersecurity, developing tools for reverse engineering and malware analysis, specially targeting, and mitigating vulnerabilities within critical infrastructure including SCADA, ICS and DCS systems. He is also a graduate student at Georgetown University in the Applied Intelligence program focusing on Cyber Intelligence.