Thursday 

Room 1 

13:40 - 14:40 

(UTC+01

Talk (60 min)

How to Break AI Systems (Before Someone Else Does)

AI systems are failing in production, and traditional security testing can't catch the problems that matter most. From prompt injection attacks that bypass filters to AI agents that turn helpful assistants into data theft tools, the threat landscape has grown far beyond simple chatbot vulnerabilities.

Security
AI
Machine Learning

The presentation covers why AI systems can't tell the difference between instructions and data, making them fundamentally different from traditional software. We'll show actual attack demonstrations including hidden prompts in documents, AI agent goal manipulation, and privacy violations that expose sensitive data.

You'll leave with practical methods for testing your own AI systems, understanding which attacks pose the biggest risks, and building defenses that actually work. All attendees will also get access to our AI red teaming practice platform with vulnerable AI applications, so you can continue developing your AI hacking skills after the talk.

Gary Lopez

Gary Lopez is the Founder of Tinycode, a venture-backed startup helping organizations build and deploy safe and secure AI systems. Gary spent four years at Microsoft, most recently serving as a Principal Offensive AI Scientist. On the AI Red Team, he created PyRIT (Python Risk Identification Toolkit)—an open-source tool now widely used across industry for AI security assessment. During his tenure, he helped lead dozens of red teaming operations and spearheaded work on catastrophic AI risks, including chemical and biological threats. Gary actively contributes to the community by training professionals at Black Hat and publishing research on AI security. Before Microsoft, he worked at Booz Allen Hamilton identifying and remediating zero-day vulnerabilities in critical infrastructure systems.