Thursday 

Room 2 

14:45 - 15:45 

(UTC+01)

Talk (60 min)

React AI

AI tools can accelerate development, but left unchecked they often generate insecure, non-standard React code - introducing risks like XSS, unsafe state management, and broken auth flows.
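For illustration, here is a minimal hypothetical sketch (component and prop names are invented) of the kind of XSS-prone pattern an unguided assistant can produce, next to the safer default of letting React escape the value:

```tsx
// Hypothetical AI-generated pattern: untrusted input rendered as raw HTML,
// which opens the door to XSS if `body` contains attacker-controlled markup.
function CommentUnsafe({ body }: { body: string }) {
  return <div dangerouslySetInnerHTML={{ __html: body }} />;
}

// Safer default: render the value as JSX text so React escapes it automatically.
function CommentSafe({ body }: { body: string }) {
  return <div>{body}</div>;
}
```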

This session shows developers how to take control by using prompt engineering as a security layer. We’ll walk through how to “teach” AI coding assistants the security rules React requires - covering topics like safe component patterns, secure handling of untrusted input, defense against injection, and avoiding dangerous APIs. Attendees will learn how to craft prompts that not only guide AI toward correct functionality, but also enforce security best practices, ensuring the code it produces is production-ready, maintainable, and resilient to attack.
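As a rough sketch of the idea (the rule text and helper below are illustrative, not the speaker's material), a reusable security preamble can be prepended to every prompt sent to an AI coding assistant so the generated React code starts from secure defaults:

```ts
// Hypothetical sketch: a security preamble prepended to each assistant prompt.
const REACT_SECURITY_RULES = `
When generating React code, always:
- Render untrusted data as JSX text; never pass it to dangerouslySetInnerHTML.
- Validate and encode user input before it reaches URLs, queries, or HTML.
- Avoid eval, new Function, and javascript: URLs.
- Keep secrets and auth tokens out of client-side state and source code.
`;

// Combine the security rules with the functional task description.
function buildSecurePrompt(task: string): string {
  return `${REACT_SECURITY_RULES}\n\nTask: ${task}`;
}

// Example usage:
// const prompt = buildSecurePrompt("Create a comment form component");
```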

Jim Manico

Jim Manico is the founder of Manicode Security, where he trains software developers on secure coding and security engineering. He is also an investor/advisor for KSOC, Nucleus Security, Signal Sciences, and BitDiscovery. Jim is a frequent speaker on secure software practices, is a Java Champion, and is the author of 'Iron-Clad Java: Building Secure Web Applications' from Oracle Press. Jim also volunteers for OWASP as the project co-lead for the OWASP ASVS and the OWASP Proactive Controls.