Wednesday
Room 2
15:00 - 16:00
(UTC+01)
Talk (60 min)
Where thinking stops
I decided to rebuild a retrospective tool from scratch. This was an internal tool we'd been using for years. AI would write the code, and I'd make the architectural decisions. Or at least that's what I thought.
Although things worked, something felt off. Implementations were technically correct but oddly inconsistent.
I realized the AI hadn't made any mistakes; I simply hadn't made the decisions. The AI couldn't cross any boundaries, because I had never drawn them. Its sheer speed made the implicit thinking I'd been skipping impossible to ignore.
This is not a talk about prompting, frameworks, tooling, or models. It's about recognizing where decisions were missing all along. AI just makes it uncomfortably fast to see where the boundaries never existed.
