Why We Need to Slow Down For Cybersecurity

What an ex-NSA AI security engineer thinks most people get wrong about AI risk

Thank you Bob Bragg, Travis Sparks, Ryan Stax, Hodman Murad, Richard Hogan, MD, PhD(2), DBA, Erich Winkler, and many others for tuning in to my live video with ToxSec!

Most conversations about AI risk happen at a distance. Bias. Job displacement. Existential threat. These are real problems, but they are not the ones keeping the people who actually secure these systems awake at night.

Christopher is an AI Security Engineer at Amazon. Before that, he was at the NSA. Before that, a US Marine and defence contractor. He has spent his career breaking things that were supposed to be secure, and he now writes about AI security with the kind of bluntness that comes from knowing what the actual attack surface looks like.

In this live, we talk about why speed is the enemy of security, what happens when organisations deploy AI systems faster than they can audit them, and why the cybersecurity community needs the same critical slowness that Slow AI argues for everywhere else. Christopher explains what most people misunderstand about AI security risk, why the real vulnerabilities are not the ones making headlines, and what it actually takes to secure a system that can be manipulated through language alone.

We also talk about prompt injection, about what happens when security teams are left out of AI adoption decisions, and about why the gap between what organisations claim about their AI security posture and what they actually do is wider than most people realise.

Subscribe to ToxSec. Christopher writes about AI security with the clarity and directness of someone who has done the work, not someone selling a course about it.

Go Slow,

Sam


This month in the curriculum: 149 paid members explored how AI image generators default to colonial tropes when depicting race, profession, and power. We ran the same prompt across tools and continents. The biases were near-identical. The conversation that followed was not.

The Slow AI Curriculum for Critical Literacy is a 12-month programme for people who want to learn when AI helps, when it harms, and how to tell the difference. Each month covers one critical theme, with a live 45-minute seminar, research syntheses, moderated dialogue, and full recordings. You receive a Certificate of Critical AI Literacy upon completion.

Read the 2026 Handbook for Critical AI Literacy to see exactly what the programme covers. £75/year.

Join the Slow AI Curriculum
