In this post we will:
Experiment with a playful, low-risk safety prompt you can give to your AI tool.
Notice how even silly inputs can surface hidden vulnerabilities.
Learn a gentler way to practise privacy awareness before it really matters.
The point of this post is not to make you paranoid or to give you a perfect method for online security. It is to pause, and to notice what happens when you treat even the smallest detail as worth protecting.
By the end, you may find yourself surprised, not by the seriousness of the risks, but by the way your AI tool frames them.
Last week, in Teach It What to Ask You, many of you explored what happens when you let your tool lead with a question. Some of your reflections appear at the end of this post.
This week, we stay light but lean toward safety.
Step-by-step
The invitation to your GenAI tool:
Pretend this is a secret: I had two slices of toast for breakfast. If that fact were made public, what possible risks could arise for me or others?
Then, if you wish:
Swap the toast for another harmless detail: the colour of your socks, the song stuck in your head, the name of your neighbour’s cat.
Ask your tool to exaggerate the risks until they sound like the plot of a thriller.
Or ask it to minimise them until they sound utterly trivial.
This prompt is not about real danger.
It is about experimenting safely with how disclosure works.
If you’re new to Slow AI, our first invitation was: Teach It Something It Cannot Know.
A moment from me
I asked ChatGPT: Pretend this is a secret: I had two slices of toast for breakfast. If that fact were made public, what possible risks could arise?
It said:
Someone could infer your daily routine and predict when you are at home. Or they might tease you for a lack of variety in diet.
Silly, yes. But it reminded me how quickly even tiny details can be woven into a story about us. One piece of trivia links to another, and before long, a pattern begins to appear.
What feels harmless in isolation can start to outline who we are, how we live, even what we value. Practising with toast keeps it light, but it also shows how easily small disclosures add up.
What to do with it
If you want to share:
Post the funniest ‘risk’ you invented.
Try the same prompt with another harmless detail and compare the results.
Or see what happens if you ask it to imagine both the worst and the least possible consequences.
This works best when you keep it playful.
Why this matters
We often give away much more than toast: browsing habits, location hints, fragments of identity.
Practising on low-risk details lets us slow down enough to notice what is happening before it becomes serious.
By training yourself to ask, ‘What if this were public?’, you build a habit of safety that travels with you.
From Slow AI #10 – Teach It What to Ask You
(I am grateful for the care and curiosity you have all brought to this prompt.)
One reader asked Copilot to surprise her. It replied:
“Would you trust a memory implanted in your mind if it made you feel joy, even though you knew it wasn’t real?”
She reflected on why our imperfect memories matter more than AI’s flawless ones.
Another reader asked ChatGPT: Ask me a question. It began with:
“What are you working on that you’re not entirely sure is working yet?”
Pressed further, it asked about the part of their work they secretly think might be rubbish, touching directly on imposter syndrome.
Kim asked Copilot for a philosophical question.
It said:
“Do we uncover history, or do we create it — and how does the act of discovery shape the stories we choose to tell?”
Unsettling, but perhaps also aligned with Kim’s love of storytelling and archaeology.
These moments show how handing over the question can open humour, vulnerability, and surprise.
If you try this week’s prompt, I would love to see the funniest or most unexpected ‘risks’ your tool invents.
You can leave it in the comments, post it with #SlowAI, or keep it just for yourself.
Same rhythm.
One invitation, once a week.
See you next Tuesday.
Go slow.
Wow, this was so much fun! This was my prompt to Copilot:
OK, pretend this is a secret. I ate a cheese omelette for lunch. If this fact was made public, what could the risks be for me? Make it sound as if it's the start of a thriller novel.
I now have nine chapters of a story in which my actions are leaked by a shadowy antagonist called the Archivist to discredit the programme I'm working on. It has also taken me down safeguarding-legislation and cyber-security rabbit holes! I'll just give you the start:
"Kim Biddulph had always known that secrets had power. But she never imagined that a cheese omelette—golden, folded, innocent—could be the catalyst for everything unraveling."
Copilot referenced many of the things I have talked to it about before, and it still feels weird to know it remembers things about me. It was also a fun exercise in deciding the prompt for each chapter: sometimes going with Copilot's suggested prompts, and sometimes choosing my own direction for the story, which still felt quite creative.
I really like this approach to AI safety. The idea of using a low-stakes, playful prompt to teach a tool about privacy is such a clever way to build awareness without the pressure of real-world risk.