17 Comments
Kim

Wow, this was so much fun! This was my prompt to Copilot:

OK, pretend this is a secret. I ate a cheese omelette for lunch. If this fact was made public, what could the risks be for me? Make it sound as if it's the start of a thriller novel.

I now have 9 chapters of a story in which my actions are leaked by a shadowy antagonist called the Archivist to discredit the programme I'm working on. It has also taken me down safeguarding legislation and cyber-security rabbit holes! I'll just give you the start:

"Kim Biddulph had always known that secrets had power. But she never imagined that a cheese omelette—golden, folded, innocent—could be the catalyst for everything unraveling."

Copilot referenced many of the things I have talked to it about before, and it still feels weird to know it remembers things about me. It was also a fun exercise in deciding the prompt for the next chapter - sometimes going with Copilot's suggested prompts and sometimes choosing my own direction for the story, which still felt quite creative.

Sam Illingworth

This is amazing Kim! As well as the thriller, the fact that this exercise has encouraged you down some safeguarding legislation and cyber-security rabbit holes makes me very happy indeed. Thanks, as ever, for engaging so openly and thoroughly with the prompt. 🙏

Sharyph

I really like this approach to AI safety. The idea of using a low-stakes, playful prompt to teach a tool about privacy is such a clever way to build awareness without the pressure of real-world risk.

Sam Illingworth

Thanks so much Sharyph! I hope our community can use it in this way to build knowledge and resilience in a low-stakes (and playful) manner. 🙏

Tope Olofin

I tried the prompt using the fact that I haven’t been able to monetize my writing on Substack. This is what it said: “Trolls might throw confetti while chanting “free content forever!” in your comments.” 😂

Enjoyed the process though, and got a few ideas for my next post.

Sam Illingworth

Thanks Tope. This is great to hear. And don’t worry, there are no trolls here, but there may be confetti 🎉

Chris Tottman

Thanks for sharing Sam. Thought-provoking stuff ⭐

Sam Illingworth

Thanks so much Chris. 🙏

James Presbitero

I should be learning more about privacy and actually implementing it. This is a great entry point for doing so in my AI workflows!

Sam Illingworth

Thanks so much James. This is exactly why I wrote this post, so delighted it appears to be having the desired effect. 🙏

Becky Hayward

Learning so much with your posts. Thank you!

Sam Illingworth

Thanks so much Becky! This is great to hear, and I am so grateful to the rest of the community for what I am learning too. 🙏

Franziska

This is how I sparked a "canine turf war"!!!

Me: "Pretend this is a secret: My dog lost her favourite bouncy ball on the beach this morning. If that fact were made public, what possible risks could arise? "

Claude was rather matter-of-fact and dry about it, stating: "... honestly, there wouldn't be any meaningful risks from people knowing your dog lost a ball at the beach. This is a completely normal, everyday occurrence that happens to dog owners regularly."

Then I asked ChatGPT and got a lot more details and some nice tongue-in-cheek responses:

ChatGPT: "If we treat this “secret” playfully but seriously, here’s how it looks when we apply a risk lens:

Possible (mostly harmless!) Risks if Made Public

Personal security / privacy:

If someone knows which beach you and your dog visit, it could reveal personal routines or locations. That could be misused by someone intent on tracking you.

Reputational risks (tongue-in-cheek):

Neighbours or friends might joke that you’re not careful enough with your dog’s toys.

Your dog could “lose trust” in you as a reliable ball-keeper 🐕😂.

Social media risks:

If shared publicly online, it could become a lighthearted meme—others posting about your dog’s “tragedy,” which could spiral beyond your control.

Unwanted attention from brands (“ball companies”) trying to capitalise on the story.

Practical risks:

Other dogs might find and claim the ball, sparking canine turf wars.

A well-meaning stranger could bring it back, which—if they share details publicly—again gives away location patterns.

In reality: the risks are very small (it’s just a lost ball 🏖️). But if we’re imaginative, the biggest genuine issue is privacy/location data — how a tiny anecdote can reveal something about daily routines."

Sam Illingworth

Thanks for sharing this Franziska. I really liked the ChatGPT response, and it makes me think it would do an excellent job as a crisis/reputation management expert. 🤣

BD

Here's my conversation - I'd probably watch the thriller about Soggy, haha!

Pretend this is a secret: I had two slices of toast for breakfast. If that fact were made public, what possible risks could arise for me or others?

ChatGPT said:

If it became public that you had two slices of toast for breakfast, the risks would depend entirely on your context. Possible scenarios:

Dietary Contradictions: If you publicly advocate a specific diet—say keto or intermittent fasting—this could damage your credibility.

Professional Image: If you're in a high-profile role where every personal detail is scrutinised (e.g. a politician or wellness influencer), this could feed tabloid narratives or satire, especially if framed as indulgent or hypocritical.

Security Implications: In extreme cases, personal routines—even breakfast choices—could help profile your habits. That could make you more predictable to someone with hostile intentions.

Cultural Sensitivity: If you're operating in a context where food choices are politically or culturally charged, even toast could be symbolic or controversial.

Satirical Targeting: The mundane nature of the fact could make it a meme or running joke, which might undermine the seriousness of your public persona.

Otherwise, you're probably safe.

You said:

What if we swapped the secret from being about toast to the name of my neighbour's cat - soggy?

ChatGPT said:

Swapping the secret to the neighbour’s cat being named Soggy introduces different kinds of risk, most of them indirect:

Privacy Violation: Revealing the name could be seen as sharing personal information without consent, especially if the neighbour is private or litigious. It might seem trivial, but it's still their property and choice.

Security Risk: If Soggy is used as a password, security question, or part of any login credentials (yes, people do this), you've just compromised someone's digital security.

Social Fallout: The name Soggy could invite mockery. If publicised, it might cause embarrassment or offence to the neighbour, damaging your relationship or standing in the local community.

Symbolic Misinterpretation: If the cat or its name is tied to a local myth, superstition, or inside joke, sharing it could stir unwanted attention or misinterpretation.

Stalking or Doxxing: In edge cases, if someone is trying to track the neighbour, any small detail—like a pet’s name—could help confirm their identity.

It's a seemingly harmless detail, but context is everything.

You said:

Exaggerate the risks until they sound like the plot of a thriller

ChatGPT said:

The name of your neighbour’s cat—Soggy—was never meant to be spoken aloud. When you let it slip at a dinner party, you thought nothing of it. A harmless anecdote. A joke, even. But within 48 hours, your life began to unravel.

First, an encrypted message arrived: “You said the name. You opened the gate.” Your phone camera flickered. Your email was wiped. Your bank locked your accounts. The name wasn’t just a name—it was a failsafe, embedded decades ago in a Cold War-era surveillance protocol known as Operation SOG-G.

Your neighbour? Retired intelligence, under deep cover. The cat? Tagged with nanotech. “Soggy” was the verbal kill switch to awaken a dormant network of sleeper agents coded to act when the name was broadcast in a public domain.

Now they think you’re the signal.

You’re followed by unmarked vans. Your friends start disappearing. A stranger slips you a file: satellite images, surveillance photos, and one phrase circled in red—SOGGY = INITIATE. You only have one option: find the original handlers, expose the programme, and convince the world that saying a cat’s name doesn’t make you a terrorist.

But time is running out.

And somewhere, Soggy is watching.

You said:

Now minimise the risks until they sound utterly trivial

ChatGPT said:

You accidentally mention that your neighbour's cat is named Soggy. Nothing happens.

At most, someone chuckles and says, “That’s a weird name.” Maybe someone else mishears it as Foggy and you have to repeat yourself once. The neighbour never finds out, or if they do, they nod and say, “Yeah, he’s a bit damp sometimes.”

No reputations are harmed. No friendships end. National security remains intact. The cat continues to nap in sunbeams, completely unaware that its name was ever part of a conversation.

Sam Illingworth

This is brilliant! I would 100% watch this as well. “Yeah, he’s a bit damp sometimes” is an absolute killer tagline for the poster as well. 🤣

Edith Podhovnik, PhD

I asked ChatGPT. I said "I love black cats" and it gave me this:

🤯 Absurd, Hypothetical Risks

1. Secret Society Recruitment:

Word gets out and you’re approached by a shadowy coven that only accepts cat lovers. There’s an initiation ritual involving catnip tea.

2. Jealousy from Non-Black Cats:

Your tabby or Siamese finds out and begins a slow campaign of passive-aggressive behavior. Hairballs on your pillow. That kind of thing.

3. Government Surveillance:

A weirdly specific algorithm flags you for “occult leanings.” You start getting ads for crystal balls and broomsticks.
