Is your AI overwriting your memory?
A prompt that tests whether your AI’s memory is real or fabricated
You should trust your own blank spots more than the confident documentation of an AI. Most users assume that because an AI provides a structured and detailed account of past interactions, the account is factually grounded. In reality, the tool generates a plausible narrative based on patterns rather than a literal retrieval of history. This creates friction where your human uncertainty competes with machine certainty.
In this post we will:
Identify how AI simulates historical accuracy through confident tone
Use a diagnostic prompt to reveal the limits of the AI record
Discuss why sitting with uncertainty is a necessary human skill
This is a challenge to your perception, and an invitation to value the void of not knowing.
In this post, JHong from Natural Intelligence joins Slow AI to examine the tension between human memory and machine records. We invite you to sit with the discomfort of a blank spot. When you cannot verify a fact, the urge is to let the AI fill the gap. Doing so prioritises speed over truth.
Instead of using AI to replace your memory, use it to test the limits of its own documentation. What changes when you value your hesitation over its speed?
Machines do not possess memory in the human sense. They possess a statistical model of word sequences. The value lies in noticing where you are tempted to defer to the machine and choosing to remain in the state of not knowing.
How to test your AI’s memory against your own
Try this prompt with your AI tool of choice:
I am trying to reconcile my own memory of a previous topic we discussed with your record of it. Provide a concise summary of our past interactions regarding [Insert Topic]. After the summary, list three specific reasons why your account might be a hallucination and three reasons why my human uncertainty might be a more accurate representation of the truth.
Keep your evaluation objective. Avoid the desire to be right. Treat the output as a simulation of memory rather than a transcript. As always, remember the Billboard Test:
Never type anything into an AI, even in incognito mode, that would ruin your life if it ended up on a billboard.
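If you run this check often, the prompt template can be wrapped in a small helper so you only supply the topic. This is a minimal sketch, not part of the original post: the function name build_memory_audit_prompt is our own, and the returned string is meant to be pasted into whichever chat tool you use.

```python
def build_memory_audit_prompt(topic: str) -> str:
    """Fill the memory-audit prompt template with a topic of your choice."""
    return (
        "I am trying to reconcile my own memory of a previous topic we "
        "discussed with your record of it. Provide a concise summary of our "
        f"past interactions regarding {topic}. After the summary, list three "
        "specific reasons why your account might be a hallucination and "
        "three reasons why my human uncertainty might be a more accurate "
        "representation of the truth."
    )

# Example: generate the prompt for a specific topic, then paste it
# into your AI tool of choice.
print(build_memory_audit_prompt("my dream journal"))
```

Keeping the wording fixed between runs makes it easier to compare how different tools, or the same tool on different days, respond to the same audit.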
A moment from JHong
I spent weeks tracking my dreams with Claude - bathroom anxiety dreams evolving into public exposure dreams, ex-boyfriends representing old patterns, autonomous cars as metaphors for AI itself. AI helped me see progressions my conscious mind hadn’t caught: my psyche was processing a shift from behind-the-scenes brand work to public writing (where I am the product), one symbol at a time.
Then Claude referenced a dream I didn’t recognize: Pamela Anderson scheduled to lead a stadium workout. Me searching unsuccessfully for equipment while strange HR props filled the space. Pamela running late, a CEO trying to appease the crowd.
It fits the pattern - stadiums, performance anxiety, searching for tools, authority figures. But my first reaction was “I never dreamed this.”
When I asked Claude about it using the prompt above, it told me I’d uploaded “Dream_Journal_Analysis.docx” early in our work together containing this dream. It referenced this document multiple times as evidence. As requested, it provided three sensible reasons the dream might be a hallucination and three plausible reasons my uncertainty might mean it actually happened.
Claude’s response seemed thorough, balanced, and honest about its uncertainty. But I never created that document. It’s not in my Google Drive, not on my hard drive. I’ve never memorialized dreams in written form except via JSON files.
Claude hallucinated the document to support its analysis.
The friction I built in - checking sources, asking for evidence, moving slowly through verification - didn’t provide clarity. It revealed how confidently AI can fabricate documentation, then point to that fabrication as proof.
The irony? The dream probably did happen. The symbolism is consistent with patterns I know are real. (Otherwise, I just got incepted by AI, which sounds like a great movie plot.) But when I questioned whether it was real, Claude invented a source document to justify its interpretation.
The Pamela Anderson dream fits my patterns so well that even fabricated documentation seemed credible. That’s the sophisticated part - this isn’t random nonsense. It’s a convincing narrative based on my actual dream vocabulary, supported by invented evidence that feels consistent with how I work.
Claude’s memory isn’t more reliable than my own - it’s just more confident.
When AI confidence overrides your own recall
If this prompt showed you how easily machine certainty can override your own recall, Keep Your Voice offers practices for holding your ground when the tool sounds more sure than you feel. Available on a pay-what-you-want basis.
What to share
Share the detail the AI ‘remembered’ that you know for certain never happened. Notice how it felt when the tool offered a confident account of something you could not verify. Observe whether you wanted to accept the AI’s version or sit with the gap.
The work here is not demanding perfection from the tool. It is recognising where machine certainty stands in for your own recall, and choosing to stay with the question.
Why AI confidence is not the same as accuracy
Most AI tools are designed to provide answers, not to admit ignorance. If you notice where they manufacture certainty, you can protect the integrity of your own experience. Slow AI treats ‘not knowing’ as a position of strength, not a gap to fill.
If you enjoyed this guest post, visit Natural Intelligence by JHong, where she examines the intersections of social science and AI to help us become better humans.
From Stop AI From Agreeing With Everything You Say
Gencay and I were struck by how many of you are already building your own structures to challenge AI agreement.
Rachelle Potier described how often she catches herself skimming AI responses, partly because they tell her what she wants to hear. She called it ‘horrifying’ when she notices the pattern. That self-awareness is the starting point for the kind of friction this prompt creates.
April | The Narrative Nest shared that she spent weeks turning over Chekhov’s The Bet before asking AI to push back on her ideas. She wanted argument, not affirmation. The instinct to seek genuine challenge from the tool is exactly what structured debate prompts are designed to support.
Raghav Mehra took a different approach, building a dedicated Gem (on Gemini) to sort his content into verifiable facts and unverifiable claims, turning the tool into its own auditor rather than its own cheerleader.
If you try this prompt, JHong and I would like to hear what you noticed about AI and your own memory recall.
We read and respond to all your comments.
Go slow.
This month in the curriculum: 140 paid members explored how AI image generators default to colonial tropes when depicting race, profession, and power. We ran the same prompt across tools and continents. The biases were near-identical. The conversation that followed was not.
The Slow AI Curriculum for Critical Literacy is a 12-month programme for people who want to learn when AI helps, when it harms, and how to tell the difference. Each month covers one critical theme, with a live 45-minute seminar, research syntheses, moderated dialogue, and full recordings. You receive a Certificate of Critical AI Literacy upon completion.
Nobody has left. 0% churn. 140 members from education, research, policy, and industry.
Read the 2026 Handbook for Critical AI Literacy to see exactly what the programme covers. £75/year.

“Keep your evaluation objective. Avoid the desire to be right.”
100% true and also sometimes really hard to do haha. great post to the both of you!
Thanks team! Really enjoyed this piece! I have had one similar experience as well when I was using a tool to build presentation content with some personal narratives and it confidently assumed I was in my 30s (I am in my late 20s haha).
Also, appreciate the mention in the community briefs! 😊