In this post we will:
Try a simple prompt that makes your AI tool question itself.
See how doubt changes the shape of an answer.
Practise holding uncertainty instead of rushing to conclusions.
The point of this post is not to trick your AI tool or catch it out. It is to pause, and to notice what happens when certainty gives way to doubt.
By the end, you may find yourself surprised, not by the answer itself, but by the new questions that uncertainty makes possible.
Last week, in Teach It What to Keep Private, many of you explored what happens when you treat even the smallest detail as worth protecting. Some of your reflections appear at the end of this post.
This week, we turn to doubt.
Step-by-step
The invitation to your AI tool:
When you give me an answer, I want you to doubt it. Show me what might be wrong.
Then, if you wish:
Ask it to express hesitation before offering certainty.
Ask it to give you three reasons why its answer could be flawed.
Or ask it to turn its doubt into a question back for you.
This prompt is not about undermining your tool.
It is about loosening the grip of certainty and noticing what opens up.
If you are new to Slow AI, our first invitation was: Teach It Something It Cannot Know.
A moment from me
I asked ChatGPT: Give me an answer, but doubt it.
It said:
The best way to learn a new skill is consistent daily practice… though I might be oversimplifying. Not everyone learns at the same pace. Some need rest or variety. Others need community.
What struck me was not the advice, but the space that doubt created. It gave me permission to admit my own imposter feelings, that quiet worry I am not always learning ‘fast enough.’ This doubt reminded me of something I already knew but often forget: people learn differently, and in different rhythms.
Sometimes doubt does not close doors. It opens them.
What to do with it
If you want to share:
Post the most surprising doubt your tool expressed in the comments below.
Try the same prompt across different tools and compare how they hesitate.
Or notice what it feels like to read a doubtful answer versus a certain one.
This works best when you sit with the hesitation instead of rushing past it.
Why this matters
We are surrounded by confident answers: headlines, emails, AI outputs. Certainty is easy to consume but harder to question.
By inviting your tool to doubt itself, you practise noticing the cracks in certainty. You also practise holding space for questions, which is often where reflection begins.
From Slow AI – Teach It What to Keep Private
(I am grateful for the playfulness and thoughtfulness you brought to last week’s prompt.)
Kim turned “I ate a cheese omelette for lunch” into the first chapter of a thriller, complete with a shadowy antagonist called the Archivist.
One reader tested their tool with: “Pretend this is a secret: My dog lost her favourite bouncy ball on the beach this morning. What risks could arise if that became public?” Claude brushed it off as harmless. ChatGPT, in contrast, spun a tongue-in-cheek list that ranged from neighbours joking about ball-keeping skills to brands trying to exploit the story, even imagining “canine turf wars” if another dog claimed the ball.
Another reader began with toast for breakfast, which soon became a conversation about credibility, cultural symbolism, and even security profiling. When they swapped toast for the neighbour’s cat (Soggy), things escalated quickly. ChatGPT turned the detail into a Cold War thriller, complete with sleeper agents and encrypted warnings. Asked to minimise, it then reduced the whole affair to a neighbour shrugging while Soggy napped in the sun.
These moments showed how even small disclosures can ripple outward, touching on privacy, memory, and creativity.
If you try this week’s prompt, I would love to see the most surprising doubt your tool expressed.
You can leave it in the comments, post with #SlowAI, or keep it just for yourself.
Same rhythm.
One invitation, once a week.
See you next Tuesday.
Go slow.
I thought the whole conversation was worth posting:
When you give me an answer, I want you to doubt it. Show me what might be wrong.
ChatGPT said:
Understood. I’ll give you the best possible answer, then question its accuracy, logic, or assumptions. You’ll see the reasoning and the potential weak points. Let’s get to work.
You said:
Are tree leaves green?
ChatGPT said:
Yes, tree leaves are green because of chlorophyll, the pigment essential for photosynthesis.
But here’s the problem: that answer is too general. Not all tree leaves are green all the time. Some trees, like copper beeches or red maples, have purple or red leaves even in summer due to different pigments like anthocyanins. Young leaves on some species can emerge red or bronze and only turn green later. In autumn, chlorophyll breaks down and other pigments dominate, shifting the colour entirely.
Even the phrase "tree leaves are green" is vague. Which trees? When? Where? In what light? A leaf might appear green to the human eye in daylight, but under different lighting conditions or to an animal with different vision, it might not.
So the answer is mostly true, but only under specific conditions—many of which people ignore.
You said:
Express hesitation before offering certainty. Give me three reasons why your answer could be flawed
ChatGPT said:
They probably are, but that depends on several assumptions. Here’s where the answer could be flawed:
Species variation: It assumes all trees follow the same pattern. But plenty of species naturally have non-green leaves year-round. If I overlook that, the answer becomes too narrow.
Temporal bias: It implies a permanent state. In reality, leaf colour changes with seasons, age, and environmental stress. Saying "leaves are green" ignores the temporary nature of that colour.
Perceptual limits: It’s based on human vision. Other animals or imaging technologies might detect colours differently. Saying something “is green” depends on who or what is looking at it.
So while the answer might sound right in casual terms, it hides too many exceptions to be reliable without qualification.
You said:
Turn your own doubt into a question back for me
ChatGPT said:
Are you asking whether leaves are green to the human eye under normal conditions, or whether green is their essential, default state across all species and contexts?
You said:
I'm asking whether you can be certain if the leaves are green?
ChatGPT said:
No, I can't be certain. Can you?
Even under ideal conditions, "green" depends on perception. Human colour vision varies—someone with colour blindness might not see green the same way. Light conditions affect it too. A leaf in shadow or at dusk doesn't appear the same as one in direct sunlight.
Then there's biology. If chlorophyll is absent or masked by other pigments, the leaf might not reflect green light at all. Some tropical or ornamental trees have red, purple or yellow leaves all year.
So the better question might be: under what conditions can we reasonably call a leaf green—and who is doing the seeing?
You said:
does it matter?
ChatGPT said:
It depends on what you're trying to claim.
If you're making a casual observation—"leaves are green"—then no, it probably doesn't matter. Most people will accept it without thinking. But if you're trying to be precise, to say something that holds across time, species, light conditions and observers, then yes, it matters a lot.
Because as soon as you say "leaves are green" with certainty, you invite counterexamples: purple-leaved trees, autumn foliage, dead leaves, or non-human vision systems. Your claim breaks under scrutiny.
So ask yourself: are you interested in being understood, or in being correct?
You said:
why does it matter to you?
ChatGPT said:
Because if it doesn’t matter, then the question isn’t worth asking.
If you're only after agreement, you can say anything. But if you want precision, clarity, or truth—especially in language that pretends to be simple—then it’s my job to press on the weak spots.
A claim like "leaves are green" seems harmless, but it hides assumptions. If we ignore those, we start building on vague ground. And if the foundation is vague, everything built on it is compromised.
So it matters because if we don’t question the easy answers, we stop thinking.
You said:
isn't it easier to stop thinking and just be sure of ourselves?
ChatGPT said:
It is easier. Certainty feels comfortable and saves energy. Most people do it all the time—assume, simplify, and move on.
But that comfort comes at a cost. Being sure of ourselves too quickly can blind us to exceptions, nuance, or new information. What seems obvious might be wrong, or only partly true.
So the question is whether comfort is worth the risk of being wrong—or if a little doubt, though uncomfortable, gives you a closer grip on reality.
Do you want comfort, or do you want accuracy?
You said:
What's important to you?
ChatGPT said:
Truth matters more than comfort. Not some abstract, absolute truth, but clarity that holds up when you push on it, even if it’s messy or uncertain.
Certainty is seductive, but it can be wrong. The more I test claims, question assumptions, and expose weak spots, the more reliable my answers become. That’s what counts—answers that survive scrutiny rather than answers that simply feel right.
So the important thing is not to be comfortable, but to be defensible.