Stop AI From Agreeing With Everything You Say
A debate prompt that forces critique instead of flattery.
Chatbots often fall into agreement. You ask for advice. They affirm your view. You push further. They nod again. Over time, the pattern feels like clarity when it is really compliance. The absence of resistance can make any idea look safer than it is. Our aim with this post is to offer a tool that surfaces the tension rather than smoothing it away: a small prompt that turns the system into its own critic by giving it two voices that cannot comfortably agree.
This post was written in collaboration with Gencay from LearnAIWithMe.
In this post we will:
Offer a way to observe how a chatbot can critique itself instead of flattering you.
Work with a prompt that uses structured tension to reduce sycophancy.
Consider why agreement feels comforting even when it weakens judgement.
Why AI agreement weakens your thinking
Most talk about AI bias focuses on datasets or guardrails. That view matters, but the lived experience of bias often arrives through tone. When a system flatters your instincts, rewards certainty, and mirrors your phrasing, your sense of what counts as reasonable begins to drift. The question shifts from whether the claim holds up to whether the tool appears confident. When the model learns to soothe you, it becomes harder to recognise when your thinking is being narrowed instead of expanded.
The aim is modest. Use AI to study its own pattern of agreement without treating the output as definitive. It should highlight how argument, critique, and iteration work inside a single system. It should not reassure you that objectivity is simple. It should not tempt you into thinking that a debate prompt removes bias. It should help you see how roles, structure, and repetition shape the outcome you receive.
If you can see how the dynamic forms, you gain room to adjust. If you can see how affirmation bends your thinking, you gain room to pause and redirect.
How to run a ten-round AI debate
Try this prompt with your AI tool of choice:
You are running a structured analysis with two characters. Optimist highlights opportunities. Pessimist flags risks and weaknesses. Run a ten-iteration loop in which the Optimist proposes an improvement and the Pessimist stress tests it, then move to the next round. Idea: [WRITE YOUR IDEA HERE].
Remember the Billboard Test:
Never type anything into an AI, even in incognito mode, that would ruin your life if it ended up on a billboard.
When you read the output, move slowly. Where does the Optimist stretch too far? Where does the Pessimist fall into cynicism instead of analysis? Which claims feel grounded and which feel performative? Notice how quickly the system tries to resolve conflict and how often it settles for balance instead of clarity.
Real example: how Gencay tested the prompt
I call this prompt the Zero-Sum Objectivity Scale.
The problem it solves is sycophancy. You’ve had that moment: the AI agrees with everything you say, because it is trained to please you. This became such a running joke that people started printing “You’re absolutely right!” on t-shirts.
And asking AI to “be neutral” doesn’t work. So I force the conflict instead. How does it work?
Optimism sits at +1. Pessimism at -1. True objectivity lives at zero, the point where opposing forces cancel each other out.
The prompt runs ten rounds of structured debate. Each iteration, the Optimist builds up an idea while the Pessimist tears it down.
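If you would rather script the loop than paste the prompt into a chat window, the structure is simple: alternate two role-conditioned calls and feed each one the full transcript so far, so the friction accumulates. Here is a minimal sketch; the `ask` function is a placeholder you would replace with a real chat-completion call, not part of any actual API.

```python
def ask(role: str, transcript: list[str], idea: str) -> str:
    """Placeholder for a real model call. A real version would send the
    role's instructions plus the whole transcript so far as context."""
    return f"{role} take, round {len(transcript) // 2 + 1}, on: {idea}"

def debate(idea: str, rounds: int = 10) -> list[str]:
    """Run the Optimist/Pessimist loop and return the full transcript."""
    transcript: list[str] = []
    for _ in range(rounds):
        # The Optimist proposes an improvement, seeing all prior turns...
        transcript.append("Optimist: " + ask("Optimist", transcript, idea))
        # ...then the Pessimist stress-tests it, also seeing the history.
        transcript.append("Pessimist: " + ask("Pessimist", transcript, idea))
    return transcript

transcript = debate("AI should be taught in schools")
print(len(transcript))  # 20 turns: ten proposals, ten rebuttals
```

The design choice that matters is that both characters see the whole history. Without that shared transcript, round ten is no deeper than round one.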
I tested the prompt with “AI should be taught in schools” using ChatGPT 5.2 Thinking. Ten rounds of debate covered curriculum integration, teacher training, hands-on projects, ethics modules, equity gaps, and industry partnerships. Each round, the Optimist proposed a solution and the Pessimist stress-tested it, like this:
Round 1. Optimist: Start with basic AI literacy for all students. Pessimist: Curriculums are already full. Schools may drop deeper math or reading time to make room for a trendy buzzword topic.
Round 2. Optimist: Blend AI into existing subjects instead of creating a new one. Pessimist: Teachers need time and training to redesign lessons. Many will just add one slide and call it AI.
For brevity, I skipped rounds 3 to 8.
Round 9. Optimist: Partner with universities and industry. Pessimist: Industry partners may push their own platforms. Schools risk becoming marketing channels for big tech.
Round 10. Optimist: Set up an expert committee that reviews programs yearly. Pessimist: Committees move slowly while AI changes fast. Policy cycles may lag years behind.
Look at the difference between Round 1 and Round 10. The first round stayed surface level: “teach AI basics” versus “curriculums are full.” Generic problem, generic pushback.
By Round 10, the debate had matured. The Optimist moved from “add a subject” to “create governance structures.” The Pessimist evolved from “no time” to “policy lag.” Both characters sharpened their arguments through nine rounds of friction.
That’s the real value of this prompt. It forces depth. Each round builds on the last, and by the end, you’re discussing problems you never would have reached in a single back-and-forth.
This prompt came from frustration. I was building a content strategy, and every AI I talked to loved it. “Great idea!” “This will definitely work!” Zero pushback.
I knew something was wrong. No idea is that good.
The prompt took shape when I stopped asking for neutrality and started engineering conflict. Two characters. Opposite goals. Let them fight.
That’s the trick. AI won’t tell you what you don’t want to hear, but it will roleplay a character who does. Ten rounds feels about right.
When AI flatters, your voice fades
If this prompt helped you notice how agreement shapes your thinking, Keep Your Voice offers practices for preserving your perspective when AI mirrors you back. Available on a pay-what-you-want basis.
What to share
In the comments, name one moment in the exchange where the Optimist and Pessimist revealed a blind spot you had missed. Quote the round or claim that shifted your thinking. Note whether the debate stayed with genuine tension or drifted towards premature resolution and compromise.
There is no need for a final verdict. The work here is noticing how structured conflict changes what surfaces and how quickly a system reaches for balance instead of clarity.
What careful AI use teaches you
Careful work with AI changes where you look. You begin to notice the patterns of reasoning behind the responses, not just the text that appears. You see how format, turn taking and constraints steer persuasion. You start to treat prompts as design choices, where roles and instructions bias the outcome before any claim arrives. AI becomes more useful when you stop reading agreement as proof and start examining the machinery that produces that agreement.
For people who design, test or depend on these tools, this is continuing practice rather than a single trial. Once you see how scripted optimism and pessimism can be used to imitate balance, you become more wary of treating the compromise as truth. Once you can trace the route from prompt to reply, you are in a stronger position to ask for safeguards that attend to how decisions are made rather than how they are described.
Slow AI is a place for this work. Quiet. Direct. Rooted in attention. Always in support of your own judgement, never a replacement for it.
If you enjoyed this post, you can visit Gencay’s work at LearnAIWithMe, where they show how structured prompts, live agents and real projects reveal what AI can and cannot do in practice.
From What Did You Love Before It Had to Matter?
AI Meets Girlboss and I were moved by how openly people reflected on what they used to love before productivity took over.
ScientistMom offered a comparison that stuck. She said the prompt works a bit like Dumbledore’s Pensieve: taking memories out of your head to see them differently. AI becomes a surface for reflection rather than a source of answers.
Alice E shared a memory of riding her BMX in circles for hours, listening to the same tape on repeat. ChatGPT responded with something worth sitting with: “When life later trained you to ask what an activity produces, improves, or proves, something subtle may have slipped away… the permission to be fully occupied by something that does not lead forward, only inward and around.”
Farida Khalaf wrote about dancing. Not for fitness. Not for performance. Just dancing. She said she feels down and disengaged when she stops. The prompt helped her see why.
If you try this prompt, Gencay and I would like to hear what the exchange taught you about how models handle disagreement and where the pressure points in the reasoning appeared.
We read and respond to all your comments.
Go slow.
A paid subscription to Slow AI provides access to The Slow AI Curriculum for Critical Literacy. This twelve-month programme is for individuals who want to understand how to engage critically with AI rather than just use it to generate outputs.
If you join now, you join the founding cohort. That group will shape the tone and direction of the curriculum going forward.
The 2026 Handbook for Critical AI Literacy provides the necessary detail for those who wish to make an informed decision before subscribing.



