31 Comments
Daniel Ionescu:

This was thought provoking. I’ve recently started telling GPT my gut feeling AFTER it answers/advises and it’s great at challenging itself and me.

Sam Illingworth:

That is a MUCH better system. Thanks Daniel. 🙏

James Barringer:

It feels a bit like receiving a beautifully wrapped package that’s missing the key item inside. 🤣

Everything looks complete, yet the purpose isn’t met.

This tension might land differently across the voices:

Nurturers may trust reassurance too quickly.

Guardians focus on accuracy and limits.

Creatives enjoy breadth and exploration.

Connectors respond to coherence and tone.

Pioneers value speed and momentum.

Naming these instincts helps teams slow just enough to ask, “Is this right, not just finished?”

Sam Illingworth:

This is a great observation, James. Just because something's finished doesn't mean it's correct. And similarly, just because something's incomplete doesn't mean that it has no merit to contribute.

Scarcity & Abundance:

Great reminder to slow down when using AI. Have you tried / had any luck injecting something like this prompt into the system message to encourage reflection on correctness / completeness prior to output?

Sam Illingworth:

That is a great question. I'll leave this one for Khaled to pick up, but it is certainly something worth trying! 🙏

Khaled Ahmed, PhD:

That's a great idea. I believe this could work, but the instruction needs to specify that the checking should happen after generating the output. Asking it to check while the answer is being generated, in my own experience, usually leads to a falsely confident and incorrect output.
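Khaled's point, check after generation rather than during it, can be sketched as a simple two-pass loop. Everything below is illustrative: `ask` is a hypothetical stub standing in for whatever model client you actually use, and the prompt wording is just one way to phrase the review step.

```python
def generate_then_check(question, ask):
    """Draft an answer first, then review the finished draft in a
    second call, instead of asking the model to self-check while
    it is still generating."""
    draft = ask(question)
    review = ask(
        "Review the answer below for correctness and completeness. "
        "Note any errors or gaps, then give a corrected answer.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    return {"draft": draft, "review": review}

# Stub so the sketch runs without an API key; a real `ask` would
# call a chat-model endpoint and return its text response.
def fake_ask(prompt):
    if prompt.startswith("Review"):
        return "REVIEW: no gaps found"
    return "DRAFT answer"

result = generate_then_check("What is the capital of France?", fake_ask)
```

The key design choice is that the review prompt only ever sees a completed draft, which is what keeps the check separate from the generation.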

Dennis Berry:

So important to have some distance between the first impression and the final decision.

They are rarely the same thing... although that's not how most people are moving now.

Sam Illingworth:

Absolutely Dennis. And this prompt really helps to reintroduce that distance. 🙏

Maribeth Martorana:

I like this approach as it is rooted in context and the meaning behind the language for what is being reviewed. As my former ED would say, “Words have meaning. So, you need to parse your words.” This is what this prompt is doing, which grounds it in confidence.

Sam Illingworth:

Thanks so much Maribeth. So easy for many of us to trade away those words without care these days. 🙏

Raghav Mehra:

Thank you team for deliberating on a prompt that treats reasoning, correctness and completeness as three separate attributes. Very often our biases treat them interchangeably, but with some intentional prompting you can stop judging outputs solely on "personal intuition". It's important to remember that intuition builds with experience, and very often our gut is just a mirror of our past experiences!

Sam Illingworth:

Thanks Raghav. And absolutely. We need to think about how biased our own experiences can be. 🙏

Marcela Distefano:

There are so many right ways to use artificial intelligence, and this prompt, which leads you to have more self-confidence, is a prime example.

Sam Illingworth:

Thanks Marcela. 🙏

John Brewton:

Separating “correct” from “complete” is such a clean way to reduce self-deception in evaluation.

Sam Illingworth:

Thanks John. It was a great prompt from @Khaled Ahmed, PhD to develop this idea. 🙏

Csabi Berger:

This was such a powerful read, so many takeaways for many of us.

I feel like this just confirms the #1 skill we all need to keep honing in 2026, which is discernment, whether it's our own AI's output, someone else's work, or anything we'll see on social media.

Thank you Sam and Khaled. 🙏

Sam Illingworth:

Thanks Csabi. And absolutely. That skill is just going to keep on getting more and more bankable. 🙏

Richard Walter:

I like this for the mind can only conceive with what it can see and believe. ✌️

Sam Illingworth:

Very well said Richard. 🙏

Peter Jansen:

From my vantage point in the Portuguese mountains, the distinction you draw between a "complete" answer and a "correct" one is the defining problem of our decade. We are drowning in the former (plausible, infinite noise) and starving for the latter.

You are building the Geiger counter; I am mapping the fallout zone. I think there is a powerful intersection between your forensic framework and the "Sovereign Science" thesis I’ve been working on. We are digging at the same tunnel from different ends.

Would love to compare field notes if you are open to it.

Keep sending the signal.

Sam Illingworth:

Thanks Peter! And always open for discussions and collaborations. 🙏

Karen Spinner:

Having AI check its own work intuitively makes sense—it’s what we do ourselves, after all. I suspect this approach could be adapted to have different models check each other’s work…

Sam Illingworth:

I LOVE this idea Karen. I wonder if the models would suffer from tall poppy syndrome though?

Karen Spinner:

🤣

Phil Powis ❤️⚡️:

Loved seeing our network member, Juan Gonzalez featured here!

Daria Cupareanu:

Catching “convincing but wrong” output is the skill we need to keep developing - not just with AI, but also with social media, news, and pretty much everything we consume. Great article & prompt!

Sam Illingworth:

Thanks Daria. That is so true. It's going to be one of the most sought after skills! 🙏

Juan Gonzalez:

This prompt was pretty interesting. It reminds me that I posted several months ago that AI showed you can say anything as long as it's confident and authoritative, and people will believe it regardless of whether it's true.

Also, thanks for rewording my previous comment on dictation tools. You described it way better than I could :D

Sam Illingworth:

Thanks Juan. And you are very welcome. I am glad I did it justice. 🙏
