This was thought provoking. I’ve recently started telling GPT my gut feeling AFTER it answers/advises and it’s great at challenging itself and me.
That is a MUCH better system. Thanks Daniel. 🙏
It feels a bit like receiving a beautifully wrapped package that’s missing the key item inside.🤣
Everything looks complete, yet the purpose isn’t met.
This tension might land differently across the voices:
Nurturers may trust reassurance too quickly.
Guardians focus on accuracy and limits.
Creatives enjoy breadth and exploration.
Connectors respond to coherence and tone.
Pioneers value speed and momentum.
Naming these instincts helps teams slow just enough to ask, “Is this right, not just finished?”
This is a great observation, James. Just because something's finished doesn't mean it's correct. And similarly, just because something's incomplete doesn't mean that it has no merit to contribute.
Great reminder to slow down when using AI. Have you tried / had any luck injecting something like this prompt into the system message to encourage reflection on correctness / completeness prior to output?
That is a great question. I'll leave this one for Khaled to pick up, but it is certainly something worth trying! 🙏
That's a great idea. I believe this could work, but the instruction needs to specify that the checking should happen after the output has been generated. Asking it to check while the answer is being generated, in my own experience, usually leads to falsely confident, incorrect output.
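To make that concrete, here is a minimal sketch of the two-pass pattern in Python, assuming the official OpenAI client library; the model name and the review prompt are illustrative placeholders, not the article's prompt:

```python
# Pass 1 generates; pass 2 reviews the *finished* answer in a fresh call,
# so the check happens after generation rather than during it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_then_verify(question: str, model: str = "gpt-4o") -> tuple[str, str]:
    # Pass 1: produce the answer with no self-checking instruction at all.
    answer = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Pass 2: a separate call that only reviews the completed answer,
    # treating correctness and completeness as two distinct questions.
    review_prompt = (
        f"Question: {question}\n\nAnswer:\n{answer}\n\n"
        "Review this answer. First: is it correct? "
        "Second, and separately: is it complete? "
        "Flag any claim you cannot verify."
    )
    review = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": review_prompt}],
    ).choices[0].message.content

    return answer, review
```

Keeping the verification in a second call means the model reviews a finished artifact instead of defending an answer it is still in the middle of producing.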
So important to have some distance between the first impression and the final decision.
They are rarely the same thing... although that's not how most people are moving now.
Absolutely Dennis. And this prompt really helps to reintroduce that distance. 🙏
I like this approach as it is rooted in context and the meaning behind the language for what is being reviewed. As my former ED would say, “Words have meaning. So, you need to parse your words.” This is what this prompt is doing, which grounds it in confidence.
Thanks so much Maribeth. So easy for many of us to trade away those words without care these days. 🙏
Thank you team for deliberating on a prompt that treats reasoning, correctness, and completeness as three separate attributes. Our biases often treat them interchangeably, but with some intentional prompting you can stop judging outputs solely on "personal intuition". It's important to remember that intuition builds with experience, and very often our gut is just a mirror of our past experiences!
Thanks Raghav. And absolutely. We need to think about how biased our own experiences can be. 🙏
There are so many right ways to use artificial intelligence, and this prompt, which leads you to have more self-confidence, is a prime example.
Thanks Marcela. 🙏
Separating “correct” from “complete” is such a clean way to reduce self-deception in evaluation.
Thanks John. It was a great prompt from @Khaled Ahmed, PhD to develop this idea. 🙏
This was such a powerful read, so many takeaways for many of us.
I feel like this just confirms the #1 skill we all need to keep honing in 2026: discernment, whether it's our own AI's output, someone else's work, or anything we see on social media.
Thank you Sam and Khaled. 🙏
Thanks Csabi. And absolutely. That skill is just going to keep on getting more and more bankable. 🙏
I like this, for the mind can only conceive what it can see and believe. ✌️
Very well said Richard. 🙏
From my vantage point in the Portuguese mountains, the distinction you draw between a "complete" answer and a "correct" one is the defining problem of our decade. We are drowning in the former (plausible, infinite noise) and starving for the latter.
You are building the Geiger counter; I am mapping the fallout zone. I think there is a powerful intersection between your forensic framework and the "Sovereign Science" thesis I’ve been working on. We are digging the same tunnel from different ends.
Would love to compare field notes if you are open to it.
Keep sending the signal.
Thanks Peter! And always open for discussions and collaborations. 🙏
Having AI check its own work intuitively makes sense—it’s what we do ourselves, after all. I suspect this approach could be adapted to have different models check each other’s work…
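As a rough sketch of what that cross-checking could look like, assuming the OpenAI Python client, with the model names as placeholders for any two independent models:

```python
# One model drafts, a *different* model audits; the auditor has no
# stake in the draft, so it is less likely to inherit its mistakes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


draft = ask("gpt-4o", "Summarise the main causes of the 2008 financial crisis.")
audit = ask(
    "gpt-4o-mini",  # the checker: a separate model reviewing the draft
    "Audit the text below for errors. Report 'incorrect' claims and "
    "'incomplete' coverage as two separate lists.\n\n" + draft,
)
print(audit)
```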
I LOVE this idea Karen. I wonder if the models would suffer from tall poppy syndrome though?
🤣
Loved seeing our network member, Juan Gonzalez featured here!
Catching “convincing but wrong” output is the skill we need to keep developing - not just with AI, but also with social media, news, and pretty much everything we consume. Great article & prompt!
Thanks Daria. That is so true. It's going to be one of the most sought after skills! 🙏
This prompt was pretty interesting. It reminds me of something I posted several months ago: AI has shown that you can say anything, as long as it's confident and authoritative, and people will believe it regardless of whether it's true.
Also, thanks for rewording my previous comment on dictation tools. You described it way better than I could :D
Thanks Juan. And you are very welcome. I am glad I did it justice. 🙏