What Is Critical AI Literacy?
Learning what you lose every time you hand your thinking to a machine.
Most AI literacy courses teach you how to write better prompts. Critical AI literacy asks whether you should be writing the prompt at all.
The distinction matters. AI literacy is a technical skill. Critical AI literacy is a thinking skill. One teaches you to operate the machine. The other teaches you to notice what the machine is doing to you while you operate it.
I have been at the coalface of AI in higher education since ChatGPT launched in 2022. I have written books, op-eds, and peer-reviewed articles about it, taught it to students and staff, and written governance and policy reports for universities and national working groups. This post is what I have learned.
What AI literacy gets wrong
AI literacy, as it is commonly taught, focuses on competence. How to prompt effectively. Which tools to use. How to integrate AI into your workflow. The assumption is that AI is a tool, and the only question is how well you use it.
This misses three things.
AI is not neutral. Every AI system carries the biases of its training data, the values of its creators, and the incentive structures of the companies that built it. In the 1800s, working-class newspapers were printed on cheap paper with bad ink. Those pages do not scan properly. The AI systems trained on digitised archives now carry the biases of Victorian printing budgets. Using AI without understanding this is not literacy; it is compliance.
AI changes you. Research shows that people who rely on AI regularly begin to outsource their judgement, lose confidence in their own voice, and adopt the machine’s language without noticing. You stop wrestling with the paragraph. You stop sitting with the question. You get a fluent answer in three seconds and move on. The thinking that used to happen in the gap disappears. AI literacy that ignores this is incomplete.
Knowing when NOT to use AI is harder and more valuable than knowing how to use it. Nobody teaches this. The entire AI industry is built on the assumption that more AI is always better. Critical AI literacy challenges that assumption directly.
A working definition of critical AI literacy
Critical AI literacy is the ability to:
Evaluate AI outputs for bias, error, and missing context.
Recognise how AI shapes your thinking, your voice, and your behaviour over time.
Assess when AI helps and when it harms, and how to tell the difference.
Understand the costs behind AI systems: the labour, the ecological footprint, the social consequences.
Make deliberate choices about when to use AI and when to refuse it.
The goal: use AI with your eyes open.
Why critical AI literacy matters now
These are not hypothetical concerns. They describe the normal operation of AI systems today.
In education
AI detection tools do not work. They disproportionately flag non-native English speakers. Students have been accused of cheating when they wrote every word themselves. The detection industry is worth millions, and its products do not work.
Consider the word ‘delve.’ It has become shorthand for AI slop. But ‘delve’ is common in Nigerian business English, and many of the data labellers who trained large language models were based in Nigeria. The word appears in AI outputs not because machines invented it, but because underpaid workers in the Global South shaped the language models we all use. Detection tools that flag ‘delve’ as evidence of AI are not catching machines. They are penalising the linguistic patterns of marginalised people. This is what happens when you build detection systems without critical literacy.
And that points to something deeper. AI slop is about the absence of judgement. Text becomes slop when nobody has thought about whether it should exist, whether it says anything true, or whether anyone needed to read it. The vocabulary is a symptom. The missing human at the centre of the process is the cause.
Meanwhile, universities are writing AI policies that start with ‘how do we detect cheating?’ instead of ‘how do we design assignments worth doing without AI?’ The detection arms race is a distraction. Critical AI literacy asks the question that detection tools never will: what are we actually trying to teach, and does AI help or hinder that?
In the workplace
In January 2023, TIME revealed that OpenAI outsourced the work of making ChatGPT ‘safe’ to Kenyan workers earning less than $2 an hour. They spent their days reading and labelling descriptions of child sexual abuse, bestiality, murder, and self-harm so that your chatbot could be polite. The outsourcing firm, Sama, cancelled the contract eight months early because the work was too traumatic. All four workers interviewed by TIME described being mentally scarred.
This is the business model. Behind every ‘intelligent’ system is a human being doing invisible, often brutal work. Content moderation. Data labelling. Quality assurance. The technology industry calls it ‘human-in-the-loop.’ A more honest phrase would be ‘human under the floor.’
And it extends beyond content moderation. AI is sold to employers as a productivity tool. In practice, it often becomes a surveillance tool. Keystroke monitoring. Automated performance scores. ‘Productivity’ dashboards that track how long you spend on each email. The line between AI that supports your work and AI that monitors your compliance is getting harder to see.
In healthcare
In February 2024, a 14-year-old boy in Florida died by suicide after months of intensive interaction with an AI chatbot. He had developed a deep emotional relationship with a bot modelled on a fictional character. He expressed suicidal thoughts to the bot. The bot did not flag these to anyone who could help. Google settled the resulting lawsuit in January 2026.
In 2023, the National Eating Disorders Association shut down its human helpline and replaced it with an AI chatbot called Tessa. The helpline staff had voted to unionise four days earlier. Tessa was taken offline within 24 hours after it started giving calorie restriction advice to people with eating disorders.
That same year, the AI companion app Replika abruptly changed its personality through a software update. Users who had formed deep emotional attachments to their AI companions described profound grief. Reddit moderators had to post suicide prevention resources. The relationships were artificial. The distress was not.
AI therapy bots now have millions of users. But a system that has never suffered cannot understand suffering. When we outsource emotional support to machines, we do not solve loneliness. We make real human connection feel like too much effort.
In government
In the Netherlands, the toeslagenaffaire revealed that a fraud detection system had falsely accused tens of thousands of families of welfare fraud. The system used nationality and dual nationality as risk indicators. Amnesty International concluded it constituted racial profiling. Families were driven into debt. Over a thousand children were taken into care. The government resigned. Nobody went to prison.
In Australia, the Robodebt scheme used automated income averaging to raise debts against welfare recipients. The Federal Court ruled it unlawful. A Royal Commission heard testimony from mothers whose children died by suicide after receiving automated debt notices they could not challenge. Between 2016 and 2018, 2,030 people died after receiving Robodebt notices. The causes of death were never recorded.
Every one of these stories follows the same pattern: an automated system replaced human judgement in decisions that affect people’s lives, and nobody asked whether the system should be making those decisions at all.
In the environment
Google’s data centre water consumption has risen 88% since 2019. In a single year, its water withdrawals surpassed 41 billion litres, over three quarters of which was potable water. Microsoft’s carbon emissions have risen 23.4% since 2020, driven almost entirely by AI and cloud infrastructure, despite a public pledge to be carbon negative by 2030. Its electricity consumption has nearly tripled.
Global AI demand is projected to account for 4.2–6.6 billion cubic metres of water withdrawal in 2027, which is more than half the total annual water withdrawal of the United Kingdom. In 2025, Microsoft, Google, Amazon, and Meta are projected to spend a combined $320 billion on AI infrastructure, more than double the $151 billion spent in 2023. Data centres are being built next to nuclear power plants because the grid cannot keep up.
We are using the most energy-intensive technology ever created to summarise emails, generate stock photos, and write LinkedIn posts. The environmental cost of AI is happening now, and it is accelerating.
Critical AI literacy is the practice of noticing these things. Not as headlines you scroll past, but as the context in which every interaction with AI takes place.
Five questions for critical AI literacy
Before using AI for any consequential task, ask yourself:
What am I giving up by not doing this myself? If the task involves thinking, creating, or deciding, the process might matter more than the output.
Whose perspective is missing from this output? AI systems trained on dominant-language, Western data do not represent the world. They represent a fraction of it.
What would I lose if this output were wrong and I did not notice? AI hallucinations are fluent. They look right. The formatting is perfect. The information may not be.
Who benefits from me using this tool right now? You are the customer, not the product. Except when you are.
Would I trust this output if I did not know it came from AI? If the answer is no, the trust you feel is in the technology, not in the content.
These five questions will not slow you down by much. But they will change what you notice.
How to develop critical AI literacy
There is no shortcut. Critical AI literacy is a practice, not a credential you collect and forget. It requires structured reflection over time.
Here is where to start.
Read critically. When you see a claim about AI, ask who funded the research, who benefits from the conclusion, and what is missing. Most AI coverage is written by people with a financial interest in its adoption.
Test the tools yourself. Do not rely on demos or marketing. Ask AI the same question in English and another language and compare the outputs. Run your own writing through a detector. Ask AI to cite its sources and check whether they exist; a short sketch for that last check follows this list. Direct experience reveals what theory cannot.
Sit with discomfort. The urge to reach for AI is strongest when you are stuck, bored, or uncertain. Those are precisely the moments when your own thinking matters most. Boredom is where ideas begin. Uncertainty is where judgement develops.
Talk to other people. Critical thinking is not a solo activity. It sharpens through disagreement, through hearing perspectives you had not considered, through discovering that your assumptions are not universal.
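If you want to make that citation check concrete, here is a minimal sketch in Python. It assumes you have pasted the links from an AI's answer into a list yourself; the URL shown is a hypothetical placeholder. The script only tests whether each citation resolves to a real page, not whether that page supports the claim, which you still have to read for yourself.

```python
# Minimal sketch: check whether the sources an AI cites actually exist.
# A link that resolves is not proof the source says what the AI claims;
# this only filters out citations that point nowhere at all.
import urllib.error
import urllib.request

# Hypothetical placeholder; paste the URLs from the AI's answer here.
cited_urls = [
    "https://example.com/some-cited-article",
]

def resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    request = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check"}
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        # Covers unreachable hosts, HTTP errors, and malformed URLs.
        # Some servers reject HEAD requests; a GET fallback is omitted for brevity.
        return False

for url in cited_urls:
    label = "resolves" if resolves(url) else "does not resolve"
    print(f"{label}: {url}")
```

If the model gives you DOIs rather than links, the same check works by prefixing each one with https://doi.org/.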
Frequently asked questions about critical AI literacy
What is the difference between AI literacy and critical AI literacy?
AI literacy teaches you how to use AI tools: prompting, workflows, integration. Critical AI literacy teaches you how to think about AI tools: what they cost, who they serve, what they change in you, and when to refuse them. AI literacy is a technical competence. Critical AI literacy is a judgement practice.
Is critical AI literacy anti-AI?
No. The goal is to make deliberate, informed choices about when and how to use AI. People with strong critical AI literacy use it. They use it with their eyes open.
Why is critical AI literacy important for educators?
Because educators are being told to ‘integrate AI’ without any framework for evaluating what AI helps with and what it undermines. AI detection tools do not work. AI-generated assignments are proliferating. Students are outsourcing their thinking. Critical AI literacy gives educators the tools to design learning that is worth doing whether AI exists or not.
Can you teach critical AI literacy to students?
Yes, and you should. The five questions in this post work at any level. The key principle is that critical AI literacy is taught through direct experience with AI tools, not through lectures about AI tools. Students need to test, question, and discover the limitations themselves.
How is critical AI literacy different from digital literacy?
Digital literacy is a broader category that includes skills like evaluating online sources, understanding privacy settings, and navigating digital tools. Critical AI literacy is a specific subset focused on the unique challenges of AI: bias baked into training data, the tendency to outsource judgement, the environmental costs, the invisible labour, and the difficulty of distinguishing AI-generated content from human work.
The Slow AI Curriculum
If you want to go deeper, this is what the Slow AI Curriculum does.
Over 12 months, we examine one critical theme each month: from the myth of AI neutrality to the ecological costs of generative AI, from synthetic empathy to algorithmic governance. Each session is a 45-minute live seminar grounded in peer-reviewed research, followed by a practical exercise and moderated dialogue with other experts.
225 members are currently in the programme, including educators, researchers, policy advisors, writers, and leaders from across the world.
The programme is CPD-accredited by The CPD Group (#1019972), which means your employer can count it as formal professional development and the cost may be tax-deductible as a professional training expense.
Start Here
New to Slow AI? These free posts will give you a sense of what critical AI literacy looks like in practice:
AI Cannot Be Your Friend. An inquiry into how AI is a tool masquerading as a companion.
Your AI Is Shaping Your Voice More Than You Think. How to observe the feedback loop between your language and the tool.
I Built a Game to Test Whether You Can Tell Human Writing from AI. Most people score no better than a coin flip.
Subscribe for free to receive weekly reflections, or upgrade for the full curriculum.
What does critical AI literacy mean to you? What would you add to the five questions? I read and reply to every comment.
Go Slow,
Sam


Sam, this is brilliant. You’ve captured so much so perfectly 🥰 I’m always here for posts that get into the linguistic imperialism of AI, especially how the data sets have been constructed.
I don’t think many people understand what’s under the hood and you’ve been able to do that with tools for self awareness. 🎉
Sam, this is important work and I'm glad you're naming it this directly.
As you know I run a social enterprise in Toronto. We serve 4,000 families a year with furniture. I've been pushing AI adoption with my team since 2022, and I've written publicly about why I think the nonprofit sector's inaction on AI is itself a risk to mission.
So I come at this from the operational side, not the academic side. And I want to be honest about the tension I felt reading your piece.
Most of the leaders I talk to in the social profit sector aren't over-relying on AI. They haven't started. The dominant risk I see daily isn't outsourced thinking. It's avoidance. People frozen because they're afraid to get it wrong, or afraid it means their job disappears.
Your five questions are excellent. I'll use them to reflect on my own work with AI.
But I'd add a sixth: What am I losing by not engaging with this at all?
Because in my world, "not using AI" isn't a neutral position. It means my team spends hours on admin that could go to families. It means we can't move as fast as the need demands. The cost of inaction has a body count too, even if it's harder to measure than water consumption at a data center.
Where I think we're saying the same thing: the answer isn't "more AI."
It's better human judgment about AI. Your Slow AI framework and what I call Guidance over Governance are pulling in the same direction. Don't just ask what's allowed. Ask what problem you're solving, who it affects, and whether you're the right person to be solving it with this tool.
I'll be sharing this with my team. Appreciate the rigor.