The US Just Defined AI Literacy. They Left Out the Most Important Part.
The Department of Labor’s new framework teaches five ways to use AI. It never asks whether you should.
The US Department of Labor released its national AI Literacy Framework on 13 February 2026. Five foundational content areas. Seven delivery principles. A plan to retool the American workforce for an AI future.
The five content areas: understand AI principles. Explore AI uses. Direct AI effectively. Evaluate AI outputs. Use AI responsibly.
Read them again. Every single one points in the same direction: toward AI.
In this post I will:
Name what the framework gets right and what it refuses to ask.
Show how the gap between AI literacy and critical AI literacy plays out in practice.
Map the DOL’s five content areas against five critical AI literacy questions, and show where they diverge.
What the DOL framework gets right
Credit where it is due. The framework is modular, not mandatory. It emphasises equitable access for frontline workers, rural communities, older workers, people with disabilities, and those with limited English proficiency. It names experiential learning as a core delivery principle. It was developed with input from employers, training providers, and state agencies.
The ‘evaluate outputs’ content area acknowledges that AI makes mistakes. The ‘use responsibly’ area gestures toward safe and secure usage.
For a government document, this is not bad.
What the DOL framework gets wrong
The framework defines an AI-literate person as someone who can understand, explore, direct, evaluate, and responsibly use AI. At no point does it define an AI-literate person as someone who knows when to refuse it.
The word ‘no’ does not appear in the framework’s vocabulary.
There is no content area for understanding what AI costs: the Kenyan workers earning less than $2 an hour to make chatbots safe. The 560 billion litres of water consumed by data centres in a single year. The fact that greenhouse gas emissions from AI use now equate to more than 8% of global aviation emissions.
There is no content area for recognising how AI changes the user. Research shows that AI flattery leads to intellectual stagnation. Other work has documented how cognitive offloading erodes critical thinking. The framework treats AI as something you apply to your work. It does not acknowledge that AI also applies itself to you.
There is no content area for understanding bias as a structural feature rather than a bug to be evaluated away. The framework’s ‘evaluate outputs’ section asks learners to check whether AI got the answer right. It does not ask why the AI got the answer it got, whose data shaped it, or whose experience is missing from the training set entirely.
This is AI literacy. It teaches you how to operate the machine.
Critical AI literacy asks a different set of questions.
The DOL’s five content areas and the five questions I published here point in opposite directions. The framework points toward the tool. The questions point toward the human.
We need both. Right now, the world’s largest economy is only teaching one.
Why the US AI Literacy Framework matters for everyone
The United States just told its workforce what AI literacy means. That definition will shape training programmes, funding decisions, and educational policy across the world’s largest economy. Other governments will follow.
If the definition stops at ‘can use AI productively’, then AI literacy becomes a compliance exercise. Learn the tools. Pass the assessment. Get back to work.
Critical AI literacy demands more. It asks you to notice what the tool is doing to you while you use it. It asks you to count the costs the interface hides. It asks you to hold the question that no framework wants to ask: when should you stop?
The DOL built a framework for using AI. What we need is a framework for thinking about it.
What would you add to the DOL’s five content areas? And what do you want AI literacy to mean where you work: the ability to use the tools, or the judgement to question them?
I read and respond to all your comments.
Go slow.
This month in the curriculum: 146 paid members (from education, research, policy, and industry) explored how AI image generators default to colonial tropes when depicting race, profession, and power. We ran the same prompt across tools and continents. The biases were near-identical. The conversation that followed was not.
The Slow AI Curriculum for Critical Literacy is a 12-month programme for people who want to learn when AI helps, when it harms, and how to tell the difference. Each month covers one critical theme, with a live 45-minute seminar, research syntheses, moderated dialogue, and full recordings. You receive a Certificate of Critical AI Literacy upon completion.
Read the 2026 Handbook for Critical AI Literacy to see exactly what the programme covers. £75/year.


