Your AI Training Is Built on a Model That Failed Forty Years Ago
The dominant model for AI literacy was discredited in science communication forty years ago. Here is what the evidence says works instead.
The evidence against the current approach has existed for forty years. Nobody is using it.
In this post I will:
Show that the dominant model for AI literacy was discredited in science communication research forty years ago, and explain why it keeps coming back.
Trace the pattern from Chernobyl to ChatGPT: what happens when experts treat the public as a knowledge gap to be filled.
Give paid subscribers a diagnostic framework for evaluating whether their own institution’s AI literacy programme is building understanding or just delivering information.
In 1986, radioactive fallout from Chernobyl reached the hill farms of Cumbria in northern England. Government scientists assured farmers that contamination would dissipate within weeks. It did not. Their predictive models had been built on data from lowland clay soils, where radiocaesium binds to mineral particles and becomes inert. The upland fells have acidic, organic soils in which caesium stays mobile, cycling back into vegetation and sheep.
The hill farmers, whose generations of knowledge about local soil and terrain could have identified these assumptions as flawed, were not consulted. They were reassured. When they raised concerns, they were told to feed their fell sheep straw, a suggestion so detached from the realities of hill farming that it confirmed what many suspected: the experts did not understand what they were making decisions about.
Brian Wynne’s study of the Cumbrian farmers identified a dynamic that has been confirmed across dozens of contexts since. When expertise is drawn from too narrow a base, its holders mistake the limits of their own knowledge for the limits of knowledge itself. They treat communication as delivery rather than exchange. They dismiss the understanding of those closest to the problem as lay ignorance rather than what it often is: a different and necessary form of expertise.
AI is now repeating this dynamic on a compressed timeline.
Why AI literacy programmes fail
The deficit model assumes that public misunderstanding stems from a knowledge gap, and that filling the gap with expert knowledge will produce the desired attitudes and behaviours. It persists because it is intuitive. The evidence against it is substantial.
Research across climate change, vaccination, genetically modified organisms, and nuclear risk has consistently shown that public responses to complex science are shaped more by trust, values, identity, and lived experience than by information alone. Kahan and colleagues demonstrated that higher scientific literacy can intensify polarisation on contested issues, because people use their knowledge to defend positions they already hold. Knowledge and behaviour are not linearly related. Social context, institutional trust, and personal experience mediate the relationship.
Current AI literacy efforts follow the deficit model’s blueprint with striking fidelity. UNESCO’s AI Competency Framework for Students structures understanding as competencies to be acquired. University policies focus on compliance. Corporate training teaches tool use. Public discourse oscillates between utopian promise and existential threat, with virtually no structured space for people to articulate their own experiences or concerns.
A bibliometric analysis of 335 AI literacy studies found that the overwhelming majority focus on technical competency acquisition, with minimal attention to participatory or critical engagement. Almost all follow the same structure: experts decide what people need to know, package it into modules, and deliver it.
The people on the receiving end are given no structural role in shaping the conversation.
Farrell and colleagues recently argued in Science that large AI models are best understood as cultural and social technologies. If that reframing is correct, the expertise required to govern and communicate about AI must be correspondingly broad. Yet the voices shaping AI policy, literacy, and governance are drawn overwhelmingly from computer science and the technology industry.
A nurse implementing AI diagnostic tools understands clinical judgement in ways no developer can replicate. A teacher navigating AI in assessment holds expertise about pedagogy and trust that no competency framework captures. Institutions are not structured to hear this knowledge. That is the problem.
I have watched institutions build AI literacy programmes that follow the deficit model exactly: a working group of senior academics decides what staff need to know, packages it into workshops, and measures success by attendance. Nobody asks the staff or the students what they already understand, what worries them, or what they have already figured out on their own. The expertise in the room is treated as empty space to be filled. It is Cumbria again, with laptops instead of sheep.
What science communication research shows
If the deficit model does not work, what does? Science communication research offers tested alternatives.
Two-way dialogue outperforms one-way messaging. Wynne documented this in microcosm: the few scientists who stayed on Cumbrian farms, engaging informally with farmers rather than issuing statements from London, gained credibility precisely because they were open about what they did not know. I found something similar when I ran poetry workshops with refugees in Bristol and people living with mental health needs in Manchester: given a medium to express what they knew about environmental change, participants produced insights the experts in the room had missed, including local flood knowledge dating back to the 1960s and an understanding of urban pollution that no air quality index could capture.
The dialogue changed the experts as much as the participants. This is because expertise becomes trustworthy when it is willing to be revised through encounter with the knowledge of others.
Early evidence from AI-specific engagement supports this pattern. When AI literacy initiatives incorporate participatory elements, asking people to test tools against their own expertise or interrogate outputs through the lens of their professional knowledge, the quality of critical engagement improves markedly. Such approaches are harder to scale and harder to report in competency frameworks. That is not a reason to avoid them.
Creative and arts-based methods access dimensions of understanding that conventional approaches cannot reach. Poetry, theatre, and storytelling change how participants relate to complex phenomena, making abstract concepts personally meaningful. Serious games, for example, create engagement with complex systems by requiring players to make trade-offs and negotiate competing values. These methods work because they are experiential. They create space for people to articulate complex, contradictory responses that surveys and training workshops are structurally unable to capture.
Science communication research has mapped a spectrum of engagement, from one-way dissemination through consultation to full public participation. Current AI communication sits almost entirely at the dissemination end. The AI sector invokes ‘responsible AI’ and ‘human-centred design’, yet its engagement practices remain expert-led and one-directional. Responsible innovation frameworks have long argued that meaningful participation must occur upstream, during development and design, not downstream after deployment.
AI literacy is currently an entirely downstream activity.
How to fix AI literacy
Four things follow from this evidence.
AI literacy programmes must move beyond information provision. Institutional training should include participatory, dialogic, and creative elements. The hundreds of AI literacy initiatives launched in higher education since 2023 should be evaluated against participation metrics, not just completion rates. Did participants contribute their own knowledge? Were their concerns incorporated into institutional responses? If the answer to both is no, the programme is delivering information. It is not building understanding.
AI governance must create genuine space for public voice. The current conversation is dominated by technology companies, governments, and a media ecosystem that rewards extreme positions. Publics are positioned as recipients, never as participants. Governance bodies and educational institutions should adopt participatory formats that bring diverse publics in as contributors with relevant knowledge.
Creative and arts-based approaches must be funded and recognised as legitimate methods for AI engagement. Poetry, games, theatre, and participatory arts are methodologies with established evidence bases and demonstrated capacity to produce understanding that conventional methods miss. Funders should treat creative engagement as a core component of AI research, not as outreach conducted after the real work is done.
Research funders should require AI projects to include participatory public engagement as a condition of funding, in the same way that many funders now require data management plans or ethics review. The infrastructure for meaningful public engagement with complex science exists. It has been developed, tested, and refined over forty years. What is missing is the institutional will to apply it.
The hill farmers of Cumbria knew their land better than the scientists who dismissed them. The nurse, the teacher, and the artist know things about the human consequences of AI that its builders do not. We have been here before. The evidence exists. The methods are proven. What remains is the institutional will to use them.
Has your institution asked you what you think about AI, or has it told you what to think? I want to know which one.
Below the paywall: five practical interventions for fixing AI literacy where you work. Each one can be used in your next meeting without institutional permission. Paid subscribers get this and every Slow AI post, plus the full Slow AI Curriculum: a CPD-accredited, 12-month programme in critical AI literacy with monthly live seminars, exercises, and a community of over 200 educators, policymakers, and practitioners doing this work seriously.
Five interventions to fix AI literacy where you work
Everything above tells you what is broken. This section tells you what to do about it. Each intervention takes less than five minutes to explain to a colleague and can be implemented without institutional permission.