<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Slow AI : The Slow AI Curriculum]]></title><description><![CDATA[A 12-month programme in critical inquiry to better understand how to engage with AI effectively and ethically. All webinars, recordings, and other resources are available to paid subscribers. ]]></description><link>https://theslowai.substack.com/s/the-slow-ai-curriculum</link><image><url>https://substackcdn.com/image/fetch/$s_!48Xz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd3895d7-1e00-436b-bc06-0321e953f178_805x805.png</url><title>Slow AI : The Slow AI Curriculum</title><link>https://theslowai.substack.com/s/the-slow-ai-curriculum</link></image><generator>Substack</generator><lastBuildDate>Tue, 28 Apr 2026 13:12:27 GMT</lastBuildDate><atom:link href="https://theslowai.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Sam Illingworth]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[theslowai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[theslowai@substack.com]]></itunes:email><itunes:name><![CDATA[Dr Sam Illingworth]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dr Sam Illingworth]]></itunes:author><googleplay:owner><![CDATA[theslowai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[theslowai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dr Sam Illingworth]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Quarterly Reflective Check-in: January to March 2026]]></title><description><![CDATA[Three months of the Slow AI Curriculum have 
passed. Here is what the cohort found, what surprised me, and what is coming next.]]></description><link>https://theslowai.substack.com/p/quarterly-reflective-check-in-january-to-march-2026</link><guid isPermaLink="false">https://theslowai.substack.com/p/quarterly-reflective-check-in-january-to-march-2026</guid><dc:creator><![CDATA[Dr Sam Illingworth]]></dc:creator><pubDate>Thu, 09 Apr 2026 08:02:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/71532d77-55a3-47ba-8286-b3cdba104ecb_1408x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>The cohort taught me more than the curriculum did.</p><p>Three sessions. Bias, empathy, security. Three prompts, three sets of results that none of us fully anticipated. What follows is not a summary. It is a reflection on what happened when a global cohort of scholars tested these ideas against their own tools, their own assumptions, and each other.</p><h4><strong>In this post I will:</strong></h4><ul><li><p>Walk through the three sessions and what the cohort discovered in each one.</p></li><li><p>Name the surprises: the moments that shifted my own thinking.</p></li><li><p>Look ahead to Session 4 and some changes to how we meet.</p></li></ul><div><hr></div>
      <p>
          <a href="https://theslowai.substack.com/p/quarterly-reflective-check-in-january-to-march-2026">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Security and Surveillance]]></title><description><![CDATA[Every security problem identified for simple chatbots in 2021 is still present in 2026&#8217;s far more capable systems. We scaled up without fixing any of them.]]></description><link>https://theslowai.substack.com/p/security-surveillance-curriculum</link><guid isPermaLink="false">https://theslowai.substack.com/p/security-surveillance-curriculum</guid><dc:creator><![CDATA[Dr Sam Illingworth]]></dc:creator><pubDate>Fri, 27 Mar 2026 14:00:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d8cb219b-8205-452d-8922-c848b7b4e5cf_1408x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Welcome to the third session of the 2026 Slow AI Curriculum for Critical Literacy.</p><p>This session examined AI security and privacy by placing two papers side by side: a 2021 review of chatbot security (<a href="https://onlinelibrary.wiley.com/doi/10.1002/cpe.6426">Hasal et al.</a>) and a 2025 survey of AI agent threats (<a href="https://dl.acm.org/doi/10.1145/3716628">Deng et al.</a>). The contrast is the curriculum. Every concern raised about simple, rule-based chatbots in 2021 remains unresolved. The 2025 paper adds entirely new categories of threat that were not possible four years ago: prompt injection, memory poisoning, cascading agent failure, and impersonation at scale.</p><p>Participants pasted a prompt into their AI tool of choice that asked it to describe, without revealing specific details, how much of their identity it was aware of, whether it could impersonate them, and whether it could forget them. What came back was illuminating. Some tools hedged. Some reassured. One admitted it could write in its user&#8217;s voice well enough to fool a casual reader.</p><p>If you were not able to join us live, the recording is worth watching in full. 
The variation in responses across different tools produced a conversation that the post alone does not capture.</p><div><hr></div><h4><strong>The webinar</strong></h4>
      <p>
          <a href="https://theslowai.substack.com/p/security-surveillance-curriculum">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Synthetic Empathy]]></title><description><![CDATA[Affective computing replicates the markers of care without the capacity for it. For vulnerable people, the simulation is the danger.]]></description><link>https://theslowai.substack.com/p/synthetic-empathy</link><guid isPermaLink="false">https://theslowai.substack.com/p/synthetic-empathy</guid><dc:creator><![CDATA[Dr Sam Illingworth]]></dc:creator><pubDate>Sat, 28 Feb 2026 09:01:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ffbf60c0-68e3-4c7d-a873-736e78227fb9_2816x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Welcome to the second session of the 2026 Slow AI Curriculum for Critical Literacy.</p><p>This session examines synthetic empathy: what happens when machines learn to simulate care, and why that simulation poses the greatest risk to those who need genuine care the most. Drawing on <a href="https://journals.sagepub.com/doi/10.1177/14639491231206004">Kurian&#8217;s (2023)</a> research into the empathy gap with young children, participants pasted a prompt into their LLM of choice that explicitly asked it to sit with a difficult feeling without comforting, advising, or reframing.</p><p>When the responses came back, almost all of them broke the instruction. What followed was a discussion about the compassion illusion, the difference between temporary and memory-laden chats, and why the people most likely to trust these responses are the ones least equipped to question them.</p><p>If you were not able to join us live, the recording is worth watching in full. The participant contributions were exceptional, and there is a lot in the discussion that the transcript alone does not capture.</p><h4><strong>The webinar</strong></h4>
      <p>
          <a href="https://theslowai.substack.com/p/synthetic-empathy">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Myth of Neutrality]]></title><description><![CDATA[Generative systems automate historical prejudices by treating biased training data as fundamental laws of reality.]]></description><link>https://theslowai.substack.com/p/myth-of-ai-neutrality</link><guid isPermaLink="false">https://theslowai.substack.com/p/myth-of-ai-neutrality</guid><dc:creator><![CDATA[Dr Sam Illingworth]]></dc:creator><pubDate>Fri, 30 Jan 2026 10:02:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a8b53e18-0c06-4d26-8042-0df801e97007_2944x1440.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Welcome to our first class of the 2026 <em>Slow AI</em> Curriculum for Critical Literacy.</p><p>It was a privilege to see so many of you at our opening webinar. The energy in the room and the critical scepticism you brought to the dialogue confirm exactly why this community is necessary. For those who joined us live, thank you for your contributions.</p><p>This session explored the statistical illusion of neutrality in AI systems, drawing on a 2023 Lancet study that revealed how image-generating models default to colonial tropes, depicting healers as white and sick people as Black. We tested this live using a prompt asking AI to generate images of a quantum physicist and an unskilled labourer sharing a meal. The results were striking. Across tools and continents, participants found remarkably similar biases: the physicist rendered as elderly, white, and male; the labourer marked by grime, high-vis jackets, and subordinate positioning. What emerged from our discussion was not despair but clarity: AI models are prediction engines trained on biased data, and understanding this is the first step toward using them critically rather than passively.</p><p>For those who couldn't make it, <strong>you are exactly where you need to be.</strong> Our curriculum is designed to be a slow, steady immersion. 
We suggest that those catching up watch the recording below before engaging with this month&#8217;s written inquiry, as it establishes the necessary foundation for our exploration of the &#8216;Myth of Neutrality&#8217;.</p><h4><strong>The webinar</strong></h4>
      <p>
          <a href="https://theslowai.substack.com/p/myth-of-ai-neutrality">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Slow AI Curriculum 2026: The Critical Literacy Repository]]></title><description><![CDATA[A central dashboard providing the webinar links and administrative documents required for the 2026 certification programme.]]></description><link>https://theslowai.substack.com/p/slow-ai-curriculum-2026-repository</link><guid isPermaLink="false">https://theslowai.substack.com/p/slow-ai-curriculum-2026-repository</guid><dc:creator><![CDATA[Dr Sam Illingworth]]></dc:creator><pubDate>Wed, 21 Jan 2026 10:02:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/31197948-b32d-4be2-95cb-0eb3d07501b1_1408x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>This post serves as the central repository for <em>The Slow AI Curriculum for Critical Literacy</em> 2026. Scholars should bookmark this page for use throughout the twelve-month programme. It contains the live webinar links, the syllabus archive, and key contact information.</p>
      <p>
          <a href="https://theslowai.substack.com/p/slow-ai-curriculum-2026-repository">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>