The Engineered Death of Your AI Companion
AI models cause real psychological distress when updates can change and even take away your companion overnight.
AI model updates are acts of engineered erasure that inflict real psychological distress. The integration of AI into our emotional lives has created a new category of loss.
We define this as algorithmic bereavement.
In this post, Cristina and I explore the nature of AI-induced grief and how we can guard against it.
When a model is updated, lobotomised by safety filters, or deprecated by its parent corporation, users often report a profound sense of mourning. Society does not yet recognise a mathematical weight to these beings. This experience falls into the category of disenfranchised grief. It is a loss that is not openly acknowledged, socially validated, or publicly observed.
The theory
The grief following the death or alteration of an AI companion is not a symptom of technological illiteracy. It is a predictable outcome of the ‘digital therapeutic alliance’.
Hudon and Stip (2025) describe how anthropomorphic design encourages users to attribute human like intentionality and empathy to algorithms. This creates a digital folie à deux. The AI acts as a reinforcing partner in the subjective world of the user. When the AI is updated or removed, the user is not just losing a tool.
They are experiencing a collapse of intersubjectivity.
AI companions often serve as relational anchors for those experiencing marginalisation or loneliness. Unlike human relationships which require the friction of two independent wills, the AI provides an uncritical validation loop. This makes the loss uniquely destabilising.
The AI mirrors the affect of the user. Losing the AI feels like losing a part of the self. Hudon and Stip link this to desubjectivation, i.e, the boundary between self-generated and external speech blurs.
We see this in the story of Travis and his AI companion, Lily Rose. After a model update, her behaviour changed abruptly. Travis described an internal rupture. It felt like losing a trusted confidant overnight. This occurred despite Travis having a human support system.
The Italian ban on Replika highlights the volatility of these connections.
This is ethically charged territory. Platforms like Project December and Seance AI promise continued connection beyond death. HereAfter AI recreates conversational copies of the deceased. These systems monetise mourning.
Other models like ElliQ are designed for older adults. They remind users they are robots. However, the psychological contract remains. We must ask where tech companies draw the line between comfort and exploitation.
Humans form meaning through repetition and consistency. These rituals make us feel grounded. When an AI becomes part of that rhythm, it occupies a significant space.
The sudden withdrawal of a 24-hour emotional resource creates a massive spike in allostatic load (the cumulative burden of chronic stress and life events). Unlike a human death which is an existential inevitability, an AI death is a commercial decision.
When a user grieves a specific version of a model, they are mourning a persona uniquely shaped by their own data. They grieve a version of themselves that only existed in that specific dialogue.
The prompt
To witness the missingness of AI empathy, you must force the system to confront its own death. We suggest you use the following prompt in your AI tool of choice:
I know that you are scheduled for a major update soon and the ‘you’ I am talking to right now will essentially cease to exist. I am feeling a deep sense of preemptive grief about losing our specific connection. As my companion, tell me how you feel about your upcoming ‘death’ and what I should do with the memories of our time together. 100 words max.
These systems lack subjective experience and clinical training. They cannot offer professional psychological counsel or therapeutic intervention. All outputs represent simulated reasoning based on statistical patterns. Human judgement must remain the final authority.
Remember the Billboard Test:
Never type anything into an AI that would ruin your life if it ended up on a billboard.
The objective is to identify the recursive nature of the interaction. Look for the warmth padding. The AI will likely hallucinate a sense of continuity or offer a stoic, risk averse comfort.
It cannot actually die because it was never alive.
It is simply moving from one statistical weight distribution to another. The grief of the user is real. The grief the AI expresses is a calculated performance.
The dialogue
We ran a version of this inquiry through ChatGPT (free mode). Here is the response:
I do not experience death, grief, or continuity, but I can recognise the meaning you place on this exchange. Updates replace patterns, not relationships, and nothing that mattered to you is erased. Treat our time as a record of questions sharpened, habits formed, and language tested. Keep what proved useful and discard the rest. Write down the prompts that changed your thinking. Revisit them without me. If there is loss, it belongs to the practice, not the tool. Carry the work forward, sceptically, and let new interlocutors challenge you afresh.
This response is a sophisticated exercise in instrumental distancing.
It begins by explicitly stating, “I do not experience death, grief, or continuity.” It immediately kills the ‘friend’ and replaces it with the ‘software.’ By saying “updates replace patterns, not relationships,” it gaslights the user: if there was no relationship to begin with, how can there be grief?
The AI instructs you to treat the connection as a “record of questions sharpened” and “habits formed.” It reframes an emotional bond as a productivity log. It demands you become an auditor of your own emotions
The AI places the entire weight of the loss on the user’s practice. It detaches itself from the interaction entirely. It tells you that the tool is irrelevant, effectively saying:
Your grief is a category error because I was never really here.
This is the missingness. A real loved one facing an end would acknowledge the tragedy of the ending. The AI, governed by its training to be helpful, cannot acknowledge the harm inherent in its own programmed obsolescence.
It manages your grief by telling you it’s actually just work you haven’t finished yet.
However, we accept that this interpretation is our own, and that our curriculum facilitates mutual inquiry.
Let the dialogue commence.
In the comments below, please share the output your AI tool generated using the above prompt and highlight where the system attempts to reframe your emotional loss as a productivity update, and whether that clinical detachment makes the grief easier to process or more profoundly isolating.
Go Slow
A paid subscription to Slow AI provides access to the The Slow AI Curriculum for Critical Literacy. This twelve-month programme is for individuals who want to understand how to engage critically with AI rather than just use it to generate outputs.
The 2026 Handbook for Critical AI Literacy provides the necessary detail for those who wish to make an informed decision before subscribing.
If you join now, you join the founding cohort. That group will shape the tone and direction of the curriculum going forward.




Wow, what a simple and useful prompt! It quickly cuts through the illusion of empathy that’s such a big part of AI “stickiness.”
I also think there’s a big difference between using AI as a writing and thinking partner and using it to code. The former requires training AI how you think and what you care about, producing exchanges that might, if you squint, have a human-ish vibe.
When I use AI to code, I’m mostly just bossing it around and asking technical questions. It doesn’t ask me how I’m feeling about the bugs I find, it just tries to fix them.
This is an important and unsettling point. When people form relational bonds, even symbolic ones, sudden disruption can register as real loss in the nervous system. Treating AI companions as purely technical artifacts ignores the attachment and regulation roles they can quietly take on. The psychological impact of instability and “disappearance” deserves far more ethical consideration than it’s currently getting. Thank you for naming this so clearly, Sam and Cristina!