Michael Stewart

AI Grift Series — Part 5 — End-Stage Enshittification


In late 2022, Cory Doctorow coined the perfect term for the rot that inevitably consumes digital platforms: “enshittification”. Named the Macquarie Dictionary’s Word of the Year in 2024, it describes the process by which a service begins by delighting its users and ultimately ends up exploiting them. Simply put:

“Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.”

This dynamic has hollowed out every corner of the internet. Today, nowhere is it more visible than in the use and deployment of Large Language Models, and chatbots in particular.


The Chatbotification of Everything

Nobody asked for this, but somehow everything has become a chatbot.

Go to your bank’s website, and instead of a phone number or support form, you get an unhelpful eyesore of an “AI Assistant” shoved in your face. Open a retailer’s help page, and before you can even explain your problem, a chatbot obnoxiously interrupts with cheerful uselessness. Airlines, government portals and healthcare apps are all now gatekept by chatbots pretending to care and to help.

We should first understand why this is happening, and it’s not difficult to figure out; it’s the same answer that pervades and ruins so much of our modern world. It’s happening because it’s profitable.

Chatbots don’t unionize, take breaks or complain about harassment. They scale infinitely at negligible marginal cost. For the executive class, replacing human labor with automated pseudo-labor raises no moral questions; it’s a virtue signal to investors.

Let’s look at how it started, and how it’s going. First, how it started: in February 2024, Klarna CEO Sebastian Siemiatkowski bragged that the company’s AI assistant had replaced the work of 700 full-time customer service agents. Duolingo cut its contract workforce after “leaning on generative AI” to write lessons, calling it an “AI-first transformation”. Dropbox followed, cutting 500 jobs as it “pivoted to AI”.

Now, let’s see how it’s going: Klarna customers flooded social media with examples of the bot giving nonsensical answers or getting stuck in endless loops. The company reversed course and went back to hiring humans for customer support, after the CEO admitted that the AI-based solution failed to meet the company’s standards for customer experience. Duolingo users complained that the new AI-written lessons felt robotic, repetitive and soulless. Dropbox has acknowledged that its AI strategy (Dash) has not yet improved revenue or retention.

The cruelest irony is that companies claiming to put their customers first ensure that those customers are the ones who suffer for this change in strategy. What used to be free and included in the product, the ability to talk to a person, becomes a premium feature. This is the purest form of enshittification.


What is Progress?

We are told by AI companies that generative AI represents “the next stage of human progress”. Every press release, every investor call, every keynote is framed as history in motion, complete with recycled imagery from past revolutions — the printing press, the steam engine, the PC — all invoked without any nuance or context.

When language models first appeared, they were celebrated as knowledge engines: tools to expand our understanding and make information and expertise more accessible. They could democratize knowledge and dissolve technical barriers.

Then came monetization. “AI-first”.

Progress was suddenly redefined, not as human flourishing but as corporate efficiency. The same executives who once promised to empower creators began boasting that their new AI assistants could replace hundreds of workers.

Is this what progress or innovation looks like? When a customer can no longer reach a human being, when a teacher is replaced by a chatbot lesson that’s slightly wrong but cheaper, when a creative tool becomes a trap for engagement — what are we doing here?

Or, is it simply austerity disguised as progress?

This form of AI isn’t expanding the human project, it’s compressing it. It squeezes labor, language and experience into cheaper, more “scalable” forms. It flattens creativity into content, reducing imagination to something that can be prompted and infinitely generated and monetized on demand.

“Progress” has become a moral shield for corporate downsizing. This is the same old story: extraction, enclosure and the reduction of human complexity to economic simplicity.

If the direction of technological evolution is defined by shareholder value, what we’re building isn’t the future, it’s a machine for converting meaning into money.


Economics of a Chatbot Bubble

Molly White, who has been brilliantly chronicling tech’s speculative psychosis, calls this a “bubble of belief”. Just as with crypto, investors are passing the same money between the same hands, inflating valuations without delivering real value.

As White has written, the AI economy is fueled by a feedback loop of hype, capital and corporate signaling. Each new “breakthrough” fuels the story that everyone else must keep up. VCs pour money into AI startups that are wrappers around the same models. Those startups pay the cloud giants for compute power, inflating those giants’ revenue and feeding their own investor story that “AI is the future”.

The money moves in a circle and the circle is disguised as a revolution. An ouroboros of capital, feeding endlessly on its own narrative, mistaking self-consumption for progress.

AI has become a form of corporate theater, a performance of futurism meant to reassure markets and sustain the illusion of inevitability. It isn’t “the future” because we’ve chosen it; it’s “the future” because markets have decided there can be no alternative.

The AI economy isn’t building the future, it’s financializing it. Under a speculative layer of cloud contracts and VC hype lies a replacement economy — one that trades people for mediocre software and then charges you to get the people back.


Erotic AI and the Profit Motive

Against this backdrop, OpenAI’s decision to allow erotic content almost feels like a confession. It reveals what truly drives this industry: profit, not progress.

The pivot is capitulation born of economic desperation. With the productivity revolution yet to properly arrive, enterprise adoption stalling and the lofty promises failing to materialize, AI companies are left hemorrhaging cash with no path to profitability in sight. Users have grown impatient and investors restless. The quarterly reports tell an increasingly grim story of spectacular costs, underwhelming revenue and a widening chasm between the hype and the reality of what these systems can offer. Faced with this economic collapse, the industry has made a calculated retreat to safer ground. It has turned to the oldest form of engagement there is: sex and emotional dependency. Not because it is innovative, but because it’s the last business model left that might actually work.

In another timeline, this technology might have been directed towards solving some of the hard problems of civilization; instead, we are wasting it on simulating affection and desire. Sex sells, and our lonely, atomized society is buying.


End-Stage Enshittification Has Arrived

The enshittification of chatbots has completed the cycle:

  • Promise: A tool for augmenting knowledge
  • Adoption: A feature to save labor costs
  • Dependence: A mandatory interface for basic services
  • Extraction: A paywall to reach a real human being

Now, the product no longer serves its users; it feeds on them.

Exactly as Doctorow wrote:

“Platforms turn into businesses that eat their own users”

Welcome to Chatbot End-Stage Enshittification.

