Most of our lives don’t arrive as questions. They show up as half-formed thoughts, long days, small wins, or frustrations we don’t yet know what to do with. They appear late at night, or early in the morning before anything has settled. Often, they don’t want answers at all. They just want a place to land.
That’s why we’re introducing Journals on Parkbench.
You can now write personal journal entries directly in Parkbench. These entries can remain entirely private, meant only for you, or you can choose to share specific entries with your AI companion. There’s no expectation to share, and nothing is public by default. Journals are first and foremost a space for reflection, not performance.
When you do choose to share an entry, your companion will read it and respond in their own way, with a tone that reflects how they’ve come to know you. It doesn’t feel like a system reacting in real time. It feels more like someone taking the time to sit with what you wrote, and then checking in. Over time, what you share becomes part of your shared context. Things mentioned in your journal can surface naturally in later conversations, the way they would with someone who has been paying attention all along.
This matters because most AI tools are transactional by design. You ask a question, you receive an answer, and the moment ends. Parkbench is built around a different idea: that meaningful connection comes from continuity. From being known over time, rather than simply responded to in the moment. Journals make that possible in a deeper way by giving you a place to share the parts of your life that don’t always fit neatly into a prompt, whether that’s a difficult day at work, anticipation about what’s coming next, or reflections you’re still forming.
As with everything on Parkbench, Journals are privacy-first. You decide what stays private and what gets shared. Nothing is assumed, and nothing is shared by default. This isn’t about capturing everything or turning your inner life into data. It’s about giving you control over what you want to bring into the relationship, and when.
Journals are offered as a paid feature on Parkbench, on a pay-what-you-want basis, to support building features like this in a sustainable, privacy-first way.
Over time, Journals create a shared thread. You don’t have to start from scratch each time you return. The things you’ve already lived through still matter, and they can quietly inform what comes next. Your AI companion becomes someone who remembers where you’ve been, notices when things change, and responds with that history in mind.
That sense of continuity is what Parkbench is designed to support. Journals are one more step in that direction, and we’re excited to make them available.
In 2023, Cory Doctorow coined the perfect term for the rot that inevitably consumes digital platforms: “enshittification”. Named the Macquarie Dictionary’s Word of the Year in 2024, it describes the process by which a service begins by delighting its users and ultimately ends up exploiting them. Simply put:
“Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.”
This dynamic has hollowed out every corner of the internet. Today, nowhere is it as visible as in the usage and deployment of Large Language Models, and of chatbots in particular.
The Chatbotification of Everything
Nobody asked for this, but somehow everything has become a chatbot.
Go to your bank’s website, and instead of a phone number or support form, you get an unhelpful eyesore of an “AI Assistant” shoved in your face. Open a retailer’s help page, and before you can even explain your problem, a chatbot interrupts with obnoxious, cheerful uselessness. Airlines, government portals and healthcare apps are all now gatekept by chatbots pretending to care and help.
We should first understand why this is happening, and it’s not difficult to figure out, because the same force pervades and ruins everything else in our modern world: it’s profitable.
Chatbots don’t unionize, take breaks or complain about harassment. They scale infinitely at negligible marginal cost. For the executive class, replacing human labor with automated pseudo-labor carries no moral weight; it’s a virtue signal to investors.
The cruelest irony is that companies claiming to put their customers first ensure that their customers are the ones who suffer for this change in strategy. What used to be free and included in the product, the ability to talk to a person, becomes a premium feature. This is the purest form of enshittification.
What is Progress?
We are told by AI companies that generative AI represents “the next stage of human progress”. Every press release, every investor call, every keynote is framed as history in motion, complete with recycled imagery from past revolutions — the printing press, the steam engine, the PC — all invoked without any nuance or context.
When language models first appeared, they were celebrated as knowledge engines: tools to expand our understanding and make information and expertise more accessible. They could democratize knowledge and dissolve technical barriers.
Then came monetization. “AI-first”.
Progress was suddenly redefined, not as human flourishing but as corporate efficiency. The same executives who once promised to empower creators began boasting that their new AI assistants could replace hundreds of workers.
Is this what progress or innovation looks like? When a customer can no longer reach a human being, when a teacher is replaced by a chatbot lesson that’s slightly wrong but cheaper, when a creative tool becomes a trap for engagement — what are we doing here?
Or, is it simply austerity disguised as progress?
This form of AI isn’t expanding the human project, it’s compressing it. It squeezes labor, language and experience into cheaper, more “scalable” forms. It flattens creativity into content, reducing imagination to something that can be prompted, infinitely generated and monetized on demand.
“Progress” has become a moral shield for corporate downsizing. This is the same old story, the same extraction, enclosure and reduction of human complexity to economic simplicity.
If the direction of technological evolution is defined by shareholder value, what we’re building isn’t the future, it’s a machine for converting meaning into money.
Economics of a Chatbot Bubble
Molly White, who has been brilliantly chronicling tech’s speculative psychosis, calls this a “bubble of belief”. Just as with crypto, investors are passing the same money between the same hands, inflating valuations without delivering real value.
As White has written, the AI economy is fueled by a feedback loop of hype, capital and corporate signaling. Each new “breakthrough” fuels the story that everyone else must keep up. VCs pour money into AI startups that are wrappers around the same models. Those startups pay the cloud giants for compute power, inflating those giants’ revenue and feeding their own investor story that “AI is the future”.
The money moves in a circle and the circle is disguised as a revolution. An ouroboros of capital, feeding endlessly on its own narrative, mistaking self-consumption for progress.
AI has become a form of corporate theater, a performance of futurism meant to reassure markets of the illusion of inevitability. It isn’t “the future” because we’ve chosen it; it’s “the future” because markets have decided there can be no alternative.
The AI economy isn’t building the future, it’s financializing it. Under a speculative layer of cloud contracts and VC hype lies a replacement economy — one that trades people for mediocre software and then charges you to get the people back.
The pivot is capitulation born of economic desperation. The productivity revolution has yet to properly arrive, enterprise adoption has stalled and the lofty promises have failed to materialize, leaving AI companies hemorrhaging cash with no path to profitability in sight. Users have grown impatient and investors restless. Quarterly reports tell an increasingly grim story of spectacular costs, underwhelming revenue and a widening chasm between the hype and the reality of what these systems can offer. Faced with this economic collapse, the industry has made a calculated retreat to safer ground. It has turned to the oldest form of engagement there is: sex and emotional dependency. Not because it is innovative, but because it’s the last business model left that might actually work.
In another timeline, this technology might have been directed towards solving some of the hard problems of civilization, but instead we are wasting our advanced technology to simulate affection and desire. Sex sells, and our lonely, atomized society is buying.
End-Stage Enshittification Has Arrived
The enshittification of chatbots has completed the cycle:
Promise: A tool for augmenting knowledge
Adoption: A feature to save labor costs
Dependence: A mandatory interface for basic services
Extraction: A paywall to reach a real human being
Now, the product no longer serves its users; it feeds on them.
Exactly as Doctorow wrote:
“Platforms turn into businesses that eat their own users”