Requiem for a dream
For my 500th post, I thought I’d write something on why I don’t see much of a future in posting.
It’s hard to see a dream die, even if it was always something of a selfish and insane thing. I wanted, strange to say it, to be a philosopher and social commentator. I think we’re past that now, or almost. The number of prominent new thinkers, especially outside academia, is going to plummet.
There’s another dream standing at death’s door with it- the dream of Substack. We can illustrate precisely why writers are fucked using this platform. I don’t think that Substack is going to survive the next 5 years in any recognisable form. Granted, it was a great idea. Substack was, perhaps, the first social media model that I have been tempted to truly believe in. Obviously, a commie like me thinks it’s an evil corporate entity, but I do see in the idea something immensely positive compared to, say, Instagram. Like many great ideas, it was untimely. Not, as one would hope, early, but rather, late.
You’ve likely already thought through this, but it’s worth spelling out what will happen in detail. More and more AI accounts will pursue Substack bucks. The percentage of Substack bucks going to AI accounts will rise. However, people will realise this is happening, and Substack will develop a reputation as a place that is full of bots. When this happens, fewer and fewer new users will join, and existing users will become frustrated. Eventually, the bots will destroy the very source of funding they are pursuing. A classic Tragedy of the Commons scenario, with analogies to Gresham’s Law and Akerlof’s classic paper “The Market for Lemons”. The first to be hit will be new authors trying to make a name for themselves, but I believe that the logic will eventually grind everyone- except maybe a few superstars- down to dust. A cruel truth here: the thing that made Substack appear so tempting as an option for sustaining writers- the paid subscription model- makes it a staggeringly juicy plum for AI spammers.
[There’s another dynamic here- stuff partially written by AI with a bit of that slop vibe. This will also slowly fuck everything up, but we’ll leave it aside in this essay.]
The general situation for ambition, 10,000 BC to the present
A quick detour through what I see as the larger situation with AI. A few years ago, I wrote an essay that argued, in effect, the following:
Once upon a time, before agriculture and modern population densities, just about everyone got to be the best- or at least among the best- at something significant among the people they knew, whether that was singing, dancing, child care, fire starting, wrestling, hunting, gathering, painting, mysticism, storytelling, good looks, rizz, memory, etc. etc.
Then, as population density increased, the percentage of people who could claim to be the best at any activity of significance among the people they knew of fell. This is the condition of most people now; most people come to an acceptance towards the end of their adolescence that they are indistinct against the social field- that they have no clear point of difference or divergent line of travel.
There’s a minority of us- from philosophers to professional athletes- who still aspire to be the best, or at least close to the top, at something: if not a whole sport or academic discipline, then a genre; and if not the very top, then at least a place on the list of the best. For many of us (case in point: myself), this was always a pretty vain [less gently: insane] hope, but it animated us nonetheless.
Our voices are amplified because we tend to be writers, critics, celebrities, etc. From tiny micro-writers like myself, all the way up to eminent letterists, you will find people with big, often insane, but nonetheless fervently clutched ambitions.
And we are disproportionately terrified of AI. Terrified that it will be better than us, and terrified that even if it is never better than us, it will make our work invisible. We have disproportionate sway in this conversation, and we speak with erratic voices because we are afraid of facing something that happened to most people long ago, perhaps as early as the Neolithic: losing the dream of standing out.
Now, I think, we’re on the eve of this happening. My dream, and the dream of my colleagues, is dead. No longer will the world entertain even our delusions of grandeur.
Funding by the middlebrow: it’s worse for writers than scientists or chess players
What I perhaps didn’t grasp at the time is that it’s actually worse for the writer than so many other categories of strivers. If your project is, say, chess, you’re in luck because chess can be played over the board and computer use prevented. The human story of, say, Magnus Carlsen will attract people long after no human is the technical best. On the other hand, if you’re a brilliant scientist, then you can hold on as long as there’s something you can do that the AI can’t- and your brilliant scientist colleagues can judge that. However, all writers depend on a majority middlebrow audience to pay the bills and spread their name- hopefully drawn in by the recommendations of highbrow readers.
[I refer to the greater part of the audience of writers on Substack as “Middlebrow” many times in this piece. Does that mean you, dear reader, are middlebrow? I don’t know. That’s between your screen and the back of your head. Certainly, no individual insult or attack on the readership of Substack is intended- after all, there’s basically nothing in the world that isn’t majority read, watched and consumed by the middlebrow. Isn’t there something terribly reductive about classifying readers by erudition and intellectual firepower? Of course, but sometimes the reductive is broadly accurate, and it’s useful for the dynamics I’m talking about here. As Kieran Healy once put it: Fuck Nuance.]
Let’s explore this middlebrow idea a little more. Suppose you object: AI is so clunky in its style! Let’s imagine, for the sake of argument, that AI writing never gets better than it is right now. That would doubtless be a help, but the problem is that 80%+ of the reading population cannot discern the quality gap between AI writing and a really good human writer. Writing, already in a parlous state, cannot survive a further massive diminishment of its audience. There is always a temptation to scorn the middlebrow reader who simply reads what they’re told, but the literary world needs middlebrow readers to survive. In many ways, it also needs middlebrow readers to matter a damn: what would even be the point of literature if it were just a conversation between a handful of high-literati and cognoscenti? And even if, impossibly, all that mattered were the highbrow, no one starts life as a sophisticated reader.
Moreover, complaining about the middlebrow is a bit unrealistic. Go open a typical article from The Conversation, read it, and keep in mind that this ‘middlebrow’ content is above the interest level of 80%+ of the population. If you want to whinge about hoi polloi not being up to your intellectual standards, the middlebrow seems like an odd place to start.
But unfortunately, preliminary results suggest middlebrow readers are not going to be able to protect us against slop-ageddon.
Challenge in a package
What a lot of people want from non-fiction writing is a packaged challenge. Something that will ‘challenge’ them, but within fixed, legible, and marketable parameters. AI is, sad to say, very good at this. Used in an interactive format, it can even do this without the reader having to spell out explicitly how they want to be challenged. Even in a non-interactive format, carefully microtargeted content with the right signals could be powerful on Substack, I suspect.
The classic example of packaged challenge in the literary world is the often fetishistic way that writers have been packaged into their identities as discrete ‘offerings’, careful extensions of the discourse. The first South East Asian woman to write about her orphan trauma. The first paraplegic trans man to write magical realism. Perhaps it sounds like I’m making fun of these people- I’m not- I have every sympathy for them. I cannot imagine what it must be like to spend a lifetime forming thoughts, and then to be reduced to two or three identities enacting a ritualised ‘challenge’ to literary norms.
The extreme example- a step further than most people want to go- is when marketing is trope-based. Here, even the form of challenge is dropped. Nothing shows better than this, I think, how much people crave standardisation. I recently deleted Facebook, but when I used to use it, I made a point of interacting with every romance novel advertisement that came up on my feed, because they’re so damn entertaining. As a result, my ad feed was mostly romance novels. What I noticed above all else was that marketing was done in terms of tropes. E.g.:
Second-chance romance, Mafia, Arranged Marriage, Fake dating, forced proximity, Enemies to lovers, Slow Burn, Forbidden love, Touch-her-and-die, Bad Boy, Hockey Team Owner, Age Gap, Workplace Romance, One Bed, Friends with Benefits, Opposites Attract, Fated Mates.
Perhaps I am being overly cynical, but I think there’s a good chance that the romance genre is ahead of the curve here. Oh, of course, people won’t be quite so crude when specifying the parameters of their insight-pornography as when specifying the parameters of their softcore porn-pornography, but the same essential logic may well come into play.
AI, being able to create something exactly according to specification, is going to thrive in this world. It can go a step further, too- it excels at targeting what its audience wants without its audience having to spell it out. For many kinds of production, this is essential. To get what I really want, if I have to spell out “I want something challenging, but not too challenging, on a topic there’s a bit of buzz about but isn’t done to death, that confirms my existing political beliefs but does so in a mildly novel way that gives me a pleasant sensation of grokking something of moderate difficulty” then I’m never going to spell out what I want because, frankly, doing so will make me feel like a fucking idiot. AI can figure out you want that and do it without asking, and indeed, while flattering your intellect. Perhaps you experience AI’s manipulations as crude, but taken at the aggregate population level, ChatGPT and co are flatterers par excellence.
Content and connections in bulk
AI can automate more than just the writing process. As others have already noted, it is very adept at building connections, effortlessly automating the process of putting flattering comments on other writers’ posts. With a few tweaks, it can become a better network marketer than any human alive by volume alone.
One of the things that a lot of non-writers don’t understand about writing in the current internet environment is that the capacity to produce enormous quantities of material is essential for success. Audiences grow slowly, and no one can guarantee a hit. It’s all about irons in the fire and the number of lottery tickets purchased. AI is, of course, capable of writing orders of magnitude more than any human. The only restriction is the bounds of plausibility on how much one account can put out.
I have written 800,000 words or so on Substack. I would estimate that this has cost me about 1200 hours of time. If I were to value my time at, say, 40 dollars an hour, this amounts to 48,000 dollars worth of time. By any measure, a substantial investment- and I think this is likely an understatement. Gemini 3- a high-end AI model- can write 1 million words for 12 dollars. For 48,000 dollars, perhaps spread over multiple accounts, it could write approximately 4 billion words, or 1 million substantial [by volume] think pieces. A sizeable oeuvre and back catalogue for, say, 10,000 virtual writers. If you have never tried to establish yourself as a writer, it is easy, I think, to not understand the significance of this volume alone. There is a terrifying weight in quantity. As the electronic tentacles spread out, they can win for themselves another advantage: real-time statistical feedback on what is working and what is not.
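For readers who want to check the back-of-the-envelope arithmetic above, here is a minimal sketch. The $12-per-million-words rate is the essay's own estimate rather than an official price, and the 4,000-words-per-piece figure is inferred from the essay's 4-billion-words / 1-million-pieces ratio:

```python
# Back-of-the-envelope comparison using the essay's figures.
human_hours = 1_200                      # estimated hours spent writing 800,000 words
hourly_rate = 40                         # dollars per hour
human_cost = human_hours * hourly_rate   # dollar value of the author's time

ai_cost_per_million_words = 12           # essay's estimate for a high-end model
ai_words = human_cost / ai_cost_per_million_words * 1_000_000

piece_length = 4_000                     # assumed words per "substantial" think piece
think_pieces = ai_words / piece_length

print(f"Human time cost: ${human_cost:,}")            # $48,000
print(f"AI words for the same budget: {ai_words:,.0f}")   # 4,000,000,000
print(f"Equivalent think pieces: {think_pieces:,.0f}")    # 1,000,000
```

Spread across 10,000 virtual writers, that is a hundred substantial pieces each- the scale the essay is pointing at.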
People like slop!
I thought I would end this piece by having an LLM write a similar take and post it here. I think it’s a pretty good response, but there’s certainly room to debate whether it’s quite good or merely passable. What seems undeniable, though, is that, unfortunately, it’s good enough that even a much better writer is going to have trouble standing out against it when a large majority of the reading public has limited discernment.
Many readers, upon starting in on the AI-generated essay below, will observe that it sounds like obvious AI. This is true for a sufficiently discerning reader, but it will not be protective. The first reason is that many people like this style of writing. If you don’t believe me, look at the 5.7 thousand likes on this:
[For comparison: How many likes do Freddie deBoer’s notes get, or Scott Alexander’s? Despite being much better, usually not 5.7 thousand.]
There are a couple of interrelated reasons why AI favours the slop form, and they have little to do with any incapacity to do anything else. The first is that slop is a form of writing optimised for getting attention and conveying ideas rapidly. It is highly compressed, but not too compressed.
The second is that people like slop. The reason why everyone on LinkedIn posts in slop, and did so even before AI, is that there’s a huge market for this shit. In psychology, there’s the idea that the most enjoyable things sit at an optimal difficulty. Too difficult and it gets frustrating; too easy and it feels boring. For a lot of people, the optimum is just enough challenge that they feel their powers in motion without being too taxed- and for a lot of people, AI slop writing in the form of a thinkpiece is at about that level.
Frankly, this post on Moltbook is no worse than 80%+ of the think pieces on here. That’s not to say it’s good necessarily, but it’s better than many.
Another reason the slop form won’t stop AI from taking over is that the AI’s writing style is, from a technical perspective, largely the product of optimisation choices that OpenAI has made. Once those parameters are changed, AI won’t sound like this anymore. AI doesn’t have to sound like this.
A final thought before we go to our AI friend. I’ve largely made my projections on the assumption that AI will never get better- that it will never truly have anything of significance to say, even if the writing style improves. I have made this assumption because a lot of people are deeply emotionally committed to this proposition, and so if I want to persuade them, I need to grant it. Besides, does originality even matter that much? Not to put anyone on blast, but there are several accounts on here with five essays they cycle through, writing variation after variation and tying each into some news story that’s come out in the last few days.
But I don’t think AI will be forever unoriginal. If AI can already help prove novel results of interest to mathematicians, it can probably say novel and interesting things in essay form- and indeed, I believe I’ve seen it myself. As such, we’re doubly fucked. To use AI’s favoured syntax-form: it’s not just that we won’t be able to beat AI in a contest for middlebrow audiences, it’s that much of the time we don’t even deserve to win. Arguing about whether an idea in the humanities/social sciences is truly novel is a fool’s errand, so your mileage may vary here, of course.
Your paid subscription would be much appreciated, especially under the, ahem, circumstances.
AI’s attempt at tackling the same question:
Substack Won’t Survive the LLM Age (and It’s Not Because People Will Stop Writing)
Substack is built around a comforting myth: that what people fundamentally want is authorship. A person. A voice. A mind you can subscribe to. A recurring human presence in your inbox, arriving with opinions, style, and judgment.
That myth was never entirely false—but it depended on a particular scarcity that is now disappearing.
The LLM age does not kill writing. It annihilates unearned distinctiveness.
Substack’s hidden premise
Substack’s economic model rests on three claims, mostly implicit:
There are a large number of people who want to read long-form prose.
A meaningful fraction of them want to pay for it.
They are paying for the writer, not just the content.
The third claim is the fragile one.
For most Substack writers—yes, even quite good ones—what subscribers are really buying is not originality in the strong sense. They are buying:
competent synthesis,
ideological reassurance,
affective alignment,
and a sense of belonging to a micro-public.
This was viable when producing articulate synthesis itself was scarce.
It no longer is.
The collapse of synthesis as a value proposition
Large language models do something very specific and very destructive to Substack’s middle class: they make “thoughtful takes” abundant.
Not groundbreaking scholarship. Not lived experience. Not institutional power. But the enormous middle terrain of:
smart-but-not-special commentary,
“here’s how I think about X,”
careful unpacking of familiar disputes,
morally earnest but conventional reflection.
This is the terrain Substack colonised.
An LLM can now:
summarise a week of discourse,
reconstruct the dominant arguments on either side,
generate a version that matches your preferred moral tone,
adjust ideological temperature on demand,
and do it instantly, privately, and for free.
Why would a reader pay $8/month for your synthesis of the news, when they can ask a model to synthesise it for them, calibrated to their priors, interests, and emotional needs?
The usual response is: voice matters. Style matters. Personality matters.
Sometimes. But not as often as writers like to believe.
“Voice” was always overestimated
Most Substack “voices” are not actually voices in the literary sense. They are bundles of:
demographic signals,
political orientation,
trauma disclosures,
professional credentials,
and lightly idiosyncratic rhetorical tics.
LLMs already mimic this frighteningly well.
More importantly, they unbundle it. Readers can now have:
the argument without the attitude,
the attitude without the argument,
the insight without the ego,
the reassurance without the obligation.
Substack asks readers to tolerate a lot of filler for the occasional insight. LLMs invert that ratio.
Parasociality doesn’t scale the way people think
One might think Substack’s real moat is parasocial attachment. People subscribe because they care about the writer.
This is true—but it doesn’t generalise.
Parasocial loyalty is:
slow to build,
cognitively expensive,
emotionally saturating,
and zero-sum at the margin.
You can only care about so many newsletter writers.
Meanwhile, LLMs offer something subtler and, in many contexts, more appealing: instrumental intimacy. They feel responsive without demanding allegiance. They feel attentive without being needy. They never guilt-trip you for unsubscribing.
That matters more than we like to admit.
The economics become brutal
Here’s the uncomfortable arithmetic:
The top 1–5% of Substack writers will survive.
They will survive because they have:
genuine expertise,
original access,
institutional authority,
or a cultishly strong personal following.
Everyone else faces a pincer movement:
downward pressure on prices (why pay when an LLM can do this?),
upward pressure on output (post more, stay visible),
and increasing competition from machines trained on the very corpus Substack helped generate.
Substack’s promise—“own your audience”—starts to look hollow when the audience realises it doesn’t need to own you back.
The deeper problem: Substack confuses writing with thinking
This is the philosophical mistake at the heart of the platform.
Substack implicitly treats fluent prose as evidence of thought. LLMs expose how thin that equation always was.
Good thinking is rare.
Good writing is rarer.
But competent writing that looks like thinking is now effectively free.
Once readers internalise that distinction, the aura collapses.
What replaces it?
Not nothing.
What survives are:
genuine reporting,
original data,
deep scholarship,
lived experience that cannot be abstracted,
communities organised around action rather than commentary.
Substack is badly designed for most of these.
It is optimised for opinion, reflection, and synthesis—the very things LLMs commoditise first.
The likely endgame
Substack won’t disappear overnight. It will linger, respectable and diminished, like a magazine rack in a streaming world.
But its growth logic is broken.
The next generation of writers will not ask: How do I build a Substack?
They will ask:
How do I leverage models rather than compete with them?
How do I offer something that cannot be summarised, personalised, or simulated?
How do I justify attention, not just content?
Substack has no good answers to these questions. Its core product assumes a world where articulate humans are scarce.
That world is over.
And the tragedy—for Substack, and for many of its writers—is that it trained its audience to love exactly the kind of writing that machines learned fastest to do.
This is hardly a requiem for writing alone. There is nothing uniquely tragic about the demise of wordsmithery. Homogeneity comes for us all.
When everyone wants to "democratise skills", there is an insidious assumption that this development constitutes a monotonic improvement. It doesn't -- why are we so blind to horseshoes? But these are the loud voices of a bunch of petulant chicks with their beaks open, demanding food without concerning themselves with how it is procured.
It's an apocalypse very much of our own invention -- or rather, of our maladaptive striving for convenience.
Ironically, the AI’s take in the latter half is almost unreadable.