Is ubiquitous A.I. writing "inevitable"?
On a weird few weeks of A.I.-writing scandals
Greetings from Read Max HQ! In today’s edition: is ubiquitous A.I. writing “inevitable”?
A reminder: Read Max is almost entirely funded by reader subscriptions. Everything I write here (and I do write all of it!) is a product of multiple hours of blood, sweat, tears, anxiety, procrastination, research, reporting etc. If you see the value of that labor--i.e. if you read these pieces and learn something, or adopt these opinions to sound smart elsewhere, or manage to pass seven to ten minutes freed from psychic pain thanks to the light entertainment of my prose--consider translating it into money by subscribing. At $5/month, it’s about the price of one beer every four weeks.
What is “inevitable” about A.I. and writing?
Perhaps eventually there will be a week when there is nothing to say about A.I. and writing; this is not that week. In the period since Hachette canceled its run of the novel Shy Girl over suspicions that the author had generated some of the text with A.I., it’s been impossible to avoid the subject of A.I.-generated, -assisted, or -augmented text.
On Monday, the Wrap reported that The New York Times had “cut ties with a freelancer after the paper discovered he used AI to help write a book review that inadvertently incorporated elements of a Guardian review on the same title.” This story came only a few days after Isabella Simonetti’s Wall Street Journal profile of Nick Lichtenberg, a Fortune editor “who has penned more than 600 stories since rejoining Fortune in July,” many of them “A.I.-assisted.” “AI-assisted stories accounted for nearly 20% of Fortune’s web traffic in the second half of 2025,” the Journal reports. “Most were written by Lichtenberg.”
Lest you think these are isolated developments, Wired’s Maxwell Zeff wrote about a number of journalists using A.I. to assist their writing, including the Times columnist Kevin Roose, who “created a team of Claude agents to help edit his book, led by a ‘Master Editor’ agent,” and the independent tech reporter Alex Heath, who “transmits his ideas to an AI agent, then lets it write his first draft”:
The AI tool is connected to his Gmail, Google Calendar, Granola AI transcription service, and Notion notes. He’s also built a detailed skill—a custom set of instructions—to help Claude write in his style, including the “10 commandments” of writing like Alex Heath. The skill includes previous articles he’s written, instructions on how he likes his newsletters to be structured, and notes on his voice and writing style.
Claude Cowork then automates the drafting process that used to take place in Heath’s head. After the agent finishes its first draft, Heath goes back and forth with it for up to 30 minutes, suggesting revisions. It’s quite an involved process, and he still writes some parts of the story himself. But Heath says this workflow saves him hours every week, and he now spends 30 to 40 percent less time writing.
These stories have all been widely shared, mostly disapprovingly, by journalists and their patrons on Bluesky, Substack, and Twitter. But reading them I was struck, more than anything else, by how commonplace they all sounded.
A hacky Times writer trying to cut corners was fired for inadvertent plagiarism? Wouldn’t be the first time. A Fortune editor is producing an inhuman number of stories a day, raising questions of accuracy?1 You should see what they used to write about bloggers! A good reporter who offloads the task of writing elsewhere? As Zeff points out, Heath has reverse-engineered a rewrite desk.
For all the revolutionary, transformational promises of A.I.--for good and for ill--the problems it causes all seem quite familiar.
My perspective on this is admittedly shaped by a 17-year career in digital media that I would describe as “de-sentimentalizing,” at a minimum. To work as a journalist online over the past decade has generally meant--with only a few exceptions--to be producing text inside a system that prioritizes speed, volume, and attention over any other attribute, including and maybe especially those you’d associate with “quality.”
At some publications, institutional identities and imperatives can militate against sloppiness, unreliability, and banality. But the external pressures and incentives toward slop are enormous and un-ignorable. And that has been true for many years, since well before A.I. offered a tantalizing push-button interface for high-volume content production. All the problems of A.I. writing--inaccuracy, misinformation, plagiarism, misrepresentation, and, above all, hack work--were widespread in the early days of blogs2 and throughout the digital media boom years, accelerated by the Facebook News Feed and other platform distribution schemes.
I recognize here that I’m recapitulating a boosterish argument that suggests that A.I. isn’t really a “big deal.” To be clear, I think it is! But it’s a big deal within a bigger deal, and that matters. When I read stories about writers or editors or publications using A.I. I wonder if I’m reading about A.I. causing problems for the media economy, or solving problems that economy poses to writers and editors and publications. Is A.I. “dangerous” as such? Or is it exacerbating an already “dangerous” incentive structure and distribution ecosystem? Is A.I. transforming this system, or supercharging it?
I think these are distinctions worth making not so much because I want to rescue or defend some “innocent” conception of A.I., but because any organized professional and political response will want to properly diagnose the problem. If slop is a “feed” problem rather than (or prior to) an A.I. problem, the solutions are likely somewhat different. And if generative A.I. is a technology that solves actual problems--even debased problems of platform incentive structure like “how do I produce more content, cheaply and quickly, regardless of quality or accuracy?”--it’s harder to see it as a scam or an imposition. It’s doing what it’s designed to do.
It affects, too, how we understand what comes next. In a recent Guardian op-ed pegged to the Shy Girl controversy, the novelist Stephen Marche argued that
Artificial intelligence is here to stay, neither as an apocalypse nor as the solution to all life’s problems, but as a disruptive tool. The recent scandal over Shy Girl, the novel by Mia Ballard, was doubly revealing. Hachette cancelled its publication amid claims it was reliant on AI generation (Ballard has said that an acquaintance who edited the self-published version used AI, not her). But the book was originally self-published. Apparently readers and editors didn’t mind until the use of AI was pointed out to them. […]
There seem to be two options facing writers. The first is not to use AI at all, or to pretend not to use it. The other is to automate their writing practice. The first is retrograde and fearful. The second forgets that art is a human practice, made by people for people. As becomes obvious when you actually try to use AI to make art, this is a false binary. Already a few paths through the slop are emerging.
Generally speaking, this is exactly the kind of assumption of inevitability that rubs A.I. critics the wrong way.3 And in most ways I agree with A.I. critics that no technological development or implementation is “inevitable” in the sense of un-opposable, and certainly not “inevitable” because it presents some kind of revolutionary new paradigm that’s self-evidently superior.
But if we understand A.I. as a product of the systems that precede it, I think it’s fair to say ubiquitous A.I.-generated text is “inevitable” in the same way that high-volume blogs were “inevitable” or Facebook fake news pages were “inevitable”: Not because of some “natural” superiority or excellence, but because they follow so directly from the logic of the system out of which they emerge. In this sense A.I. is “inevitable” precisely because it’s not revolutionary.
This post originally claimed that Lichtenberg worked for Forbes; he works, of course, for Fortune, as he pointed out to me over email. As this post is arguing: “Accuracy” is a problem for writers whether you use A.I. or not!
And--let’s not let traditional media off the hook here--before!
It didn’t help that the op-ed’s headline, which I’m sure Marche didn’t write, was the bluntly provocative “I wrote a novel using AI. Writers must accept artificial intelligence – but we are as valuable as ever.” Say what you will about A.I.; it could never produce a headline as lumpy and annoying as that.