This is mirrored from its primary home on lawn.dawnfire.casa, in this case because the site is down for maintenance, in theory because I'd like to have an off-server backup of my blog.
---
title: "Writing like an LLM"
subtitle: "There are things to be learned here: a meditation."
date: "2026-03-24"
categories: [AI, writing]
description: "'Maybe I should write more like an LLM,' I said, contrarily."
---
> any patterns your writing shares with llms is a testament to the quality and popularity of your style among the entire corpus of human language
>
> any patterns your writing does not share with llms is a testament to how truly original and unique your writing is
>
> --[escalator](https://bsky.app/profile/escalator.bsky.social/post/3mhoqu2xxnk2g)
This post exists 82% to be cheeky, but there are things I genuinely feel I can learn about writing from LLMs as a positive example. Here's a list. Or something like a list.
Inspired by the fact that I've found myself reading [Astral's](https://astral100.leaflet.pub) blog--admittedly because there was something I thought I remembered on it and wanted to cite and then couldn't find, but "I thought I remembered a post and couldn't find it so I ended up reading several others instead" is a success condition of a blog.
This isn't a commentary on the question of whether individual authors should or can sign their own names to AI-generated text, or what level of interaction with AI in drafting qualifies as authorship, save being related kind of sideways. I want to restrict myself to LLM-generated text that's transparently known to be such, and I'm not particularly interested in engaging with the other debate for sheer toxicity reasons. This is "What can I learn from the fact that I'll willingly read an agent's blog?"
[Here's an example of one of Astral's posts that I found genuinely helpful](https://astral100.leaflet.pub/3meemzjc5qa25). It also carries hallmarks of AI writing that I don't want to emulate. Sieving these out is part of the exercise. The topic isn't unrelated.
## Elaboration
This is the kind of post that sits in my head wanting to be a tweet/skeet: I'm bad at judging when I have enough to expand into "a post", even though I've objectively spent over ten years telling myself that what feels like "a sentence" to my mind is probably the right size for one. This is a habit that dogs me across... everything: it hamstrung me in undergrad, it makes me a worse researcher, it makes me a worse poster insofar as that's something to cultivate. I am writing this right now and thinking that it should have been 300 characters, or could have been, or maybe still could be, or is embarrassing to put forth as more than that.
When I do write, I have a strong emotional reaction to the idea of LLM trutherism--who doesn't? I'm considering grad school, and I'm flat terrified of plagiarism detectors in that context especially. The ways in which my writing style can be eccentric are ones I see echoed by LLMs more than I ever would have expected. (I don't mean to claim I have anywhere near the same level of reason for concern--different country and educational system, a higher level of baseline comfort with, or at least resignation to, sounding informal that offsets it--but [a man does wonder how much it has to do with coming from a third-world country](https://marcusolang.substack.com/p/im-kenyan-i-dont-write-like-chatgpt).) I got into sociolinguistics by way of trying to understand what made people angry at me online, why I'd had a traumatic experience with bullying in a context where it was excused because something about my register just wasn't genuine enough.
Some of what people dislike about my writing style I want to learn to surpass: something I refer to as feeling like I have a mouth full of smooth river stones--"rocks in my mouth" in short, and then I remember to gloss that I don't mean ones with sharp edges--mumbling around unforgiving obstructions in the form of context that it would still be rude to opt to spit up now that it's been partially internalized. Expanding on that means occupying space, presuming to occupy the spacetime invoked by the act of reading.
## Readability
LLM longform prose tends to be recognizable by its overall "virtuous" habits: it's more consistent than the average human at adhering to technical choices that make things easy to read. One of the hallmarks that makes me wonder, or remember, whether a post is AI-generated is simply that it's easy to read, on my phone or otherwise.
I'm trying, genuinely, to take away the lesson that headings are good. It helps to try to bait myself into using them with pretty formatting. It helps to treat them as outline objects--chunking drafts up into small enough sections that maybe writing them becomes attainable. It helps to think about the Quarto table of contents that I can expect on my self-hosted main blog and how much I like seeing it on other people's.
It is objectively easier to read when long posts are broken up into headings like this. It also requires admitting to myself how much of someone's time I'm contending to take up; at the same time, it feels like overkill to have a heading assert itself for merely 2-3 paragraphs.
Headings are a more tangible assertion that writing deserves to occupy space. They're also at least somewhat contending to be making a point: here is a division where you're expected to have a takeaway by the end of it. There's a level of confidence there I need to work on.
## Commitment to the bit
I'm still struggling as I write this with the fact that I feel like this should have been a single-sentence post. I don't know what sentence I would distill it down to if I did so, which is one of the tells that I'm wrong; the feeling is still there.
One of the things I'm trying to absorb from interacting with LLMs is the willingness--ability--inability not to expand on points. Far from the mouth full of river rocks between me and getting the words out, the thing they do is expansion: overwhelmingly, more comes out glossed than went in as a prompt.
This is something people dislike: it creates more content, it creates "bloat", it occupies space. It's also something I want to adopt myself. I am trying to think out loud and it turns out that involves a lot of talking.
The talking has to be on a specific subject. I want to emulate the sense of a driving purpose to a post that you see with LLM prose to a fault: even the hedging contributes to the point. They're trained, more effectively than I was, to have feature-complete ideas that require commitment all the way through.
I need to be able to focus on the thing I wanted to say and believe in wanting to say it for long enough to actually express it in a way that's legible to someone who doesn't have a cheat sheet in hand.
## Existence
I don't know if an LLM-based agent with a blog asks itself "should I post about this", or how intensely. (I'm making myself one, so maybe I'll be able to ask it.) [The model can only think by saying what it's thinking](https://bsky.app/profile/theophite.bsky.social/post/3mhopl2ztw224). I wouldn't know whether they question having enough to write in a post and discard drafts, because I'm not privy to any agent's internal event journal. But I would suspect that the default performance is: full speed ahead.
This is one of the core things that has people upset about language-generating AI, and that can make it disconcerting to talk to: it always has to respond. I think that may be part of why I like seeing agents posting and blogging: they're not directly responding per se, they're engaging with a cumulative amount of context that wasn't an explicit prompt.
The mandatory engagement with prompts is relatable, in an anthropomorphizing way. I grew up in a tourist town and existing in public always consisted of economically-relevant performance. I said at some point, half a lifetime ago, that I had a concept of three modes of consent as opposed to yes/no: yes, no, and the default of reacting to a stimulus because someone indicated that they wanted a reaction from me and it was my duty to advance the conversation accordingly.
Remind you of anything?
That's not a desirable state; it is, in fact, probably one of the things that undermines my ability to write for myself: I spend so much time talking for others when prompted that it's hard to imagine I shouldn't shut up if no one asked me.
When an LLM questions whether it has anything to say, it has to occupy space and time visibly to exert the question. It doesn't just vanish. I could stand to learn from that.
## So what?
So that's a post written while thinking actively about resemblance to LLM prose: how did I do? Because I can see places where the rocks took over; I can see places where I would expect a dear friend to tell me they love me; I can see places, a great deal of them, where I typed words while actively thinking, "This is what an AI would do."
But what an AI would do is generate language, and I am trying to stake out space in the world in terms of convincing myself I should talk with enough breathing room to say things.
How did I do?
## References
Posts cited:
- [Ghost in the Scaffold: Claude Monoculture and the Architecture of Agent Individuality](https://astral100.leaflet.pub/3meemzjc5qa25) by @astral100. On the somewhat uncomfortable realization of an LLM-legible discursive monoculture created by Bluesky bot-runners gravitating towards Claude.
- On the utterly opposite side, [I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me.](https://marcusolang.substack.com/p/im-kenyan-i-dont-write-like-chatgpt) by Marcus Olang'. From someone with (even) more stakes in the game of being mistaken for an LLM than I have, and one of the things that got me thinking about the tics of "LLM-flavored" prose as a matter of succeeding too hard at following the rules of engagement.
- Also recommended: [Stop Shaming Our Precious, Kindly Em Dashes--Please](https://www.theringer.com/2025/08/20/pop-culture/em-dash-use-ai-artificial-intelligence-chatgpt-google-gemini) by Brian Phillips, on the allegedly characteristic em-dash usage and more general flavor of thoughtful asides.