To me, the best first sentence of any piece of journalism is the one in Joan Didion’s 1987 book, Miami, which begins like this: “Havana vanities come to dust in Miami.”
I love that sentence and that propulsive first chapter so much that I once sat down to try to figure out how she did it. I looked at the sentences one at a time to assess what purpose each one was serving, and I counted how many of them Didion had needed to accomplish each thing she wanted to accomplish. Then I thought about how she figured out what order to put them in to have maximum page-turning impact. And then I compared all of it unfavorably with the flailing and feeble way in which I would have pursued the same goals. I marked up my copy of the book in a somewhat desperate fashion and then became depressed.
That type of copying is pretty normal, and they teach it in school. It’s how you learn (and how you become depressed). But in the age of generative AI, there are many new kinds of copying. For instance, Wired reported last week that Grammarly had briefly offered users the chance to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less famous working journalists (including myself, per The Verge’s reporting), and a bunch of academics (including some who had recently died).
What Was Grammarly Thinking?
Submitted 1 hour ago by Powderhorn@beehaw.org to technology@beehaw.org
https://www.theatlantic.com/technology/2026/03/grammarly-ai-expert-bad-advice/686343/
MountingSuspicion@reddthat.com 9 minutes ago
Ridiculous that Grammarly even attempted to do this. The article was good, but at the end, though they hedged, they fell into the same trap everyone seems to fall into: AI is not better at coding than it is at writing, and Grammarly’s tinkering here doesn’t suggest otherwise. Grammarly had a bad product, but realistically, there was likely just no effort put into this aspect of the software. Maybe I’m way off base, and I don’t support AI either way, but I just think it was a poor way to end the article. Programmers think it’s good for art, artists think it’s good for programming; it’s almost like it’s easier to see the flaws in a field you’re familiar with.