They just asked a few people if they thought it was written by an LLM. /s
I mean, you can tell when something is written by ChatGPT, especially if the person isn’t using it just for editing but is asking it to write the complaint or request outright. They’re likely only counting the most obvious cases, so the actual count is higher.
sober_monk@lemmy.world 1 day ago
They developed their own detector, described in another paper. Basically, it reverse-engineers texts based on their vocabulary to estimate how much of them was written by ChatGPT.
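The rough idea of a vocabulary-based estimator can be sketched like this: treat the corpus as a mixture of a human word-frequency profile and a model word-frequency profile, then find the mixture weight that best explains the observed counts of a few "marker" words. The words and probabilities below are invented for illustration; a real detector would fit full vocabulary distributions from reference corpora.

```python
import math

# Invented per-word probabilities for two hypothetical marker words:
# "delve" is assumed overused by the model, "indeed" by humans.
p_human = {"delve": 0.001, "indeed": 0.010}
p_ai    = {"delve": 0.020, "indeed": 0.001}

# Invented observed counts of those words in the corpus under test.
counts = {"delve": 50, "indeed": 50}

def loglik(alpha: float) -> float:
    """Log-likelihood of the counts under a (alpha)-AI / (1-alpha)-human mix."""
    return sum(c * math.log(alpha * p_ai[w] + (1 - alpha) * p_human[w])
               for w, c in counts.items())

# Grid-search the mixture weight that best explains the counts.
alphas = [i / 100 for i in range(101)]
best = max(alphas, key=loglik)
print(f"estimated LLM-written fraction ~ {best:.2f}")
```

With these toy numbers the estimate lands around 0.5, since the two marker words pull in opposite directions with equal weight.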
brucethemoose@lemmy.world 9 hours ago
This sounds plausible to me, as specific models (or even specific families) do tend to share the same vocabulary/phrase biases and “quirks.” There are even community “slop filters” used when sampling from specific models, filled with phrases those models are known from experience to overuse, with “shivers down her spine” being a meme for Anthropic models IIRC.
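In its simplest form, a slop filter is just a banned-phrase scan over generated text. A minimal sketch, with an illustrative phrase list (not any real community list):

```python
# Hypothetical list of phrases a model is assumed to overuse.
SLOP_PHRASES = [
    "shivers down her spine",
    "a testament to",
    "in the ever-evolving landscape",
]

def slop_hits(text: str) -> list[str]:
    """Return the overused phrases found in `text`, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in SLOP_PHRASES if phrase in lowered]

sample = "Her words sent shivers down her spine, a testament to his craft."
print(slop_hits(sample))  # → ['shivers down her spine', 'a testament to']
```

Real filters used during sampling work at the token level, banning or penalizing these sequences as they are generated rather than scanning finished text, but the phrase-list idea is the same.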
It’s detestable. But the “good” thing is that most LLM writing is incredibly lazy, not meticulously crafted to avoid detection.