I wonder if they made ChatGPT use an unnatural amount of emojis just to make it easier to spot AI-generated issues
Submitted 5 weeks ago by qaz@lemmy.world to mildlyinfuriating@lemmy.world
https://lemmy.world/pictrs/image/f7814119-7229-4099-87d7-3f7ed2f6a877.png
Comments
DudeDudenson@lemmings.world 5 weeks ago
qaz@lemmy.world 5 weeks ago
People often use a ridiculous amount of emojis in their READMEs; perhaps seeing it was a README triggered something in the LLM and made it talk like one?
FQQD@lemmy.ohaa.xyz 5 weeks ago
Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have aby words for this.
dabaldeagul@feddit.nl 5 weeks ago
aby
Checks out
FQQD@lemmy.ohaa.xyz 5 weeks ago
god damn it i can’t type lmao
possiblylinux127@lemmy.zip 5 weeks ago
There have been so many people filing AI-generated security vulnerability reports
salmoura@lemmy.eco.br 5 weeks ago
The emoji littering actually drove me away from using fastapi while reading its documentation.
Korne127@lemmy.world 5 weeks ago
I mean, even if it’s annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves
qaz@lemmy.world 5 weeks ago
They don’t, because it’s not an actual issue for any human reading it. The README contains the data and the repo is just for coordination, but the LLM doesn’t understand that.
Korne127@lemmy.world 5 weeks ago
Then… that’s so fucking weird, why would someone make that issue? I genuinely don’t understand how this could have happened in that case.
AmbiguousProps@lemmy.today 5 weeks ago
Why do LLMs obsess over making numbered lists? They seem to do that constantly.
Tolookah@discuss.tchncs.de 5 weeks ago
Oh, I can help! 🎉
coherent_domain@infosec.pub 5 weeks ago
My conspiracy theory is that they have a hard time figuring out the logical relation between sentences, and hence don’t generate good transitions between them.
I think the bullet points are manually tuned up by the developers rather than inherent in the model, because we don’t tend to see them that much in human communication.
possiblylinux127@lemmy.zip 5 weeks ago
That’s not a bad theory
A_norny_mousse@feddit.org 5 weeks ago
Well, they are computers…
gamer@lemm.ee 4 weeks ago
Late, but I’m pretty sure it’s a byproduct of the RLHF process used to train these types of models. Basically, they have a bunch of humans look at multiple outputs from the LLM and rate the best ones, and it turns out people find lists easier to understand than other styles (alternatively, the poor souls slaving away in the AI mines rating responses all day find it faster to understand a list than a paragraph through the blurry lens of mental fatigue).
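For the curious, the reward-model half of that RLHF process boils down to a pairwise ranking loss over those human ratings. A minimal toy sketch in PyTorch; the linear scorer, embedding sizes, and Bradley-Terry loss here are assumptions about the standard setup, not any lab’s actual code:

```python
import torch
import torch.nn.functional as F

# Toy reward model: a linear head that scores a response "embedding".
# (Hypothetical stand-in for a full transformer-based scorer.)
reward_head = torch.nn.Linear(16, 1)

def preference_loss(chosen_emb, rejected_emb):
    # Bradley-Terry pairwise loss: push the score of the
    # human-preferred response above the rejected one's.
    r_chosen = reward_head(chosen_emb)
    r_rejected = reward_head(rejected_emb)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Stand-in embeddings for pairs of responses to the same prompts;
# imagine raters consistently preferring the list-formatted answer.
chosen = torch.randn(4, 16)    # preferred responses
rejected = torch.randn(4, 16)  # rejected responses

loss = preference_loss(chosen, rejected)
loss.backward()  # gradients nudge the scorer toward rater taste
```

Trained at scale, a scorer like this rewards whatever raters consistently prefer, numbered lists and all, and the LLM is then tuned to maximize that score.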