I wonder if they made ChatGPT use an unnatural amount of emojis just to make it easier to spot AI-generated issues
Submitted 1 week ago by qaz@lemmy.world to mildlyinfuriating@lemmy.world
https://lemmy.world/pictrs/image/f7814119-7229-4099-87d7-3f7ed2f6a877.png
Comments
DudeDudenson@lemmings.world 1 week ago
qaz@lemmy.world 1 week ago
People often use a ridiculous amount of emojis in their readme, perhaps seeing it was a README triggered something in the LLM to talk like a readme?
FQQD@lemmy.ohaa.xyz 1 week ago
Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have aby words for this.
dabaldeagul@feddit.nl 1 week ago
aby
Checks out
FQQD@lemmy.ohaa.xyz 1 week ago
god damn it i can’t type lmao
possiblylinux127@lemmy.zip 1 week ago
There have been so many people filing AI-generated security vulnerability reports
salmoura@lemmy.eco.br 1 week ago
The emoji littering actually drove me away from using fastapi while reading its documentation.
Korne127@lemmy.world 1 week ago
I mean, even if it’s annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves
qaz@lemmy.world 1 week ago
They don’t, because it’s not an actual issue for any human reading it. The README contains the data and the repo is just for coordination, but the LLM doesn’t understand that.
Korne127@lemmy.world 1 week ago
Then… that’s so fucking weird, why would someone make that issue? I genuinely lack the understanding for how this could have happened in that case.
AmbiguousProps@lemmy.today 1 week ago
Why do LLMs obsess over making numbered lists? They seem to do that constantly.
Tolookah@discuss.tchncs.de 1 week ago
Oh, I can help! 🎉
coherent_domain@infosec.pub 1 week ago
My conspiracy theory is that they have a hard time figuring out the logical relations between sentences, hence they don’t generate good transitions between them.
I think the bullet points are manually tuned up by the developers rather than inherent in the model, because we don’t tend to see them that much in human communication.
possiblylinux127@lemmy.zip 1 week ago
That’s not a bad theory
gamer@lemm.ee 4 days ago
Late, but I’m pretty sure it’s a byproduct of the RLHF process used to train these types of models. Basically, they have a bunch of humans look at multiple outputs from the LLM and rate the best ones, and it turns out people find lists easier to understand than other styles (alternatively, the poor souls slaving away in the AI mines rating responses all day find it faster to understand a list than a paragraph through the blurry lens of mental fatigue)
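For anyone curious how those ratings actually push a model toward lists: a minimal sketch of the standard pairwise-preference (Bradley-Terry) loss used to train RLHF reward models. This is an illustration of the general technique, not any lab’s actual pipeline, and the scores below are made up:

```python
# Sketch: RLHF reward models are commonly trained on pairwise preference
# labels. If raters keep picking the bulleted answer over the paragraph,
# the reward model learns to score lists higher, and the LLM is then
# tuned to maximize that score.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss: -log(sigmoid(r_chosen - r_rejected))."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Hypothetical scores for one comparison where the rater chose the list:
print(preference_loss(reward_chosen=2.1, reward_rejected=0.4))  # ~0.17: already ranked "correctly", small update
print(preference_loss(reward_chosen=0.4, reward_rejected=2.1))  # ~1.87: big loss, pushes list answers up
```

Run that over millions of comparisons and any consistent rater bias toward lists gets baked into the model.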
possiblylinux127@lemmy.zip 1 week ago
A_norny_mousse@feddit.org 1 week ago
Well they are computers…