A lot of LLMs now use synthesized, i.e. AI-generated, training data. It doesn’t seem to affect them too adversely.
TranquilTurbulence@lemmy.zip 1 week ago
Since basically all data is now contaminated, there’s no way to get massive amounts of clean data for training the next generation of LLMs. This should make it harder to develop them beyond the current level. If an LLM isn’t smart enough for you yet, there’s a pretty good chance it won’t be for a long time.
Xylight@lemdro.id 1 week ago
TranquilTurbulence@lemmy.zip 6 days ago
Interesting. In other models that was a serious problem.
Tollana1234567@lemmy.today 6 days ago
Law of diminishing returns: LLMs train on the AI slop of other LLMs, which were themselves trained on other LLMs, all the way down to “normal human-written slop”.
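The “all the way down” loop described above is what the research literature calls model collapse. A toy sketch (my own illustration, not from this thread): treat a “model” as just a fitted normal distribution, and have each generation fit itself only on synthetic samples from the previous generation. Estimation noise compounds, and the spread (sigma) tends to decay toward zero.

```python
import random
import statistics

# Toy illustration of recursive training on synthetic data.
# Generation 0 is the original "human" data distribution; every later
# generation is fit only on samples drawn from its predecessor.

random.seed(0)

mu, sigma = 0.0, 1.0       # generation 0 parameters
SAMPLES_PER_GEN = 20       # small sample size exaggerates the effect

for gen in range(1, 501):
    # draw synthetic data from the current model
    synthetic = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GEN)]
    # refit the next model on synthetic data only
    mu = statistics.fmean(synthetic)
    sigma = statistics.stdev(synthetic)
    if gen % 100 == 0:
        print(f"generation {gen}: sigma = {sigma:.4f}")
```

Because the sample standard deviation is a noisy, slightly downward-biased estimate, the fitted sigma drifts toward zero over many generations: the distribution narrows and diversity is lost, a crude analogue of quality degrading when LLMs train on LLM output.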
fascicle@leminal.space 1 week ago
People will find a way somehow
TranquilTurbulence@lemmy.zip 6 days ago
Oh I’m sure there is a way. We’ve already grabbed the low-hanging fruit, and the next one hangs a lot higher. It’s there, but it requires some clever trickery and effort.
artifex@piefed.social 1 week ago
Didn’t Elon breathlessly explain how the plan was to have Grok rewrite and expand on the current corpus of knowledge so that the next Grok could be trained on that “superior” dataset, which would forever rid it of the wokeness?
Naich@lemmings.world 1 week ago
It started calling itself MechaHitler after the first pass, so I’d be interested to see how less woke it could get training itself on that.
Tollana1234567@lemmy.today 6 days ago
Trying to train it to be only a Nazi LLM is difficult, even though he lobotomized it a couple of times.
TranquilTurbulence@lemmy.zip 6 days ago
That’s just musk talk. I’ll ignore the hype and decide based on the results instead.