AI models collapse when trained on recursively generated data
Submitted 3 months ago by bot@lemmy.smeargle.fans [bot] to hackernews@lemmy.smeargle.fans
Sibbo@sopuli.xyz 3 months ago
So, maybe this is the end of LLM companies simply scraping the web, because they'll be unable to distinguish between LLM and non-LLM content. If someone had made a snapshot of the complete web in 2020 and waited until 2030 to sell it, they might just become the richest person on earth.
leisesprecher@feddit.org 3 months ago
That means that as long as generated content isn't something like 90% of the Internet, they'll be fine. Even then, there are relatively easy ways to sift data for generated content. It doesn't even have to be perfect.
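As a toy illustration of the "doesn't have to be perfect" point: a crude phrase-based filter already catches some obvious LLM output. This is my own hypothetical sketch, not any real detector; production approaches use trained classifiers or perplexity scores, and the phrase list here is just an example.

```python
# Hypothetical heuristic filter (illustration only): flag text containing
# phrases that frequently appear in LLM-generated output. Imperfect by
# design -- the point is that even rough sifting reduces contamination.
SUSPECT_PHRASES = (
    "as an ai language model",
    "i cannot fulfill that request",
    "certainly! here is",
)

def looks_generated(text: str) -> bool:
    """Return True if the text contains a known LLM boilerplate phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SUSPECT_PHRASES)

print(looks_generated("As an AI language model, I cannot browse the web."))
print(looks_generated("Snapshotted the whole web back in 2020, ask me anything."))
```

The first call prints `True`, the second `False`; anything more robust would need an actual classifier, but a filter like this is cheap to run over a crawl.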
What really bothers me here is that we might create a world where the typical AI style of writing takes over, because the AI trains on its own output and the companies simply don't care. That's not really a collapse as such, but a narrowing.
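The "narrowing" effect can be sketched with a toy simulation (my own illustration, not code from the paper): repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Each generation's sample standard deviation is a noisy estimate, and recursively resampling from the fitted model drives the spread toward zero over many generations, which is the statistical core of model collapse.

```python
import random
import statistics

# Toy model-collapse simulation: generation t+1 is fit to samples drawn
# from generation t's fitted Gaussian. The variance of the fitted model
# drifts downward over generations -- the distribution "narrows".
random.seed(0)

mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
n = 5                  # small sample per generation exaggerates the effect
for generation in range(500):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)       # refit mean to own output
    sigma = statistics.stdev(samples)    # refit spread to own output

# After many generations the fitted spread has collapsed far below 1.0.
print(f"final sigma: {sigma:.6f}")
```

With a larger sample size per generation the collapse is slower, which loosely mirrors the comment above: the more non-generated data stays in the mix, the longer the diversity survives.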
match@pawb.social 3 months ago
this sounds so much like the 2° Celsius target for climate change