cross-posted from: programming.dev/post/36082211
I’ll stop contributing if this happens.
Submitted 3 days ago by Pro@programming.dev to technology@beehaw.org
https://en.wikipedia.org/wiki/User_talk:Jimbo_Wales#An_AI-related_idea
The collateral damage from LLMs is just starting. No one asked for this – see also: the rousing success of Alexa and Siri – yet now we’re all having to adapt our lives around code that makes up shit.
I’d rather Wikipedia didn’t use such software, but if they’re getting slammed with AI-slop content, what options do they really have?
…or Jankopedia
theangriestbird@beehaw.org 3 days ago
I do appreciate the direct link to exactly what Wales said, and the full conversation with his replies and such. It’s definitely a bit heady: Wales points out that editors are overstretched, and he gives an example where he used ChatGPT to give helpful feedback to a new contributor. Then a bunch of editors file in and point out parts of the GPT response that are inaccurate and go against Wikipedia policy. They also point out how LLMs themselves are already making life hell for editors.
If the site is being flooded by LLM submissions, and then Wikipedia starts using LLMs to provide feedback on rejected articles, when does a human step in to clear out the hallucinations? If I were submitting an article, got bot feedback, edited my article based on that feedback, and then a human looked at it and told me half the stuff the bot told me was wrong, I would be rightly pissed. If I were a new contributor dipping my toe into the scene for fun, that might just turn me off from Wiki editing forever.
And all of this is without considering the environmental impact of adding yet another major website to the data center load of existing LLMs. But it is clear that there are problems with this idea, even if the environmental costs are a nonfactor.
jimmux@programming.dev 3 days ago
I’ve done fact checking on LLM models for work before, and it quickly becomes evident that many models rely on Wikipedia as a heavily weighted source of truth.
If LLMs have even a small role in producing Wikipedia content, the ouroboros of declining quality will accelerate.
remington@beehaw.org 3 days ago
I have studied academic biblical scholarship for over 30 years. All of Wikipedia’s biblical pages are riddled with errors. IMO, Wikipedia is a decent starting point but that would be it.