Comment on how things become science
GaMEChld@lemmy.world 1 week ago

But not really a NEW problem. We knew LLMs are trained on aggregate human data. We know aggregate human data is fundamentally flawed, inconsistent, unreliable, etc.
Like, was there a point at which people just decided, nah, AI is just plain accurate? Or is that just what morons always thought, despite the permanent warnings plastered everywhere saying THIS AI CAN MAKE MISTAKES, CHECK EVERYTHING!