Comment on how things become science
GaMEChld@lemmy.world 1 week ago
I don’t see this as a problem, rather, an opportunity to study information & disinformation propagation.
MalReynolds@slrpnk.net 1 week ago
Valid use case, still a significant problem.
GaMEChld@lemmy.world 1 week ago
But not really a NEW problem. We knew LLMs are trained on aggregate human data. We know aggregate human data is fundamentally flawed, inconsistent, unreliable, etc.
Like, was there a point at which people just decided, nah, AI is just plain accurate? Or is that just what morons always thought, despite the permanent warnings plastered everywhere saying THIS AI CAN MAKE MISTAKES, CHECK EVERYTHING!