Comment on how things become science

Jesus_666@lemmy.world 3 hours ago

It’s known that AI companies will harvest content without care for its veracity and train LLMs on it. These LLMs will then regurgitate that content as fact.

This isn’t a particularly novel finding but the experiment illustrates it rather well.

The researchers you consider to have acted so immorally did add useless information to the knowledge pool – but it was unadvertised, immediately recognizable useless information that any sane reviewer would’ve flagged. They included subtle clues like thanking someone at Starfleet Academy for letting them use a lab aboard the USS Enterprise. They claimed to have gotten funding from the Sideshow Bob Foundation. Subtle.

By providing this easily traceable nonsense, they were able to turn the general but informal understanding that LLMs will repeat bullshit into a hard scientific data point that others can build on. Nothing world-changing, but still valuable. They basically did what Alan Sokal did.

Instead of worrying about this experiment, you should worry about all the misinformation in LLMs that wasn't provided (and diligently documented) by well-meaning researchers.
