Comment on ChatGPT's o3 Model Found Remote Zeroday in Linux Kernel Code

shnizmuffin@lemmy.inbutts.lol 3 days ago

If I were to ask my Magic 8 Ball “Is the word ‘difinitely’ misspelled?” 100 times, it’s going to reply in the affirmative over 16% of the time: literally double the reported hit rate. This would also be “the very first experiment in this use case, done by a single person on a model that wasn’t specifically designed for this.”

It’s not impressive.
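
For the curious, here’s that arithmetic as a quick simulation: a minimal sketch, assuming the classic 20-answer Magic 8 Ball with 10 affirmative, 5 non-committal, and 5 negative answers.

```python
import random

# Classic Magic 8 Ball: 20 answers, of which 10 are affirmative,
# 5 are non-committal, and 5 are negative.
ANSWERS = ["affirmative"] * 10 + ["non-committal"] * 5 + ["negative"] * 5

def shake() -> str:
    """Return one uniformly random answer category."""
    return random.choice(ANSWERS)

# Ask the same yes/no question 100 times and count the yeses.
trials = 100
yes_count = sum(shake() == "affirmative" for _ in range(trials))

# Expected affirmative rate is 10/20 = 50%, comfortably over 16%.
print(f"{yes_count}/{trials} affirmative ({yes_count / trials:.0%})")
```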

“The issue with hallucinations…”

This is the real problem: working under the false assumption that there are two kinds of output. It’s all the same output. An LLM cannot hallucinate, in the same way that it cannot think or reason. It’s fancy autofill. Predictive text.
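
If “predictive text” sounds glib, here’s a toy bigram sampler (a sketch, nothing like a real LLM in scale or mechanism, but the same basic shape): one sampling loop produces every word, with no separate code path for “hallucinated” output.

```python
import random
from collections import defaultdict

# Toy bigram "predictive text": record which word follows which.
corpus = "the cat sat on the mat and the cat ate the rat".split()
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def autofill(start: str, length: int) -> str:
    """Extend a prompt one sampled word at a time."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        # Every word, sensible or not, comes out of the same dice roll;
        # there is no separate mode for "true" vs "hallucinated" output.
        words.append(random.choice(options))
    return " ".join(words)

print(autofill("the", 8))
```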

You can use it to brainstorm creative solutions, but you need to treat its output for what it is: complicated dice rolls from the tables in the back of the Dungeon Master’s Guide. A fun distraction. Implausible fantasy 9 times out of 10.
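
To stretch the analogy, here’s a hypothetical d10 random-encounter table (the entries are invented for this sketch, not taken from the actual book): each roll is a brainstorming prompt, and roughly 9 times out of 10 you get filler rather than something usable.

```python
import random

# Hypothetical d10 random-encounter table, in the spirit of the
# appendices of the Dungeon Master's Guide (entries invented here).
TABLE = {
    1: "wandering goblins", 2: "a locked chest", 3: "dense fog",
    4: "a collapsed bridge", 5: "an eccentric merchant",
    6: "a cursed shrine", 7: "a talking raven", 8: "quicksand",
    9: "an abandoned camp", 10: "a genuinely usable plot hook",
}

def roll_on_table() -> str:
    """Roll a d10 and read the result off the table."""
    return TABLE[random.randint(1, 10)]

# Only one entry in ten is worth keeping; the rest is fun distraction.
for _ in range(5):
    print(roll_on_table())
```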
