Comment on ChatGPT's o3 Model Found Remote Zeroday in Linux Kernel Code

knokelmaat@beehaw.org 3 days ago

This is literally the very first experiment in this use case, done by a single person on a model that wasn’t specifically designed for this. The fact that it is able to formulate a correct response at all in this situation impresses me.

It would be easy to criticize this if it were the endpoint and it were being advertised as a tool for vulnerability research. But as discussed at the end of the post, this “quick little test” shows promising initial results and had the fortunate byproduct of actually revealing a new vulnerability. By no means is it implied that the tool is now ready for use in this field.

The issue of hallucinations is one that, in my opinion, will never be totally fixed. That is why I hate the use of AI as a final arbiter of truth, which is sadly how a lot of people use it (“I’ll quickly ask ChatGPT”) and how companies advertise it. What it is good at, however, is coming up with plausible ideas. In this case, having an indication of what to check in the code can be a great aid in discovering new issues, which is literally what happened here: the security researcher found a new vulnerability after auditing the module themselves.
