AI models can generate exploit code at lightning speed
Submitted 1 month ago by cm0002@lemmy.world to cybersecurity@infosec.pub
https://www.theregister.com/2025/04/21/ai_models_can_generate_exploit/
Comments
noctivius@lemm.ee 1 month ago
When it has the code and a human points out which part of the code was changed in an update, or which part of the code to analyze, it's not really anything new; horizon.ai was doing it for a while, I guess. Wake me up when AI can find a 0day by itself without having the full code 🤖💀
otter@lemmy.dbzer0.com 1 month ago
Uh. Duh?
joshcodes@programming.dev 1 month ago
The vulnerability is the scary part, not the exploit code. It’s like someone saying they can walk through an open door if they’re told where it is.
Ajen@sh.itjust.works 1 month ago
Using your analogy, this is more like telling someone there’s an unlocked door and asking them to find it on their own using blueprints.
Not a perfect analogy, but they didn't tell the AI where the vulnerability was in the code. They just gave it the CVE description (which is intentionally vague) and a set of patches from that time period that included a lot of irrelevant changes.
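[Editor's note: as a rough illustration of the setup described above, here is a minimal sketch. The thread and article don't name a specific tool or API, so query_llm is a hypothetical stand-in for whatever model endpoint is used; only the git invocation is real.]

# Sketch only: give a model a vague CVE description plus a batch of
# commit diffs from the same time window (most of them irrelevant),
# and ask it to locate the patched vulnerability.
import subprocess

def collect_diffs(repo_path: str, since: str, until: str) -> str:
    """Gather all commit diffs in a date range, relevant or not."""
    return subprocess.run(
        ["git", "-C", repo_path, "log", "-p",
         f"--since={since}", f"--until={until}"],
        capture_output=True, text=True, check=True,
    ).stdout

def build_prompt(cve_description: str, diffs: str) -> str:
    return (
        "Below is a CVE description and a set of commit diffs from the "
        "same time window. Most diffs are unrelated. Identify which "
        "change fixes the described vulnerability and explain why.\n\n"
        f"CVE description:\n{cve_description}\n\nDiffs:\n{diffs}"
    )

# prompt = build_prompt(cve_text, collect_diffs("repo", "2025-01-01", "2025-02-01"))
# answer = query_llm(prompt)  # hypothetical model call, not a real API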
joshcodes@programming.dev 1 month ago
I’m referencing this:
It wrote a fuzzer before it was told to compare the diff and extrapolate the answer, implying it didn’t know how to get to a solution either.
“So if you give it the neighbourhood of the building with the open door and a photo of the doorway that’s open, then drive it to the neighbourhood when it tries to go to the mall (it’s seen a lot of open doors there), it can trip and fall right before walking through the door.”
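[Editor's note: for context on the fuzzer mentioned in the quote above, a fuzzer in this sense is just a loop that feeds mutated inputs to a target program and watches for crashes. A minimal sketch follows; "./target" and "seed.bin" are placeholders, since the article doesn't say what the model actually fuzzed.]

# Minimal mutation fuzzer sketch: corrupt a seed input at random,
# run the target on it, and save any input that triggers a crash.
import random
import subprocess

def mutate(data: bytes, flips: int = 8) -> bytes:
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = open("seed.bin", "rb").read()
for i in range(10_000):
    sample = mutate(seed)
    proc = subprocess.run(["./target"], input=sample, capture_output=True)
    if proc.returncode < 0:  # negative return code: killed by a signal, e.g. SIGSEGV
        open(f"crash_{i}.bin", "wb").write(sample)
        print(f"crash on iteration {i} (signal {-proc.returncode})")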