
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

86 likes

Submitted 4 hours ago by misk@piefed.social to technology@lemmy.zip

https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html

Comments

  • Guntrigger@sopuli.xyz 2 hours ago

    One of these days, the world will no longer reward bullshitters, human or AI. And society will benefit greatly.

    • SapphironZA@sh.itjust.works 2 hours ago

      The Lion was THIS big and kept me in that tree all day. And that is why I did not bring back any prey.

      Ignore the smell of fermented fruit on my breath.

  • Technus@lemmy.zip 3 hours ago

    Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.

    “We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty,” the researchers wrote.
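
    To make the incentive concrete, here's a rough sketch with made-up numbers (not taken from the paper) of why binary grading rewards guessing while a penalty for wrong answers rewards abstaining:

    ```python
    # Illustrative only: expected benchmark score for a model that guesses when unsure
    # versus one that answers "I don't know".
    def expected_scores(p_correct, right, wrong, idk):
        guess = p_correct * right + (1 - p_correct) * wrong
        return guess, idk

    p = 0.3  # assumed: the model's best guess is right only 30% of the time

    # Binary grading (typical benchmark): correct = 1, wrong = 0, "I don't know" = 0.
    print(expected_scores(p, right=1, wrong=0, idk=0))   # (0.3, 0) -> guessing always scores at least as well

    # Grading that penalizes confident errors: correct = 1, wrong = -1, "I don't know" = 0.
    print(expected_scores(p, right=1, wrong=-1, idk=0))  # (-0.4, 0) -> abstaining wins when unsure
    ```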

    I just wanna say I called this out nearly a year ago: lemmy.zip/comment/13916070

    • MelodiousFunk@slrpnk.net 2 hours ago

      nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.

      This is how we treat people, too. I can’t count the number of times I’ve heard IT staff spouting off confident nonsense and getting congratulated for it. My old coworker turned it into several promotions, because the people he was impressing with his bullshit were so far removed from day-to-day operations that any slip-ups could easily be blame-shifted onto others. What mattered was that he sounded confident despite knowing jack about shit.

    • Rhaedas@fedia.io 2 hours ago

      I'd say extremely complex autocomplete, not glorified, but the point still stands: using probability to find accuracy is always going to deviate eventually. The tactic now isn't to try other approaches; they've come too far and have too much invested. Instead they keep stacking more and more techniques to try to steer and rein in this deviation. Difficult when, in the end, there isn't anything "thinking" at any point.
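
      For a sense of scale, here's a toy calculation (the per-token error rate is an assumed number, purely for illustration) of how that deviation compounds as the output gets longer:

      ```python
      # Illustrative only: even a tiny chance of sampling a "wrong" token compounds
      # over a long generation, so some deviation becomes near-certain.
      per_token_error = 0.001  # assumed value, not a measured figure
      for n_tokens in (100, 1_000, 10_000):
          p_any_error = 1 - (1 - per_token_error) ** n_tokens
          print(f"{n_tokens:>6} tokens -> P(at least one bad token) = {p_any_error:.1%}")
      # roughly 9.5%, 63.2%, and effectively 100%
      ```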

      • lemmyng@piefed.ca 1 hour ago

        Instead they keep stacking more and more techniques to try to steer and rein in this deviation.

        I hate how the tech bros immediately say "this can be solved with an MCP server." Bitch, if the only thing that keeps the LLM from giving me wrong answers is the MCP server, then said server is the one that's actually producing the answers I need, and the LLM is just lipstick on a pig.

      • 87Six@lemmy.zip 1 hour ago

        AI is, and always will be, just a temporary solution for problems we can’t yet put into an algorithm. As soon as an algorithm for a problem comes out, AI is done for there. But figuring out complex algorithms for near-impossible problems is not as impressive to investors…

    • misk@piefed.social 2 hours ago

      My guess is they know the jig is up and they’re establishing a timeline for the future lawsuits.

      “Your honour, we didn’t mislead the investors, because we only learned of this in September 2025.”

  • BombOmOm@lemmy.world 2 hours ago

    A hallucination is something that disagrees with your active inputs (ears, eyes, etc.). AIs don’t have these active inputs; all they have is the human equivalent of memories. Everything they draw up is a hallucination, literally all of it. It’s simply coincidence when the hallucination matches reality.

    Is it really surprising that the thing that can only create hallucinations is often wrong? That the thing that can only create hallucinations will continue to be wrong on a regular basis?

  • kubica@fedia.io 2 hours ago

    I don't know where I read it, but it said something like: for a model to hold that much information, it's basically working like a compression algorithm.

    Logically, if the compression is lossy, then it's mostly luck whether the output matches the original. Sometimes it will tip one way and sometimes the other.
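
    As a toy illustration of that point (rounding stands in for whatever detail gets thrown away; nothing here is specific to LLMs):

    ```python
    # Illustrative only: "compress" numbers by rounding to one decimal place.
    # Whether the round-trip matches the original is essentially luck of the input.
    original   = [0.10, 0.12345, 0.20, 0.987]
    compressed = [round(x, 1) for x in original]  # the lossy step: detail is discarded
    print([c == o for c, o in zip(compressed, original)])  # [True, False, True, False]
    ```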

    • arthur@lemmy.zip 2 hours ago

      With the caveat that, in this analogy, there is no LLM where the “compression” is lossless.
