
Huh

⁨567⁩ ⁨likes⁩

Submitted ⁨⁨1⁩ ⁨year⁩ ago⁩ by ⁨Sixth0795@sh.itjust.works⁩ to ⁨science_memes@mander.xyz⁩

https://sh.itjust.works/pictrs/image/0046d246-9c38-49fa-a727-e4f2fde8d7aa.jpeg


Comments

  • Pyro@programming.dev ⁨1⁩ ⁨year⁩ ago

    GPT doesn’t really learn from its conversations; it’s OpenAI’s over-correction in the name of “safety” that is likely to have caused this.

    • lugal@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

      I assumed they reduced capacity to save power due to the high demand

      • MalReynolds@slrpnk.net ⁨1⁩ ⁨year⁩ ago

        This. They could obviously reset to original performance (what, they don’t have backups?); it’s just more cost-efficient to have crappier answers. Yay, turbo AI enshittification…

    • rtxn@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Sounds good, let’s put it in charge of cars and nuclear power plants!

      • OpenStars@startrek.website ⁨1⁩ ⁨year⁩ ago

        Even getting 2+2=2 98% of the time is good enough for that. :-P

        spoiler

        (wait, 2+2 is what now?)

    • Redward@yiffit.net ⁨1⁩ ⁨year⁩ ago

      Just for the fun of it, I argued with ChatGPT saying it’s not really a self-learning AI. 3.5 agreed that it’s not a fully functional AI and only has limited powers. 4.0, on the other hand, was very adamant about being a fully fleshed-out AI.

  • AnUnusualRelic@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Amazing, it’s getting closer to human intelligence all the time!

    • MotoAsh@lemmy.world ⁨1⁩ ⁨year⁩ ago

      The more I talk to people the more I realize how low that bar is. If AI doesn’t take over soon, we’ll kill ourselves anyways.

    • Dasus@lemmy.world ⁨1⁩ ⁨year⁩ ago

      I mean, I could argue that it learned not to piss off stupid people by showing them math the stoopids didn’t understand.

  • Limeey@lemmy.world ⁨1⁩ ⁨year⁩ ago

    It all comes down to the fact that LLMs are not AGI - they have no clue what they’re saying or why or to whom. They have no concept of “context” and as a result have no ability to “know” if they’re giving the right info or just hallucinating.

    • Benaaasaaas@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Hey, but if Sam says it might be AGI he might get a trillion dollars so shut it /s

  • UnRelatedBurner@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

    Kind of a clickbait title

    “In March, GPT-4 correctly identified the number 17077 as a prime number in 97.6% of the cases. Surprisingly, just three months later, this accuracy plunged dramatically to a mere 2.4%. Conversely, the GPT-3.5 model showed contrasting results. The March version only managed to answer the same question correctly 7.4% of the time, while the June version exhibited a remarkable improvement, achieving an 86.8% accuracy rate.”

    source: techstartups.com/…/chatgpts-accuracy-in-solving-b…
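
    For what it’s worth, 17077 is indeed prime, so the March model was right. A minimal Python sketch (mine, not from the article) of the check being benchmarked:

    ```python
    # Trial-division primality check; illustrative only, not from the article.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:  # only need to test divisors up to sqrt(n)
            if n % d == 0:
                return False
            d += 2
        return True

    print(is_prime(17077))  # True: no divisor up to ~130 exists
    ```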

    • angrymouse@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Not everything is click bait. Your explanation is great, but the tittle is not lying, it’s just a simplification; titles can’t contain every detail of the news, they are still tittles, and what the tittle says can be confirmed in your explanation. The only thing I could’ve done differently is specify that this was a GPT-4 issue.

      Click bait would be “chat gpt is dying” or so.

      • andrewta@lemmy.world ⁨1⁩ ⁨year⁩ ago

        I think that’s title not tittle

      • TrickDacy@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Mmmmm, _titt_les

      • overcast5348@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Tittles are the little dots above i and j, that’s why you weren’t autocorrected. You’re looking for “title” though.

      • A_Very_Big_Fan@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Oversimplified to the point of lying, you could say.

  • BennyHill@lemmy.ml ⁨1⁩ ⁨year⁩ ago

    ChatGPT went from high school student to boomer brain in record time.

  • Hotzilla@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

    I have seen the same thing: GPT-4 was originally able to handle more complex coding tasks, but GPT-4-turbo is not able to do it anymore. I have a creative coding test that I have tried with many LLMs, and only the original GPT-4 was able to solve it. The current one fails miserably at it.

  • shiroininja@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Originally, it was people answering the questions. Now it’s the actual tech doing it Lmao

    • Omega_Haxors@lemmy.ml ⁨1⁩ ⁨year⁩ ago

      AI fudging is notoriously common. Just ask anyone who has lived in the third world what work was like and they’ll get animated with stories of how many times they were approached to fake the output of “AI”.

      • NigelFrobisher@aussie.zone ⁨1⁩ ⁨year⁩ ago

        A colleague of mine worked for an AI firm a few years back. The AI was a big room of older women with keyboards.

    • TurtleJoe@lemmy.world ⁨1⁩ ⁨year⁩ ago

      It’s often still people in developing countries answering the questions.

  • helpImTrappedOnline@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Perhaps this AI thing is just a sham and there are tiny gnomes in the servers answering all the questions as fast as they can. Unfortunately, there are not enough qualified tiny gnomes to handle the increased workload. They have begun to outsource to the leprechauns who run the random text generators.

    Luckily the artistic hypersonic orcs seem to be doing fine…for the most part

  • Mikufan@ani.social ⁨1⁩ ⁨year⁩ ago

    Yeah, it now shows the mathematics as a Python script so you can see where it goes wrong.
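
    Something like this made-up example, just to illustrate the idea (not an actual transcript):

    ```python
    # Hypothetical example of the kind of script shown for a simple arithmetic question,
    # so every step is visible and a wrong step is easy to spot.
    a, b = 2, 2
    result = a + b
    print(f"{a} + {b} = {result}")  # prints: 2 + 2 = 4
    ```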

    • OpenStars@startrek.website ⁨1⁩ ⁨year⁩ ago

      How ironic… people now need to learn a computer language in order to understand the computer? (instead of so that the computer can understand people)

      • Mikufan@ani.social ⁨1⁩ ⁨year⁩ ago

        Eh, it’s not that hard to understand those scripts, it’s basically math…

        But yes.

      • Wanderer@lemm.ee ⁨1⁩ ⁨year⁩ ago

        I get how ChatGPT works [really I don’t], but what I don’t get is why they don’t put add-ons into it.

        Like: is this a math question? Okay, it goes to the Wolfram Alpha system; otherwise it goes to the LLM.
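
        A toy sketch of that routing idea (the heuristic and both answer functions here are made up, just to illustrate):

        ```python
        import re

        def looks_like_math(prompt: str) -> bool:
            # Crude stand-in classifier: digits joined by arithmetic operators, or the word "prime".
            return bool(re.search(r"\d\s*[-+*/^=]\s*\d", prompt)) or "prime" in prompt.lower()

        def answer_with_solver(prompt: str) -> str:
            # Stand-in for a call to a symbolic math engine (e.g. a Wolfram Alpha API).
            return f"[solver] {prompt!r}"

        def answer_with_llm(prompt: str) -> str:
            # Stand-in for an ordinary LLM chat completion.
            return f"[LLM] {prompt!r}"

        def answer(prompt: str) -> str:
            return answer_with_solver(prompt) if looks_like_math(prompt) else answer_with_llm(prompt)

        print(answer("Is 17077 a prime number?"))               # routed to the solver
        print(answer("Write a haiku about enshittification."))  # routed to the LLM
        ```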

  • Omega_Haxors@lemmy.ml ⁨1⁩ ⁨year⁩ ago

    This is a result of what is known as oversampling. When you zoom in really close and make one part of a wave look good, it makes the rest of the wave go crazy. This is what you’re seeing: the team at OpenAI tried super hard to make a good first impression and nailed that, but once some time passed, things quickly started to fall apart.

    • someguy3@lemmy.ca ⁨1⁩ ⁨year⁩ ago

      So they made the math good, and now that they’re trying to make the rest good it’s screwing up the math?

      • Omega_Haxors@lemmy.ml ⁨1⁩ ⁨year⁩ ago

        It’s more that they focused so hard on making a good first impression that they gave no consideration to what would happen long-term.

  • pewgar_seemsimandroid@lemmy.blahaj.zone ⁨1⁩ ⁨year⁩ ago

    human want make thing dumb

  • EarMaster@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I am wondering why it adds up to exactly 100%. There has to have been some creative data handling with these numbers.

    • MeatPilot@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Maybe the article was generated using another LLM?

    • Gabu@lemmy.world ⁨1⁩ ⁨year⁩ ago

      People like you are why Mt. Everest had two feet added to its actual size so as to not seem too perfect.

      • EarMaster@lemmy.world ⁨1⁩ ⁨year⁩ ago

        No I’m not. Why would I use feet to measure a mountain’s height?

  • Jumi@lemmy.world ⁨1⁩ ⁨year⁩ ago

    The AI feels good, much slower than before
