I wish I were as bold as these authors.

943 likes

Submitted 10 months ago by jupyter_rain@discuss.tchncs.de to science_memes@mander.xyz

https://discuss.tchncs.de/pictrs/image/2103085f-84ca-4c1e-a21f-c3e83f255fa1.jpeg

Comments

  • Seraph@fedia.io 10 months ago

    Well, yeah. People are acting like language models are full-fledged AI instead of just parrots repeating stuff said online.

    • GBU_28@lemm.ee 10 months ago

      Spicy autocomplete is a useful tool.

      But these things are nothing more.

    • JackGreenEarth@lemm.ee 10 months ago

      Whenever any advance is made in AI, AI critics redefine AI so it's not achieved yet according to their definition. Deep Blue, the chess computer, was an AI, an artificial intelligence. If you mean human-level or beyond general intelligence, you're probably talking about AGI or ASI (general or super intelligence, respectively).

      And the second comment, about LLMs being parrots, arises from a misunderstanding of how LLMs work. The early chatbots were actual parrots, saying prewritten sentences that they had either been preprogrammed with or got from their users. LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all those that came before it and on parameters fine-tuned during training. Their temperature can be changed to give more or less predictable output, and as such they have the potential for genuinely original output, unlike their parrot predecessors.

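      To make "temperature" concrete, here is a minimal, hypothetical sketch of temperature-scaled next-token sampling; the sample_next_token function and the toy logits are illustrative assumptions, not any particular model's API:

      ```python
      import numpy as np

      def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
          # Low temperature sharpens the distribution (more predictable output);
          # high temperature flattens it (more varied output).
          scaled = logits / max(temperature, 1e-8)
          # Numerically stable softmax over the scaled scores.
          probs = np.exp(scaled - scaled.max())
          probs /= probs.sum()
          # Draw one token index according to those probabilities.
          return int(np.random.choice(len(probs), p=probs))

      # Toy example: three candidate tokens with made-up scores.
      logits = np.array([2.0, 1.0, 0.1])
      print(sample_next_token(logits, temperature=0.2))  # almost always index 0
      print(sample_next_token(logits, temperature=2.0))  # much more varied
      ```

      Near zero temperature this always picks the most likely token; raising it spreads probability onto less likely tokens, which is where the potential for less predictable, less parrot-like output comes from.
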
      • SkyNTP@lemmy.ml 10 months ago

        You completely missed the point. The point is that people have been led to believe LLMs can do the jobs that humans do, because the output of LLMs sounds like the jobs people do, when in reality speech is just one small part of those jobs. It turns out reasoning is a big part of these jobs, and LLMs simply don't reason.

      • Prunebutt@slrpnk.net 10 months ago

        Bullshit. These people know exactly how LLMs work.

        LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

      • Tar_alcaran@sh.itjust.works 10 months ago

        "LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all those that came before it, and parameters finetuned during training."

        Which is what a parrot does.

      • lunarul@lemmy.world 10 months ago

        AI hasn't been redefined. For people familiar with the field it has always been a broad term meaning code that learns (subdivided into many types of AI), and for people unfamiliar with the field it has always been a term synonymous with AGI. So when people in the former category put out a product and label it as AI, people in the latter category then run with it using their own definition.

        For a long time ML was the popular buzzword in tech, and people outside the field didn't care about it. But then Google and OpenAI started calling ML and LLMs simply "AI", and that became the popular buzzword. And when everyone is talking about AI, and most people conflate that with AGI, the results are funny and scary at the same time.

      • Seraph@fedia.io 10 months ago

        I appreciate you taking the time to clarify, thank you!

      • WagyuSneakers@lemmy.world 10 months ago

        LLMs have more in common with chatbots than with AI.

    • frezik@midwest.social 10 months ago

      The paper actually argues otherwise, though it’s not fully settled on that conclusion, either.

  • moonsnotreal@lemmy.blahaj.zone 10 months ago

    https://link.springer.com/article/10.1007/s10676-024-09775-5

    Link to the article if anyone wants it

    • jballs@sh.itjust.works 10 months ago

      "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called 'AI hallucinations'. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005)."

      Now I kinda want to read On Bullshit.

      • tomkatt@lemmy.world 10 months ago

        Don’t waste your time. It’s honestly fucking awful. Reading it was like experiencing someone mentally masturbating in real time.

    • DaGeek247@fedia.io 10 months ago

      That's actually a fun read

  • myslsl@lemmy.world 10 months ago

    I will fucking piledrive you if you mention AI again.

    • glitchdx@lemmy.world 10 months ago

      Fucking love that article. Sums up everything wrong with AI. Unfortunately, it doesn't touch on what AI does right: help idiots like me achieve a slight amount of competence on subjects that such people can't be bothered to dedicate their entire lives to.

  • mkwt@lemmy.world 10 months ago

    This paper should cite On Bullshit.

    • just2look@lemm.ee 10 months ago

      It does. It’s even cited in the abstract, and it’s the origin of bullshit as referenced in their title.

    • thanks_shakey_snake@lemmy.ca 10 months ago

      It talks extensively about On Bullshit, lol.

    • xenoclast@lemmy.world 10 months ago

      Yup. The paper is worth actually reading

    • ace_garp@lemmy.world 10 months ago

      Important question of our time

  • Nicoleism101@lemm.ee 10 months ago

    Suddenly it dawned on me that I can plaster my CV with AI and win over actually competent people easy peasy

    • blady_blah@lemmy.world 10 months ago

      As an engineering manager, I've already seen cover letters and intro emails that are so obviously AI-generated that it's laughable. These should be used like you use them for writing essays: as a framework with general prompts, but filled in by yourself.

      Fake friendliness that was outsourced to an AI is worse than no friendliness at all.

      • Nicoleism101@lemm.ee 10 months ago

        I didn't mean AI-generated anything, though.

    • WagyuSneakers@lemmy.world 10 months ago

      It’s extremely easy to detect this. Recruiters actively filter out resumes like this for important roles.

  • ace_garp@lemmy.world 10 months ago

    Plot-twist: The paper was authored by a competing LLM.

  • glitchdx@lemmy.world 10 months ago

    There are things that ChatGPT does well, especially if you temper your expectations to the level of someone who has no valuable skills and is mostly an idiot.

    Hi, I'm an idiot with no valuable skills, and I've found ChatGPT to be very useful.

    I've recently started learning game development in Godot, and the process of figuring out why the code ChatGPT gives me doesn't work has taught me more about programming than any teacher ever accomplished back in high school.

    ChatGPT is also an excellent therapist, and has helped me deal with mental breakdowns on multiple occasions, while they were happening. I can't find a real therapist's phone number, much less schedule an appointment.

    I'm a real shitty writer, and I'm making a wiki of lore for a setting and ruleset for a tabletop RPG that I'll probably never get to actually play. ChatGPT is able to turn my inane ramblings into coherent wiki pages, most of the time.

    If you set your expectations to what was advertised, then yeah, ChatGPT is bullshit. Of course it was bullshit, and everyone who knew half of anything about anything called it. If you set realistic expectations, you'll get realistic results. Why is this so hard for people to get?

    • dmalteseknight@programming.dev 10 months ago

      Yeah, it is as if someone invented the microwave oven and everyone overhypes it as being able to cook Michelin-star meals. People then dismiss it entirely since it cannot produce said Michelin-star meals.

      They fail to see that it is a great reheating machine and a good machine for quick meals.

      • interdimensionalmeme@lemmy.ml 10 months ago

        Also, you can make a Michelin meal in a microwave, if you have the skills.

    • Natanael@slrpnk.net 10 months ago

      Because few people know what’s realistic for LLMs

      • oo1@lemmings.world 10 months ago

        Intelligence is a very loaded word, and not very precise in general usage. And I mean that amongst humans and animals as well as robots.

        I'm sure the real AI and compsci researchers have precise terms and taxonomies for it, and ways to measure it, but the word itself, in the hands of marketing people and the general population as an audience… not useful.

    • AXLplosion@lemmy.zip 10 months ago

      Hah, I had that exact same experience with Godot.

  • Shameless@lemmy.world 10 months ago

    Just reading the intro pulls you in:

    "We draw a distinction between two sorts of bullshit, which we call 'hard' and 'soft' bullshit."

  • fckreddit@lemmy.ml 10 months ago

    This is something I already mentioned previously. LLMs have no way of fact-checking, no measure of truth or falsity built in. In the training process, they probably accept every piece of text as true. This is very different from how our minds work. When faced with a piece of text, we have many ways to deal with it, ranging from accepting it as it is, to going on the internet to verify it, to actually designing and conducting experiments to prove or disprove the claim. So, yeah, what ChatGPT says is probably bullshit.

    Of course, the solution would be to train ChatGPT on text labelled with some measure of truth. But LLMs need so much data that labelling it all would be extremely slow and expensive, and suddenly the fast-moving world of AI would screech to almost a halt, which would be unacceptable to the investors.

    • FiniteBanjo@lemmy.today 10 months ago

      It's even more than just "accepting everything as true": the machines have no concept of truth. The machine doesn't think. It's a combination of three processes: a prediction algorithm for the next word, an algorithm that compares grammar and sentence-structure parity, and at least one algorithm to help police the other two for problematic statements.

      Clearly the problem is with that last step, but the solution would be a human or a general intelligence, meaning the current models in use will never progress beyond this point.

    • MenacingPerson@lemm.ee 10 months ago

      "This is very different from how our minds work."

      Children's minds work similarly.

      • fckreddit@lemmy.ml 10 months ago

        Why do you even think that? Children don’t ask questions? Don’t try to find answers?

    • iamkindasomeone@feddit.de 10 months ago

      Your statement that they have no way of fact-checking is not 100% correct, as developers have found ways to ground LLMs, e.g., by prepending context pulled from "real-time" sources of truth (e.g., search engines). This data is then incorporated into the prompt as context. Well, obviously, this is kind of cheating and not baked into the LLM itself; however, it can be pretty accurate for a lot of use cases.

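      As a rough sketch of that grounding pattern (retrieval plus prompt assembly), assuming hypothetical search and llm_complete helpers, since the comment names no specific search engine or model API:

      ```python
      # Minimal grounding sketch: `search` and `llm_complete` are hypothetical
      # stand-ins for a real search API and a real LLM client; the point is
      # only how retrieved context gets prepended to the prompt.
      def grounded_answer(question: str, search, llm_complete) -> str:
          snippets = search(question, max_results=3)   # "real-time" sources
          context = "\n".join(f"- {s}" for s in snippets)
          prompt = (
              "Answer using only the context below. "
              "If the context is insufficient, say so.\n\n"
              f"Context:\n{context}\n\n"
              f"Question: {question}\nAnswer:"
          )
          return llm_complete(prompt)                  # model sees the evidence
      ```

      The grounding lives entirely in the prompt construction, which is why it is "kind of cheating": the LLM itself is unchanged.
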
      • fckreddit@lemmy.ml 10 months ago

        Is using authoritative sources foolproof? For example, is everything written in Wikipedia factually correct? I don't believe so, unless I actually check it. Also, what about Reddit or Stack Overflow? Can they be considered factually correct? To some extent, yes. But not completely. That is why most of these LLMs give such arbitrary answers: they extrapolate from information they have no way of knowing or understanding.

  • Sibbo@sopuli.xyz 10 months ago

    "Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit."

    This is actually a really nice insight into the quality of the output of current LLMs. And it teaches us about how they work and what goals their creators set.

    They are not trained to produce factual information, but to talk about topics while sounding like a competent expert.

    For LLM researchers this means that they need to figure out how to train LLMs for factuality as opposed to just sounding competent. But that is probably a lot easier said than done.

  • Colour_me_triggered@lemm.ee 10 months ago

    Wouldn't it be funny if the article was written by ChatGPT?

  • veganpizza69@lemmy.world 10 months ago

    link.springer.com/article/10.1007/s10676-024-09775-5

  • julianschmulian@lemmy.blahaj.zone 10 months ago

    clearly they have never heard of harry g frankfurt's (excellent) „on bullshit“

    • GrabtharsHammer@lemmy.world 10 months ago

      The paper explicitly states that they are calling AI “bullshit” in the Frankfurtian sense and not merely the colloquial sense.

      You’d know this if you had read the paper or even checked whether your statement were true. But you didn’t actually care about the truth value of your own statement, which means your comment is, itself, bullshit.

      • naevaTheRat@lemmy.dbzer0.com 10 months ago

        By Grabthar's hammer, what a put-down!

      • tquid@sh.itjust.works 10 months ago

        Sheesh, leave some for the rest of us to pick on, you savage!

      • julianschmulian@lemmy.blahaj.zone 10 months ago

        jesus christ ofc i didn't read the paper, i was just making a joke ffs

    • bobtimus_prime@feddit.org 10 months ago

      Actually, they reference him.

  • xx3rawr@sh.itjust.works 10 months ago

    Unlike OpenAI, this article is actually open.

  • Psythik@lemmy.world 10 months ago

    Can we please keep the AI hate in the fuck_ai community so that I don’t have to see it?

    I don’t care what Lemmy thinks, ChatGPT has improved my life for the better.

    • Piemanding@sh.itjust.works 10 months ago

      Yes, but it also actively worsens people's lives.

    • xor@lemmy.blahaj.zone 10 months ago

      Why? This is a scientific paper with a shitpost as the title

    • Zoot@reddthat.com 10 months ago

      What AI hate? This is science memes, and that is a science publication. I’m glad I got to enjoy this sciencey meme

    • androogee@midwest.social 10 months ago

      You can make or find a pro-AI community and stay in there.

      It's not the rest of the world's job to coddle you.

    • WagyuSneakers@lemmy.world 10 months ago

      I wouldn’t trust the work you do at all.

    • Tamo240@programming.dev 10 months ago

      And are you being paid more for your increased productivity, or is your company stealing that value?

    • mriormro@lemmy.world 10 months ago

      Fucking lol.

      • Aussiemandeus@aussie.zone 10 months ago

        Proper main character syndrome haha

    • kaffiene@lemmy.world 10 months ago

      So don’t read the article. And maybe quit policing other people’s conversations

      • Psythik@lemmy.world 10 months ago

        I just created a filter for the keyword “AI”.

        Goodbye and good riddance, haters. 😎✌️

  • downpunxx@fedia.io 10 months ago

    When I say it, they call me crass.
