lotide

But Claude said tumor!

397 likes

Submitted 1 year ago by ElCanut@jlai.lu to technology@beehaw.org

https://jlai.lu/pictrs/image/49b01ad2-3d4e-49cf-84b7-5c91bd5d6615.jpeg


Comments

  • anlumo@feddit.de 1 year ago

    Using a Large Language Model for image detection is peak human intelligence.

    • PerogiBoi@lemmy.ca 1 year ago

      I had to prepare a high-level report for a senior manager last week regarding a project my team was working on.

      We had to make 5 professional recommendations based on the data we reported.

      We gave the 5 recommendations with plenty of evidence and references explaining why we came to those decisions.

      The top question we got was: “What are ChatGPT’s recommendations?”

      Back to the drawing board this week, because apparently LLMs are more credible than teams of professionals with years of experience and bachelor’s-to-master’s-level education on the subject matter.

      • rho50@lemmy.nz 1 year ago

        It is quite terrifying that people think these unoriginal and inaccurate regurgitators of internet knowledge, with no concept of or heuristic for correctness… are somehow an authority on anything.

      • rutellthesinful@kbin.social 1 year ago

        you fool

        "these are chatgpt's recommendations we just provided research to back them up and verify the ai's work"

      • SolarMech@slrpnk.net 1 year ago

        I think this points to a large problem in our society: how we train and pick our managers. Oh wait, we don’t. They pick us.

      • VeganCheesecake@lemmy.blahaj.zone 1 year ago

        I mean, as long as you are the one prompting ChatGPT, you can probably get it to spit out the right recommendations. Works until they fire you because they are convinced AI made you obsolete.

    • tigeruppercut@lemmy.zip 1 year ago

      AI cars are still running over pedestrians and people think computers are to the point of medical diagnosis?

      • rho50@lemmy.nz 1 year ago

        There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.

        Funnily enough, those systems aren’t using language models 🙄

        (There is Google’s Med-PaLM, but I suspect it wasn’t very useful in practice, which is why we haven’t heard anything since the original announcement.)

      • KeenFlame@feddit.nu 1 year ago

        They are already used in medicine reliably. Often. Welcome to the future. Computers are pretty good tools for many things actually.

    • intensely_human@lemm.ee 1 year ago

      A picture is worth a thousand words

    • jarfil@beehaw.org 1 year ago

      Peak intelligence is realizing an LLM doesn’t care whether its tokens represent chunks of text, sound, images, videos, 3D models, paths, hand movements, floor plans, emojis, etc.

      The keyword is: “multimodal”.

    • sukhmel@programming.dev 1 year ago

      Well, image models are getting better at producing text, just sayin’

      • MagicShel@programming.dev 1 year ago

        I read the same thing in Nevvsweeek.

  • enjoytemple@kbin.social 1 year ago

    I am glad that "I googled why I was coughing and it said I had cancer and would die in 7 days so farewell you are a good friend" will live on for more years.

  • NeatNit@discuss.tchncs.de 1 year ago

    I’m not following this story…

    a friend sent me MRI brain scan results and I put it through Claude

    …

    I annoyed the radiologists until they re-checked.

    How was he in a position to annoy his friend’s radiologists?

    • Cube6392@beehaw.org 1 year ago

      Seems made up tbh

      • Anyolduser@lemmynsfw.com 1 year ago

        And then everyone clapped

    • jarfil@beehaw.org 1 year ago

      Money. Guy is loaded, he can annoy anyone he wants.

    • Synnr@sopuli.xyz 1 year ago

      A friend sent me MRI brain scan results

      Without more context, I have to assume the friend was still convinced of the brain tumor, knew the OP knew of and talked about Claude, said friend ran the results through Claude and told the friend whose brain was scanned that Claude gave a positive result, and that friend went to multiple doctors for a second, third, and fourth opinion.

      In America we have to advocate hard when there is an ongoing, still unsolved issue, and that includes using all tools at your disposal.

    • lseif@sopuli.xyz 1 year ago

      maybe his friend is also a radiologist and sent op a picture of his own head

  • rufus@discuss.tchncs.de 1 year ago

    Maybe consider a tool made for the task and not just some random Claude, which isn’t trained on this at all and just makes up some random impression of what an expert could respond in a drama story?!

  • rho50@lemmy.nz 1 year ago

    I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone’s large bowel as “likely to be an aggressive malignancy,” leading said person to fully expect they’d be dead by July, when in fact they were perfectly healthy.

    These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.

    The misinformation is causing real harm.

    • JohnEdwa@sopuli.xyz 1 year ago

      This is nothing but a modern spin on “hey internet, what’s wrong with me? WebMD: it’s cancer, you’ll be dead in a week.”

    • B0rax@feddit.de 1 year ago

      To be honest, it is not made to diagnose medical scans and it is not supposed to be. There are different AIs trained exactly for that purpose, and they are usually not public.

      • rho50@lemmy.nz 1 year ago

        Exactly. So the organisations creating and serving these models need to be clearer about the fact that they’re not general purpose intelligence, and are in fact contextual language generators.

        I’ve seen demos of the models used as actual diagnostic aids, and they’re not LLMs (plus require a doctor to verify the result).

  • helenslunch@feddit.nl 1 year ago

    “AI convinced me of something that’s completely incorrect, isn’t that amazing!”

    No. No, this is bad. Very bad.

    • grrgyle@slrpnk.net 1 year ago

      That just sounds like a magic 8 ball with some statistics sprinkled over

  • Aatube@kbin.melroy.org 1 year ago

    Didn't he conclude with "We're still early"? How is that believing in the success?

    • nxdefiant@startrek.website 1 year ago

      Claude told him to be confident

  • kibiz0r@midwest.social 1 year ago

    I need help finding a source, cuz there are so many fluff articles about medical AI out there…

    I recall that one of the medical AIs that the cancer VC gremlins have been hyping turned out to have horribly biased training data. They had scans of cancer vs. not-cancer, but they were from completely different models of scanners. So instead of being calibrated to identify cancer, it became calibrated to identify what model of scanner took the scan.

    • Flax_vert@feddit.uk 1 year ago

      Wasn’t there something about CVs for job applications, where the AI figured out that black people and women were less likely to get the job, and adjusted accordingly?

    • MNByChoice@midwest.social 1 year ago

      I am failing to find the source, but there is also a story about an older predictive model that worked great at one hospital but failed miserably at the next. There was just enough variation in everything that the model broke.

      (I think the New England Journal of Medicine podcast, but I am not finding the episode.)

    • BarryZuckerkorn@beehaw.org 1 year ago

      Versions of this dataset bias have been circulating since the 1960s.

  • Seasoned_Greetings@lemm.ee 1 year ago

    Unpopular opinion incoming:

    I don’t think we should ignore AI diagnoses just because they are sometimes wrong. The whole point of AI diagnosis is to catch things physicians miss. No AI diagnosis comes without a physician double-checking anyway.

    I also don’t think it’s necessarily a bad thing that an AI got it wrong, for that very reason. Suspicion was still there, and physicians double-checked. To me, that means this tool is working as intended.

    If the patient was insistent enough that something was wrong, the physicians would have double-checked, or the patient would have gotten a second opinion anyway.

    Flaming the AI for not being correct is missing the point of using it in the first place.

    • rho50@lemmy.nz 1 year ago

      I don’t think it’s necessarily a bad thing that an AI got it wrong.

      I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model and is fundamentally not fit for use as a diagnostic tool. Not even a screening/aid tool for physicians.

      There are definitely AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.

      • Seasoned_Greetings@lemm.ee 1 year ago

        Fair enough

  • noodlejetski@lemm.ee 1 year ago

    that’s surprising, AI is actually incredibly good at reading MRIs hachyderm.io/@dfeldman/112149278408570324

  • akrz@programming.dev 1 year ago

    And that guy is loaded and in investment. Really goes to show how capitalism fosters investments in the best minds and organizations…

    potentiacap.com/team/

  • Mastengwe@lemm.ee 1 year ago

    The minute I see some tool praising the glory of AI, I block them. Engaging with them is a futile waste of time.

    • Kuvwert@lemm.ee 1 year ago

      You’re an ai

  • AVincentInSpace@pawb.social 1 year ago

    exactly how hard did beer person have to try to miss the point? they read a thread about how an AI confidently provided a wrong diagnosis, warning that we shouldn't always trust AI, and proceeded to reply accusing Misha Saul of being a tech bro who believed an AI over a human doctor

  • Midnitte@beehaw.org 1 year ago

    I feel like the book I, Robot provides some fascinating insight into this… specifically the story “Liar!”

  • rutellthesinful@kbin.social 1 year ago

    is the brain tumor gone or is this a hallucination?
