But Claude said tumor!
Submitted 7 months ago by ElCanut@jlai.lu to technology@beehaw.org
https://jlai.lu/pictrs/image/49b01ad2-3d4e-49cf-84b7-5c91bd5d6615.jpeg
Comments
enjoytemple@kbin.social 7 months ago
I am glad that "I googled why I was coughing and it said I had cancer and would die in 7 days so farewell you are a good friend" will live on for more years.
NeatNit@discuss.tchncs.de 7 months ago
I’m not following this story…
a friend sent me MRI brain scan results and I put it through Claude
…
I annoyed the radiologists until they re-checked.
How was he in a position to annoy his friend’s radiologists?
jarfil@beehaw.org 7 months ago
Money. Guy is loaded, he can annoy anyone he wants.
Synnr@sopuli.xyz 7 months ago
A friend sent me MRI brain scan results
Without more context, I have to assume the friend was still convinced of the brain tumor and knew OP knew about and talked about Claude; OP ran the results through Claude and told the friend whose brain was scanned that Claude gave a positive result, and the friend then went to multiple doctors for a second, third, and fourth opinion.
In America we have to advocate hard for ourselves when there is an ongoing, still-unsolved issue, and that includes using every tool at our disposal.
lseif@sopuli.xyz 7 months ago
maybe his friend is also a radiologist and sent op a picture of his own head
rufus@discuss.tchncs.de 7 months ago
Maybe consider a tool made for the task, and not just some random Claude that isn’t trained on this at all and just makes up a rough impression of what an expert might say in a drama script?!
rho50@lemmy.nz 7 months ago
I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone’s large bowel as “likely to be an aggressive malignancy,” leading to said person fully expecting they’d be dead by July, when in fact they were perfectly healthy.
These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.
The misinformation is causing real harm.
JohnEdwa@sopuli.xyz 7 months ago
This is nothing but a modern spin on “hey internet, what’s wrong with me? WebMD: it’s cancer, you’ll be dead in a week.”
B0rax@feddit.de 7 months ago
To be honest, it is not made to diagnose medical scans and it is not supposed to be. There are different AIs trained exactly for that purpose, and they are usually not public.
rho50@lemmy.nz 7 months ago
Exactly. So the organisations creating and serving these models need to be clearer about the fact that they’re not general purpose intelligence, and are in fact contextual language generators.
I’ve seen demos of the models used as actual diagnostic aids, and they’re not LLMs (plus require a doctor to verify the result).
helenslunch@feddit.nl 7 months ago
“AI convinced me of something that’s completely incorrect, isn’t that amazing!”
No. No, this is bad. Very bad.
grrgyle@slrpnk.net 7 months ago
That just sounds like a magic 8 ball with some statistics sprinkled over
Aatube@kbin.melroy.org 7 months ago
Didn't he conclude with "We're still early"? How is that believing the success?
nxdefiant@startrek.website 7 months ago
Claude told him to be confident
kibiz0r@midwest.social 7 months ago
I need help finding a source, cuz there are so many fluff articles about medical AI out there…
I recall that one of the medical AIs that the cancer VC gremlins have been hyping turned out to have horribly biased training data. They had scans of cancer vs. not-cancer, but the two sets came from completely different models of scanner. So instead of learning to identify cancer, the model learned to identify which model of scanner took the scan.
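For anyone who hasn’t seen that failure mode up close, here’s a minimal sketch with synthetic data and a plain logistic regression (made-up feature names, not the actual study): the classifier aces training because the scanner artifact is a perfect proxy for the label, then falls apart on data where that correlation is broken.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# "cancer" label; the real pathology signal is weak and noisy
y_train = rng.integers(0, 2, n)
lesion_signal = y_train + rng.normal(0, 3.0, n)

# Confound: every positive case happened to be imaged on scanner B,
# which adds a constant brightness offset -- a perfect proxy for the label
scanner_offset = y_train * 5.0 + rng.normal(0, 0.1, n)

X_train = np.column_stack([lesion_signal, scanner_offset])
model = LogisticRegression().fit(X_train, y_train)

# New hospital: same pathology, but one scanner for everyone,
# so the offset no longer correlates with the label
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 3.0, n),
                          rng.normal(0, 0.1, n)])

print("train accuracy:", model.score(X_train, y_train))   # ~1.0 (it learned the scanner)
print("new-site accuracy:", model.score(X_test, y_test))  # roughly chance level
```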
Flax_vert@feddit.uk 7 months ago
Wasn’t there something about CVs for job applications, where the AI figured out that black people and women were less likely to get the job, and adjusted accordingly?
MNByChoice@midwest.social 7 months ago
I am failing to find the source, but there is also a story about an older predictive model that worked great at one hospital but failed miserably at the next. There was just enough variation in everything that the model broke.
(I think the New England Journal of Medicine podcast, but I am not finding the episode.)
Seasoned_Greetings@lemm.ee 7 months ago
Unpopular opinion incoming:
I don’t think we should ignore AI diagnoses just because they are sometimes wrong. The whole point of AI diagnosis is to catch things physicians don’t. No AI diagnosis comes without a physician double-checking anyway.
I also don’t think it’s necessarily a bad thing that an AI got it wrong, for that very reason. Suspicion was still there, and physicians double-checked. To me, that means the tool is working as intended.
If the patient was insistent enough that something was wrong, the physicians would have double-checked, or the patient would have gotten a second opinion anyway.
Flaming the AI for not being correct is missing the point of using it in the first place.
rho50@lemmy.nz 7 months ago
I don’t think it’s necessarily a bad thing that an AI got it wrong.
I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model and is fundamentally not fit for use as a diagnostic tool, not even as a screening/aid tool for physicians.
There are definitely AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.
Seasoned_Greetings@lemm.ee 7 months ago
Fair enough
noodlejetski@lemm.ee 7 months ago
that’s surprising, AI is actually incredibly good at reading MRIs hachyderm.io/@dfeldman/112149278408570324
akrz@programming.dev 7 months ago
And that guy is loaded and works in investing. Really goes to show how capitalism fosters investment in the best minds and organizations…
Mastengwe@lemm.ee 7 months ago
The minute I see some tool praising the glory of AI, I block them. Engaging with them is a futile waste of time.
Kuvwert@lemm.ee 7 months ago
You’re an ai
AVincentInSpace@pawb.social 7 months ago
exactly how hard did beer person have to try to miss the point? they read a thread about an AI confidently providing a wrong diagnosis, with a warning that we shouldn’t always trust AI, and proceeded to reply accusing Misha Saul of being a tech bro who believed an AI over a human doctor
Midnitte@beehaw.org 7 months ago
I feel like the book I, Robot provides some fascinating insight into this… specifically the story “Liar!”
rutellthesinful@kbin.social 7 months ago
is the brain tumor gone or is this a hallucination?
anlumo@feddit.de 7 months ago
Using a Large Language Model for image detection is peak human intelligence.
PerogiBoi@lemmy.ca 7 months ago
I had to prepare a high-level report for a senior manager last week regarding a project my team was working on.
We had to make 5 professional recommendations based on the data we reported.
We gave the 5 recommendations with plenty of evidence and references explaining why we came to those conclusions.
The top question we got was: “What are ChatGPT’s recommendations?”
Back to the drawing board this week, because LLMs are apparently more credible than teams of professionals with years of experience and bachelor’s- to master’s-level education in the subject matter.
rho50@lemmy.nz 7 months ago
It is quite terrifying that people think these unoriginal and inaccurate regurgitators of internet knowledge, with no concept of or heuristic for correctness… are somehow an authority on anything.
rutellthesinful@kbin.social 7 months ago
you fool
"these are chatgpt's recommendations we just provided research to back them up and verify the ai's work"
SolarMech@slrpnk.net 7 months ago
I think this points to a larger problem in our society: how we train and pick our managers. Oh wait, we don’t. They pick us.
VeganCheesecake@lemmy.blahaj.zone 7 months ago
I mean, as long as you are the one prompting ChatGPT, you can probably get it to spit out the right recommendations. Works until they fire you because they are convinced AI made you obsolete.
tigeruppercut@lemmy.zip 7 months ago
AI cars are still running over pedestrians, and people think computers are at the point of making medical diagnoses?
rho50@lemmy.nz 7 months ago
There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.
Funnily enough, those systems aren’t using language models 🙄
(There is Google’s Med-PaLM, but I suspect it wasn’t very useful in practice, which is why we haven’t heard anything since the original announcement.)
KeenFlame@feddit.nu 7 months ago
They are already used in medicine reliably. Often. Welcome to the future. Computers are pretty good tools for many things actually.
intensely_human@lemm.ee 7 months ago
A picture is worth a thousand words
jarfil@beehaw.org 7 months ago
Peak intelligence is realizing an LLM doesn’t care whether its tokens represent chunks of text, sound, images, videos, 3D models, paths, hand movements, floor plans, emojis, etc.
The keyword is: “multimodal”.
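As a toy illustration (made-up sizes, a generic transformer layer standing in for a real decoder stack, and pretend token IDs rather than a real tokenizer or image codebook), the model itself only ever sees integer IDs; what modality they came from is invisible to it:

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1024, 64          # one shared ID space for every modality
embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

text_ids = torch.randint(0, 512, (1, 16))      # pretend these came from a text tokenizer
image_ids = torch.randint(512, 1024, (1, 16))  # pretend these came from an image codebook

# Interleave them into one "multimodal" sequence; the layer treats all IDs identically
sequence = torch.cat([text_ids, image_ids], dim=1)
out = layer(embed(sequence))
print(out.shape)  # torch.Size([1, 32, 64])
```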
sukhmel@programming.dev 7 months ago
Well, image models are getting better at producing text, just sayin’
MagicShel@programming.dev 7 months ago
I read the same thing in Nevvsweeek.