Comment on But Claude said tumor!
Seasoned_Greetings@lemm.ee 8 months ago
Unpopular opinion incoming:
I don’t think we should ignore AI diagnoses just because they are wrong sometimes. The whole point of AI diagnosis is to catch things physicians miss. No AI diagnosis goes through without a physician double-checking it anyway.
I also don’t think it’s necessarily a bad thing that an AI got it wrong, for that very reason. The suspicion was still raised and the physicians double-checked. To me, that means the tool is working as intended.
If the patient was insistent enough that something was wrong, the physicians would have double-checked, or she would have gotten a second opinion anyway.
Flaming the AI for not being correct is missing the point of using it in the first place.
rho50@lemmy.nz 8 months ago
I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model and is fundamentally not fit for use as a diagnostic tool, not even as a screening or aid tool for physicians.
There are definitely AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.
Seasoned_Greetings@lemm.ee 8 months ago
Fair enough