Comment on Google’s healthcare AI made up a body part — what happens when doctors don’t notice?
MagicShel@lemmy.zip 2 weeks ago
I don’t understand why people expect AI to work this way. First, you don’t give it the most information-free prompt you possibly can. Second, it would be far better at discussing a diagnosis with an expert than just pronouncing a verdict.
It would be much better to provide as much patient demographic information as possible and then say something like:
- “Do you see anything suspicious or abnormal about [thing]?”
- “What are some possible causes of [unusual spot]?”
- “I suspect [diagnosis]. Identify and explain features of this image that either confirm or don’t support that conclusion. Is there a diagnosis that fits better or is more likely?”
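To make the contrast concrete, here’s a toy sketch in plain Python of what packing context into a prompt might look like. This isn’t any real medical-AI API; the function and field names are made up for illustration:

```python
# Hypothetical sketch: assemble patient context into an expert-style
# question instead of a bare "what is this?" prompt.
def build_prompt(demographics: dict, finding: str, suspected: str) -> str:
    """Combine patient context with a challenge-style diagnostic question."""
    context = ", ".join(f"{k}: {v}" for k, v in demographics.items())
    return (
        f"Patient context: {context}.\n"
        f"I suspect {suspected}. Identify and explain features of this image "
        f"that either confirm or don't support that conclusion, focusing on "
        f"the {finding}. Is there a diagnosis that fits better?"
    )

prompt = build_prompt(
    {"age": 58, "sex": "F", "history": "smoker, 30 pack-years"},
    "opacity in the left lower lobe",
    "early-stage adenocarcinoma",
)
print(prompt)
```

The point is just that the prompt carries the expert’s context and hypothesis, so the model is arguing with a human instead of guessing in a vacuum.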
Don’t rely on AI to perform the work, use it to make an expert faster or challenge them to be more accurate.
I don’t know exactly how medical AI works, but the fact that these are discussion prompts suggests LLMs play a role here, and they can’t be trusted to function without an expert user.