This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that - a "first look" and a "second look". A first look to help catch the small, easily overlooked pre-tumors and tentatively mark clear ones. A second look to be a safety net for tired, overworked, or outdated eyes.
Comment on Breast Cancer
Wilzax@lemmy.world 4 months ago
If it has just as low of a false negative rate as human-read mammograms, I see no issue. Feed it through the AI first before having a human check the positive results only. Save doctors’ time when the scan is so clean that even the AI doesn’t see anything fishy.
Alternatively, if it has a lower false positive rate than humans, have doctors check only the negative results. If the AI sees something, then it's DEFINITELY worth a biopsy; a human doctor then reviews the negative readings just to make sure nothing worth looking into goes unnoticed.
Either way, as long as it isn't worse than humans at both kinds of failures, it's useful for saving medical resources.
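The two triage policies above can be sketched in a few lines. This is a minimal illustration, not a clinical workflow: the function name, the policy labels, and the 80-in-1000 flag rate are all made up for the example.

```python
def human_review_load(ai_flags, policy):
    """Count how many scans a human must read under each triage policy.

    ai_flags: list of bools, True = the AI flagged the scan as suspicious.
    policy:
      "check_positives" - humans read only AI-flagged scans; safe when the
        AI's false NEGATIVE rate is as low as human readers'.
      "check_negatives" - humans read only AI-cleared scans; safe when the
        AI's false POSITIVE rate is so low that a flag goes straight to biopsy.
    """
    if policy == "check_positives":
        return sum(1 for flagged in ai_flags if flagged)
    if policy == "check_negatives":
        return sum(1 for flagged in ai_flags if not flagged)
    raise ValueError(f"unknown policy: {policy}")

# Hypothetical batch: 1000 scans, AI flags 80 of them.
flags = [True] * 80 + [False] * 920
print(human_review_load(flags, "check_positives"))  # 80 scans for humans
print(human_review_load(flags, "check_negatives"))  # 920 scans for humans
```

Either policy trades human reading time against which failure mode (missed cancers vs. unnecessary biopsies) you trust the AI on.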
Railing5132@lemmy.world 4 months ago
Dkarma@lemmy.world 4 months ago
You in QA?
Dicska@lemmy.world 4 months ago
Wilzax@lemmy.world 4 months ago
HAHAHAHA thank fuck I am not
UNY0N@lemmy.world 4 months ago
Nice comment. I like the detail.
For me, though, the main takeaway doesn't have anything to do with the details; it's about the true usefulness of AI. The details of the implementation aren't important - the general use case is the main point.
match@pawb.social 4 months ago
An image recognition model like this is usually tuned specifically to have a very low false negative rate (often well below human) in exchange for a high false positive rate (overly cautious about cancer)!