They did this exact thing for CSAM detection a while back, and were made to stop due to public outcry.
Back then the analysis may have happened locally and before encryption, but it was still done without the user's consent, and problematic results were sent to Apple.
It is entirely plausible that here they would have the device decrypt the image, check the generated description against a database, and send the file and description off for reporting when a match is found.
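To make that speculation concrete, here's a minimal sketch of the flow I'm describing. Everything in it is hypothetical (the hash set, the report_match call are made-up names, not real Apple APIs), and a real system would use a perceptual hash like NeuralHash rather than SHA-256 so that re-encoded copies still match:

```python
import hashlib

# Hypothetical on-device flow: fingerprint the image, compare against a
# local database of known-CSAM hashes, and phone home only on a match.
# KNOWN_BAD_HASHES and report_match are illustrative, not real APIs.

KNOWN_BAD_HASHES: set[str] = set()  # assumed to ship with the OS

def fingerprint(image_bytes: bytes) -> str:
    # SHA-256 stands in for a perceptual hash here; a real perceptual
    # hash (e.g. NeuralHash) tolerates resizing and re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def report_match(image_bytes: bytes, description: str) -> None:
    # Placeholder for sending the file and its description off for review.
    pass

def scan(image_bytes: bytes, description: str) -> None:
    if fingerprint(image_bytes) in KNOWN_BAD_HASHES:
        report_match(image_bytes, description)
```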
scrubbles@poptalk.scrubbles.tech 2 weeks ago
Apple killed its last version in August because it didn’t respect privacy. Where there’s object detection, there’s CSAM detection. Which, hey, I think is good, and I wouldn’t expect an announcement about it. I just see how they did this, and this is exactly how I’d roll out a privacy-focused CSAM detector if I were going to do it.
From August 2023, when they killed the non-privacy-focused one: wired.com/…/apple-csam-scanning-heat-initiative-l…
t3rmit3@beehaw.org 2 weeks ago
This is not true at all. A model has to be trained to detect specific things; it does not automatically inherit the ability to detect CSAM just because it can detect other objects. The method Apple previously used for CSAM image detection was killed over its poor privacy implementation, and the article specifically notes that.
So even images that the local detection model doesn’t match to CSAM would still be uploaded to their servers.
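That's the whole privacy complaint in a nutshell. A toy contrast of the two designs (all names made up, nothing here is Apple's actual code):

```python
import hashlib

# Toy contrast: in upload_first the server sees every image, match or
# not; in match_gated, non-matching images never leave the device.

KNOWN_BAD_HASHES: set[str] = set()  # assumed local database

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; SHA-256 used only for illustration.
    return hashlib.sha256(image_bytes).hexdigest()

def upload_to_server(image_bytes: bytes) -> None:
    pass  # placeholder network call

def upload_first(image_bytes: bytes) -> None:
    # What's being criticized: everything gets uploaded, including
    # images that were never CSAM matches.
    upload_to_server(image_bytes)

def match_gated(image_bytes: bytes) -> None:
    # The privacy-respecting alternative: match on-device first,
    # transmit only on a database hit.
    if fingerprint(image_bytes) in KNOWN_BAD_HASHES:
        upload_to_server(image_bytes)
```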