Can someone explain how this tech works? I find face scanning and photo recognition tech get it wrong a fair amount of the time. Is this going to flag pictures that might be an issue and upload them for someone to look at? People take a lot of nude photos, so is it going to scan for nude photos it thinks are of young people and send them for a person to look at? I don't get how it's supposed to work.

As I understand it, it's not interpreting the contents of the images in your photo library at all. It won't know the difference between a nude and a picture of a sandy valley. Instead, it boils each photo down to a small hash and compares it to a database of known illegal image hashes. If they almost match, a "safety voucher" is generated. (What double-speak.) If an image isn't in the database, it probably won't match, even if it does happen to be an (unknown) illegal image.
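For what it's worth, here is a minimal sketch of that matching step in Python. It uses a toy "average hash", not Apple's actual NeuralHash (whose internals aren't fully public); the 8x8 grid, the 5-bit tolerance, and the function names are all illustrative assumptions. The point it shows is that a "match" means "within a few bits of a known fingerprint", not any interpretation of what's in the picture:

```python
# Toy perceptual-hash sketch -- NOT Apple's NeuralHash. Illustrates the shape
# of the system: reduce an image to a short fingerprint, then test whether it
# "almost matches" a database of known fingerprints.

def average_hash(pixels: list[list[int]]) -> int:
    """Reduce a grayscale image (2D list of 0-255 ints) to a 64-bit fingerprint:
    downsample to 8x8, then set one bit per cell that is above the mean."""
    h, w = len(pixels), len(pixels[0])
    cells = [pixels[r * h // 8][c * w // 8]  # crude nearest-neighbour downsample
             for r in range(8) for c in range(8)]
    mean = sum(cells) / 64
    bits = 0
    for value in cells:
        bits = (bits << 1) | int(value > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_database(photo_hash: int, known_hashes: set[int],
                     tolerance: int = 5) -> bool:
    """An "almost match": within a few bits of any known illegal-image hash.
    An image that isn't a variant of a database image almost never lands this
    close by chance -- which is also why unknown images sail through."""
    return any(hamming(photo_hash, known) <= tolerance for known in known_hashes)

# Demo: a lightly edited copy of a known image matches; an unrelated one doesn't.
gradient = [[(4 * (r + c)) % 256 for c in range(32)] for r in range(32)]
known = {average_hash(gradient)}                        # stands in for the database
brighter = [[v + 3 for v in row] for row in gradient]   # same photo, re-encoded
stripes = [[255 * (c % 2) for c in range(32)] for r in range(32)]  # unrelated image
print(matches_database(average_hash(brighter), known))  # True
print(matches_database(average_hash(stripes), known))   # False
```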
According to another article on MacRumors, Craig Federighi says about 30 vouchers are needed before any action is taken. However, the action is indeed that your photos with vouchers attached (nudes or sandy valleys alike) are decrypted and inspected by Apple.
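On the "30 vouchers" point: Apple's published design describes threshold secret sharing, where each matching photo's voucher carries a share of a per-account key, and the voucher contents only become decryptable once enough shares exist. Here's a minimal Shamir-style sketch of that threshold property, assuming the reported figure of 30 (the prime, the share counts, and the helper names are my own illustration, not Apple's parameters):

```python
# Minimal Shamir secret-sharing sketch of the "30 vouchers" threshold.
# Hypothetical parameters throughout; only the threshold behaviour is the point.

import random

PRIME = 2**127 - 1   # field modulus (a Mersenne prime, chosen for illustration)
THRESHOLD = 30       # shares needed to reconstruct -- the reported figure

def make_shares(secret: int, n_shares: int, threshold: int = THRESHOLD):
    """Split `secret` so any `threshold` shares reconstruct it and fewer reveal
    nothing: embed it as f(0) in a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers f(0), i.e. the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

account_key = random.randrange(PRIME)             # stands in for the decryption key
vouchers = make_shares(account_key, n_shares=40)  # one share per matching photo

print(reconstruct(vouchers[:30]) == account_key)  # True: 30 vouchers unlock the key
print(reconstruct(vouchers[:29]) == account_key)  # False (overwhelmingly): 29 don't
```

Below the threshold the shares are mathematically useless, which is the basis for the claim that nothing can be inspected until roughly 30 matches accumulate; above it, the vouchers (reportedly containing low-resolution "visual derivatives" of the matched photos) become readable for human review.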
So they might well get sent to Apple if the threshold is met.