So how are Apple going to manually check flagged accounts before reporting them to NCMEC - which they have promised to do - if they don't have the images that were supposedly matched? And no, it's not necessarily going to be a case of "this is a picture of a tree, not a naked child, so it's obviously a false match", because the type of "fingerprint" they are using means that a false match is likely to be visually similar to a CSAM image in at least some way. Will Apple's checkers have the authority to make a judgement, or (more likely) will they be required to presume that a match is valid unless it's a glaringly obvious computer fault?
Apple checks the contents of the safety vouchers, which contain an encrypted hash of the match plus a "visual derivative" of the image.
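For illustration, something like this is roughly what one voucher conceptually carries; the field names and types are my own guesses, not Apple's actual format:

```swift
import Foundation

// A rough, hypothetical sketch of a safety voucher's contents, based on the
// descriptions quoted below. Field names and types are guesses, not Apple's
// published wire format.
struct SafetyVoucher {
    let matchedHash: Data        // encrypted NeuralHash that matched the CSAM database
    let visualDerivative: Data   // encrypted low-resolution version of the photo
    let keyShare: Data           // piece of the decryption secret, usable only past the threshold
}
```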
Again, from Gruber's article:
"Furthermore, one match isn’t enough to trigger any action. There’s a “threshold” — some number of matches against the CSAM database — that must be met. Apple isn’t saying what this threshold number is, but, for the sake of argument, let’s say that threshold is 10. With 10 or fewer matches, nothing happens, and nothing
can happen on Apple’s end. Only after 11 matches (threshold + 1) will Apple be alerted. Even then, someone at Apple will investigate, by examining the contents of the
safety vouchers that will accompany each photo in iCloud Photo Library. These vouchers are encrypted such that they can only be decrypted on the server side if threshold + 1 matches have been identified. From
Apple’s own description:
Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
Even if your account is — against those one in a trillion odds, if Apple’s math is correct — incorrectly flagged for exceeding the threshold, someone at Apple will examine the contents of the safety vouchers for those flagged images before reporting the incident to law enforcement. Apple is cryptographically only able to examine the safety vouchers for those images whose fingerprints matched items in the CSAM database. The vouchers include a “visual derivative” of the image — basically a low-res version of the image. If innocent photos are somehow wrongly flagged, Apple’s reviewers should notice."
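To make the "threshold secret sharing" part concrete, here is a toy sketch of a k-of-n scheme (Shamir's secret sharing over a small prime field). Apple hasn't published the exact construction, so treat the specifics - Shamir, the field size, the share format - as illustrative assumptions; the point is only the property the quote relies on: threshold + 1 shares reconstruct the decryption secret, while threshold or fewer reveal nothing about it.

```swift
// Toy k-of-n threshold secret sharing (Shamir's scheme over a small prime field).
// Illustrative only: the choice of Shamir, the field size and the share format
// are assumptions, not Apple's actual design.

let p: Int64 = 2_147_483_647   // prime modulus (2^31 - 1), small enough to avoid Int64 overflow

func mod(_ x: Int64) -> Int64 { ((x % p) + p) % p }

// Modular exponentiation; inverses come from Fermat's little theorem (x^(p-2) mod p).
func powMod(_ base: Int64, _ exp: Int64) -> Int64 {
    var result: Int64 = 1
    var b = mod(base)
    var e = exp
    while e > 0 {
        if e & 1 == 1 { result = mod(result * b) }
        b = mod(b * b)
        e >>= 1
    }
    return result
}

func inverse(_ x: Int64) -> Int64 { powMod(x, p - 2) }

// Split `secret` into `n` shares so that any `k` of them reconstruct it, but
// `k - 1` or fewer reveal nothing: a random polynomial of degree k - 1 has
// `secret` as its constant term, and each share is one point on that polynomial.
func split(secret: Int64, k: Int, n: Int) -> [(x: Int64, y: Int64)] {
    var coeffs = [mod(secret)]
    for _ in 1..<k { coeffs.append(Int64.random(in: 0..<p)) }
    var shares: [(x: Int64, y: Int64)] = []
    for i in 1...n {
        let x = Int64(i)
        var y: Int64 = 0
        var xPow: Int64 = 1
        for c in coeffs {
            y = mod(y + c * xPow)
            xPow = mod(xPow * x)
        }
        shares.append((x: x, y: y))
    }
    return shares
}

// Lagrange interpolation at x = 0 recovers the polynomial's constant term (the secret).
func combine(_ shares: [(x: Int64, y: Int64)]) -> Int64 {
    var result: Int64 = 0
    for (i, si) in shares.enumerated() {
        var num: Int64 = 1
        var den: Int64 = 1
        for (j, sj) in shares.enumerated() where j != i {
            num = mod(num * mod(-sj.x))
            den = mod(den * mod(si.x - sj.x))
        }
        result = mod(result + mod(si.y * mod(num * inverse(den))))
    }
    return result
}

// With a threshold of 10, any 11 shares (threshold + 1) recover the secret;
// 10 or fewer give no information about it.
let secret: Int64 = 123_456_789
let shares = split(secret: secret, k: 11, n: 30)
print(combine(Array(shares.prefix(11))) == secret)   // true
print(combine(Array(shares.prefix(10))) == secret)   // false (with overwhelming probability)
```

As I read the quoted description, each matching image's voucher effectively contributes a share like this, so Apple's servers can only recover the key that unlocks the visual derivatives once more than the threshold number of matches has accumulated.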
Apple does not have the actual CSAM images - they cannot have them. Only NCMEC is allowed to hold the images themselves; what Apple gets is a database of fingerprints derived from them.
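In other words, the matching is done against a database of fingerprints, never against imagery. Here's a heavily simplified sketch of that idea; the real system additionally blinds the database and uses private set intersection so even the device can't read it or learn match results locally, and the toy hash and names below are made up purely for illustration:

```swift
// Toy perceptual "average hash" over raw grayscale pixel bytes: each bit records
// whether a pixel is brighter than the mean. Purely illustrative; NeuralHash is a
// learned embedding of image features, not anything like this.
func toyPerceptualHash(_ pixels: [UInt8]) -> [Bool] {
    guard !pixels.isEmpty else { return [] }
    let mean = pixels.map(Int.init).reduce(0, +) / pixels.count
    return pixels.map { Int($0) > mean }
}

// The database distributed for matching holds only fingerprints, never images.
struct FingerprintDatabase {
    let knownHashes: Set<[Bool]>

    func isMatch(pixels: [UInt8]) -> Bool {
        knownHashes.contains(toyPerceptualHash(pixels))
    }
}

let db = FingerprintDatabase(knownHashes: [toyPerceptualHash([10, 200, 30, 220])])
print(db.isMatch(pixels: [10, 200, 30, 220]))   // true
print(db.isMatch(pixels: [5, 5, 5, 5]))         // false
```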