Then what do they think it means? Certainly not some highly technical and pedantic definition of "scan" that excludes reading the data to calculate a hash. The phrase "Photos are not scanned. Hashes are generated...." is classic smoke and mirrors (hashes are generated, yes, by
scanning the photos... unless you assign some highly specific meaning to "scan").
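To make the semantic point concrete: any hash, cryptographic or perceptual, is computed by reading every byte of the image. A minimal Python sketch - SHA-256 stands in here for Apple's NeuralHash, which is a perceptual hash, not a cryptographic one, but the point is the same:

```python
import hashlib

def hash_photo(path: str) -> str:
    """Hash an image file. The entire file has to be read
    -- i.e. scanned -- to produce the hash."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read the photo chunk by chunk; every byte passes through.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Whatever definition of "scan" you prefer, the photo's data is read in full to generate the hash.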
Which would be a great supporting argument if it were combined with end-to-end encryption, so that once the photos are on the server Apple
couldn't decrypt them - but that doesn't seem to be the case, at least with iCloud Photos (iMessage, maybe...). Also, note that Apple are promising that matches will be reviewed by humans before reporting - so (at least if that process has any meaning) they
do have a mechanism for viewing the photos anyway.
You're trying to discredit the idea by personalising it. No, it's not likely that one of your personal photos will get onto the database... but the decision about what
does go in is in the hands of a third-party agency, who might (say) decide that some widely distributed internet meme photo is unacceptable... or even make a fat-finger error and add to the database a bunch of LOLCats images they were using for testing... Then you have to ask how robust and fair the process for dealing with suspected "hits" is going to be.
...plus, of course, nobody ever wrote malware that downloads porn to your devices, did they...?
It's also becoming clear that people think the "1 in 1 trillion chance of a false match" means that, if you get flagged, the chance that you're innocent is one in a trillion. See:
https://en.wikipedia.org/wiki/Prosecutor's_fallacy
There are about 1 billion iPhone users, and who knows how many thousands of images are in the CSAM database? False positives are probably going to happen, so the question is: do you really, really trust that they are going to be interpreted correctly?
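To put rough numbers on both points - the expected number of false positives and the prosecutor's fallacy - here is a back-of-the-envelope sketch. Every figure below (library size, base rate of offenders, detection rate, and the per-photo reading of the quoted "1 in 1 trillion") is an assumption for illustration, not a claim about Apple's actual system:

```python
# All numbers are illustrative assumptions; the exact meaning of Apple's
# "1 in 1 trillion" figure (per photo? per account per year?) is unclear.
users = 1_000_000_000            # roughly 1 billion iPhone users
photos_per_user = 1_000          # assumed average library size
p_flag_given_innocent = 1e-12    # the quoted false-match probability

# Expected number of innocent flags across the whole user base:
expected_false_positives = users * photos_per_user * p_flag_given_innocent
# ... roughly 1, even at these generous odds.

# The prosecutor's fallacy: P(flag | innocent) is not P(innocent | flag).
p_guilty = 1e-7                  # assumed base rate of actual offenders
p_flag_given_guilty = 0.9        # assumed detection rate

p_flag = (p_guilty * p_flag_given_guilty
          + (1 - p_guilty) * p_flag_given_innocent)
p_innocent_given_flag = (1 - p_guilty) * p_flag_given_innocent / p_flag
# With these assumptions, p_innocent_given_flag is on the order of 1e-5:
# still small, but about seven orders of magnitude larger than the
# headline one-in-a-trillion figure.
```

The gap between the two probabilities is the whole point of the fallacy: how likely a flagged person is to be innocent depends on the base rate of actual matches, not just on the false-match rate.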