This could have been a good thing, but a bunch of complainers who didn't even realize that Apple was already scanning their iCloud email for child porn suddenly went "but my privacy!"
Even ignoring the risks of this feature, it would not have been all that useful in accomplishing its desired goal. It would only be triggered when someone downloaded an image that was already listed in the database of known bad images.
As I understood it (and I did read Apple's overview document at the time) the scanning happened on the iDevice before upload - the whole idea being that you could still have end-to-end encryption and Apple still couldn't see your original pictures. All Apple got was a hash that they could compare against a list of hashes from known CSAM material.
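To make that concrete, here's a deliberately simplified sketch of the flow as I understood it. The function names, the SHA-256 stand-in and the plain set lookup are my own illustration, not Apple's design, which used a perceptual hash (NeuralHash) wrapped in cryptographic "safety vouchers" so that individual results weren't supposed to be visible below a match threshold:

    import hashlib

    def image_fingerprint(image_bytes):
        # Stand-in only: the real system used a perceptual hash (NeuralHash),
        # not an exact cryptographic hash like SHA-256.
        return hashlib.sha256(image_bytes).hexdigest()

    def scan_before_upload(image_bytes, csam_blacklist):
        # Runs on the device before the photo goes to iCloud. Only the
        # fingerprint and the match result leave the device - not the photo.
        fp = image_fingerprint(image_bytes)
        return {"fingerprint": fp, "matched": fp in csam_blacklist}

    # The server only ever sees the report, never the original image.
    blacklist = {image_fingerprint(b"known-bad-image-bytes")}
    print(scan_before_upload(b"holiday-photo-bytes", blacklist))    # matched: False
    print(scan_before_upload(b"known-bad-image-bytes", blacklist))  # matched: True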
The first problem was simply crossing the line and setting foot on a slippery slope by scanning before the images were uploaded and sharing the results with Apple. That makes it very easy to take the next step and expand the scanning to all images on the device.
The second problem was that Apple were reliant on "the authorities" to supply the database of "CSAM hashes", and actually had no way of knowing what was being declared "bad".
Then there was a lot of smoke and mirrors about what a "hash" meant. "Hash" is a very general term in computing. One type of hash uses a particular, well-defined algorithm to generate an as-good-as-unique ID for a particular set of data, one that changes in response to the slightest alteration to the data: that's the sort of hash you use for cryptography and code signing. That would be useless for CSAM detection - change a couple of pixels in the image, let alone crop, re-size or adjust the colors, and the hash would no longer match the "bad" fingerprint.

The sort of hash we're talking about here is designed to produce the same hash for similar images, so it won't be fooled by cropping, resampling, recoloring etc. It is usually generated using machine-learning-type techniques, which make it difficult to explain which features of the image are leading to the result (not impossible - there are analysis techniques - but not something you'd want to explain to a jury or a CEO). With that comes the inevitability of false positives. That's the sort of hash used for CSAM detection, and Apple's report was full of praise for how it could defeat the wily paedophiles who tried cropping and posterising their wares.
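To see the difference in miniature (this is a toy: an 8x8 grid of grey pixels and a crude "average hash", not the actual NeuralHash algorithm), nudge a couple of pixels and compare an exact cryptographic hash with a perceptual one:

    import hashlib

    def crypto_hash(pixels):
        # Exact hash: any change at all gives a completely different digest.
        return hashlib.sha256(bytes(p for row in pixels for p in row)).hexdigest()[:16]

    def average_hash(pixels):
        # Toy perceptual hash: one bit per pixel, set if the pixel is brighter
        # than the image's mean. Small edits barely change the result.
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        return "".join("1" if p > mean else "0" for p in flat)

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # An 8x8 greyscale "image" and a copy with two pixels nudged slightly.
    original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
    tweaked = [row[:] for row in original]
    tweaked[0][0] += 3
    tweaked[7][7] -= 3

    print(crypto_hash(original), crypto_hash(tweaked))             # totally different
    print(hamming(average_hash(original), average_hash(tweaked)))  # 0 - unchanged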
Trouble is, their solution to the false positive problem seemed to be pure "Prosecutor's fallacy" - one match = false alarm, ten matches = porno filth! - i.e. it assumed that false positives were random and uncorrelated, whereas in reality one individual's camera roll will contain dozens of photos of the same subjects or places - possibly including whatever triggered the false match.
If you dug down into the really technical papers, would it turn out that they'd thought of that possibility and either investigated and refuted it, or found a clever solution? Maybe, but it's a pretty crucial point, and a solution or refutation would be something to sing about in the executive summary.
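A back-of-envelope calculation shows why the correlation matters. Every number below is made up purely for illustration - a 1-in-10,000 chance that any given scene falsely matches, a 10-match reporting threshold, a 20,000-photo library shot in bursts of 25 near-duplicates - but the shape of the result is the point:

    from math import comb

    FP_RATE = 1e-4     # assumed chance that one unique scene falsely matches
    THRESHOLD = 10     # assumed number of matches before a user gets flagged
    N_PHOTOS = 20_000  # photos in the library
    BURST = 25         # near-duplicate shots of the same subject or place

    # Independent model: every photo is its own coin flip, so the number of
    # false matches is binomial. P(at least THRESHOLD of them):
    p_indep = 1 - sum(
        comb(N_PHOTOS, k) * FP_RATE**k * (1 - FP_RATE) ** (N_PHOTOS - k)
        for k in range(THRESHOLD)
    )

    # Correlated model: a false match on one scene hits the whole burst of
    # near-duplicates (they all hash alike), so a single unlucky scene is
    # already enough to cross the threshold.
    scenes = N_PHOTOS // BURST
    p_corr = 1 - (1 - FP_RATE) ** scenes

    print(f"independent photos: {p_indep:.6%} chance of being flagged")
    print(f"correlated bursts : {p_corr:.2%} chance of being flagged")

With those made-up numbers the burst model flags an innocent library well over a thousand times more often than the independent model, simply because near-duplicate photos don't fail independently.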
Almost the only way of testing a system like that and finding out the true false positive rate would be a massive trial on real-life data with human confirmation of each match (and a comprehensive after-care program for the poor so-and-sos doing the comparisons). Nothing else would be representative.
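For a sense of the scale involved (the target rate here is my own assumption, purely for illustration): the statisticians' "rule of three" says that if you see zero false positives in n independent trials, the 95% upper bound on the true rate is roughly 3/n, so:

    TARGET_RATE = 1e-6          # assumed goal: under 1 false match per million photos
    n_needed = 3 / TARGET_RATE  # photos needed with zero observed false matches
    print(f"~{n_needed:,.0f} representative photos, every alarm human-checked")
    # -> ~3,000,000 photos, before you even start worrying about correlated bursts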
There was also "reassurance" that matches would be checked by Apple staff before any action was taken - but according to their own description of the system, the only thing Apple could possibly check was that, yes, the hash sent by your phone (derived from an image they couldn't see) matched the blacklist (generated by the authorities from images that it would be illegal for Apple to see).