I am deeply rooted in the AI/ML world, and Apple's language suggests the system uses embedding layers (see word2vec and word embeddings) to assess semantic and perceptual likeness. This is not simply face detection or object detection; it is looking for, and only for, images that are near-identical matches.
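To make the idea concrete, here is a toy sketch of perceptual matching. This is NOT Apple's NeuralHash (which uses a neural embedding); it is a simple "average hash" written for illustration only, but the principle is the same: visually similar images map to nearby fingerprints, and matching is a distance comparison rather than exact file identity.

```python
# Toy perceptual matching sketch. Illustrative only, not Apple's system.

def average_hash(pixels):
    """pixels: flat list of grayscale values (e.g. a downscaled thumbnail).
    Returns a bit list: 1 where a pixel is brighter than the image mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes (smaller = more alike)."""
    return sum(x != y for x, y in zip(a, b))

original  = [10, 200, 30, 220, 15, 210, 25, 205]   # hypothetical image
near_copy = [12, 198, 33, 219, 14, 212, 22, 204]   # slight re-encode of it
unrelated = [100, 101, 99, 102, 98, 100, 101, 99]  # a different image

h0, h1, h2 = (average_hash(p) for p in (original, near_copy, unrelated))
print(hamming(h0, h1))  # → 0  (near-duplicate survives small pixel changes)
print(hamming(h0, h2))  # → 3  (unrelated image lands further away)
```

A threshold on that distance is what turns "similar fingerprint" into a match; real systems replace the crude brightness hash with a learned embedding, which is exactly why the matching target can, in principle, be anything the hash database is built from.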
And the fact that Apple suggests it can be this specific via our hardware also means it could be equally specific about any target it chose, whether law-breaking, politics, commercial interest, or anything else, because the mechanism bypasses Apple's own System Integrity Protection; even Apple employees have recognised the potential for abuse.
So many posts ignore the actual issue. The issue is not the technicality of the CSAM implementation, nor Apple's right to scan on its own servers; if the scanning stayed there, customers' own hardware would remain sacrosanct.
This is about SURVEILLANCE. Throughout history, various excuses have been used to reduce privacy; inevitably they turn out to be just that, excuses, and history gives more than enough examples of outcomes that ended in subjugation and even extermination.
This, for me, is primarily about Apple using equipment WE OWN, processing power we have paid for, and electricity we pay for, to run what amounts to an iCloud pre-check, but which in any real sense is SURVEILLANCE.
There is no logical reason why Apple could not emulate others by running such checks on its own servers, letting users decide whether or not to use iCloud. Instead Apple has gone an insidious route, far more damaging than those it has criticised, such as Facebook et al.
The safety of Apple's software revolves around System Integrity Protection, but Apple gives itself a free pass on that, which allows it to adapt or add any software it chooses: in this case surveillance, once you strip out the emotive excuse for it, an excuse I doubt will assist child safety anyway.
So all the assurances that this was designed ONLY for this purpose are toady words. THIS may indeed have been DESIGNED only for this, but the problem is that it sits on YOUR DEVICES, not Apple's.
Once that software is embedded in the operating system, it is not correct to suggest it cannot be modified to include any surveillance Apple wanted to initiate, and I can't recall Apple ever asking its one billion users before this farce.
Once it is there, any update also bypasses System Integrity Protection, so it is comparatively easy to modify the criteria it applies; if anyone believes it will not evolve, it is worth perusing the history books.
In some cases existing users will not even be able to use their equipment for the purposes they bought it for: with that software on it, many organisations could not fulfil their remit to customers, or even government policy on security and privacy. Social services and agencies involved in child-abuse work now often use iPads in the field, carrying photos and information that should be not just confidential but beyond question. An operating system capable of on-device iCloud hash pre-checks may make that equipment untenable, whereas hash checks performed on iCloud's own servers leave customers' machines secure and sacrosanct, as they should be, while still serving the stated CSAM intention (if it isn't just an excuse to get this software in). In any case, after Apple's back-foot attempts at explaining the surveillance, no paedophile is likely ever to use iCloud for photos, leaving innocent customers with embedded software on their own systems for no technical reason at all.
I don't want to hear about how targeted this software is, how anonymous, how well designed, how state of the art; history is littered with such excuses being a precursor to something rather more than was originally suggested. The fact is, I do not want the operating system on my machine to be subject to censorship by Apple. I can do that myself! I don't want to tell my customers or my government that I can't guarantee the integrity of the equipment I own.
Have the checks conducted on iCloud, by iCloud, not forced onto us as part of the operating system, where Apple does not have to be mindful of System Integrity Protection and there is no protection against what Apple could do to modify this software. That, I suggest, is why so many Apple employees have expressed their concern too; they likely have far more insight into how it is constructed, and how easily it could be modified, than the public statements reveal.
Just imagine social services, crime-fighting agencies, government agencies, and their contractors, working day in and day out to safeguard children, being subjected to software on their own devices, in many cases bought specifically because of Apple's public stance on privacy and security.
So all the arguments of technicality are obfuscation. The substantive issue is Apple expecting one billion users to download, onto their own hardware, software capable of much further modification. It is surveillance, and dressing it up in technicality, or asking "why would anyone object to fighting pornography?", are erroneous arguments, which I suspect is why child safety was chosen as the banner. On some occasions it is true that Apple, like others, plays a helpful role in fighting crime, but not in the manner it intends now.
Some will no doubt say you don't have to install the latest software. But you do: those working on government or security issues, for example, would find their business insurance would not indemnify them if they had not installed the latest software, often precisely because of the security updates it contains.
My equipment becomes redundant if Apple does what it suggests, and so would that of many customers who use Apple hardware in multiple situations; yet if the checks were iCloud-based rather than on OUR hardware, that would not be the case.
User hardware should be sacrosanct from this, especially when Apple made such a thing of protecting our privacy with features such as managing website data et al., features Facebook and others kicked off about, and where Apple's response looks decidedly crap now.