Actually, that's what I would expect, yes: for anything flagged by their system and escalated beyond a certain level. Maybe they're the same contractors that train your voice assistant, too.
But this is what I don't quite understand: no iCloud photo is ever inspected for its content, by a human or by a computer, so no image from any iCloud account is going to be "escalated beyond a certain level" based on what it depicts. Any so-called "escalated" image would have to be a known CSAM image that had already been flagged and registered by an outside organization, i.e., escalated by a totally separate body that had identified it as CSAM. Unless, when you say "flagged by their system", you're referring to whoever is registering the CSAM images, not Apple. Is that what you mean?
Otherwise, in order for such a mechanism to be abused, a specific image would have to be identified and flagged by an outside body, and a given iCloud account would have to contain the exact same image. Not an image like it, i.e., not a picture of similar content, or even of the same person, but literally the same image. Now, could this be abused? Sure. For example, you could have a selfie on your phone, or maybe a photo of some incident or object; that could be posted somewhere online, and then some third party could flag that image for questionable reasons. Totally possible, and in that sense, yes, the privacy concerns are valid.
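Just to make the "literally the same image" point concrete, here's a rough sketch of what an exact-match check amounts to. It's purely illustrative: I'm using a plain SHA-256 digest as a stand-in for whatever fingerprinting Apple actually uses (their scheme reportedly involves a perceptual hash and cryptographic blinding, which this doesn't attempt to model), and the registered set below is made up. The point is just that an image is only flagged if its fingerprint is already in the externally registered database.

    import Foundation
    import CryptoKit

    // Hypothetical stand-in for the externally registered fingerprint database.
    // (The entry below is simply the SHA-256 of empty data, used as a dummy value.)
    let registeredFingerprints: Set<String> = [
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    ]

    // Fingerprint an image's raw bytes with SHA-256 (a simplification: any
    // re-encode, crop, or edit yields a completely different digest).
    func fingerprint(of imageData: Data) -> String {
        SHA256.hash(data: imageData)
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // An upload is only "flagged" if its digest exactly matches a registered one.
    func isFlagged(_ imageData: Data) -> Bool {
        registeredFingerprints.contains(fingerprint(of: imageData))
    }

    print(isFlagged(Data()))                  // true: matches the dummy entry
    print(isFlagged(Data("my selfie".utf8)))  // false: anything not in the set never matches

Under this simplified model, the only lever an outside party has is deciding which images go into that registered set, which is exactly where the good-faith concern comes in.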
However, I guess what I find a bit startling is that, except when strong encryption is used, there's always an implicit degree of trust involved in storing anything online. Literally any piece of unencrypted data you've stored anywhere is insecure in precisely the same way as what we're talking about here: all that's required for a privacy violation is that some organization fails to do what it claims to do. So it seems to me that's really all we're talking about. The only way this gets abused is if the organization registering the CSAM images no longer acts in good faith, or if the access and cooperation Apple is providing gets extended beyond the issue of CSAM images (i.e., Apple no longer acts in good faith). But again, the same can be said for absolutely every piece of unencrypted data stored online with any organization or service, no?