Actually that's what I would expect yes - for anything flagged by their system and escalated beyond a certain level. Maybe they're the same contractors that train your voice assistant, too. ;)
But this is what I don't quite understand... No iCloud photo is ever seen, either by a human or a computer, so no image from any iCloud account is going to be "escalated beyond a certain level" based on content. Any so-called "escalated" image would have to be a known CSAM image that had been flagged and registered by an outside organization—i.e., escalated by a totally separate body which had identified it as a CSAM image. Unless, when you say "flagged by their system", you're referring to whoever is registering the CSAM images, not Apple. Is that what you mean?

Otherwise, in order for such a mechanism to be abused, a specific image would have to be identified/flagged by an outside body, and a given iCloud account would have to contain the exact same image. Not an image like it—i.e., not a picture of similar content, or even of the same person, but literally the same image. Now, could this be abused? Sure. For example, you could have a selfie on your phone, or maybe a photo of some incident or object, and that could be posted somewhere online, then some third party could flag that image for questionable reasons. Totally possible, and in that sense, yes, the privacy concerns are valid.
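To make the "literally the same image" point concrete, here is a minimal sketch of exact-hash matching in Python. Everything in it is an assumption for illustration only (a plain SHA-256 over the file's bytes and a made-up known_hashes set); Apple's actual system uses its own image-specific hashing scheme, not this.

```python
import hashlib

# Hypothetical fingerprint database for images that an outside organization
# has already identified and registered (as described above).
known_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_fingerprint(path: str) -> str:
    """Return a SHA-256 fingerprint of the file's exact bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_flagged(path: str) -> bool:
    # Matches only if the file is byte-for-byte identical to a registered
    # image; a personal photo that was never registered isn't in the set.
    return file_fingerprint(path) in known_hashes
```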

However, I guess what I find a bit startling is that, except when strong encryption is used, there's always an implicit degree of trust involved in storing anything online. Literally any piece of unencrypted data that you've stored anywhere is insecure in precisely the same way as what we're talking about here—i.e., all that is required for a privacy violation to occur is that some organization fails to do what they're claiming to do. So it seems to me that's really all we're talking about here. The only way this gets abused is if the organization registering the CSAM images no longer acts in good faith, or the access and cooperation Apple is providing is extended beyond the issue of CSAM images (i.e., Apple no longer acts in good faith). But again, the same can be said for absolutely every instance of any piece of unencrypted data stored online with any organization or service, no?
 
I haven’t seen any discussion about regular porn regarding the new announcements.

If someone has regular porn, like self nudes, pro pornstars or amateurs from OnlyFans, in their library, how does the system differentiate and not throw false positives? Then there’s the potential of having one’s iCloud completely disabled, basically bricking their phone??

I’m fine with the rooting out child porn aspect of it, although it feels like they’re taking a lot of liberties with personal privacy in order to achieve this. Like plenty of people have personal nudes of themselves or normal porn on their phones, so Apple’s then gonna be viewing these private pics if their system throws a false positive. Surely someone who’s committing child porn crimes could just disable their iCloud library and be on their way.
 
If someone has regular porn, like self nudes, pro pornstars or amateurs from OnlyFans, in their library, how does the system differentiate and not throw false positives?
Don't worry, Apple says it won't throw false positives. You can trust them, they have your privacy in mind! I mean except the part where they scan your photos in the first place.
 
Under the false disguise of “privacy” iOS 15 introduces mass surveillance to billions of people all over the world. The sad thing is most people will blindly walk into this trap, because they don’t understand what’s really going on technically and/or have been brainwashed thinking Apple really cares about their privacy.

This iCloud “Private” Relay is simply another backdoor. It effectively routes all web traffic through Apple and their “trusted” partners, preventing websites from tracking your web activity, while at the same time exposing it to Apple and their “partners”, whose identities they refuse to disclose. The unverifiable claim by Apple that all data is encrypted so they can’t access it doesn’t change a thing. It’s up to Apple to apply encryption, meaning they can choose not to at any moment and without anyone knowing. On top of this, it all happens without the user being aware of it, since this backdoor is enabled by default.
 
But this is what I don't quite understand... No iCloud photo is ever seen, either by a human or a computer, so no image from any iCloud account is going to be "escalated beyond a certain level" based on content. Any so-called "escalated" image would have to be a known CSAM image that had been flagged and registered by an outside organization—i.e., escalated by a totally separate body which had identified it as a CSAM image. Unless, when you say "flagged by their system", you're referring to whoever is registering the CSAM images, not Apple. Is that what you mean?

Otherwise, in order for such a mechanism to be abused, a specific image would have to be identified/flagged by an outside body, and a given iCloud account would have to contain the exact same image. Not an image like it—i.e., not a picture of similar content, or even of the same person, but literally the same image. Now, could this be abused? Sure. For example, you could have a selfie on your phone, or maybe a photo of some incident or object, and that could be posted somewhere online, then some third party could flag that image for questionable reasons. Totally possible, and in that sense, yes, the privacy concerns are valid.
It is stated that 'Apple can't access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account' (copy-pasted from the Ars Technica article). That suggests to me that once the account has been flagged they will indeed be able to view the offending images and investigate what you keep on there.

Doesn't mean there is going to be an army of contractors idly browsing people's photo collections or the like, which seems to be what some people are getting at.

As for how similar these pictures would have to be to trigger the system - who can say? Wouldn't be much of a technology if it only worked on the exact same file used for training. You'd surely want to be able to identify photos from the same shoot as your known image.
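Reading the quoted threshold language as plain logic, a toy sketch might look like the following. The threshold value, the voucher strings and the data shapes are all invented for illustration and are not Apple's published design.

```python
from collections import defaultdict

# Invented threshold, purely to picture the quoted behaviour: matches
# accumulate per account, but nothing becomes reviewable below the limit.
MATCH_THRESHOLD = 30

account_matches: defaultdict[str, list[str]] = defaultdict(list)

def record_match(account: str, voucher: str) -> None:
    """Note another matched item for this account."""
    account_matches[account].append(voucher)

def reviewable_matches(account: str) -> list[str]:
    """Below the threshold the reviewer gets nothing at all; above it,
    only the matched items (not the rest of the library) are exposed."""
    matches = account_matches[account]
    return matches if len(matches) > MATCH_THRESHOLD else []
```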
 
It is stated that 'Apple can't access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account' (copy-pasted from the Ars Technica article). That suggests to me that once the account has been flagged they will indeed be able to view the offending images and investigate what you keep on there.

Doesn't mean there is going to be an army of contractors idly browsing people's photo collections or the like, which seems to be what some people are getting at.
Yeah, exactly.
As for how similar these pictures would have to be to trigger the system - who can say? Wouldn't be much of a technology if it only worked on the exact same file used for training. You'd surely want to be able to identify photos from the same shoot as your known image.
Well, they mention checking hashes, which really does mean literally checking the same exact image—the hash would be an encoding of the exact image; a kind of fingerprint—not an approximation. If it's only an approximation, a hash doesn't really work, as you'd have collisions all over the place. (Of course, collisions happen in hashing, but the intention is certainly to avoid them.)
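As a quick illustration of why a hash acts like a fingerprint of the exact bytes rather than an approximation, here's a small Python demonstration using plain SHA-256 (whatever scheme Apple actually uses is a separate, image-aware design, so this only illustrates the general idea):

```python
import hashlib

original = b"...the exact bytes of some image file..."
tweaked = original + b"\x00"   # alter the file by a single byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())
# The two digests look completely unrelated: a cryptographic hash identifies
# the exact input rather than measuring similarity. Collisions are possible
# in principle, but the algorithm is designed to make them vanishingly rare.
```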
 
If you haven't already complained to Apple about the iCloud backdoor debacle, please do. The only way they're going to reverse this idiotic move is if they get a ton of complaints and a ton of bad press about it.
Also by disabling iCloud Photos and by unsubscribing from iCloud. They run stats and will notice users jumping off since the announcements.
 
I am not sure if it has been stated here, but seeing as they are just scanning for hashes of known images, wouldn't this simply find people who saved such images from the internet (who should not have them in the first place and would probably already catch the attention of groups like the FBI), rather than the people out there actually committing the crimes? Newly produced images would have a never-before-seen hash, so this would actually solve nothing in that area and would only be a privacy nightmare.
 
Just updated to iOS 15 and trying to understand mail privacy protection.

How does this keep the sender of emails with remote images from knowing whether a particular email address is valid?
Whether Apple loads the images on your device or on their relay servers, a remote image is still being loaded, phoning home that that particular email address is valid and that more can be sent.
I’ve sent bulk emails, and we didn’t use single hidden pixels but instead created unique main-graphic URLs for each email address. So if that graphic was loaded from anywhere, that told us the email was valid.
Yes, we might not see where they are because of IP masking, but we were more interested in knowing it was a valid email.
When they clicked through to us we could gather info unless they used a VPN, but some VPNs don’t route images through their servers, so our image server would get the IP address.
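To make that technique concrete, here is a rough sketch of how a sender might hand each recipient a unique image URL and treat any request for it as proof the address is live. The host name, token scheme and function names are all made up for illustration:

```python
import secrets

# Sender-side map of tracking token -> recipient address (names invented).
tokens: dict[str, str] = {}

def tracking_image_url(email: str) -> str:
    """Give each recipient their own 'main graphic' URL, as described above."""
    token = secrets.token_urlsafe(16)
    tokens[token] = email
    return f"https://img.example.com/hero/{token}.png"  # hypothetical host

def on_image_request(token: str, client_ip: str) -> None:
    # Any fetch of this URL tells the sender the address is live. Apple's
    # proxy hides the real IP and timing, but the per-address URL still gets
    # loaded unless remote images are blocked outright.
    email = tokens.get(token)
    if email:
        print(f"{email} confirmed valid (request seen from {client_ip})")
```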

For now I’m leaving this off and blocking remote images. :rolleyes:
 