That seems to be the salient point. The practical implications of this specific issue are almost non-existent, but the principle of installing software on your device to scan your files on behalf of a third party - with promises not to expand its use - crosses an important line in the sand.
Photos are not scanned. Hashes are generated and compared to CSAM hashes.
 
...and only if you have iCloud turned on... ;)
You would have to have multiple matches against a database of known CSAM before anything happens, and even then, a human determines whether to notify authorities, and then the police determine whether to investigate further. The chances of having enough false positives for anyone to even think about looking at your photos are practically zero (though obviously not impossible). The chances of being prosecuted if you're not doing anything illegal with CSAM are zero.
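To picture that gate: nothing is even eligible for review until a whole threshold of matches accumulates. Here's a minimal sketch of the counting logic in Python; the threshold value is illustrative, and the real system enforces this cryptographically with threshold secret sharing rather than a plain counter:

```python
# Minimal sketch of the "multiple matches before anything happens" gate.
# Illustrative only: the real system enforces the threshold with
# cryptographic secret sharing, not a simple counter, and the actual
# threshold value here is an assumption.
MATCH_THRESHOLD = 30

def review_possible(matched_photo_ids: set[str]) -> bool:
    """Human review only becomes possible once distinct matches
    cross the threshold; below it, no one sees anything at all."""
    return len(matched_photo_ids) >= MATCH_THRESHOLD
```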

Can't wait to hear all of these innocent people that fall into that 1 in a trillion category.
 
So you trusted all of their marketing before, but now you suddenly don't?

This is what I don't get. People SUDDENLY think Apple is evil for doing something they've been doing for a while now, and for some reason they expect their entire phone to be scanning and uploading data, even though Apple has clearly said it's only for CSAM, it's only for iCloud Photos, and if iCloud Photos is turned off, there's no scanning done whatsoever. I don't know how much clearer they can be about it.

NO. That's the point. You don't trust. You verify. Why? Because history has shown us that any time a new technology can be exploited, it will be, however altruistic the original intent is. This is an entirely closed loop; there is no way to verify that the CSAM hashes they check against are actually that.
 
NO. That's the point. You don't trust. You verify. Why? Because history has shown us that any time a new technology can be exploited, it will be, however altruistic the original intent is. This is an entirely closed loop; there is no way to verify that the CSAM hashes they check against are actually that.
It's called "trust but verify," not "mistrust and verify."
 
NO. That's the point. You don't trust. You verify. Why? Because history has shown us that any time a new technology can be exploited, it will be, however altruistic the original intent is. This is an entirely closed loop; there is no way to verify that the CSAM hashes they check against are actually that.
Apple doesn't even know what the hashes actually look like either. The only organization that knows is the organization that holds all of these known CP photos.
 
Still digging into this subject.
Here is a video that does a good job of looking at all the angles and options, including disinformation.
Good info no matter what side of the “argument” you are on.
It covers both the CSAM and Messages functions.

Includes actual screenshots on the iPhone.

 
Still digging into this subject.
Here is a video that does a good job of looking at all the angles and options, including disinformation.

Can't trust Apple, but some dude on YouTube... okay.

Edit: I watched the video and I liked it. He basically explained exactly what Apple already explained in their white papers.
 
Can't trust Apple, but some dude on YouTube... okay.

Edit: I watched the video and I liked it. He basically explained exactly what Apple already explained in their white papers.
Whatever. :rolleyes:

I posted the link for folks who are looking for a good explanation with visuals and a Q&A.

You seem unable to understand that I will keep digging until I fully understand the whole kit and caboodle. I'm not there yet, and this video was not a silver bullet.
 
IMO ….

I have been reading, listening, watching, and researching on this issue. I have learned a lot. There is a lot of misinformation out there; however, there are some folks and groups doing their best to provide factual, well-informed responses and explanations of these “new” Apple features. I find myself wanting to think Apple is doing the right thing, but their lack of explanation and critical details (they have reasons for it) leaves the door open for not believing they are being completely “open and truthful”.

End of day I am stuck between what Apple states and what the EFF states. Caught in the middle, as it were, and still very concerned about this step Apple has taken.

As stated on the https://daringfireball.net/2021/08/apple_child_safety_initiatives_slippery_slope webpage:
“But the “if” in “if these features work as described and only as described” is the rub. That “if” is the whole ballgame. If you discard alarmism from critics of this initiative who clearly do not understand how the features work, you’re still left with completely legitimate concerns from trustworthy experts about how the features could be abused or misused in the future.”

I keep coming back to the San Bernardino terror event and how hard Apple pushed back against the FBI over opening up their iPhones. This announcement showed law enforcement and governments that Apple now has a way to flag certain content on a phone while maintaining its encryption. Apple previously argued to the authorities that encryption prevented it from retrieving certain data.
That is now no longer the case.

I can understand people's outrage. For now, I'll watch and do my best to keep myself informed.
 
I keep coming back to the San Bernardino terror event and how hard Apple pushed back against the FBI over opening up their iPhones. This announcement showed law enforcement and governments that Apple now has a way to flag certain content on a phone while maintaining its encryption. Apple previously argued to the authorities that encryption prevented it from retrieving certain data.
That is now no longer the case.

I now believe this is what it's really all about.

With a way to scan content on devices, before the E2EE upload-to-iCloud stage, they will be able to silently fulfill surveillance requests made by law enforcement and other agencies and interested parties (governments, corporations, etc.), and still tout, in the future, full E2EE.
 
Photos are not scanned. Hashes are generated and compared to CSAM hashes.
The practical upshot is that the images are searched to detect any that match a database of known CSAM images. Most people would understand that as “scanning for offending images”. Using hashes is a great method of comparing two things without having a copy of both, and isn’t as scary as some so-called AI trying to guess if it’s a nude picture, but the precise method doesn’t change the basic fact that your images are being searched for known bad ‘uns.

The hashes aren’t “generated“ by the magic hash fairy, they’re calculated by reading the data in your images.

As I said earlier, this isn’t a practical problem, it’s a fine line being crossed by moving the search to your own hardware.
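For anyone who wants to see what "hashes are generated and compared" looks like in practice, here's a minimal sketch in Python. One big caveat: SHA-256 below is just a stand-in; Apple's NeuralHash is a perceptual hash designed to survive resizing and recompression, whereas a cryptographic hash like this only matches byte-identical files.

```python
# Minimal sketch of hash-based matching, assuming SHA-256 as a stand-in
# for Apple's perceptual NeuralHash (which tolerates resizing and
# recompression; SHA-256 does not). The point: only digests are
# compared against the known list, never the pixels themselves.
import hashlib

def image_hash(path: str) -> str:
    # "Generating a hash" still means reading every byte of the image.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def matches_known_database(path: str, known_hashes: set[str]) -> bool:
    return image_hash(path) in known_hashes
```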
 
Still digging into this subject.
Here is a video that does a good job of looking at all the angles and options, including disinformation.
Good info no matter what side of the “argument” you are on.
It covers both the CSAM and Messages functions.

Includes actual screenshots on the iPhone.

Rene did a good job with that video.
 
...and only if you have iCloud turned on... ;)
Which the iPhone set-up process actively encourages people to do, including (last time I did it) telling you about all the cool stuff you’ll lose if you don’t.

Can't wait to hear all of these innocent people that fall into that 1 in a trillion category.
I wonder how many photos are uploaded to iCloud in a year? If you’re checking millions of photos a day, then that “1 in a trillion” rapidly starts to shed zeros, which is dangerous if people have the “1 in a trillion” figure stuck in their head. On the scale at which Apple operates, they probably will have to plan for false positives and make sure the human checking is fair and effective.

Hashing probably is sufficiently reliable - the problem I’d worry about is innocent images mistakenly getting into the CSAM databases, especially if Apple’s checkers aren’t allowed to challenge an image’s inclusion. That’s even before getting on to more paranoid theories about deliberately expanding the criteria.
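To put rough numbers on that (every volume below is a guess for illustration; only the 1-in-a-trillion rate is Apple's claimed figure, which they state per account per year):

```python
# Two readings of "1 in a trillion", worked through with assumed volumes.
rate = 1e-12  # Apple's claimed figure

# Reading 1: per account per year (Apple's stated meaning).
accounts = 1_000_000_000            # assumed ~1 billion iCloud accounts
print(accounts * rate)              # 0.001 -> ~1 falsely flagged account per 1,000 years

# Reading 2: per photo checked (the reading that "sheds zeros").
photos_per_day = 1_000_000_000      # assumed upload volume, pure guess
print(photos_per_day * 365 * rate)  # ~0.365 expected false matches per year,
                                    # across all users combined
```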
 
Are you serious?

Are you? You cannot with any degree of certainty tell us that the database contains ONLY what they say it does, because no one has verified it independently. We live in a world with federal law enforcement doing their absolute best to end access to encryption, secret courts handing out -thousands- of secret warrants, and cops taking it upon themselves to use facial recognition software that runs on their phones. Are you really seriously saying that the idea of a court ordering the NCMEC to add other images to their database is a step too far to believe? Really?
 
The practical upshot is that the images are searched to detect any that match a database of known CSAM images. Most people would understand that as “scanning for offending images”. Using hashes is a great method of comparing two things without having a copy of both, and isn’t as scary as some so-called AI trying to guess if it’s a nude picture, but the precise method doesn’t change the basic fact that your images are being searched for known bad ‘uns.

The hashes aren’t “generated“ by the magic hash fairy, they’re calculated by reading the data in your images.

As I said earlier, this isn’t a practical problem, it’s a fine line being crossed by moving the search to your own hardware.
I don’t think most people understand that as the meaning of scan in this context. Many people in this thread with all the information at their fingertips are still repeating disinformation about the process.

Personally I am more comfortable with a hash being generated from my photo on my device than I am with my photo being unencrypted and hashed on Apple’s server.
 
I don’t think most people understand that as the meaning of scan in this context. Many people in this thread with all the information at their fingertips are still repeating disinformation about the process.

Personally I am more comfortable with a hash being generated from my photo on my device than I am with my photo being unencrypted and hashed on Apple’s server.
They no longer have to decrypt them to search.
But then, Apple doesn't search today except, likely, via subpoena.
 
Are you? You cannot with any degree of certainty tell us that the database contains ONLY what they say it does, because no one has verified it independently. We live in a world with federal law enforcement doing their absolute best to end access to encryption, secret courts handing out -thousands- of secret warrants, and cops taking it upon themselves to use facial recognition software that runs on their phones. Are you really seriously saying that the idea of a court ordering the NCMEC to add other images to their database is a step too far to believe? Really?
Let me get this straight...you think someone is going to steal YOUR innocent picture of, let's say, a baby in a tub...and then somehow get it uploaded into the database of horrible child pornography, just so they can hope that you have that pic on your device AND iCloud turned on, so it can be tagged as an "inappropriate image??"

And then, they would have to do this multiple times with other images just so Apple would be forced to do a review of your account?

Of course, they know you have these types of images on your phone....and have some way of hacking into it, assuming you didn't post these online somewhere for everyone to "steal"...AND they are able to get these uploaded into a database that has been maintained for decades without any proof of this ever happening...all so Apple can do a personal investigation of your account...and do what? Turn you in to the authorities for having innocent pictures of your own kids??

You must be a very important person.

Or maybe you are concerned about the innocent journalist who the bad government wants to discredit because they wrote a horrible story about their dictator?? Guess what, it would be easier for them to just run them over with a car and not get caught, instead of roping Apple into their nefarious scheme.
 
Social justice is all fun and games until you're not in agreement with the cause. Apple has already doubled down on this, and like with any of the causes they support, they go at it 110%, with or without you.

I personally don't care for it. Call me a conspiracy theorist or whatever, but it opens the door for Apple to push more crap on you. Next they'll be doing what Instagram did with the sensitivity filter. Oh, and remember, if you don't agree with it you must be some sort of creep who just wants kiddie porn to spread rampantly, or something.

Emotionally baited arguments like the one over CSAM are designed to make you agree and comply unless you want to be labeled a "monster". Same goes with arguments surrounding vaccinations, mental health, etc. Debating the facts over whether something logically makes sense and works is now subordinate to "you just want [insert ridiculous-sounding disaster like everyone dying of covid] you *******!" if you dare even refute the argument. No, seriously.

We need to cut it out with the groupthink. Let the herd be for herd immunity, not debate and reasoning.
 
I don’t think most people understand that as the meaning of scan in this context.
Then what do they think it means? Certainly not some highly technical and pedantic definition of "scan" that excludes reading the data to calculate a hash. The phrase "Photos are not scanned. Hashes are generated...." is classic smoke and mirrors (hashes are generated, yes, by scanning the photos... unless you assign some highly specific meaning to "scan").

Personally I am more comfortable with a hash being generated from my photo on my device than I am with my photo being unencrypted and hashed on Apple’s server.
Which would be a great supporting argument if it was combined with end-to-end encryption so that, once the photos are on the server, Apple couldn't decrypt them - but that doesn't seem to be the case, at least with iCloud photos (iMessages, maybe...) Also, note that Apple are promising that matches will be reviewed by humans before reporting - so (at least if that process has any meaning) they do have a mechanism for viewing the photos anyway.

Let me get this straight...you think someone is going to steal YOUR innocent picture of, let's say, a baby in a tub...and then somehow get it uploaded into the database of horrible child pornography, just so they can hope that you have that pic on your device AND iCloud turned on, so it can be tagged as an "inappropriate image??"
You're trying to discredit the idea by personalising it. No, it's not likely that one of your personal photos will get on to the database... but the decision about what does go in is in the hands of a third-party agency, who might (say) decide that some widely distributed internet meme photo is unacceptable... or even make a fat-finger error and add a bunch of LOLCats images that they were using for testing to the database... Then you have to ask how robust and fair the process for dealing with suspected "hits" is going to be.

...plus, of course, nobody ever wrote malware that downloads porn to your devices, did they...?

It's also becoming clear that people think the "1 in 1 trillion chance of a false match" means that if you get flagged, the chances of you being innocent is one in one trillion. See: https://en.wikipedia.org/wiki/Prosecutor's_fallacy

There are about 1 billion iPhone users. Who knows how many thousands of images are in the CSAM database? False positives are probably going to happen, so the question is do you really, really trust that they are going to be interpreted correctly?
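The prosecutor's fallacy is easy to see with Bayes' rule. A small sketch, where all three input numbers are assumptions for illustration, not anyone's published figures:

```python
# P(innocent | flagged) is NOT 1 - P(flagged | innocent).
# It depends heavily on prevalence and on the true false-positive rate.
def p_innocent_given_flag(fp_rate: float, detection_rate: float,
                          prevalence: float) -> float:
    """Bayes' rule: probability a flagged account is actually innocent."""
    p_flag = detection_rate * prevalence + fp_rate * (1 - prevalence)
    return fp_rate * (1 - prevalence) / p_flag

# Assumed numbers for illustration only:
print(p_innocent_given_flag(1e-12, 0.9, 1e-6))  # ~1e-6: if the FP rate really is 1e-12
print(p_innocent_given_flag(1e-6,  0.9, 1e-6))  # ~0.53: same prevalence, FP rate a million
                                                # times higher -> over half of flags innocent
```

The second line is the whole point: the headline false-positive rate only stays reassuring if it really is that tiny in practice.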
 
Then what do they think it means? Certainly not some highly technical and pedantic definition of "scan" that excludes reading the data to calculate a hash. The phrase "Photos are not scanned. Hashes are generated...." is classic smoke and mirrors (hashes are generated, yes, by scanning the photos... unless you assign some highly specific meaning to "scan").


Which would be a great supporting argument if it was combined with end-to-end encryption so that, once the photos are on the server, Apple couldn't decrypt them - but that doesn't seem to be the case, at least with iCloud photos (iMessages, maybe...) Also, note that Apple are promising that matches will be reviewed by humans before reporting - so (at least if that process has any meaning) they do have a mechanism for viewing the photos anyway.


You're trying to discredit the idea by personalising it. No, it's not likely that one of your personal photos will get on to the database... but the decision about what does go in is in the hands of a third-party agency, who might (say) decide that some widely distributed internet meme photo is unacceptable... or even make a fat-finger error and add a bunch of LOLCats images that they were using for testing to the database... Then you have to ask how robust and fair the process for dealing with suspected "hits" is going to be.

...plus, of course, nobody ever wrote malware that downloads porn to your devices, did they...?

It's also becoming clear that people think the "1 in 1 trillion chance of a false match" means that if you get flagged, the chances of you being innocent is one in one trillion. See: https://en.wikipedia.org/wiki/Prosecutor's_fallacy

There are about 1 billion iPhone users. Who knows how many thousands of images are in the CSAM database? False positives are probably going to happen, so the question is do you really, really trust that they are going to be interpreted correctly?
You actually need to read this thread and the other main thread....we cover all of this in detail.

I personalized it to prove how ridiculous it is.....BECAUSE, as you mention, there are so many easier ways for someone to do this to any individual or phone without involving a highly secure database of images and the checks and guards put in place by one of the world's largest companies, much less the regulated government agencies in most countries.

Can someone hack my phone and put content on it? Of course, very easily, if they send a text or an email to me and I'm stupid enough to click on the included link. It would be easier to break into my iCloud account (assuming I'm stupid enough not to notice someone did that, considering all of the warnings and two-factor authentication I have turned on via Apple).

Of course it CAN happen....anything CAN happen...it's just stupid for anyone to think it would, considering there are other methods that are much easier and proven to actually work versus this particular addition to iOS. What you are mentioning is spy-movie BS where the bad guy gets thrown in jail, not for what he actually did, but because the good guys used "the system" against him by planting kiddie porn on his phone so he can be arrested and put away forever. Makes for a great movie, not reality.
 