All an adversarial actor needs to do is use the noise as a mask over legal pornographic material, and the user is screwed. Apple's human reviewer isn't going to do an entire CSI analysis to determine whether the people depicted in the photo (or their body parts) were underage at the time the photo was taken.

They'll ask themselves one question: could this be CSAM? If the answer is yes, the account gets blocked and a report is filed.

As for his statement that "it would require the production of over 30 colliding images", that's just intellectually dishonest. They don't have to be 30 unique images; it could be 30 copies of the same one. And even if unique images were required, generating a colliding image is trivial in both effort and time, as has been demonstrated, and applying that collision to legal porn is even less of a feat.

He also said "until they implement a filter" which Apple already have in place as part of the system.
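
To illustrate why collisions against a perceptual hash are so cheap, here is a toy sketch. It uses a simple 8×8 average-hash over random arrays standing in for images, not Apple's NeuralHash (which I'm not reproducing here), but the principle carries over: the hash only sees coarse structure, so an image can be altered in every pixel without its hash changing at all.

```python
import numpy as np

def average_hash(img: np.ndarray, size: int = 8) -> int:
    """Toy perceptual hash (aHash): downsample to size x size block averages,
    then emit one bit per block (brighter than the global mean or not).
    This is NOT NeuralHash -- just the simplest member of the same family."""
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]          # crop to a multiple of size
    blocks = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(0)
photo = rng.random((256, 256))                         # stand-in for a legal photo

# Perturbation with zero mean inside every 32x32 block: every pixel changes,
# but every block average (and therefore every hash bit) stays the same.
noise = rng.random((256, 256)) * 0.05
block_means = noise.reshape(8, 32, 8, 32).mean(axis=(1, 3))
noise -= block_means.repeat(32, axis=0).repeat(32, axis=1)
doctored = photo + noise

print(np.abs(photo - doctored).max() > 0)              # True: all pixels differ
print(average_hash(photo) == average_hash(doctored))   # True: identical hash
```

A targeted collision against NeuralHash takes more work (gradient-based optimisation against the model), but as noted above, that has already been demonstrated publicly.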
 
I am sadly sure that this comment will be lost and buried, but the ongoing discussions on surveillance and privacy, both in general and in particular cases such as this, brought to mind Blackstone's ratio, which I had read a number of years ago. To wit:

It is better that ten guilty persons escape than that one innocent suffer.

John Adams, founding father of the United States, put it very eloquently:

It is more important that innocence should be protected, than it is, that guilt be punished; for guilt and crimes are so frequent in this world, that all of them cannot be punished.... when innocence itself, is brought to the bar and condemned, especially to die, the subject will exclaim, 'it is immaterial to me whether I behave well or ill, for virtue itself is no security.' And if such a sentiment as this were to take hold in the mind of the subject that would be the end of all security whatsoever.

In other words, as laudable as the ends may be, they cannot always justify the means. Moreover, a system that looks only for wrongdoing will invariably find it where none exists. And we all know the story of Jean Valjean...

But is it better for 1000 guilty persons to escape?
 
Well, that depends really. Account security on social networks is about the same as on Apple (both support strong passwords and two-factor authentication). Both encrypt their data.

The main difference is that if a hacker broke into your social network account, the most they could likely do is impersonate you, or, if you're silly enough to do in-app purchases via Facebook, buy stuff using the card on record (note that they could only buy stuff on Facebook, as your card details would be obscured on-site).

On Apple, they could trigger the kiddie porn system by uploading to iCloud Photos if you don't pay them, they could use a linked card to make purchases (potentially), and they would have access to all your previous purchases (apps, music, movies, etc.).

They could also post CP on your Facebook wall or Instagram, or send it to all your contacts in Facebook Messenger.

I have read in some of these threads that Facebook reported 20 million cases of CP last year.
 
The point being that there has already been a case where Apple gave in to government pressure in order to have access to a market, yet there are many posts saying Apple said they will “just say no” to requests to scan for other images.

No, the point being that oppressive governments that have survived for a long time know how to do their oppression effectively.

China will just look through the iCloud servers, which is much easier and gains them almost all of the data from the device.
 
On re-reading the Apple docs, you are right on this. Seems strange that Apple wouldn’t check that particular image produces that particular hash as a simple test for hash collisions. Or did I miss that step in their description?
They seem to be aware of the issues and the risk and simply try to mitigate it by using a threshold of around 30 flagged images.

The problem with the threshold, however (aside from the fact that one should ask whether all of this is truly worth it if you're only going to catch people with 30 or more CSAM images), is that the chance of an individual image being falsely flagged increases exponentially with the threshold, if you're aiming for a 1 in a trillion chance per year of falsely flagged accounts post-threshold, like Apple is.

To demonstrate this with a simple example that omits some of the other factors: if the threshold is 2 images then the chances for an individual image being wrongfully flagged is 1 in a million, because 1/1 million times 1/1 million is 1 in 1 trillion.

If the threshold is 3 images, then the chance for an individual image increases to 1 in 10,000, because 1/10,000 times 1/10,000 times 1/10,000 is 1 in 1 trillion.

Etc. The higher the threshold for a 1 in 1 trillion chance of an account being falsely flagged post-threshold, the higher the chance an individual image is wrongfully flagged.
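
To put that in code form (same simplified model, on the naive assumption that the flagged images are independent):

```python
# Simplified model from above: with threshold T and a target of a
# 1-in-a-trillion chance of an account being falsely flagged, the implied
# per-image false-positive chance p solves p**T = 1e-12, so p = 10**(-12/T).
for T in (2, 3, 10):
    p = 10 ** (-12 / T)
    print(f"threshold {T:2d}: per-image false-flag chance = 1 in {1 / p:,.0f}")

# threshold  2: per-image false-flag chance = 1 in 1,000,000
# threshold  3: per-image false-flag chance = 1 in 10,000
# threshold 10: per-image false-flag chance = 1 in 16
```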

He also said "until they implement a filter" which Apple already have in place as part of the system.
His remark about a filter pertains to people spamming Apple with random images that are generated to cause a hash collision, not people using legal pornographic images to create hash collisions.

Aside from that, none of Apple’s documentation suggests they have a way to filter out hash collisions, whether it is random colliding images that people spam them with, or otherwise.
But I’m open to a reference in Apple’s documents saying otherwise.
 
To demonstrate this with a simple example that omits some of the other factors: if the threshold is 2 images then the chances for an individual image being wrongfully flagged is 1 in a million, because 1/1 million times 1/1 million is 1 in 1 trillion.
Not disagreeing with your general thrust, but that math doesn’t work in this case: you can only multiply two probabilities like that if they relate to independent events, like rolling a fair die where each throw is unaffected by previous throws. You have to ask whether, once you have picked someone out of the crowd on the basis of a 1 in 1m match, the probability of a second match is still 1 in 1m.

In context, if someone has one photo that triggers a match, how likely is it that they will have a second “visually similar” image in their collection that will also trigger a match? That might include cropped, resized, colour-adjusted versions of the same image which, Apple’s docs show, should produce the same hash. Answer: insufficient data, but feasibly much, much higher than the chance of a second, random, false match.

To be clear, that doesn’t prove that Apple are wrong about their 30 matches = 1 in 1 trillion figure: but it’s not something you can trivially check with high school math, so it boils down to whether you trust Apple not only to be honest but to have avoided a very common honest mistake (I’ve previously posted a couple of links; the mistake already has a body count). It’s an issue that Apple should have specifically discussed in their press releases.

It’s also why the “cryptographic hash” vs “perceptual hash” distinction is important.
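
To make the independence point concrete, here's a tiny Monte Carlo sketch with entirely made-up numbers: in the "correlated" scenario, any library that produces one false match is assumed to also contain an edited near-duplicate that matches half the time. The per-image rate is identical in both runs; only the correlation differs.

```python
import random

random.seed(1)
P_FALSE = 1e-3      # made-up per-image false-match rate (inflated on purpose
                    # so the simulation runs quickly; the effect is the point)
N_PHOTOS = 1_000    # photos per simulated library
N_LIBS = 5_000      # simulated accounts
P_DUP = 0.5         # chance an edited copy of a matching photo also matches

def crosses_threshold(threshold: int, correlated: bool) -> bool:
    """Simulate one photo library; does it reach `threshold` false matches?"""
    matches = 0
    for _ in range(N_PHOTOS):
        if random.random() < P_FALSE:
            matches += 1
            # correlated case: a cropped/resized copy of the same photo
            # lives in the library and hashes the same half the time
            if correlated and random.random() < P_DUP:
                matches += 1
    return matches >= threshold

for correlated in (False, True):
    hits = sum(crosses_threshold(2, correlated) for _ in range(N_LIBS))
    print(f"correlated={correlated}: {hits / N_LIBS:.1%} cross threshold 2")
# The correlated libraries cross the threshold noticeably more often,
# even though each individual photo has exactly the same false-match rate.
```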
 
You should take some time to understand how iCloud Photos works. If someone emails or messages all of those illegal photos to a person who has the feature turned on, it wouldn’t work. They would have to move those photos into the photo library for them to be uploaded to iCloud Photos. Otherwise they just sit in the Messages or Mail app.

If they’re a law-abiding person, they’ll report that person to the authorities. In my case I would just hand over my phone to increase the chances of them catching that individual or entity. Inconvenient for me, yes, but worth it IMO to catch that scumbag.

Also, even if it did work that way, it would certainly be an inconvenience and would probably become something some kids would prank other people with. At which point Apple would find ways to patch it out and lawmakers would create laws prohibiting it, which technically already exist. But since that’s not how iCloud Photos works, it’s just an exercise in showing you that when systems are weak, agencies and companies strengthen them.
Respectfully you have exactly zero idea how law enforcement functions. You hand over your phone with these files on it, your life is over.
 
Respectfully you have exactly zero idea how law enforcement functions. You hand over your phone with these files on it, your life is over.
I have dealt with law enforcement before. Not with this issue, but on other occasions I have. I know there are good and bad cops, and I personally think their powers need a lot more oversight. However, I would not stand in the way of them getting at a scumbag who is abusing children.

I would also get representation in a situation like this as well. I know they’re going to ask me a million questions, I know they are going to suspect I’m part of the problem. I simply do not care. I would protect myself and I would cooperate with them reasonably so they could catch whoever wanted to do this.

I do not put myself or my privacy above the abuse of others just so I can feel comfortable. I personally don’t have anything to hide, so maybe that’s why I don’t have these reservations. Not accusing you of anything, but I won’t trust that something will get fixed if I do absolutely nothing when I could.
 
I do not put myself or my privacy above the abuse of others just so I can feel comfortable. I personally don’t have anything to hide, so maybe that’s why I don’t have these reservations.

As stated somewhere else before:

First they came for the paedophiles, and I did not speak out—
Because I was not a paedophile.
Then they came for the socialists, and I did not speak out—
Because I was not a socialist.
Then they came for the trade unionists, and I did not speak out—
Because I was not a trade unionist.
Then they came for the Jews, and I did not speak out—
Because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
 
IKR?

How can a discussion about this technology, with Apple saying they won’t let rogue governments affect them, not also involve politics some of the time?

I got a note from a moderator about not taking politics outside of a politics forum. *shrug*
I got warned by a mod for telling someone that they should learn to read, said I was insulting them 🙄🙄
 
Not disagreeing with your general thrust, but that math doesn’t work in this case: you can only multiply two probabilities like that if they relate to independent events, like rolling a fair die where each throw is unaffected by previous throws. You have to ask whether, once you have picked someone out of the crowd on the basis of a 1 in 1m match, the probability of a second match is still 1 in 1m.

In context, if someone has one photo that triggers a match, how likely is it that they will have a second “visually similar” image in their collection that will also trigger a match? That might include cropped, resized, colour-adjusted versions of the same image which, Apple’s docs show, should produce the same hash. Answer: insufficient data, but feasibly much, much higher than the chance of a second, random, false match.

To be clear, that doesn’t prove that Apple are wrong about their 30 matches = 1 in 1 trillion figure: but it’s not something you can trivially check with high school math, so it boils down to whether you trust Apple not only to be honest but to have avoided a very common honest mistake (I’ve previously posted a couple of links; the mistake already has a body count). It’s an issue that Apple should have specifically discussed in their press releases.

It’s also why the “cryptographic hash” vs “perceptual hash” distinction is important.
I wholeheartedly agree with you. Instead of calling it a "simple" example, I meant to say it was a "simplified" example, because it leaves out a bunch of factors.

The general point I was trying to make is that the 1 in a trillion chance is a product of the probabilities tied to the individual images.

There are actually other factors that play an important role, like time and size of photo library. If you're interested in some of the other things that are relevant I can highly recommend this blog article by cryptography expert and executive director of Open Privacy Canada, Sarah Jamie Lewis.
At the time she used the threshold of 10 since that was used as an example threshold in Apple's white paper, but you can plug 30 into the formula to determine the chances for the different situations.
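
If you want to play with the numbers yourself without re-deriving her formula: the standard textbook model behind this kind of estimate (which assumes independent false matches, per the caveat above) is a binomial tail over the library size. The inputs below are illustrative stand-ins, not Apple's figures or the blog's:

```python
def p_account_flagged(n_photos: int, p_match: float, threshold: int) -> float:
    """P(at least `threshold` false matches among `n_photos` photos that each
    false-match independently with probability `p_match`): a binomial tail,
    accumulated term by term to avoid overflowing floats on big libraries."""
    q = 1.0 - p_match
    term = q ** n_photos                     # P(exactly 0 matches)
    total = term if threshold == 0 else 0.0
    for k in range(1, n_photos + 1):
        term *= (n_photos - k + 1) / k * (p_match / q)   # P(k) from P(k-1)
        if k >= threshold:
            total += term
            if term < total * 1e-18:         # remaining terms are negligible
                break
    return total

# e.g. a 100,000-photo library, a 1-in-a-million per-image rate, threshold 30
print(p_account_flagged(100_000, 1e-6, 30))
```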
 
They're hashes. Basically checksums. They don't really contain the data, or even a portion of the data; they're a fraction of the size of the data, and all they're good for is that you can take the original data and verify a match (or not).

Apple's OSes also contain XProtect, which are hashes of malware — doesn't mean your devices have malware on them.
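
To make that concrete, here is a minimal sketch using an ordinary cryptographic hash (NeuralHash is a perceptual hash rather than SHA-256, but the fingerprint-not-the-data property is the same idea):

```python
import hashlib

data = b"the original photo bytes go here"
fingerprint = hashlib.sha256(data).hexdigest()   # always 64 hex chars,
                                                 # regardless of input size
print(fingerprint[:16], "...")                   # tiny, and not reversible

# All a stored hash is good for: checking data you already have against it.
candidate = b"the original photo bytes go here"
print(hashlib.sha256(candidate).hexdigest() == fingerprint)  # True on a match
```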

Oh, I didn’t say it wasn’t an emotional response, but having anything remotely to do with child porn in my device is definitely nauseating, especially since I didn’t put it there. I don’t get the same queasy feeling from malware hashes.

But what’s worse is Apple’s method of going about it. It’s the whole ‘presumption of guilt’ because they’re basically sifting through your personal device.
 
As stated somewhere else before:
The thing is, no one, myself included, cares if they come for pedophiles. They need to catch them. I fully support this, just like I fully support them catching terrorists. I just can't support putting a back door in everyone's iPhone to make this happen.
 
The general point I was trying to make is that the 1 in a trillion chance is a product of the probabilities tied to the individual images.

Sure, I wasn't challenging the broad strokes - but the "independence" issue is pretty fundamental. You can make lots of simplifying assumptions, but if the events are correlated the results could be just plain wrong: it could make the difference between the chance of that second match being 1/100,000 and 1/100.

The blogger you linked seems to be assuming that false matches are independent (by plugging in the standard formulae) - which doesn't matter for her argument, which is probably strengthened by being a 'best case' scenario in Apple's favour.
 
His remark about a filter pertains to people spamming Apple with random images that are generated to cause a hash collision, not people using legal pornographic images to create hash collisions.

Aside from that, none of Apple’s documentation suggests they have a way to filter out hash collisions, whether it is random colliding images that people spam them with, or otherwise.
But I’m open to a reference in Apple’s documents saying otherwise.

"Once Apple's iCloud Photos servers decrypt a set of positive match vouchers for an ac- count that exceeded the match threshold, the visual derivatives of the positively matching images are referred for review by Apple. First, as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation." -Apple's Security Threat Model Review of Apple’s Child Safety Features
 
To demonstrate this with a simple example that omits some of the other factors: if the threshold is 2 images then the chances for an individual image being wrongfully flagged is 1 in a million, because 1/1 million times 1/1 million is 1 in 1 trillion.

So Apple's NeuralHash will wrongfully flag 2 out of 5 images, since

2.51188643150958^30 ≈ 1 trillion

?
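
(The arithmetic at least is self-consistent under the quoted simplified model; a quick check, which says nothing about NeuralHash's actual per-image false-positive rate:)

```python
p = 10 ** (-12 / 30)    # per-image rate the simplified model implies for T = 30
print(p)                # 0.3981... i.e. roughly "2 out of 5"
print(1 / p)            # 2.51188643150958...
print((1 / p) ** 30)    # ~1e12, i.e. 1 in a trillion
```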
 