
JonathanParker
macrumors regular, Original poster
Jul 1, 2021
Hi, I'm currently seeing the news about Apple scanning iPhones for CSAM, and I saw that they use hash comparisons for their checks.

The issue I have is that the system is apparently designed to flag an image even if it has been cropped or converted to a black and white image.

If you know how images work: when you generate a cryptographic hash of an image you get a unique result, and as soon as you change a single pixel of the image the whole hash changes.

At least, this is how traditional (and most) hashing works, for example MD5 and SHA-1.

So the question is: what hashing algorithm is Apple using that allows them to match images even if the image has been modified? Or maybe the more important question is: how does Apple compare CSAM hashes against images that have been modified and still match them?
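To make the contrast concrete, here is a minimal Python sketch (nothing to do with Apple's actual code) of the avalanche effect described above: flip a single bit of the input and a cryptographic digest such as SHA-1 changes completely.

```python
# Cryptographic hashes have the avalanche property: change one bit of the
# input and the digest bears no resemblance to the original digest.
import hashlib

image_bytes = bytes(range(256)) * 100       # stand-in for raw image data
modified = bytearray(image_bytes)
modified[0] ^= 0x01                         # flip a single bit, i.e. "one pixel"

print(hashlib.sha1(image_bytes).hexdigest())
print(hashlib.sha1(bytes(modified)).hexdigest())  # completely different digest
```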
 
I don't think anyone here knows exactly how Apple's system will work, including its limitations, so at best you're going to get theories.

That said, for any images stored and manipulated in the Photos app, Apple can ignore the edits (crops, color manipulation, etc). Apple Photos retains the original image file that was imported into the app's photo library, and any edits done to it within the app are non-destructive. Apple would do any hashing against the original, unedited image.
 
There are different kinds of hashing. This isn’t the same as hashing the bits of a file like you’re referring to, but rather hashing the image visually. I think the previous post explains it best, or you can search for image hashing to find more info about various techniques.

This isn’t anything new either. There are reverse image search engines, like Google Images or TinEye. You upload a picture and it may find matches on the web. The matches could be the same image but with different dimensions, file types, colors, etc., or they could be different but visually similar images.
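To make "hashing the image visually" concrete, here is a toy average hash (aHash) sketch using Pillow. This is not Apple's NeuralHash, just the simplest member of the same family: the image is shrunk and thresholded, so re-saving, resizing, or a black and white conversion changes only a few bits of the fingerprint.

```python
# A toy "average hash" (aHash) -- an illustration of visual/perceptual hashing
# in general, NOT Apple's NeuralHash.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    # Shrink to an 8x8 grayscale thumbnail so resolution, color and fine
    # detail are discarded before hashing.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # One bit per pixel: 1 if brighter than the thumbnail's mean, else 0.
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint for the default 8x8 size

# Re-saving, resizing, or converting the same picture to black and white
# leaves most of these 64 bits unchanged, unlike a cryptographic hash.
```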
 
Thank you, but visually ha
There’s some very basic info in their technical summary. They call it NeuralHash.

This was what I was looking for, thanks :)
 
Search for the non-Apple-buzzword term "fuzzy hashing", maybe together with CSAM, and you'll find more raw technical info. Anyway, it's a compromise between false negatives and false positives.
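In practice that compromise is a distance threshold: two fuzzy hashes are compared bit by bit, and a match is declared when they differ in at most N bits. A loose threshold catches more edited copies (fewer false negatives) but flags more unrelated images (more false positives). A small sketch building on the toy aHash above; the threshold value is a made-up example, not anything Apple has published:

```python
# Fuzzy-hash comparison: count differing bits (Hamming distance) and call it
# a match below a threshold. The threshold trades false negatives against
# false positives.
def hamming_distance(hash_a: int, hash_b: int) -> int:
    return bin(hash_a ^ hash_b).count("1")

def is_match(hash_a: int, hash_b: int, threshold: int = 5) -> bool:
    # threshold = 5 is an arbitrary illustrative value
    return hamming_distance(hash_a, hash_b) <= threshold
```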
 
I don't think anyone here knows exactly how Apple's system will work, including its limitations, so at best you're going to get theories.

That said, for any images stored and manipulated in the Photos app, Apple can ignore the edits (crops, color manipulation, etc). Apple Photos retains the original image file that was imported into the app's photo library, and any edits done to it within the app are non-destructive. Apple would do any hashing against the original, unedited image.
Not quite...the original "unedited" photo is part of the database your phone is comparing the image against.

What they are saying is that you could receive an edited photo from someone and save it to your phone. The edited version you receive is your "original." Apple then looks at that edited version and its software is able to determine if it is a match to the unedited one in the database of pictures.

You editing the photo really has nothing to do with how it works as it will know whether you have an original copy, a previously edited version, or if you edit it yourself prior to uploading to iCloud.

Your original photos have a 1 in one trillion chance of accidentally being so close to one of the database images...virtually impossible.

It also only uses the database images as a comparison, so technically a small group of horrible people could safely send pics to each other without fear, even if saved on their phones and iCloud. One of them would have to get caught and then the images would be added to the database for comparison if any of the photos got out beyond the original group.
 
What they are saying is that you could receive an edited photo from someone and save it to your phone. The edited version you receive is your "original." Apple then looks at that edited version and its software is able to determine if it is a match to the unedited one in the database of pictures.
That said, for any images stored and manipulated in the Photos app, Apple can ignore the edits (crops, color manipulation, etc). Apple Photos retains the original image file that was imported into the app's photo library, and any edits done to it within the app are non-destructive. Apple would do any hashing against the original, unedited image.
You're not wrong, but that's also not the point I was making. I was very explicit about what I was talking about.
 
I do understand, but what you are saying isn't true, as I do NOT have to save the original photo file. Apple gives you the option to throw that away and keep only the edited file. Or you could even screenshot the edited photo and delete the original before uploading it to iCloud.

My point is that it doesn't matter what you do...their software will detect it from an unedited or edited photo on your phone.

This has nothing to do with photos you take; it is about photos you have received and saved.

You can take all the horrible illegal photos you want. As long as they are not shared online with others, confiscated, and added to the database, the hashes will never match your original (or edited) photos.

EDIT: Back to your point, you are jumping ahead and assuming that just because Apple may have the ability to view the unedited photo, that is what they are doing. My point is that it doesn't matter....the assumption is that they are comparing what you have saved on your phone (unedited or not).
 
Not quite...the original "unedited" photo is part of the database your phone is comparing the image against.

What they are saying is that you could receive an edited photo from someone and save it to your phone. The edited version you receive is your "original." Apple then looks at that edited version and its software is able to determine if it is a match to the unedited one in the database of pictures.

You editing the photo really has nothing to do with how it works as it will know whether you have an original copy, a previously edited version, or if you edit it yourself prior to uploading to iCloud.

Your original photos have a 1 in one trillion chance of accidentally being so close to one of the database images...virtually impossible.

It also only uses the database images as a comparison, so technically a small group of horrible people could safely send pics to each other without fear, even if saved on their phones and iCloud. One of them would have to get caught and then the images would be added to the database for comparison if any of the photos got out beyond the original group.
Just for your info, Apple doesn’t claim it’s a one in one trillion chance of falsely matching an image. They say it’s a one in one trillion chance of falsely matching an account; there is an unpublished threshold before an account is flagged for further investigation.
 
They also do not say how many "matches" have to come up before an account is even flagged for review....so, the odds are even higher than 1 in 1 trillion per picture (assuming they need more than one to actually flag an account).

The odds are so low of false positives, I'm not sure why everyone is so up in arms, or worse, worried about their own personal photos which are never viewed/shared.
 
What resources is it going to take on my phone? Will it slow things down while working? How much battery?
 
They also do not say how many "matches" have to come up before an account is even flagged for review....so, the odds are even higher than 1 in 1 trillion per picture (assuming they need more than one to actually flag an account).

The odds are so low of false positives, I'm not sure why everyone is so up in arms, or worse, worried about their own personal photos which are never viewed/shared.
Yeah, you’ve actually said the same thing again. It’s not one in one trillion per picture, it’s one in one trillion for incorrectly flagging an account. Given the number of photos people typically have it’s an important distinction.

The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account.
Source: https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf
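The per-image versus per-account distinction can be made concrete with a toy calculation: if each photo independently has some tiny chance of a false match, the chance that a clean library crosses a multi-image threshold is a binomial tail probability. All numbers below are made up for illustration; Apple has not published its per-image rate or its threshold.

```python
# Toy per-account false-flag probability under a match threshold.
# The per-image rate and the threshold are hypothetical, not Apple's figures.
from math import comb

def prob_account_flagged(n_photos: int, per_image_rate: float,
                         threshold: int, terms: int = 50) -> float:
    # P(at least `threshold` of n independent photos falsely match).
    # Only the first few tail terms matter for tiny per-image rates.
    p, q = per_image_rate, 1.0 - per_image_rate
    return sum(comb(n_photos, k) * p**k * q**(n_photos - k)
               for k in range(threshold, min(threshold + terms, n_photos) + 1))

# Example: 10,000 photos, a one-in-a-million per-image false-match rate,
# and 10 matches required before an account is even reviewed.
print(prob_account_flagged(10_000, 1e-6, 10))   # roughly 2.7e-27
```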
 
Apple is most likely using a version of a perceptual hash. A good summary of the issues with it is here:


Basically, it's a system of "rough matching" because similar images will result in the same number. However, as the article points out, it's entirely possible to artificially generate an image which has the same hash but looks absolutely nothing like the original.

So if someone leaks the CSAM database and generates a bunch of totally innocent-looking images which match the CSAM hashes, they could send them around to adversaries and inundate Apple's flagging pipeline. Theoretically.
 
Yeah, you’ve actually said the same thing again. It’s not one in one trillion per picture, it’s one in one trillion for incorrectly flagging an account. Given the number of photos people typically have it’s an important distinction.


Source: https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

Right…that’s why I clarified that the odds are actually higher, because an account is only flagged with more than one detected picture.

People are already upset that the odds are 1 in one trillion, but that’s per reviewed account. And that assumes that more than one picture (Apple doesn’t say how many) has to be flagged.

The odds that a personal photo may match one from the database are so astronomically small that it’s ridiculous for that to be a concern, right?
 
I suppose it comes down to whether you trust what Apple says about the sensitivity and the one in one trillion figure. If you do, then it’s reasonable not to be concerned. I am not considering what other purposes this tech could be used for, merely whether a non-offending user can be mistakenly identified as guilty. It then goes for manual human review anyway before being forwarded to authorities, so I don’t see how someone could be.

Thinking about it, it’s possible Apple has a dial they can turn on the algorithm to adjust the sensitivity up or down. For example, if they start getting false positives they can say, ‘this is too sensitive, turn it down’.
 
My point is that it doesn't matter....the assumption is that they are comparing what you have saved on your phone (unedited or not).
Yes, and you'll notice I never claimed otherwise because I was commenting on something very specific and left everything else for others to discuss.

There's nothing you need to prove to me or convince me of. We're not in contention. Have a great day.
 