I find the use/co-opting of this 1946 post-WWII confessional regarding the Nazis, in the way it is used here, to be almost irreverent and unquestionably offensive, especially to those who had family murdered and persecuted during WWII.

Not to mention how differently it reads if it starts with: "First they came for the child molesters..."
 
If macOS scans your files, they will be no different from Google; at that point you might as well get an Android.

If they use a source code compiler, they're no different from Microsoft; at that point you might as well get Windows.
 
Look at the paragraph above the three images on page 5.
Cool, go read the paragraph yourself - it’s not saying what you claim it’s saying.

Also, you only use quotation marks when you are actually quoting something.
 
Except THAT IS NOT HOW IT WORKS. Unless those adults’ idea of “fun” is sharing known CSAM content which has already been cataloged and had its hashes stored as such, in which case they deserve what they get. Again, it’s hashes, which are already calculated and used as part of the data transfer process, but people would rather listen to FUD than read the articles I have linked countless times before on Cloudflare’s existing and FREE implementation of this.
Basically, it is like your hosts file, but instead of IPs it’s a list of “fuzzy hashes” against which your image is compared as part of the upload.
That’s it - unique hashes and AI-derived “fuzzy hashes” to account for attempts at circumventing a regular hash check (e.g., cropping). Hashes are a one-way function, so they give zero access to your data whatsoever.
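To make the “hosts file of hashes” idea concrete, here is a minimal sketch of that kind of check (my own illustration, not Cloudflare’s or Apple’s code; the blocklist value is made up, and real deployments use perceptual/“fuzzy” hashes rather than SHA-256):

```python
import hashlib

# Hypothetical blocklist of known-bad hashes (this value is made up).
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_flagged(file_bytes: bytes) -> bool:
    # Hash the upload and check it against the list -- the file contents
    # themselves are never inspected, only the digest is compared.
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(is_flagged(b"some uploaded image bytes"))  # False for anything not on the list
```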
Again, if there are ZERO concerns, why is there even a threshold on false positives at all? If this is truly how it functions, there should be absolutely zero false positives, so there would be no need for a threshold.
 
Cool, go read the paragraph yourself - it’s not saying what you claim it’s saying.

Also, you only use quotation marks when you are actually quoting something.
“Visually identical” - yes, it is saying what I am saying. Things can be visually identical but not the same.

Check my original post before you do that. I was quoting it. This is what I said.

Straight up from the white paper. I can’t copy it verbatim from my phone for some reason. But check the NeuralHash section.
 
I think you're reading too much into that Photoshop line. They're talking about fairly basic photo editing functions.... crop/rotate, color profiles, watermarks... not the entirety of the feature set. If you content-aware fill a seaside village over all the CSAM content described by the hash, or change every pixel to 0x000000, or do anything that substantively alters the subjective content of the image, then of course it will "fool it," because it's not the same photo anymore.

I also think it's easy to overestimate the probability of very, very unlikely events, like the possibility that your innocent pool party pics will be mistaken for illegal porn. I once caught my teenage nephew trying to enter random bitcoin recovery phrases using a BIP-39 word list; after telling him that it's still stealing if it works, I asked why he even thought it would work... because it seemed possible and he was bored. And yeah, it is possible. But it's so vanishingly unlikely that the scheme was deemed perfectly secure.
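For scale, the odds in that BIP-39 story are easy to put a number on (just illustrative arithmetic, nothing more):

```python
# The BIP-39 English word list has 2048 words, so a 12-word phrase has
# 2048**12 possible orderings (only ~2**128 of them pass the checksum,
# which doesn't change the scale of the number).
combinations = 2048 ** 12
print(f"{combinations:.3e}")  # ~5.44e+39 possible 12-word strings
```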
But we don't know to what degree a manipulation would be "safe". And the fact that there was a workflow, a threshold, and potential false positives BY APPLE'S OWN DESIGN says it's not a 100% perfect match. The presence of false positives and a threshold to reach MEANS there will be some judgment. Otherwise there would be absolutely no question.....none at all.....as to whether something is a match.
 
I did read it. I have looked into this quite heavily when it was originally discussed. People even posted examples of one image being a flower and another being a curtain of said flower that produced the same hash.
 
I did read it. I have looked into this quite heavily when it was originally discussed. People even posted examples of one image being a flower and another being a curtain of said flower that produced the same hash.
You clearly haven’t looked nearly hard enough.
 
You clearly haven’t looked nearly hard enough.
I did. If it works as you all state, that there is literally ZERO possible chance ANY other image would produce the same hash, then there would be NO false positives and thus no need for a threshold mechanism. But the presence of such a system means there WILL be SOME false positives.....and what are false positives? Images that are not exact matches! Therefore it makes some judgements. The way Apple planned to handle false positives is proof of what I am stating. Because if it DID NOT make judgements, there would be a 0.000000000000000% chance of ANY false positives. Not even a 0.000000001% chance of false positives.
 
I did read it. I have looked into this quite heavily when it was originally discussed. People even posted examples of one image being a flower and another being a curtain of said flower that produced the same hash.
You clearly haven’t looked nearly hard enough
“Visually identical” - yes, it is saying what I am saying. Things can be visually identical but not the same.

Also, stop being keyboard warriors. Check my original post before you do that. I was quoting it. This is what I said.

Straight up from the white paper. I can’t copy it verbatim from my phone for some reason. But check the NeuralHash section.
Not a single quote that you have provided is a quote from the document. You have paraphrased at best, and in the last quote in a very misleading way, since it actually says “identical and visually similar”.
 
You clearly haven’t looked nearly hard enough

Not a single quote that you have provided is a quote from the document, you have paraphrased at best and in the last quote in a very misleading way since it says “identical and visually similar”.
Yes it is! I said the words "visually similar images"

EXACT QUOTE.

The main purpose of the hash is to ensure that identical and visually similar images result in the same hash, and images that are different from one another result in different hashes. For example, an image that has been slightly cropped or resized should be considered identical to its original and have the same hash.

I was on my phone at the time and could not copy and paste this, so I just switched tabs back and forth, a section of words at a time, to make sure I said it exactly. Now that I am on my Mac, I was able to copy/paste the above. Look....."visually similar images" is in there.
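For anyone curious what “visually similar images result in the same hash” can look like in code, here is a toy perceptual hash (a dHash). It is my own illustration, not Apple's NeuralHash, but it shows the same property: mild edits usually leave the bits unchanged, while a different photo lands far away.

```python
# A toy perceptual hash ("dHash") -- NOT Apple's NeuralHash, just an
# illustration of the property quoted above.
# Requires Pillow: pip install Pillow
from PIL import Image

def dhash(img: Image.Image, hash_size: int = 8) -> int:
    # Shrink to (hash_size+1) x hash_size grayscale so only coarse
    # structure survives, then record whether each pixel is brighter
    # than its right-hand neighbour.
    small = img.convert("L").resize((hash_size + 1, hash_size))
    px = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Usage sketch (file names are hypothetical):
# original = Image.open("photo.jpg")
# resized = original.resize((640, 480))
# print(hamming(dhash(original), dhash(resized)))  # typically 0-2 bits apart
```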

This was my original post on it.

Read my other comment. I looked into this heavily. It detects photo manipulations/distortions/cropping/photoshop/etc. It is NOT a perfect pixel-by-pixel comparison; those changes cause the hash to change, yet it could still flag it.

“The purpose of the NeuralHash is to ensure that identical and visually similar images produce the same hash”

Straight up from the white paper. I can’t copy it verbatim from my phone for some reason. But check the NeuralHash section.

I missed the second word "main" and I changed "the hash" to "NeuralHash" to make it clear in the post. But it is essentially the same as the white paper. I even stated a disclaimer that I couldn't copy and paste it, so give me a bit of a break....geez.
 
“Visually identical” - yes, it is saying what I am saying. Things can be visually identical but not the same.

Things can be visually identical but not the same. What you're saying is that someone might take a photo that is visually identical to some of the worst known sexual abuse imagery of children, but it's not the same photo, it's a new one.

Is that really the argument you want to make here?

Again if there is ZERO concerns, why is there even a threshold on false positives at all. If this is truly how it functions there should be absolutely zero false positives. So there is no need for a threshold.

The point is that the false positive won't generally look anything like the target CSAM image. These two images have the same NeuralHash:

[attached image pair that produces the same NeuralHash]


So do these:

[second attached image pair that produces the same NeuralHash]


In both cases images of dogs were used as the target database and other images were manipulated until they matched the hash -- so the false positive isn't a natural image; it was modified until it matched (notice the color splotches on the lower car image).

If you notice, a false positive doesn't mean you have another image similar to a dog, and in the CSAM case it wouldn't mean you have a picture anything like a CSAM image.

So the reason to set a false positive threshold is to account for the fact that hashes are data reductive by definition and you can sometimes hit a match by accident. If that were to happen 30 times in images you were uploading to iCloud, then it would trigger a manual review - not of your images but of an encrypted derivative of the image. I'm not sure what that derivative is, but presumably it's not the whole image, because they don't want their reviewers to be subjected to the image from true positives.

So if it's a car rather than previously known child abuse, the manual reviewer would reject it. If it's a loving parent taking a photo of a baby in a bathtub rather than previously known child abuse, the manual reviewer would reject it. If it was previously unknown child abuse rather than previously known child abuse, the manual reviewer would reject it.

If you're unlucky enough to get flagged for an image that is visually identical to, but is not, previously known child abuse then your lawyer has their work cut out for them.
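To put rough numbers on why a threshold exists at all, here is a toy calculation (the per-photo false-match rate is invented purely for illustration; Apple's stated design target was roughly a one-in-one-trillion chance per account per year of an incorrect flag):

```python
# Toy math behind the threshold: assume each photo independently has some
# tiny chance of a false match. Requiring many matches before any review
# makes an accidental flag astronomically unlikely.
from math import comb

def p_at_least(threshold: int, n_photos: int, p_per_photo: float) -> float:
    """Probability of at least `threshold` independent false matches among `n_photos`."""
    p_fewer = sum(
        comb(n_photos, k) * p_per_photo**k * (1 - p_per_photo) ** (n_photos - k)
        for k in range(threshold)
    )
    return 1.0 - p_fewer

# Hypothetical numbers: 1-in-a-million per-photo false match, 100,000 photos.
print(p_at_least(1, 100_000, 1e-6))   # ~0.095 -> a threshold of 1 would misfire often
print(p_at_least(30, 100_000, 1e-6))  # ~0.0 at double precision -> essentially never
```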
 
Things can be visually identical but not the same. What you're saying is that someone might take a photo that is visually identical to some of the worst known sexual abuse imagery of children, but it's not the same photo, it's a new one.

Is that really the argument you want to make here?



The point is that the false positive won't generally look anything like the target CSAM image. These two images have the same NeuralHash:

[attached image pair that produces the same NeuralHash]

So do these:

[second attached image pair that produces the same NeuralHash]

In both cases images of dogs were used as the target database and other images were manipulated until they matched the hash -- so the false positive isn't a natural image; it was modified until it matched (notice the color splotches on the lower car image).

If you notice, a false positive doesn't mean you have another image similar to a dog, and in the CSAM case it wouldn't mean you have a picture anything like a CSAM image.

So the reason to set a false positive threshold is to account for the fact that hashes are data reductive by definition and you can sometimes hit a match by accident. If that were to happen 30 times in images you were uploading to iCloud, then it would trigger a manual review - not of your images but of an encrypted derivative of the image. I'm not sure what that derivative is, but presumably it's not the whole image, because they don't want their reviewers to be subjected to the image from true positives.

So if it's a car rather than previously known child abuse, the manual reviewer would reject it. If it's a loving parent taking a photo of a baby in a bathtub rather than previously known child abuse, the manual reviewer would reject it. If it was previously unknown child abuse rather than previously known child abuse, the manual reviewer would reject it.

If you're unlucky enough to get flagged for an image that is visually identical to, but is not, previously known child abuse then your lawyer has their work cut out for them.
Yes, that is what I am saying. Why wait until 30 flags? If this is as perfect as you are all saying, the FIRST occurrence should get flagged to the authorities because it CAN'T produce false positives. You all are saying how perfect this system is and that our concerns are "ridiculous", but then why not just set the threshold at 1? The presence of a threshold system and Apple's own words about a very rare chance of false positives prove our concerns. That is the entire definition of a false positive.

And it's probably good that my ignorance is showing, because I have NO CLUE what these images look like. My main concern is that I know people who are consenting adults in a relationship and share images. But some bodies are different, and CSAM probably includes some "mature" looking 16/17 year olds posing, and some "younger" looking 22-25 year olds could have matching bodies that would POSSIBLY get flagged.

And thank you for actually answering my question instead of nitpicking on my words when I clearly left a disclaimer. Appreciate it!
 
Obviously they knew this perfectly well beforehand, but they didn’t expect such a backlash.

And now they realise it’s far easier (and more important) to promote privacy in their ecosystem than to protect children.
Whew. I was afraid no one would make an argument with a false equivalency.
 
Yes, that is what I am saying. Why wait until 30 flags? If this is as perfect as you are all saying, the FIRST occurrence should get flagged to the authorities because it CAN'T produce false positives. You all are saying how perfect this system is and that our concerns are "ridiculous", but then why not just set the threshold at 1? The presence of a threshold system and Apple's own words about a very rare chance of false positives prove our concerns. That is the entire definition of a false positive.

And it's probably good that my ignorance is showing, because I have NO CLUE what these images look like. My main concern is that I know people who are consenting adults in a relationship and share images. But some bodies are different, and CSAM probably includes some "mature" looking 16/17 year olds posing, and some "younger" looking 22-25 year olds could have matching bodies that would POSSIBLY get flagged.

And thank you for actually answering my question instead of nitpicking on my words when I clearly left a disclaimer. Appreciate it!
You’ve claimed to have spent so much time reviewing this and yet you still make these posts 🤷🏼‍♂️

You are identifying things that will not be flagged.
 
You’ve claimed to have spent so much time reviewing this and yet you still make these posts 🤷🏼‍♂️
I have. The person I replied to EVEN PROVED what I found. A car and a dog matched. And they aren't even visually similar!
 
You are identifying things that will not be flagged.
Then why even have a manual review process at all if it absolutely cannot flag false positives? If you get one hit, you should go to jail....right? If you say no, then there is a chance of false positives, which by definition means the content in question doesn't match the database.
 
And they even explained to you why the examples you keep giving are wrong.
No, they said EXACTLY what I am saying. THEIR QUOTE.

So if it's a car rather than previously known child abuse, the manual reviewer would reject it. If it's a loving parent taking a photo of a baby in a bathtub rather than previously known child abuse, the manual reviewer would reject it. It if was previously unknown child abuse rather than previously known child abuse, the manual reviewer would reject it.

What are these examples given by SOMEONE ELSE? False positives. Exactly what I am saying. Why have a manual review process AT ALL then if IT'S NOT POSSIBLE? One hit should send you to jail then.
 
Then why even have a manual review process at all if it absolutely cannot flag false positives? If you get one hit, you should go to jail....right? If you say no, then there is a chance of false positives, which by definition means the content in question doesn't match the database.
Read what that person said to you again.

Also whilst you are at it, go read some technical papers on Neural Hashing.
 
Read what that person said to you again.
I did read. You all are saying there is no chance for false positives. But this person said the following.

So if it's a car rather than previously known child abuse, the manual reviewer would reject it. If it's a loving parent taking a photo of a baby in a bathtub rather than previously known child abuse, the manual reviewer would reject it. It if was previously unknown child abuse rather than previously known child abuse, the manual reviewer would reject it.
 
I did read. You all are saying there is no chance for false positives. But this person said the following.

So if it's a car rather than previously known child abuse, the manual reviewer would reject it. If it's a loving parent taking a photo of a baby in a bathtub rather than previously known child abuse, the manual reviewer would reject it. It if was previously unknown child abuse rather than previously known child abuse, the manual reviewer would reject it.
Where have I said there isn’t a chance of a false positive? Please point me to exactly where I said that, I will wait.

“Other images were manipulated until they matched the hash -- so the false positive isn't a natural image; it was modified until it matched (notice the color splotches on the lower car image).”

“If you notice, a false positive doesn't mean you have another image similar to a dog, and in the CSAM case it wouldn't mean you have a picture anything like a CSAM image.”
 
Where have I said there isn’t a chance of a false positive? Please point me to exactly where I said that, I will wait.

“Other images were manipulated until they matched the hash -- so the false positive isn't a natural image; it was modified until it matched (notice the color splotches on the lower car image).”

“If you notice, a false positive doesn't mean you have another image similar to a dog, and in the CSAM case it wouldn't mean you have a picture anything like a CSAM image.”

You are fighting me about the entire process of false positives, saying I am wrong and don't know how it works. Yes, you are battling me about false positives.

Regarding the image.....it was manipulated to show, as an example, that a car and a dog can match. They would NOT demonstrate a known CSAM image....that would be illegal. So obviously it would not be the same example. And it still shows a car can match with a dog.

Not one statement I have made was incorrect if false positives are even POSSIBLE. The presence of a manual review process proves there is potential for false positives.

But you all just talked down to us, saying that is NOT how the system works, and claimed we didn't do our research when we did. Other posts showed and supported it! My ENTIRE concern was the false positives.
 