If the bot flags too many photos it spits out a fail and a human checks to make sure it’s right. But if you pass, nothing happens and you get to continue living your life in anonymity and privacy. I.e., a soulless robot does the checking, one with no memory, no prejudice, not even the slightest bit of interest in or understanding of what is in your photos except whether they match a predetermined data set of child abuse material.

If a human came to check your home regularly, that would be one thing, but having an autonomous system simply verify that you are not a child abuser is another thing entirely, and you would do well to note the difference.
There is so much wrong with this that it's hard to know where to start. Let's try a beginning: 1) In America, every person has the right to not have to "verify" their innocence. Accusers have to demonstrate a) evidence of a crime to even investigate and b) then prove guilt beyond a reasonable doubt at a trial by a jury of your peers. You change this and you are back to the worst of evil regimes; 2) The Apple system makes everyone have to continue to "verify" their innocence daily, or as often as they use the phone; 3) In a scan of one million images per year, FotoForensics found the rate of CSAM images to be 0.056%. Therefore, the danger of false positives, and the ensuing personal disaster for you and your family, is likely greater than the odds of actually catching bad guys, who have undoubtedly already fled the system. So, the only people getting "verified" are soccer moms taking pictures of their kids in the bath.
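Whether false positives would really outnumber true matches in point 3 depends on the per-image false-match rate, which Apple has not published. A minimal back-of-the-envelope sketch, using only the 0.056% prevalence figure from the post and a purely hypothetical false-match rate:

```swift
// Base-rate arithmetic for point 3 above. The 0.056% prevalence comes from the
// FotoForensics figure quoted in the post; the false-match rate is an ASSUMED,
// illustrative number, not a published one.
func matchCounts(imagesScanned: Double, prevalence: Double, falseMatchRate: Double)
    -> (trueMatches: Double, falseMatches: Double) {
    let trueMatches = imagesScanned * prevalence
    let falseMatches = imagesScanned * (1 - prevalence) * falseMatchRate
    return (trueMatches, falseMatches)
}

// Example: one million images, hypothetical false-match rate of 1 in 10,000.
let counts = matchCounts(imagesScanned: 1_000_000, prevalence: 0.00056, falseMatchRate: 0.0001)
print(counts)  // ≈ (trueMatches: 560, falseMatches: 100)
```

Whether the false matches dominate hinges entirely on that last, unpublished parameter.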
 
There is so much wrong with this that it's hard to know where to start. Let's try a beginning: 1) In America, every person has the right to not have to "verify" their innocence. Accusers have to demonstrate a) evidence of a crime to even investigate and b) then prove guilt beyond a reasonable doubt at a trial by a jury of your peers. You change this and you are back to the worst of evil regimes; 2) The Apple system makes everyone have to continue to "verify" their innocence daily, or as often as they use the phone; 3) In a scan of one million images per year, FotoForensics found the rate of CSAM images to be 0.056%. Therefore, the danger of false positives, and the ensuing personal disaster for you and your family, is likely greater than the odds of actually catching bad guys, who have undoubtedly already fled the system. So, the only people getting "verified" are soccer moms taking pictures of their kids in the bath.
Apple will review pictures internally before involving PE.

If anything, let's spare a thought for the poor souls who will have to review those datasets.
 
how to remove Apple-written background processes from my iPhone

sorry I meant to google that
 
There is so much wrong with this that it's hard to know where to start. Let's try a beginning: 1) In America, every person has the right to not have to "verify" their innocence. Accusers have to demonstrate a) evidence of a crime to even investigate and b) then prove guilt beyond a reasonable doubt at a trial by a jury of your peers. You change this and you are back to the worst of evil regimes; 2) The Apple system makes everyone have to continue to "verify" their innocence daily, or as often as they use the phone; 3) In a scan of one million images per year, FotoForensics found the rate of CSAM images to be 0.056%. Therefore, the danger of false positives, and the ensuing personal disaster for you and your family, is likely greater than the odds of actually catching bad guys, who have undoubtedly already fled the system. So, the only people getting "verified" are soccer moms taking pictures of their kids in the bath.
And that "Manual Review" process is another big can of worms from a privacy perspective. So somehow I get a bit of false positives. Suddenly its okay for a COMPLETE STRANGER to look at my pictures and verify the contents? How are people okay with this?
 
And once it gets on THEIR servers, then they can scan. They don't need to scan it while it's on my phone. People need to look towards the future, because it will be TOO LATE by the time something bad happens. This will NOT just be limited to CSAM and ONLY when you have iCloud Photos on. Think about things a year or two from now... Do that NOW, because it will be too late when it actually DOES happen.
I don't disagree that they can change things down the line, but it wouldn't be an evolution of the neural hash network they're using now. They would have to implement something entirely different. But seriously, at this point it's just conjecture and conspiracy thinking.
 
And that "Manual Review" process is another big can of worms from a privacy perspective. So somehow I get a bit of false positives. Suddenly its okay for a COMPLETE STRANGER to look at my pictures and verify the contents? How are people okay with this?
Didn't you say a second ago that you're OK with them scanning your photos when on their servers?
And once it gets on THEIR servers, then they can scan.
 
I don't disagree that they can change things down the line, but it wouldn't be an evolution of the neural hash network they're using now. They would have to implement something entirely different. But seriously, at this point it's just conjecture and conspiracy thinking.
Not necessarily; they can easily remove the "iCloud Photos" condition and then scan all your pictures whether you have that setting enabled or not. No change to the algorithm or database involved, just a change to the check that validates the setting is enabled. It could be a simple boolean variable flip in the code.
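To make the "boolean flip" idea concrete, here is a minimal hypothetical sketch. Every name in it (runCSAMCheck, isICloudPhotosEnabled, hashAndMatchPhotoLibrary) is invented for illustration; this is not Apple's actual code, only a picture of how small such a policy change could be.

```swift
// Hypothetical illustration only; not Apple's code.
func hashAndMatchPhotoLibrary() {
    // ...generate NeuralHash-style hashes and compare them against the on-device database...
}

func runCSAMCheck(isICloudPhotosEnabled: Bool) {
    // As announced: matching only runs for photos being uploaded to iCloud Photos.
    guard isICloudPhotosEnabled else { return }
    hashAndMatchPhotoLibrary()
}
```

The worry voiced above is that deleting the guard (or hard-coding the flag to true) would extend the same matching to every library, with no change to the algorithm or the database.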
 
This tech is like that superpower, but using ML instead of fiction. It doesn’t scan and catalog your photos; it looks for something specific and spits out a pass/fail. If the bot flags too many photos it spits out a fail and a human checks to make sure it’s right. But if you pass, nothing happens and you get to continue living your life in anonymity and privacy. I.e., a soulless robot does the checking, one with no memory, no prejudice, not even the slightest bit of interest in or understanding of what is in your photos except whether they match a predetermined data set of child abuse material.

If a human came to check your home regularly, that would be one thing, but having an autonomous system simply verify that you are not a child abuser is another thing entirely, and you would do well to note the difference.

And what right does Apple have to subject any of us to "checking" with no cause, no reasonable suspicion and no warrant?

I don't care if they are looking for hashes, hashish, or hash browns.

They have no right to go through my data, on my device, with no warrant.
 
And that "Manual Review" process is another big can of worms from a privacy perspective. So somehow I get a bit of false positives. Suddenly its okay for a COMPLETE STRANGER to look at my pictures and verify the contents? How are people okay with this?

You put that crap onto their servers.
The law says they can take reasonable measures to catch that crap.
Btw it’s a low-res version of your pics that the human reviewer will be able to see. It’s embedded in the security voucher.
Plus, you can’t “somehow” get multiple positives. It’s anti-vaxxer-like math to even think this.
 
You put that crap onto their servers.
The law says they can take reasonable measures to catch that crap.
Btw it’s a low-res version of your pics that the human reviewer will be able to see. It’s embedded in the security voucher.
Plus, you can’t “somehow” get multiple positives. It’s anti-vaxxer-like math to even think this.
Then how can someone get multiple matches? If I have the same photo duplicated, it will result in multiple positives.
 
Not necessarily; they can easily remove the "iCloud Photos" condition and then scan all your pictures whether you have that setting enabled or not. No change to the algorithm or database involved, just a change to the check that validates the setting is enabled. It could be a simple boolean variable flip in the code.
It still would only generate hashes of all your photos and then compare them against the database of hashes stored in iOS. Look up NeuralHash, and hashing in general. This is not scanning your photos for certain content; it's checking whether you have near-exact matches of known child porn. Even if the internal database were updated to include photos that weren't child porn, you would still have to have a near-exact match of whatever they chose to store in the iOS database. They don't have hashes of the photos you take, so your photos wouldn't be matched against anything they store in the database, unless they hashed your photos and then stored them in the database for all iOS users to compare against. Again, don't store a bunch of child porn, and there will be no undue reviewing of your content.
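For readers unfamiliar with the idea, here is a minimal sketch of what "near-exact match against a hash database" means. The type, the hash size, the threshold, and the function names below are all invented for illustration; Apple's actual NeuralHash pipeline (blinded hashes, safety vouchers, account-level thresholds) is considerably more involved.

```swift
import Foundation

// A small perceptual hash, stored as raw bytes (illustrative size: 96 bits).
struct PerceptualHash: Hashable {
    let bits: [UInt8]

    // Hamming distance: how many bits differ between two hashes.
    func distance(to other: PerceptualHash) -> Int {
        zip(bits, other.bits).reduce(0) { $0 + ($1.0 ^ $1.1).nonzeroBitCount }
    }
}

// Stand-in for the opaque hash database that would ship inside iOS.
func loadBundledHashDatabase() -> [PerceptualHash] {
    []  // placeholder
}

// A photo "matches" only if its hash is a near-exact duplicate of a known hash;
// the hash reveals nothing else about what is in the photo.
func matchesKnownImage(_ photoHash: PerceptualHash,
                       against database: [PerceptualHash],
                       maxDifferingBits: Int = 4) -> Bool {
    database.contains { photoHash.distance(to: $0) <= maxDifferingBits }
}
```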
 
Once the govt starts demanding Apple scan for things in addition to CP, the existence of this horrible “feature” takes away their previous argument that they’re being asked to develop a backdoor that doesn’t yet exist and that could be compromised by bad actors. Y’know, the same argument that allowed them to not unlock the shooter’s iPhone for the FBI a few years back.
 
It still would only generate hashes of all your photos and then compare them against the database of hashes stored in iOS. Look up NeuralHash, and hashing in general. This is not scanning your photos for certain content; it's checking whether you have near-exact matches of known child porn. Even if the internal database were updated to include photos that weren't child porn, you would still have to have a near-exact match of whatever they chose to store in the iOS database. They don't have hashes of the photos you take, so your photos wouldn't be matched against anything they store in the database, unless they hashed your photos and then stored them in the database for all iOS users to compare against. Don't store a bunch of child porn, and you'll be safe.
"Scanning" is the general term provided here. I know what hashing is. I deal with it all the time generating MD5 and SHA-1 hash for downloads for my services so people can verify what they downloaded matches what I offer. And I have looked at Apple's whitepaper about this.

Yes, generally speaking it is a "scan". Photos one by one are examined with a hash. That is essentially scanning going on. Going from one item to the next, generate a hash and check for a match. It is scanning for a match based on the hashes that are produced.

I don't want to go into the technicalities of the definition of scanning, but it is essentially a scanning process that is going on.
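For what it's worth, the download-verification hashing described above can be sketched with Apple's CryptoKit (the file path in the usage comment is a placeholder):

```swift
import Foundation
import CryptoKit

// Compute MD5 and SHA-1 digests of a file so users can verify their download.
// CryptoKit files these under "Insecure" because they are fine as checksums
// but not for security-critical uses.
func checksums(of fileURL: URL) throws -> (md5: String, sha1: String) {
    let data = try Data(contentsOf: fileURL)
    let md5  = Insecure.MD5.hash(data: data).map { String(format: "%02x", $0) }.joined()
    let sha1 = Insecure.SHA1.hash(data: data).map { String(format: "%02x", $0) }.joined()
    return (md5, sha1)
}

// Usage (placeholder path):
// let sums = try checksums(of: URL(fileURLWithPath: "/path/to/download.dmg"))
// print(sums.md5, sums.sha1)
```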
 
What amazes me is the consumer uproar against this change when people have so readily adopted smart assistants.
I disabled Hey Siri two years ago when I found out Apple had contractors listening to the recordings, and I’ve barely used it since. I’ve been staunchly opposed to smart speakers and consider them corporate spyware (which is why I didn’t understand why Apple made one, until now).

I always knew a lot of Apple’s privacy marketing was just that: marketing. But I genuinely believed they were at least better than their competitors. Not anymore. No one else is doing client-side scanning. I don’t know if I’ll be buying any new Apple products, and that’s a real disappointment. This is much worse than throttle-gate, and Apple gave in then. To all the Apple employees who were willing to protest the hiring of a guy who years ago wrote a satirical book with misogynistic passages, and to protest in-person work: do you think this is worth protesting over?
 
It still would only generate hashes of all your photos and then compare them against the database of hashes stored in iOS. Look up NeuralHash, and hashing in general. This is not scanning your photos for certain content; it's checking whether you have near-exact matches of known child porn. Even if the internal database were updated to include photos that weren't child porn, you would still have to have a near-exact match of whatever they chose to store in the iOS database. They don't have hashes of the photos you take, so your photos wouldn't be matched against anything they store in the database, unless they hashed your photos and then stored them in the database for all iOS users to compare against. Don't store a bunch of child porn, and you'll be safe.
Quite frankly, you are missing the point. The policy that governs CSAM scanning can be expanded either by a policy change at Apple or by a government court order. The CSAM software is installed on your device, which creates a backdoor that allows Apple to access anything on your device, and the user would never know what was scanned and sent back to Apple. I am not against cloud companies scanning that data server side, but I have issues with it being done device side because of the privacy and security issues that presents.
 
So it is IMPOSSIBLE for a duplicate to generate the same hash? How? What makes picture 1 and its duplicate different if it is AN EXACT DUPLICATE?

Could you please just be nice FOR ONCE? I am getting so irritated with your constant hate and comments like this towards me.

And I get the same MD5 hash when I duplicate a file EXACTLY AS IT IS. So yes, duplicates can produce the same hash.

That is essentially what hash matching does: find duplicates.
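A quick CryptoKit sketch of that last point: byte-identical data always produces an identical digest (the strings below stand in for a photo and its exact copy).

```swift
import Foundation
import CryptoKit

let original  = Data("exactly the same bytes".utf8)
let duplicate = original                      // a byte-for-byte copy

// Identical input data yields identical digests, whatever hash function is used.
let md5A = Insecure.MD5.hash(data: original)
let md5B = Insecure.MD5.hash(data: duplicate)
print(md5A == md5B)                           // true
```

Perceptual hashes like NeuralHash go further and are designed to match near-identical images, not just exact byte-for-byte copies.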
 
I disabled Hey Siri two years ago when I found out Apple had contractors listening to the recordings

I have Hey Siri disabled mostly because it made me insane that any one of the devices around the house would randomly decide it would be the one to "respond" 🤣

Apple is so bad at this stuff..
 
I disabled Hey Siri two years ago when I found out Apple had contractors listening to the recordings, and I’ve barely used it since. I’ve been staunchly opposed to smart speakers and consider them corporate spyware (which is why I didn’t understand why Apple made one, until now).

I always knew a lot of Apple’s privacy marketing was just that: marketing. But I genuinely believed they were at least better than their competitors. Not anymore. No one else is doing client-side scanning. I don’t know if I’ll be buying any new Apple products, and that’s a real disappointment. This is much worse than throttle-gate, and Apple gave in then. To all the Apple employees who were willing to protest the hiring of a guy who years ago wrote a satirical book with misogynistic passages, and to protest in-person work: do you think this is worth protesting over?
You should change your username to 'Privacy_Girl.' :D
 
I disabled Hey Siri two years ago when I found out Apple had contractors listening to the recordings, and I’ve barely used it since. I’ve been staunchly opposed to smart speakers and consider them corporate spyware (which is why I didn’t understand why Apple made one, until now).

I always knew a lot of Apple’s privacy marketing was just that: marketing. But I genuinely believed they were at least better than their competitors. Not anymore. No one else is doing client-side scanning. I don’t know if I’ll be buying any new Apple products, and that’s a real disappointment. This is much worse than throttle-gate, and Apple gave in then. To all the Apple employees who were willing to protest the hiring of a guy who years ago wrote a satirical book with misogynistic passages, and to protest in-person work: do you think this is worth protesting over?
This is still leaps and bounds ahead of what the competition is offering (for non-aficionados, that is the case).
 