"Scanning" is the general term provided here. I know what hashing is. I deal with it all the time generating MD5 and SHA-1 hash for downloads for my services so people can verify what they downloaded matches what I offer. And I have looked at Apple's whitepaper about this.

Yes, generally speaking it is a "scan". Photos are examined one by one: go from one item to the next, generate a hash, and check for a match. It is scanning for a match based on the hashes that are produced.

I don't want to go into the technicalities of the definition of scanning, but it is essentially a scanning process that is going on.
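
To make that hash-and-compare loop concrete, here is a rough Python sketch of exact-match scanning, the kind used for verifying downloads with MD5 or SHA-1 digests. The folder name and the known-hash entry are made up for illustration, and Apple's system uses perceptual NeuralHash values rather than cryptographic digests like these.

```python
# Illustrative only: exact-match hashing, as used for download verification.
# The folder path and the known-hash entry below are hypothetical.
import hashlib
from pathlib import Path

KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Hash one file in 1 MiB chunks and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(folder: Path) -> list[Path]:
    """Go file by file, hash each one, and collect exact matches."""
    return [p for p in folder.iterdir() if p.is_file() and sha256_of(p) in KNOWN_HASHES]

folder = Path("Photos")  # hypothetical folder
if folder.is_dir():
    print(scan(folder))
```

With a cryptographic digest like this, changing a single byte produces a completely different hash, which is exactly why a system meant to catch re-saved or lightly edited images cannot rely on a strict 1:1 match.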
OK, so you know what hashing is. Then you should know that you won't be flagged unless they store hashes of your photos in the internal iOS database. Unless you are creating child porn that is then widely disseminated, stored by watch agencies, and sent to Apple so that the internal iOS database is updated in some future version, your photos won't flag anything.
 
This is part of what gets me about all of this. It's an opt-out "feature" for the criminals. It's going to do very little to help slow them down.
It will catch the dumb ones and occasional perpetrators. Still enough to maybe get IMEIs off the metadata and continue the search.
 
Quite frankly you are missing the point. The policy that governs CSAM scanning can be expanded either by a policy change at Apple or by a government court order. The CSAM software is installed on your device, which creates a backdoor that allows Apple to access anything on it, and the user would never know what was scanned and sent back to Apple. I am not against cloud companies scanning that data server side, but I have issues with it being done device side because of the privacy and security problems that presents.
CSAM software is not installed on the device. A database of hashes of known child abuse images is stored within iOS. You're fundamentally misunderstanding how this works.
 
  • Disagree
Reactions: ececlv
OK, so you know what hashing is. Then you should know that you won't be flagged unless they store hashes of your photos in the internal iOS database. Unless you are creating child porn that is then widely disseminated so that the internal iOS database is updated in some future version, your photos won't flag anything.
Then why even have a manual review process in the first place? Why even state that there is a false-positive rate if there is no chance something could be a mistake? If this was true 1:1 hashing (which it is not), you would hear a different side of me.
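
To put rough numbers on the false-positive question: the per-image false-match rate, library size, and match threshold below are assumed values for illustration only, not Apple's published parameters.

```python
# Back-of-envelope sketch with assumed numbers, not Apple's actual figures.
from math import exp, factorial

p = 1e-6     # assumed chance one innocent photo falsely matches a known hash
n = 20_000   # assumed number of photos in a library
lam = n * p  # expected false matches, Poisson approximation of Binomial(n, p)

def prob_at_least(k: int, terms: int = 60) -> float:
    """P(at least k false matches); Poisson tail summed until terms vanish."""
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k, k + terms))

print(f"expected false matches per library: {lam:.3f}")
print(f"P(>= 1 false match):    {prob_at_least(1):.2e}")   # small but real
print(f"P(>= 30 false matches): {prob_at_least(30):.2e}")  # astronomically small
```

Under assumptions like these, single false matches are expected somewhere across a large user base, while many false matches on one account are vanishingly unlikely; that gap is what a match threshold and a human review step are meant to cover.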
 
  • Like
Reactions: Grey Area
Then why even have a manual review process in the first place? Why even state that there is a false-positive rate if there is no chance something could be a mistake? If this was true 1:1 hashing (which it is not), you would hear a different side of me.

I would get a vaccine if it was 100% effective.
 
CSAM software is not installed on the device. A database of hashes of known child abuse images is stored within iOS. You're fundamentally misunderstanding how this works.
It is scanning software that is installed with iOS 15. That software is going to receive constant updates with new hashes and other parameters to look for. This scanning is governed by Apple policy, which can be expanded.
 
Then why even have a manual review process in the first place? Why even state that there is a false-positive rate if there is no chance something could be a mistake? If this was true 1:1 hashing (which it is not), you would hear a different side of me.
We don't have a legal framework that could handle automatic reporting without human review.
 
Then why even have a manual review process in the first place? Why even state that there is a false-positive rate if there is no chance something could be a mistake? If this was true 1:1 hashing (which it is not), you would hear a different side of me.
It is near-exact matches, not true 1:1, hence the ridiculously low false positive rate. Manual review protects users in the statistically unlikely chance that this does flag them, in which case the photo(s) in question are disregarded and you're not deactivated and reported. Again, if you're uploading to iCloud Photos anyway, why would you assume that your photos are not already able to be looked at manually? If they already have that ability, then this diminishes your privacy how exactly?
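
A minimal sketch of what "near-exact, not true 1:1" can mean in practice: two hashes count as a match when they differ in no more than a few bits (a Hamming-distance threshold). The 64-bit values and the tolerance below are invented for illustration; NeuralHash actually works on learned image features, not these toy integers.

```python
# Toy near-exact matching via a Hamming-distance threshold (assumed values).
MAX_DISTANCE = 3  # assumed tolerance, in differing bits

def hamming(a: int, b: int) -> int:
    """Count the bit positions where two 64-bit hashes differ."""
    return bin(a ^ b).count("1")

def is_match(photo_hash: int, known_hashes: set[int]) -> bool:
    return any(hamming(photo_hash, k) <= MAX_DISTANCE for k in known_hashes)

known     = {0xD1CE_B00C_AFEF_00D5}  # hypothetical database entry
edited    = 0xD1CE_B00C_AFEF_00D4    # one bit away: still a match
unrelated = 0x1234_5678_9ABC_DEF0    # dozens of bits away: no match

print(is_match(edited, known), is_match(unrelated, known))  # True False
```

The tolerance is what lets re-encoded or resized copies still register, and it is also where the (tiny) false-positive rate comes from.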
 
We don't have a legal framework that could handle automatic reporting without human review.
So that means there is a chance for some images to be falsely flagged. Which means there is a chance some random person (could be me, could be you) will have their personal pictures reviewed by someone they don't know. And you won't even know it happened.
 
So it's trespassing then? ;)

Waiter, there’s some Apple in my iPhone.

Specifically an inanimate process with no way to phone home if I don’t allow it to. It is “Apple” as much as Spotlight indexing is “Apple”.

And the Terms & Conditions will probably be updated to ask for consent for this inanimate process.
 
My anonymous MacRumors account is basically the only social media I use. Still have to use WhatsApp though; can’t get my friends to switch to Signal.
MR is the only social I do as well. I refuse to use FB, Instagram, Twitter and the like.

I am shocked you are using WhatsApp, given your proclivity towards privacy. I will leave that alone for another time. :D
 
It is near-exact matches, not true 1:1, hence the ridiculously low false positive rate. Manual review protects users in the statistically unlikely chance that this does flag them, in which case the photo(s) in question are disregarded and you're not deactivated and reported. Again, if you're uploading to iCloud Photos anyway, why would you assume that your photos are not already able to be looked at manually?
I don't hear about people going around looking at other people's iCloud Photos. Again, you typically need warrants for that kind of search.
 
And who is going to check the ones who provide the hashes that the photos are being compared against? Who checks them? I'm not worried about Apple; I'm worried about whatever government gets its hands on deciding what checks are applied. Suddenly that allows them to search for anything or anybody.

Man, privacy was what kept me away from Android. Now Apple does this...?
 
It is near-exact matches, not true 1:1, hence the ridiculously low false positive rate.
He doesn’t understand that the wiggle room allowed by the trained AI in matching hashes lets it catch slightly edited versions of an offending CSAM photo but not an unrelated photo.
He just doesn’t get it.
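
To illustrate that wiggle-room point, here is a toy average-hash over a 4x4 grayscale grid: one bit per pixel, set by whether the pixel is brighter than the image's mean. The pixel values are made up, and this is nothing like NeuralHash's learned features, but it shows why a slightly edited copy lands next to the original while an unrelated photo lands far away.

```python
# Toy perceptual hash: one bit per pixel ("brighter than the mean?").
# Pixel values are invented; real systems hash resized, filtered images.
def average_hash(pixels: list[int]) -> list[int]:
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def distance(h1: list[int], h2: list[int]) -> int:
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [10, 200, 30, 220, 15, 210, 25, 230, 12, 205, 28, 225, 18, 215, 22, 235]
edited    = [p + 5 for p in original]  # slight global brightness edit
unrelated = [100, 90, 110, 95, 105, 98, 102, 97, 99, 101, 103, 96, 104, 94, 106, 93]

print(distance(average_hash(original), average_hash(edited)))     # 0 bits differ
print(distance(average_hash(original), average_hash(unrelated)))  # 14 bits differ
```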
 
So that means there is a chance for some images to falsely be flagged. Which means there is a chance some random person (could be me could be you) that will have their personal pictures reviewed by someone you don't know. And you won't even know it happened.
I own a DSLR and analog cameras for family events and smut; if they want to get pictures of my restaurant menus, so be it.
 
  • Like
Reactions: Ethosik
Waiter, there’s some Apple in my iPhone.

Specifically an inanimate process with no way to phone home if I don’t allow it to. It is “Apple” as much as Spotlight indexing is “Apple”.
Please don't engage in conversation with me. We went in circles yesterday and you're incapable of thinking about anything that opposes the narrative in your head.

My comment was meant to be a joke as a response to Apple Robert's previous post.
 