If this feature is accepted, the consequence may be that sooner or later more will follow, such as:
- Scanning for other content
- Removal of the iCloud Photos limitation => scanning of all photos on the device
- Other manufacturers following suit

The feature may be well-intentioned but can be misused in the future.

Btw. Macs are also affected!
 
I still don’t think this is set in stone, but if it happens, it’s not like all the others won’t follow.
 
I still don’t think this is set in stone, but if it happens, it’s not like all the others won’t follow.

Yes, especially as more phones and computers are getting dedicated ML processors. There's nothing to stop Google or Samsung from doing exactly the same thing. Hands-off hash comparison on the device. They're the ones that will be watching Apple and looking to see how well their system is working before deciding their own policies and objectives.

If Apple can do it, so can others.
 
Is everyone in this sub a pedophile or something?

Well done! You said it! If you’re against the surveillance you must be a pedophile!!! Bravo! But then again, who wants all their pictures to be scanned? Who wants to be watched and monitored all the time? You like that sort of stuff? You like to be watched? That’s so sick and perverted. Please, people like that must be reported, preferably automatically.

You do understand this is much bigger than some CSAM images? This is about privacy and an increase in surveillance. It’s about misuse of power. It’s worth mentioning that Apple doesn’t plan to release this feature in Europe because it would be highly illegal there.
 
Here’s how it works: Hashes of the known CSAM are stored on the device, and on-device photos are compared to those hashes. The iOS device then generates an encrypted “safety voucher” that’s sent to iCloud along with the image. If a device reaches a certain threshold of CSAM matches, Apple can decrypt the safety vouchers and conduct a manual review of those images. Apple isn’t saying what the threshold is, but has made clear that a single image wouldn’t result in any action.

Source: https://www.engadget.com/apple-chil...tdxSv1YwoaYa4qV9Sjii119t-BzdRsdUcvT0eS7NmldWT

Bottom line: It will be reported to law enforcement. Just imagine the FBI knocking on your door with a search warrant to go through your iPhone because of a match caught during CSAM scanning.
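
For what it's worth, here is a minimal Swift sketch of the flow described above, under heavy assumptions: it is not Apple's code, the hash is a plain string instead of a NeuralHash, the cryptographic construction of the voucher is omitted entirely, and the names (SafetyVoucher, CSAMMatcher, reportingThreshold) and the threshold value are all made up for illustration.

```swift
import Foundation

// Simplified sketch, not Apple's implementation. In the real design the
// voucher is built so that match information is unreadable until the
// account crosses the threshold; that cryptographic layer is omitted here.

struct SafetyVoucher {
    let photoID: UUID
    let matchedKnownHash: Bool   // hidden from the server in the real design until the threshold is met
}

struct CSAMMatcher {
    let knownHashes: Set<String>   // stand-in for the on-device database of known-image hashes
    let reportingThreshold: Int    // Apple has not disclosed the real value

    // One voucher accompanies every upload, whether or not it matches.
    func voucher(photoID: UUID, photoHash: String) -> SafetyVoucher {
        SafetyVoucher(photoID: photoID, matchedKnownHash: knownHashes.contains(photoHash))
    }

    // Only when enough matches accumulate would manual review even be possible.
    func accountExceedsThreshold(_ vouchers: [SafetyVoucher]) -> Bool {
        vouchers.filter { $0.matchedKnownHash }.count >= reportingThreshold
    }
}

// Hypothetical usage: one non-matching photo never gets near review.
let matcher = CSAMMatcher(knownHashes: ["known-hash-1"], reportingThreshold: 30)
let v = matcher.voucher(photoID: UUID(), photoHash: "ordinary-holiday-photo-hash")
print(matcher.accountExceedsThreshold([v]))   // false
```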

CSAM images are ones that have been pulled from convicted pedophiles’ phones, computers, etc. and added to a database as hashes. If these EXACT images are matched via this new system, the image will be flagged. So I have to ask the question: if you happen to have EXACT matches for CSAM images that were identified on a pedophile’s device, how is it you have a problem with that?
 
To the people who think those who don't have kiddie porn shouldn't be worried: first of all, I can almost guarantee you that those who actually have that stuff don't have it on their phones, especially not stored in iCloud. So this scanning will be moot because they won't actually catch any (or at least VERY few) sickos.

Secondly, there are a lot of gray zones. What if you have photos of your kids in a bathing suit? Or naked in the bathtub? You'll be flagged for manual review, and while they might (hopefully) realize it's wrong and not flag you to the authorities, a random person at Apple will then have seen your possibly naked child, or at the very least your private photos. How are people OK with that??
 
CSAM images are ones that have been pulled from convicted pedophiles’ phones, computers, etc. and added to a database as hashes. If these EXACT images are matched via this new system, the image will be flagged. So I have to ask the question: if you happen to have EXACT matches for CSAM images that were identified on a pedophile’s device, how is it you have a problem with that?
To simply put. I don’t want Apple scanning my iPhone whether it’s via AI or Hash. I want my privacy to be protected and to keep to myself. That is my fundamental human right.
 
CSAM images are ones that have been pulled from convicted pedophiles’ phones, computers, etc. and added to a database as hashes. If these EXACT images are matched via this new system, the image will be flagged. So I have to ask the question: if you happen to have EXACT matches for CSAM images that were identified on a pedophile’s device, how is it you have a problem with that?

No no no… It’s not about exact matches. It’s about content which can be seen as similar to those images. The AI has been trained with approx. 200,000 images. The information gathered from that will be used to find possible new images. If an image falls within a set range of the neural hash, then it will be flagged and manually checked.
 
If you don't see a problem with this technology, Apple is definitely playing you. Pretty sure you don't want Apple looking at/scanning/analyzing/identifying your wife's pictures. This CSAM stuff needs to be shut down. Apple needs to respect our privacy, period. Apple needs to figure out another way if they are really interested in catching pedophiles... God knows for what reason.

They can't do that. It's a hash of known images. They couldn't compare it against an image your wife has just taken on her phone. They can't actually see the photo; it's just a comparison of two numbers that are generated by an algorithm. I don't think people understand the technology here. Maybe Apple needs to explain it better.
 
...Specifically, employees are concerned that governments could force Apple to use the technology for censorship by finding content other than CSAM.

I said it on another thread, but whether Apple uses this existing technology to scan for CSAM or not has no bearing on Apple's capability or intent (current or future) to use it for any other purpose. Protesting and convincing Apple not to undertake CSAM scanning won't alter that at all. The genie is out of the bottle - the technological capability exists and has done for many years now. This 'new' fear of a government forcing Apple to scan for censored images could just as easily have arisen 6 years ago when iCloud Photo Library was first launched; CSAM changes nothing here.
 
CSAM images are ones that have been pulled from convicted pedophiles’ phones, computers, etc. and added to a database as hashes. If these EXACT images are matched via this new system, the image will be flagged. So I have to ask the question: if you happen to have EXACT matches for CSAM images that were identified on a pedophile’s device, how is it you have a problem with that?
It's not that simple.
Photos can be saved under different file names. It is also possible to save only a section of a photo. If Apple considers such possibilities, then more has to happen than just matching a hash.
Slight changes to photos, like a small filter, can confuse such systems.



 
What I'm not getting is when this is said…

"Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC."

If Apple aren't "looking at your photos" and they're only looking at a hash that's coming from an iCloud user's end, then the only thing Apple could possibly check that against is another hash. So in reality Apple could basically be checking anything against anything. The only thing they can be sure of is who they are reporting the matches to, in this case supposedly NCMEC.

And I really can't imagine that the human reviewer at Apple (they are an Apple employee and not a 3rd party like those Siri recording reviewers… right?!?!) is shown the actual matched image each time.

So what does "human review" actually mean??
Is it too much to ask that you read the published material on how it works before getting excited? The human reviewer gets access to the safety voucher of the user’s image (not the image itself) which contains a ‘visual derivative’ of the user’s uploaded image.

What that is and how high resolution it is, we don’t know, and more information would help a lot, but when you look at the stated design of the process, there’s no need for much visual clarity on those derivatives. The Apple reviewer wouldn’t need to be tasked with, for example, verifying the photo contains a minor - that level of interpretation has already been done at NCMEC on the original photo. Also the reviewer can’t match the derivative to the original CSAM, because Apple won’t have the original. The reviewer’s job can only be to check for hash collisions by looking at the collection of successful hash matches and verify that the collection looks visually suspicious enough to pass on to NCMEC for proper like-for-like matching. A tiny blurry thumbnail would suffice for that purpose, thus protecting innocent users’ privacy, the Apple reviewer’s mental health, and the original victims. Time will probably tell if they did the obvious thing here.
 
This is what happens when you try to be transparent with people.
At this time, I wouldn't rule out Apple telling people that they will remove this scanning while internally still doing it in some way.
 
This is what happens when you try to be transparent with people.
At this time, I wouldn't rule out Apple telling people that they will remove this scanning while internally still doing it in some way.
Apple wouldn't jeopardize themselves by doing that. If they did… a huge lawsuit would be coming.
 
No no no… It’s not about exact matches. It’s about content which can be seen as similar to those images. The AI has been trained with approx. 200,000 images. The information gathered from that will be used to find possible new images. If an image falls within a set range of the neural hash, then it will be flagged and manually checked.
Where did you hear that?
My understanding is that they have a particular list of photos that no one should have on their phones. These photos are turned into a specific number (hashed) via an algorithm. It's this number that has to match the number generated by any image on your phone.

This is not the same as Google scanning a photo and working out what's in it via machine learning (something that probably every cloud platform does in some form or fashion).

So there is zero possibility that any photo you take personally will match anything on that CSAM database.
I think the idea is to stop people spreading the actual known CSAM images around.
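
To put the "compare two numbers" point into code: the sketch below is only an illustration of the idea, using an ordinary cryptographic hash (SHA-256 via CryptoKit) to turn a file into a number and check it against a list. Apple's actual NeuralHash is a perceptual hash, not SHA-256, and the file path and empty database here are placeholders I made up.

```swift
import Foundation
import CryptoKit

// Illustration of "hash and compare" only. SHA-256 is NOT what Apple uses
// (NeuralHash is perceptual), but the matching step has the same shape.

func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }   // hex-encode the 32 digest bytes
        .joined()
}

// Placeholder for the on-device database of known-image hashes.
let knownHashes: Set<String> = []

if let photo = try? Data(contentsOf: URL(fileURLWithPath: "/tmp/example.jpg")) {
    if knownHashes.contains(fingerprint(of: photo)) {
        print("exact match against the database")
    } else {
        print("no match - an ordinary personal photo lands here")
    }
}
```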

Also, fundamentally, no one fully "owns" their phone in the way you think you do. The OS and software are licensed for you to use. The very fact that you're using iCloud at all implies Apple can receive data from your phone, etc.
Same with Gmail and all other services. So I think everyone is out of luck if they want mainstream products to let them have data that is completely private from the companies that provide those services. In the end you have to trust those companies with your stuff. It's that simple.

Or go run some Linux phone or something where you control it all...
 
People are really taking this seriously and planning on leaving Apple. They don't care about iOS 15 anymore... They don't care about Apple's fall lineup.
Actually it looks like you are taking it way too seriously. You might want to chill a bit and stop all the crazy sensationalistic nonsense you are spouting. Apple have held fast to their privacy stances and I see no indication of that changing.
 
I think that Apple should stop implementing these kinds of techniques. It doesn't matter how you look at it, it's still a backdoor, and this creates an opportunity for authorities to demand that Apple "scan" for other things on a person's phone, and Apple might find itself in a position where it can't refuse because

a. it's technically possible for them to comply
b. they can be forced to accept these requests because they need to follow the law of that specific country

Even though Apple's intentions are noble, they are opening Pandora's box with this.
 
No no no… It’s not about exact matches. It’s about content which can be seen as similar to those images. The AI has been trained with approx. 200,000 images. The information gathered from that will be used to find possible new images. If an image falls within a set range of the neural hash, then it will be flagged and manually checked.

That's not how it works. Have a read up, Apple have put out a technical white paper on it.

It's not that simple.
Photos can be saved under different file names. It is also possible to save only a section of a photo. If Apple considers such possibilities, then more has to happen than just matching a hash.
Slight changes to photos, like a small filter, can confuse such systems.




File names don't come into it. Also, Apple's implementation compensates for crops, tweaks and resizes to the original file, still using hashes.
 
They can't do that. It's a hash of known images. They couldn't compare it against an image your wife has just taken on her phone. They can't actually see the photo; it's just a comparison of two numbers that are generated by an algorithm. I don't think people understand the technology here. Maybe Apple needs to explain it better.
They do content matching. Apple's NeuralHash algorithm extracts features from the picture and computes a hash in a way that ensures that perceptually and semantically similar images get similar fingerprints. Apple so far has not said how flexible this system is, but its aims go well beyond traditional pixel-based hash matching.
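
As a toy example of what "similar images get similar fingerprints" means in practice: perceptual hashes are typically compared by how many bits differ (Hamming distance) rather than for exact equality. The 64-bit values and the threshold below are invented for illustration; nothing here reflects NeuralHash's actual internals, which Apple has not published in full.

```swift
// Toy illustration of perceptual-hash matching. The hash values and the
// threshold are made up; this is not NeuralHash.

func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount   // number of differing bits
}

// Pretend these are 64-bit perceptual hashes of an original image and a
// slightly filtered copy of it.
let originalHash: UInt64 = 0b1011_0110_0100_1110_0001_1010_1111_0000
let filteredHash: UInt64 = 0b1011_0110_0100_1110_0001_1010_1111_0010

let matchThreshold = 4   // assumed cut-off, not a real figure
if hammingDistance(originalHash, filteredHash) <= matchThreshold {
    print("treated as a match despite the small edit")   // distance is 1, so this runs
}
```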
 