But people here are against CSAM detection just being installed on the device, even if it's dormant. They are afraid it can be misused or extended.

I'm against it just being installed as well. I trust Apple to reject misuse for now, though.

You have the exact same problem with the iCloud backup software.

But there's a primary difference between the intentionality behind the backup software and the intentionality of the CSAM tool.

If the government can get images into the CSAM database, they also have the power to get the warrant.

Exactly. But Apple is not getting a warrant to place a scanning tool on our devices that could report back to them and then to the authorities. If the police feel the need to get a warrant to do that, have at it.

My argument is that there is far worse technology already in iOS which, with small changes, would make a much better tool for oppressive governments than the CSAM detection tool.

We might just have to disagree there. There is no other technology built into iOS whose sole intention is to find and report criminal activity, even if Apple cannot see it at first.

Also, Apple isn't doing any scanning if iCloud Photo Library is turned off.

Right, but we shouldn't have to turn off a highly valuable service to avoid being lumped into a group that deserves being found and reported.
 
You've distilled it perfectly.

Instead of Apple continuing to fight the good fight and hopefully pushing these debates about privacy and surveillance back up to where they should be had (in the open, in the public arena, in political debate...),

they are punting in a way that's good for them.

This change will allow them to comply with surveillance agencies and state entities who want to "see everything" (or at least whatever they are looking for, at minimum) while Apple gets to step fully out of the loop and say "we can't see any of it! What you send to us is fully E2EE."

This is the huge thing, everyone.

Apple is getting out of the way and clearing the path to do what surveillance agencies want.

This is the start of them fully punting on the original "Privacy" schtick.
I think this is it.

Apple wants to be seen as a privacy focused company. That means being able to say "We never look at your data"- including in the cloud. The alphabet agencies and NCMEC don't want any E2EE in iCloud, on device, or in messaging because that interferes with their ability to solve crimes (and conduct illegal surveillance programs on US citizens). They put pressure on Apple. What to do...?

With on-device scanning, everyone can be happy. Apple gets to keep E2EE (which isn't really E2EE anymore) and claim they never look at your stuff, even in iCloud, which will be true: your own device does it, which is far worse. The alphabet agencies and NCMEC get a backdoor into your device, which is even better than one in the cloud, while Apple quietly looks the other way. No more need for extended posturing against the FBI like with San Bernardino.

I guess this shows that Apple doesn't really care about privacy at all, only the appearance of it. They figured this way would give them the most PR. Joke's on them though, because their creepy system is being rightly called out for what it is, and their privacy reputation has been destroyed. Unfortunately, if they are rolling this out because of government pressure, no amount of outrage from us as consumers will change their behavior, short of a massive drop in iPhone sales in the holiday quarter. Sadly, that is unlikely to happen.
 


Folks -- this is it ^^^

@huge_apple_fangirl has very succinctly summarized it to a tee

Beautifully done
 
Where does the manual review process kick in, though?

These are all the steps spelled out, go to Step 7 for manual/human review.

Step 0
- Apple releases iOS15
- iOS 15 contains the hashes database, downloaded in full to every iPhone
- subsequent updates to the database can only happen as an iOS update, so the system cannot be abused for time sensitive searches (like targeting dissidents that were at a march “last week”)

Step 1
- the user activates iCloud Photos, basically surrendering his photos to Apple servers, like calling the police and saying “I am going to bring the whole content of my home to the local police station”
- then and only then the local scanning process begins
- said scanning process by itself has no way to phone home to Apple HQ

Step 2
- the scanning process creates fingerprints of the user photos (the photos that the user has already promised to surrender to Apple servers, not photos that the user “hasn’t shared with anyone” like some privacy advocate said, not a contradiction to “what happens on iphone stays on iphone”)

Step 3
- said fingerprints are compared by a super smart trained AI to the fingerprints in the database
- the AI is needed not to look at the content of the picture (the content is no longer part of the equation since Step 2) but to have some leeway, some wiggle room to be able to catch slightly modified (cropped, etc.) versions of the known offending picture
- the system is engineered to only match the known offending old photos from the NCMEC repository, it can’t look for new/personal children-related content
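To make the "wiggle room" idea concrete, here is a toy sketch in Python. This is my own illustration, NOT Apple's actual NeuralHash or matching code; it just shows that matching means comparing hash fingerprints with a small tolerance, never looking at image content.

Code:
# Toy fingerprint matching: compare hashes, not image content (illustration only).
# Pretend each photo has already been reduced to a 96-bit fingerprint by some
# neural network; here the fingerprints are just integers.

KNOWN_DB = {
    0x8F3A9C2E1B7D44AA01E2F309,   # stand-ins for hashes of known NCMEC photos
    0x123456789ABCDEF012345678,
}

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known(fingerprint: int, tolerance: int = 4) -> bool:
    """True if the fingerprint is within a few bits of any known hash.
    The tolerance is the "wiggle room" that catches cropped/re-encoded copies."""
    return any(hamming(fingerprint, known) <= tolerance for known in KNOWN_DB)

print(matches_known(0x8F3A9C2E1B7D44AA01E2F309))   # exact copy -> True
print(matches_known(0x8F3A9C2E1B7D44AA01E2F308))   # slightly altered copy -> True
print(matches_known(0xDEADBEEFDEADBEEFDEADBEEF))   # unrelated photo -> False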

Step 4
- the output of the matching process is a label called a “security voucher”, attached to every photo
- this label only says 2 possible things
1) YES, this photo matches a known CSAM photo from the NCMEC repository
2) NO, this photo doesn’t match a known photo from the NCMEC repository
- at this stage though this label is still cryptographically secured GIBBERISH, no human on Earth can read it, not even someone having physical access to your phone
- embedded in the security voucher there’s also a low-res version of the user’s photo
- the label will remain gibberish till the end of time if Step 6b (see below) doesn’t happen
- (is a search whose output is gibberish on a post-it note that just sits there till the end of time an actual search?)
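If it helps to picture what a voucher "is", here is a minimal sketch of it as a data structure. The field names are mine, not Apple's, and the real format is more involved, but the point stands: everything in it is opaque on the device.

Code:
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyVoucher:
    """Illustrative only: the opaque envelope attached to each uploaded photo."""
    encrypted_payload: bytes   # low-res visual derivative + match info, unreadable as-is
    crypto_header: bytes       # key material that only becomes useful server-side,
                               # and only once enough positive vouchers accumulate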

Step 5
- the user uploads the photos to iCloud Photos just like HE promised to do in Step 1
- now and only now the company known as Apple Inc. is involved in any way
- at this time, Apple Inc. can do one thing and one thing only: count the positive-match security vouchers
- now 2 things can happen
1) the number of positive security vouchers is below the threshold —> go to step 6a
2) the number of positive security vouchers reaches or exceeds the threshold —> go to step 6b

Step 6a
- the security vouchers remain unreadable gibberish till the end of time, well after we are all dead
- not even Tim Cook, the Pope, God, or Thanos with all the stones can crack their multi-factor encryption; it's like Grampa Abe Simpson's Flying Hellfish treasure in that classic Simpsons episode: you need a set number of keys to open the vault. That's why the "threshold" system is not a policy decision that could be changed easily by Apple Inc., but a technical safeguard built into the system: no one could ever end up in Step 6b and Step 7 because of a single unlucky wrong match (or "false positive")
- Apple Inc. says that a good ballpark estimate of the chance of getting enough false positives to surpass the threshold is 1 in 1 trillion per year; some people dismiss this as "yeah, how do I know they're not being too optimistic," but it should be pointed out that Apple Inc. has given 3 external experts some access to the system, and that even if that estimate were wrong by tenfold (1 in 10^11 instead of 1 in 10^12) it would still be an extremely rare event (one innocent account flagged every 117 years); moreover, the order of magnitude of the estimate is perfectly plausible, since we're talking about the compound probability of multiple rare events (as an example, it would be easy to get to 1 in 10^12 as the compound probability of six 1-in-10^2 events; see the rough numbers below)
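Here is that back-of-the-envelope arithmetic written out (the 850-million-account figure and the per-event odds are assumptions I picked for illustration, not Apple's published numbers):

Code:
# Rough sanity check of the "1 in 1 trillion per year" claim (illustration only).

per_event = 1e-2          # assume each independent safeguard fails ~1 time in 100
events = 6                # six such independent failures must compound...
compound = per_event ** events
print(compound)           # 1e-12, i.e. 1 in 1 trillion

# Even if Apple were off by a factor of 10:
pessimistic = compound * 10         # 1 in 10^11 accounts per year
accounts = 850_000_000              # assumed number of iCloud accounts
flagged_per_year = pessimistic * accounts
print(1 / flagged_per_year)         # ~117 years between innocent accounts flagged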

Step 6b
- if the number of positive security vouchers is above the threshold, finally Apple Inc. has enough cryptographic keys to decrypt the positive security vouchers
- now and only now said security vouchers stop being gibberish; basically, any user that only reaches Step 6a has his privacy completely preserved (compare this to server-side searches of decrypted data on servers, which equally invade the privacy of both innocent and not-so-innocent users)
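The "set number of keys to open the vault" idea is a standard cryptographic building block called threshold secret sharing. Below is a minimal Shamir-style sketch of that one idea (a generic textbook construction, not Apple's actual protocol, which layers this with private set intersection and more): with fewer shares than the threshold you learn nothing, and with enough shares the key pops out.

Code:
# Minimal Shamir secret sharing over a prime field (textbook illustration only).
# Requires Python 3.8+ for the modular inverse via pow(x, -1, m).
import random

PRIME = 2**127 - 1   # a large prime; the demo secret must be smaller than this

def make_shares(secret: int, threshold: int, n_shares: int):
    """Split `secret` into points on a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0; only correct with >= threshold shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

account_key = 123456789   # stands in for the key that unlocks the vouchers' payloads
shares = make_shares(account_key, threshold=10, n_shares=30)   # one share per voucher

print(recover(shares[:10]) == account_key)   # True: threshold reached, vault opens
print(recover(shares[:9]) == account_key)    # False (with overwhelming probability):
                                             # below threshold, still gibberish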

Step 7 - HUMAN REVIEW
- now and only now the positive security vouchers, no longer gibberish, can be looked at by a human reviewer at Apple Inc. HQ
- the human reviewer will be able to look at a low-res version of the user's supposedly offending photo
- if the low-res photo is something innocuous like a sunset, a bucket of sand, a cat, a goldfish, etc., the human reviewer will acknowledge the system made an error and discard it, with no automatic calls to the cops (and remember: the matching is based on hashes, not content, so a falsely matched photo won't necessarily be children-related; that's not the kind of similarity the system would catch, so don't worry about the pics of your kids, they have no more probability of being accidentally flagged than any other subject)
- if the low-res photo actually looks like actual kiddie p0rn (that's got to be the worst job on Earth, and these reviewers are sometimes psychologically scarred), then Apple Inc. will disable your iCloud account and maybe report you or maybe not (depending on the follow-up internal investigation)

(phew that was long…imagine people trying to encapsulate the complex implications of all of this in a buzzword, a meme with dogs, or a simplistic hot take or an imprecise real-world analogy)

Users with 0 matches (likely 99%+ of iCloud accounts) will never go past Step 6a and never reach human review.

Users with 1 match will never go past Step 6a and never reach human review.

Users with 2 matches will never go past Step 6a and never reach human review.

Users with n-1 matches (with “n” being the unknown threshold Apple has chosen to use) will never go past Step 6a and never reach human review.


Hope this long recap post helps as many people as possible make an informed evaluation of how this works and what the actual privacy implications are.

I dedicate this to the “it’s not about the tech” crowd who’d wish to bury all this under blind buzzwordy outrage.
 
For me the main issue isn’t about the hashing, it’s that for the report to be verified an Apple employee will review the photos to confirm they are a match.

If they can do this to confirm the validity of the report, what is stopping them from checking other photos?

And not just Apple employees, what’s stopping other people building a back door into my phone and doing the same?

In their own words, once a back door is there, it’s there for everyone and not just the “good” guys.
 
I'm not worried about this and would love stronger tools to use in keeping my child safe online. Apple has proven at every turn to be trustworthy and is in fact alone in that regard among the tech giants.
 
So there is no way the CSAM database will get updated? What’s in there now will remain in there, no more or no less content, for years? If it gets updated, then yes it is externally controlled.

Apple Inc. can update the database, but only by updating iOS. You can block iOS updates; you actively approve iOS updates.
And they still can't remotely see what the process is doing or what it is producing (security vouchers), they can't remotely poll it, and they can't control the scanning process. The scan results are forever separated from Apple Inc. while on the local device.
 
I'm not worried about this and would love stronger tools to use in keeping my child safe online. Apple has proven at every turn to be trustworthy and is in fact alone in that regard among the tech giants.

This shows a breathtakingly shallow understanding (if we can even use the word "understanding") of what is going on here.
 
For me the main issue isn’t about the hashing, it’s that for the report to be verified an Apple employee will review the photos to confirm they are a match.

If they can do this to confirm the validity of the report, what is stopping them from checking other photos?

See my long post above.


A cryptographic Fort Knox vault that requires multiple keys to open is what's stopping them.
 
Why are you defending Apple so desperately? Have some dignity, how the technology works is not even the point.
How the technology works is not even the point?? It's PRECISELY the point. How it works determines how secure it is, how it protects our privacy, and literally everything else significant to this conversation.
 
I said in a prior post these system should prevent transmission of CSAM instead of blurring it. Just block it from being sent or uploaded.

Even better, the CSAM system should destroy the perverted image.

Even better than that, if the CSAM system detects a big library of these disgusting filthy images then the phone should explode and kill the nonce.
We have people debating Apple doing searches without a warrant and you want corporations to be Judge Dredd.
 
I don’t think you read any, or much, of the material on how it works.
When someone builds a nuclear warhead, however it works doesn’t matter. It kills people.
Apple creates this advanced scanning tool. However fancy it is doesn't matter. What matters is that it SCANS private photos.
Please share, I would like to learn more.
I doubt it, based on your responses.
Rather than just reacting based on no facts, get informed. You might be surprised by what you learn. Your absolutist statement is clearly in denial of how technology works. Check out the alternatives other tech companies are doing before condemning Apple’s approach. I would also encourage you to learn more about the problem that all companies are trying to resolve.

I have to wonder how many people who are reacting so strongly to what Apple is doing also use Facebook, Google, etc. and are oblivious to the truly invasive techniques those companies use to tackle the problem. People, get informed.
Do you know the meaning of the word "proactive"? Do you know what a "proactive approach" means? Do you know what "Pandora's box" means? Why is it that on every recent (2020 to 2021) Facebook-related post there is always a group of people saying "who uses Facebook nowadays" or just bashing Facebook one way or another?
"Proactive" means "act based on history and a reasonable expectation of the consequences." History shows privacy and safety are a hard trade-off: more privacy means less safety and vice versa. Machine-learning-based tools have proven their potential to surpass humans at the tasks they are designed to excel at, such as chess. Apple is playing this dangerous game of letting machine learning protect children's safety online, which means privacy will be sacrificed along the way. For Apple as a "privacy advocate," such a move is bound to receive a lot of criticism, and it certainly did in the past week.
Then, based on previous observation of the power of machine learning and the trade-off between privacy and safety, people reasonably assume Apple's move to bring machine learning into this can have untold consequences. So they voice their concern and address its potential to be misused. Sure, this machine-learning tool can become extremely good at finding CSAM as time goes on, but it can be just as good at finding other images, since that is what it's designed to do. What will stop Apple from changing policies to allow other countries to search for their own images of concern? When we talk about trusting Apple, what we trust is the people who run Apple not to put the tool into the hands of bad guys. Do you trust a bunch of strangers thousands of miles away, who are probably never going to meet you in person, to care about your privacy and safety at the same time? If you do, then good for you, because we here don't.
 
How the technology works is not even the point?? It's PRECISELY the point. How it works determines how secure it is, how it protects our privacy, and literally everything else significant to this conversation.
It's not the point because that technology is a tool, and people with nefarious intentions will use the same tool for nefarious purposes. The issue boils down to people, over whom said technology has no influence whatsoever. The tool does what its user tells it to do, very well, and cares about nothing else.
 
It's not the point because that technology is a tool, and people with nefarious intentions will use the same tool for nefarious purposes. The issue boils down to people, over whom said technology has no influence whatsoever. The tool does what its user tells it to do, very well, and cares about nothing else.

It's like people don't understand that a hammer can be used to:

1. Pound nails to build a house
2. Break a window to save someone
3. Murder someone by bashing their head over and over

Tools don't have opinions, or even necessarily specific things they are "meant for".

How they are used and by whom and for what is all that matters.
 
How the technology works is not even the point?? It's PRECISELY the point. How it works determines how secure it is, how it protects our privacy, and literally everything else significant to this conversation.
Sure, nice hat.
 
Also, Apple does not tell you what the threshold is. It could be 0 photos, or 1, or 2, or 10. Regardless, any threshold is unacceptable.

It isn't zero. Zero would mean every user of iCloud Photo Library would reach the threshold the instant they upgraded to iOS 15.x. Makes no sense.

1 or 2 is highly unlikely, since having just 1 or 2 CSAM pictures is not illegal in most circumstances.

Apple has stated that the false positive probability is "1 in 1 trillion accounts per year," which leads me to believe it's 10 or more.
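As a sanity check on why a double-digit threshold makes "1 in 1 trillion accounts per year" plausible, here's a quick binomial calculation. The per-photo false-match rate and library size below are assumptions I made up for illustration, not Apple's figures.

Code:
# How a match threshold drives down the chance an innocent account is ever flagged.
# Illustrative assumptions only, not Apple's published numbers.

def prob_at_least(t: int, n: int, p: float, extra_terms: int = 200) -> float:
    """P(at least t false matches in Binomial(n, p)), summed term by term.
    Uses the pmf recurrence so nothing overflows; fine for tiny p."""
    pmf = (1 - p) ** n                          # P(X = 0)
    total = pmf if t == 0 else 0.0
    for k in range(1, t + extra_terms):
        pmf *= (n - k + 1) / k * p / (1 - p)    # P(X = k) from P(X = k - 1)
        if k >= t:
            total += pmf
    return total

p = 1e-6       # assumed per-photo false-match probability
n = 20_000     # assumed photos uploaded by one account in a year

for t in (1, 5, 10):
    print(t, prob_at_least(t, n, p))
# t = 1  -> ~2.0e-02  (a single false match is not that rare across millions of users)
# t = 5  -> ~2.6e-11
# t = 10 -> ~2.8e-24  (astronomically unlikely for any one account)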
Any government (or Apple, or anyone that Apple wants) can provide the entire namespace of hashes to look at every photo they want. Apple does not tell us what hashes are included. They have probably already included the whole hash namespace, which means they can view every single photo.

I assume that by "provide the entire namespace of hashes" you mean every possible hash value is part of the CSAM hash table?

if so, every photo ever taken and every photo which is going to be taken in the future will result in a match which defeats the system and makes it useless.
 
And where is your limit? Should the postal service be allowed to open all letters and check their contents? Would it be fine with you to regularly check every apartment? And do not forget that most abuse of children happens within their family, so maybe we should question and examine all children regularly?
Anyone who, in public, argues that saving "even one" XYZ is worth any cost is publicly outing themselves as a simpleton (and so many did with COVID-19).

Life is about trade-offs. We could stop all murder. We could stop lots of things. You would just have a totalitarian society of such immense power that everyone is always under strict control and surveillance. All freedom leads to an increasing possibility of some bad outcomes, and different people have different opinions about the optimums. But those sliders all explode when you propose that XYZ is worth it for "even one" life. It's the public-policy equivalent of dividing by zero. We never operate this way personally or in life with others. We allow all sorts of mayhem because we realize life is about trade-offs, not extremes. Even with really bad things like child abuse and murder.
 
Thank you for summarizing your own posts
ciao
I'm going to play devil's advocate here, and say this about the long step-by-step post above:


It is not just buzzword repeating.

With that said. I don’t think anyone would disagree with using this implementation to catch predators. What we’re rightfully worried about is any expansion upon this system.

The bandaid has been ripped off, so to speak. We question when this surveillance method will be expanded upon. And we should be concerned for our own safety, because the government faces no repercussions for any abuse of its systems.

Apple we can withhold money from: we can refuse to buy their products or sue them in a court of law.

We cannot do the same for the government.
 
With that said. I don’t think anyone would disagree with using this implementation to catch predators.

I do disagree with it quite honestly.

With my device and my data, I don't want to be a part of a dragnet.

My phone and my data are not here to be combed through to "catch predators" (or any other goal of some third party).

Just because technology has made this now easier and "possible", I really wish people wouldn't just mentally capitulate to it so quickly.

Maybe that's something you're ok with - all good if so - reasonable minds can disagree.


With respect to the user you quoted (I don't see their posts anymore), I object to the tone of some of their messaging at others and thus am not engaging with that person any longer.
 
Also, Apple does not tell you what the threshold is. It could be 0 photos, or 1, or 2, or 10. Regardless, any threshold is unacceptable.
I do want to point out here that it isn't clear what the "threshold" actually counts: it could be how many matching features were found in a given photo, or how many matching photos were found in the user's library. Apple makes no effort to clarify this part.
 
I do disagree with it quite honestly.

I tried to word it in a way that allowed for your point of view, because I agree with it. Still, I tried to include those who would like predators caught but have reservations about the implications of the system. I'm not very good with words.
 