Exactly. Nothing is fault-free. It's disgusting that, if a picture were flagged incorrectly, an employee could see a PRIVATE picture, of my daughter in the paddling pool for example. It's just unacceptable. [snip]


I was a little bit unnerved when the news initially broke about CSAM and lots of talking heads took to Twitter to lambast something they had very little information about. As I read more about it, I've gotten far more comfortable. I think, unfortunately, there's still way too much FUD (Fear, Uncertainty, Doubt), as illustrated by your comment above.

It doesn't sound like you have, but I would urge you (and others who express similar concerns) to read at least part of Apple's CSAM Detection Technical Summary:


It plainly answers questions such as yours in the introduction. It does start getting a little dense the further you get into the document, especially if you fork off and read the linked Apple PSI (Private Set Intersection) document, but the key takeaways can be found right in the Introduction (included below).


CSAM Detection enables Apple to accurately identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts. Apple servers flag accounts exceeding a threshold number of images that match a known database of CSAM image hashes so that Apple can provide relevant information to the National Center for Missing and Exploited Children (NCMEC). This process is secure, and is expressly designed to preserve user privacy.

CSAM Detection provides these privacy and security assurances:

  • Apple does not learn anything about images that do not match the known CSAM database.
  • Apple can’t access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account.
  • The risk of the system incorrectly flagging an account is extremely low. In addition, Apple manually reviews all reports made to NCMEC to ensure reporting accuracy.
  • Users can’t access or view the database of known CSAM images.
  • Users can’t identify which images were flagged as CSAM by the system.
    For detailed information about the cryptographic protocol and security proofs that the CSAM Detection process uses, see The Apple PSI System.
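The second and third bullets are the ones people most often gloss over. The mechanism behind them is threshold secret sharing: the material needed to read the matched vouchers is split so that any number of shares below the threshold reveals literally nothing, while reaching the threshold lets the server reconstruct it. Here is a minimal sketch of that idea using textbook Shamir secret sharing in Python; the numbers and structure are illustrative only and are not Apple's actual protocol (see the PSI paper for that):

import random

PRIME = 2**127 - 1  # a large prime for the toy finite field

def make_shares(secret, threshold, num_shares):
    # Random polynomial of degree threshold-1 whose constant term is the secret;
    # each share is one point on that polynomial.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def evaluate(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

secret_key = 123456789  # stand-in for the per-account decryption material
shares = make_shares(secret_key, threshold=6, num_shares=30)  # imagine one share per matching photo
print(reconstruct(shares[:6]) == secret_key)   # True: threshold reached, content becomes readable
print(reconstruct(shares[:5]) == secret_key)   # False: below threshold, nothing is learned

Apple's actual design layers this with per-image encryption and the private set intersection protocol described in the linked PSI paper, but the "nothing is learned below the threshold" property is the piece this toy shows.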
 
Google has been doing this with GMail since 2014


No one bats an eye for that

But when Apple does it, NOW everyone gets upset

Why is this?
Apple has used marketing-speak words implying that security and privacy were a focus for them…and some people believed it. With these latest developments, even the most ardent Apple excuse makers are forced to justify Apple’s latest actions by making comparisons with Google.
 
Here is my concern with this, which people have never really addressed other than by being patronizing, even though I am ASKING what I am missing here. We have two scenarios.

Picture 1: I stand in front of my house, thumbs up next to my mailbox.
Picture 2: I keep the pose because my eyes were a bit closed and they took another picture seconds later.

Picture 1 and 2 are VERY VERY CLOSE in terms of a pixel match.

Okay, let's walk through the scenario. Assume Picture 1 made it into the CSAM database. Apple's claims and documentation state that any modifications to picture 1 would still result in a collision; it's not a 1:1 hash match to the database entry. We on the same page so far? Picture 1 gets flagged but picture 2 does not. Correct in that assumption, yes? From what people have said, this is indeed correct.

Okay, so I launch Photoshop and adjust picture 1 to match picture 2 exactly. I spend hours on it, to the point where it matches picture 2 pixel for pixel. Still with me? I now have three pictures: Picture 1, modified Picture 1, and Picture 2.

According to Apple, any crops, color adjustments, rotations, transforms, and edits to the picture will still result in a collision. So are they essentially saying my modified picture 1 will result in a collision? Yet people are somehow stating picture 2 will NOT be a collision? 1-in-a-trillion odds that it will be a collision?

This is a clear contradiction between Apple's own statements (modification of an image still results in a collision) and people's explanation of the logic (picture 2 CANNOT be matched).

The reason this type of scenario is concerning is that there might be quite a few legitimate pictures with the same, let's say, "pixel attributes" that could produce the same hash as a matched image.

And please, do not respond with just "I don't understand." This is EXACTLY why I am posting this scenario: if someone DOES know why picture 2 truly will not match while modified picture 1 does, and there is no concern here, then please explain it. I looked through Apple's document (modifications will be a match, and it's using AI to get the match). I am looking to understand this type of scenario. How can two identical images not match, yet any modification to the matched image will match?!
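To make the scenario concrete, here is a toy perceptual hash, a classic "average hash" sketch in Python, which is NOT Apple's NeuralHash. The point it illustrates is that any hash is a deterministic function of the pixels, so my modified picture 1 and the pixel-for-pixel-identical picture 2 can only ever produce the exact same hash; the real question is whether that shared hash still collides with the database entry for the original picture 1.

import random

def average_hash(pixels):
    # One bit per pixel: 1 if brighter than the image's mean, else 0.
    mean = sum(pixels) / len(pixels)
    return int(''.join('1' if p > mean else '0' for p in pixels), 2)

random.seed(0)
picture_1 = [random.randrange(256) for _ in range(64)]                       # stand-in for the image in the database
picture_2 = [min(255, max(0, p + random.randint(-3, 3))) for p in picture_1]  # the retake "seconds later", slightly different pixels
modified_1 = list(picture_2)                                                  # picture 1 edited until it is pixel-identical to picture 2

# Identical pixels can never hash differently:
print(average_hash(modified_1) == average_hash(picture_2))   # always True

# Whether either of them still matches the database hash of the ORIGINAL picture 1
# depends only on how far the edit moved the pixels, not on which file it started as:
print(average_hash(picture_2) == average_hash(picture_1))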

Dear god not this crap again..
 
Apple is known for being all about securing users' privacy. What happened to that campaign? Their actions in 2021 no longer align with it.

"What happens on your iPhone, stays on your iPhone" and similar lies.
We bought 2 new MacBooks in November and feel scammed. They've lost us as customers. Pixels + GrapheneOS for us this autumn instead of new iPhones.
 
Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups.
“… other child safety groups.” can mean many things. Russia’s anti-gay law that passed in 2013 was done so to “protect children” from homosexual imagery and to prohibit the distribution of "propaganda of non-traditional sexual relationships.”
 
I use DuckDuckGo. It's really OK. For mail I use Proton Mail (except for work mail, of course), and Threema as a messenger, although I also have Signal. I've never had Facebook or Instagram, which I consider a gold mine for companies and of zero net value to the user. I do have iCloud, but only for syncing Safari bookmarks between my phones and computers. I also use 1Password and sync it via my own NAS, since I know Apple can decrypt any files on iCloud, and even though it's encrypted in two layers I still have doubts. The photos I find questionable (private, gf, etc.) I store only offline.
Super helpful details, thank you!

iCloud bookmark syncing is soooo nice -- but I bet it could be replicated with self-hosting somehow. Now that I'm looking closely at iCloud, it appears to be a bunch of nice little things, so divide and conquer could work. But iOS seems like it would be the main hurdle; that would need a more open OS.

I saw something the other day about 1Password exploring self-hosted options: https://news.ycombinator.com/item?id=28104134
 
One of many good things resulting from this debacle is that we now have proof positive that all the major tech companies are 'Surveillance Capitalists' (see, for instance, Shoshana Zuboff). Apple, Facebook, Google, Microsoft et al. can now be measured with the same yardstick, given that Apple has shown it will surveil, and seek to profit from, the information surplus gathered from what were previously its customers but have now become its suppliers.

With the above in mind, we can now simplify our buying decisions from (A) "price x quality x features x privacy" to (B) "price x quality x features".

CSAM scan is now simply a feature we can evaluate as positive or negative according to our world-view. Oh, and debate for sure, after all that's why we are here 🥳.

Apple might very well continue to top the (B) equation, but we certainly have fewer variables to evaluate, which makes buying decisions easier and less uncertain.

Good times for sure 🥳.
 
All companies check content stored on their servers for CSAM.
Apple does exactly the same at the moment.
Apple just moved the process from the server to the device. This is more privacy focused and paves the way for fully-encrypted photo libraries while still complying with child safety laws.

People just go with the hype and don't even bother to read the documentation of how this feature works 🤦‍♂️
Correction: Apple just moved the process from THEIR server to MY device.

And since everyone is calling out those who are against it for "conspiracy" theories, let me point out the other conspiracy theory going around on here--there is no proof that Apple plans end-to-end encryption for iCloud photos. If there was, it would have been smart to announce it at the same time because that would make this news a little more tenable.
 
that is not the same. not even close
It IS the same; people just can't wrap their heads around cryptographic double-blind checks and balances.
If you don't "speed" multiple times, the results of your local scan will remain unreadable nonsense to Apple.
 
[snip]

Also, how does the manual review verify the subject is legal? I have seen people in their 20s that look 16. Heck I looked 15 until I was in my 30s!


From the Apple CSAM Detection Technical Summary
https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

I think Apple also attempts to address your other question regarding similar images.


Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.

The hashing technology, called NeuralHash, analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value.
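For the curious, the way "nearly identical images produce the same number" is usually achieved is by combining a neural image embedding with a locality-sensitive hash. Below is a minimal Python sketch of the locality-sensitive part, using random-hyperplane hashing over made-up embedding vectors; it is illustrative only and is not Apple's actual NeuralHash pipeline.

import random

random.seed(1)
DIM, BITS = 64, 16
# Each random hyperplane contributes one bit of the hash.
hyperplanes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh(embedding):
    # Bit i says which side of hyperplane i the embedding falls on;
    # nearby embeddings rarely straddle a hyperplane, so they share bits.
    return tuple(1 if sum(h * v for h, v in zip(plane, embedding)) >= 0 else 0
                 for plane in hyperplanes)

original = [random.gauss(0, 1) for _ in range(DIM)]            # embedding of a photo
recompressed = [v + random.gauss(0, 0.001) for v in original]  # tiny perturbation (resize/transcode)
different = [random.gauss(0, 1) for _ in range(DIM)]           # embedding of an unrelated photo

print(lsh(recompressed) == lsh(original))   # almost always True: a tiny change rarely flips any bit
print(lsh(different) == lsh(original))      # almost certainly False: each of the 16 bits only agrees about half the time

The summary's point about resizing and transcoding is this property: small changes to the image move its descriptor only slightly, so it tends to land on the same hash.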
 
Some REALLY bad takes here.

Could governments force Apple to add non-CSAM images to the hash list?
Apple will refuse any such demands. Apple's CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government's request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

Rigggggggght. If someone like China says "you can't sell your devices here unless you add hashes of images mocking our supreme leader," I'm 100% confident Apple will take the hit and lose billions. NOT.

Apple has already caved to China MULTIPLE times with App Store and other policies.
Can non-CSAM images be "injected" into the system to flag accounts for things other than CSAM?
Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by child safety organizations. Apple does not add to the set of known CSAM image hashes.

That says nothing other than that APPLE can't add to the database. I never thought they could. It is another "yar, trust us, we be the government." The same government that has used terrorism as a justification to invade privacy for decades now, especially since 9/11.




And as for the "but other companies have done it" argument: yes, and they NEVER promised privacy the way Apple has publicly pushed it for years as a reason to use their products. No one should ever have reasonably thought their Google Photos were private; nor was that ever publicly stated.

No one went into those other services expecting privacy and then had the rug pulled out. VERY different scenario here.
 
From a different perspective, I don’t think Apple really has any desire to “police” their users, “become law enforcement” or jump down the rabbit hole to enable governments to control you ..

Apple is a for profit, publicly traded company, who basically has two primary goals - 1) provide shareholder value 2) reduce risk that could impact shareholder value.

I suspect Apple has little desire to store millions of images of child exploitation on their servers. This could potentially open them up to legal risk - and it legitimately further exploits these victims every time an image is shared.

At first, I was as angry as everyone else - the more I think logically and rationally about it - I feel this move is primarily around reducing their personal liability and refusing to be a platform that victimizes children.

This is why they are so focused on implementing it via iCloud photos vs on device.
 


Apple has published a FAQ titled "Expanded Protections for Children" which aims to allay users' privacy concerns about the new CSAM detection in iCloud Photos and communication safety for Messages features that the company announced last week.


"Since we announced these features, many stakeholders including privacy organizations and child safety organizations have expressed their support of this new solution, and some have reached out with questions," reads the FAQ. "This document serves to address these questions and provide more clarity and transparency in the process."

Some discussions have blurred the distinction between the two features, and Apple takes great pains in the document to differentiate them, explaining that communication safety in Messages "only works on images sent or received in the Messages app for child accounts set up in Family Sharing," while CSAM detection in iCloud Photos "only impacts users who have chosen to use iCloud Photos to store their photos… There is no impact to any other on-device data."

From the FAQ: The rest of the document is split into three sections (in bold below), with answers to the following commonly asked questions:

  • Communication safety in Messages
  • Who can use communication safety in Messages?
  • Does this mean Messages will share information with Apple or law enforcement?
  • Does this break end-to-end encryption in Messages?
  • Does this feature prevent children in abusive homes from seeking help?
  • Will parents be notified without children being warned and given a choice?
  • CSAM detection
  • Does this mean Apple is going to scan all the photos stored on my iPhone?
  • Will this download CSAM images to my iPhone to compare against my photos?
  • Why is Apple doing this now?
  • Security for CSAM detection for iCloud Photos
  • Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?
  • Could governments force Apple to add non-CSAM images to the hash list?
  • Can non-CSAM images be "injected" into the system to flag accounts for things other than CSAM?
  • Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?
Interested readers should consult the document for Apple's full responses to these questions. However, it's worth noting that for those questions which can be answered with a binary yes/no, Apple begins all of its responses with "No," with the exception of the following three questions from the section titled "Security for CSAM detection for iCloud Photos:"
Apple has faced significant criticism from privacy advocates, security researchers, cryptography experts, academics, and others for its decision to deploy the technology with the release of iOS 15 and iPadOS 15, expected in September.

This has resulted in an open letter criticizing Apple's plan to scan iPhones for CSAM in iCloud Photos and explicit images in children's messages, which has gained over 5,500 signatures as of writing. Apple has also received criticism from Facebook-owned WhatsApp, whose chief Will Cathcart called it "the wrong approach and a setback for people's privacy all over the world." Epic Games CEO Tim Sweeney also attacked the decision, claiming he'd "tried hard" to see the move from Apple's point of view, but had concluded that, "inescapably, this is government spyware installed by Apple based on a presumption of guilt."

"No matter how well-intentioned, Apple is rolling out mass surveillance to the entire world with this," said prominent whistleblower Edward Snowden, adding that "if they can scan for kiddie porn today, they can scan for anything tomorrow." The non-profit Electronic Frontier Foundation also criticized Apple's plans, stating that "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor."

Article Link: Apple Publishes FAQ to Address Concerns About CSAM Detection and Messages Scanning
“Interested readers should consult the document for Apple's full responses to these questions.”

Where is the link to the actual Apple FAQ?
 
From a different perspective, I don’t think Apple really has any desire to “police” their users, “become law enforcement” or jump down the rabbit hole to enable governments to control you ..

Apple is a for profit, publicly traded company, who basically has two primary goals - 1) provide shareholder value 2) reduce risk that could impact shareholder value.

I suspect Apple has little desire to store millions of images of child exploitation on their servers. This could potentially open them up to legal risk - and it legitimately further exploits these victims every time an image is shared.

At first, I was as angry as everyone else - the more I think logically and rationally about it - I feel this move is primarily around reducing their personal liability and refusing to be a platform that victimizes children.

This is why they are so focused on implementing it via iCloud photos vs on device.
Now that you've had time to process it, what does it mean for you as an Apple customer? Initially when you were angry, what actions were you thinking of taking? And now that you've had more thoughts, what are your next steps?
 
From a different perspective, I don’t think Apple really has any desire to “police” their users, “become law enforcement” or jump down the rabbit hole to enable governments to control you ..

Apple is a for profit, publicly traded company, who basically has two primary goals - 1) provide shareholder value 2) reduce risk that could impact shareholder value.

I suspect Apple has little desire to store millions of images of child exploitation on their servers. This could potentially open them up to legal risk - and it legitimately further exploits these victims every time an image is shared.

At first, I was as angry as everyone else - the more I think logically and rationally about it - I feel this move is primarily around reducing their personal liability and refusing to be a platform that victimizes children.

This is why they are so focused on implementing it via iCloud photos vs on device.

Bit of a red herring argument.


If they are liable for what is on their servers, CSAM is the LEAST of their worries in terms of volume.


Sharing nudes of others without their permission, pics of drugs, illegal guns, rape/sexual assault, etc. Those are all illegal; maybe just morally "below" CSAM.

I would bet there are 100+ times as many photos of other illegal activities as there are of CSAM.

I also get the bad-press aspect, but this is a moral decision by Apple, and it should not be imposing on its users a judgment of which crimes are more or less moral than others. And that is not an excuse to invade the privacy they tout as a feature of their devices.

If that is the slippery slope, then how long before celeb nudes are hashed, or sexual-assault images? What makes one crime more or less moral than another to check for?

They went how many years of iCloud without such a policy? A decade now? How many times have they been sued for hosting CSAM? I've never heard of it once. So why now? THAT is the million-dollar question here.
 
Well, a picture is just a bunch of pixels, and there is some leeway in the hash. Therefore a picture of an adult/legal subject could have similar colors, scenes, poses, and other similarities.

Granted, I have NEVER seen an example, nor do I want to, so I'm not sure what types of pictures we are talking about here. That is good ignorance on my part, I guess! An adult subject could be naked in the bathtub and it might produce a hash that falls within the matching threshold, for example.

Also, how does the manual review verify the subject is legal? I have seen people in their 20s that look 16. Heck I looked 15 until I was in my 30s!
I don't think you understand the complexity of what is being compared here... and that's okay. Read the white papers included in the original article, as they give very good examples of compared images and how difficult it is to create images with identical hashes.

To your second point, the reviewer is not examining the picture for content; they are examining the likelihood that it is identical, in whole or in part, to the image in the database... nothing more. The chance that your account gets flagged over images that are not actually matches to database images is one in one trillion.
 
In 5 years: "It is the law that we have to scan for government-critical images. Apple only follows regional laws."

And even worse, if the CSAM database is compromised, there will be plenty of positives for whatever image is inserted, which will then be flagged to Apple, and that can be taken as a signal by any intelligence agency watching that person X has that image.

Apple wouldn't even have to be aware of it.

This is opening a can of worms for Apple and it will end badly. It isn't just a slippery slope, it is a nearly vertical slope covered in ice.

This isn't about child exploitation; no one (well, except the child molesters) is in favor of child porn or anything like it. It is about what happens next. Once you have the technology built in to scan photos for a checksum, you can scan photos for the checksum of any image, and that will eventually be required.

At least we now know why Apple wouldn't implement full end to end encryption for iCloud photos etc.
 
Dear god not this crap again..

Please be helpful. If you know it won't happen, then explain why. I wasn't born with this knowledge, and that's part of what forums are for. I'm explicitly stating that I am asking for the reason.
 
That's the probability stated by Apple, and they refuse to justify that number.
It’s a totally believable ballpark for anyone able to calculate compound probability of rare events, since you’d need multiple errors to raise a red flag.

Let's say the hash-matching system is very dumb and produces a false match for an iCloud account at a rate of 1 in 100 per year.

If Apple sets the "yellow cards" (as in soccer) threshold at 6 matches before you get a "red card", that would make it 1 in a trillion: (1/100 x 1/100 x 1/100 x 1/100 x 1/100 x 1/100).

And we’re assuming the system to be pretty dumb at 1 in 100 chance of error.
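Here is that arithmetic spelled out in a few lines of Python. The 1-in-100 rate and the threshold of 6 are made-up numbers (Apple has not published its real parameters), and the calculation assumes the individual errors are independent, but it shows why stacking a threshold on top of a per-match error rate collapses the odds so quickly:

# Made-up numbers, purely to illustrate the compounding; these are not Apple's
# published parameters, and independence of the errors is assumed.
p = 1 / 100      # assumed chance of a single false "yellow card"
threshold = 6    # assumed number of matches needed before the account gets a "red card"

print(p ** threshold)          # about 1e-12, i.e. 1 in a trillion

# How quickly the compounded probability falls as the threshold grows:
for t in range(1, threshold + 1):
    print(t, p ** t)

If the per-match error rate were even lower, as Apple claims for NeuralHash, the compounded number shrinks even faster.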
 
I don't think you understand the complexity of what is being compared here... and that's okay. Read the white papers included in the original article, as they give very good examples of compared images and how difficult it is to create images with identical hashes.

To your second point, the reviewer is not examining the picture for content; they are examining the likelihood that it is identical, in whole or in part, to the image in the database... nothing more. The chance that your account gets flagged over images that are not actually matches to database images is one in one trillion.


I have read the white papers. And watched videos. And listened to podcasts.

Apple's claim here is that ANY MODIFICATION to the image will still flag it. But have you seen how much people can modify an image in Photoshop? They must be overstating this.
 
Now that you've had time to process it, what does it mean for you as an Apple customer? Initially when you were angry, what actions were you thinking of taking? And now that you've had more thoughts, what are your next steps?
Step 1: On the iPad Pro and iPhone 12: Settings -> General -> Software Update -> Automatic Updates: switch from "On" to "Off" while I evaluate my next steps.
 
The fact is, no matter how you sell it, they are snooping on your device. All the rest are details.

Buzzword virtue signalling.
They're using a creative way to pre-label illegal stuff in a double-blind fashion that doesn't get revealed until you upload the data to their server anyway.
Genius and elegant solution, if you ask me.
 