
MacRumors

macrumors bot
Original poster
Apr 12, 2001
It has now been over a year since Apple announced plans for three new child safety features, including a system to detect known Child Sexual Abuse Material (CSAM) images stored in iCloud Photos, an option to blur sexually explicit photos in the Messages app, and child exploitation resources for Siri. The latter two features are now available, but Apple remains silent about its plans for the CSAM detection feature.


Apple initially said CSAM detection would be implemented in an update to iOS 15 and iPadOS 15 by the end of 2021, but the company ultimately postponed the feature based on "feedback from customers, advocacy groups, researchers, and others."

In September 2021, Apple posted the following update to its Child Safety page:
Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.
In December 2021, Apple removed the above update and all references to its CSAM detection plans from its Child Safety page, but an Apple spokesperson informed The Verge that Apple's plans for the feature had not changed. To the best of our knowledge, however, Apple has not publicly commented on the plans since that time.

We've reached out to Apple to ask if the feature is still planned. Apple did not immediately respond to a request for comment.

Apple did move forward with implementing its child safety features for the Messages app and Siri with the release of iOS 15.2 and other software updates in December 2021, and it expanded the Messages app feature to Australia, Canada, New Zealand, and the UK with iOS 15.5 and other software releases in May 2022.

Apple said its CSAM detection system was "designed with user privacy in mind." The system would perform "on-device matching using a database of known CSAM image hashes" from child safety organizations, which Apple would transform into an "unreadable set of hashes that is securely stored on users' devices."

Apple planned to report iCloud accounts with known CSAM image hashes to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that works in collaboration with U.S. law enforcement agencies. Apple said there would be a "threshold" that would ensure "less than a one in one trillion chance per year" of an account being incorrectly flagged by the system, plus a manual review of flagged accounts by a human.
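The flow Apple described can be sketched in a few lines. This is a simplified illustration only, not Apple's actual protocol: the real system used a perceptual hash (NeuralHash) plus a private set intersection scheme, whereas the hash function, database values, and threshold below are all hypothetical stand-ins.

```python
import hashlib

# Hypothetical stand-in for a perceptual image hash. Apple's real system
# used NeuralHash, which tolerates minor edits; SHA-256 only matches
# byte-identical files, so this is purely illustrative.
def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Database of known-CSAM hashes (illustrative placeholder values).
known_hashes = {
    image_hash(b"known-bad-image-1"),
    image_hash(b"known-bad-image-2"),
}

THRESHOLD = 30  # hypothetical; Apple never published the exact number

def should_flag(account_photos: list) -> bool:
    """Flag an account only once its match count crosses the threshold."""
    matches = sum(1 for p in account_photos if image_hash(p) in known_hashes)
    # Below the threshold, the design reveals nothing about any photo.
    return matches >= THRESHOLD
```

The threshold is the key privacy mechanism in Apple's description: no single match (or small number of matches) was ever supposed to be visible to Apple, which is where the "one in one trillion" false-flag figure came from.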

Apple's plans were criticized by a wide range of individuals and organizations, including security researchers, the Electronic Frontier Foundation (EFF), politicians, policy groups, university researchers, and even some Apple employees.

Some critics argued that Apple's child safety features could create a "backdoor" into devices, which governments or law enforcement agencies could use to surveil users. Another concern was false positives, including the possibility of someone intentionally adding CSAM imagery to another person's iCloud account to get their account flagged.

Article Link: Apple Remains Silent About Plans to Detect Known CSAM Stored in iCloud Photos
 
Being silent probably means something is coming down the line.
One of the rumoured theories at the time was: Apple will realise what a terrible idea this is, and how terrible it makes them look (and PR image is EVERYTHING to Apple, as we all know), so they will drop it and never mention it again. I’m taking the current situation (Apple pretending as if this idea never existed) as evidence for that take.

A spokesperson saying none of their plans have changed is just preserving the status quo. That’s comfortable for them right now. If they make a change now, one way or the other, they’ll still open a can of worms they’ll hardly be able to close. It’s not quite all water under the bridge yet.
 
Let’s hope this gets introduced. Harmful material and the individuals who share it could be held to account.

Those who think Apple will be spying on their photos need to learn how hashing works.
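For what it's worth, the basic idea fits in a few lines. This uses a cryptographic hash for illustration; Apple's proposed system used a perceptual hash, which also matches lightly edited copies of an image.

```python
import hashlib

photo = b"family vacation photo bytes"

# A hash is a fixed-length fingerprint of the exact file. It identifies
# the file but reveals nothing about its content and cannot be reversed
# back into the image.
fingerprint = hashlib.sha256(photo).hexdigest()

# Changing even one byte produces a completely different fingerprint.
altered = hashlib.sha256(photo + b"!").hexdigest()
print(fingerprint != altered)  # True
```

Matching against a database of known hashes therefore tells the matcher only "this exact file is in the database", never what any non-matching photo contains.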
I guess you must be smarter than “security researchers, the Electronic Frontier Foundation (EFF), politicians, policy groups, university researchers, and even some Apple employees.”

You also must have missed this part of the article:

Some critics argued that Apple's child safety features could create a "backdoor" into devices, which governments or law enforcement agencies could use to surveil users. Another concern was false positives, including the possibility of someone intentionally adding CSAM imagery to another person's iCloud account to get their account flagged.

I’m all for protecting children and anyone in general from abuse, but invading the privacy of the entire rest of the population to do it isn’t the way to go. You don’t let someone into your house to search for illegal substances or content just because you might have them.
 
One of the rumoured theories at the time was: Apple will realise what a terrible idea this is, and how terrible it makes them look (and PR image is EVERYTHING to Apple, as we all know), so they will drop it and never mention it again. I’m taking the current situation (Apple pretending as if this idea never existed) as evidence for that take.

A spokesperson saying none of their plans have changed is just preserving the status quo. That’s comfortable for them right now. If they make a change now, one way or the other, they’ll still open a can of worms they’ll hardly be able to close. It’s not quite all water under the bridge yet.
It could be that way, but I guess there’s the possibility of Apple implementing such technology without them announcing or making it public.

I say this because, while it would be easier for Apple to drop its plans for this technology, I think the European Commission and the European Parliament are seeking ways to detect this content easily, as well as to end or limit E2E encryption. I don’t have the sources at hand, but I’ve read news about it.

Bad times for privacy I’m afraid. Good times for governments who want more and more power over their citizens.
 
I hope everyone who has CSAM in their iCloud photos gets what's coming to them, which is Bubba in a 5 x 7 cell.

For the ignorant people who don't know how hashing works, take five minutes out of your day and learn about it. Take another five minutes and read a few victim impact statements from children who are/were impacted by distribution of CSAM. If you're not victimizing children, then move along, you have nothing to worry about. Yeah, hashing iCloud albums will create a "back door". Ignorance at its best. More people who don't know how hashing works.

BTW, Facebook, Google, Dropbox and a host of others have been hashing images and videos for years now. And thank god, it's put a lot of pedophiles behind bars. Have you heard about any "back doors" from these providers?
 
Apple planned to report iCloud accounts with known CSAM images to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that works in collaboration with U.S. law enforcement agencies. Apple said there would be a "threshold" that would ensure "less than a one in one trillion chance per year" of an account being incorrectly flagged by the system, plus a manual review of flagged accounts by a human.
I don’t believe Apple planned to report iCloud accounts with known CSAM images, because Apple would have no idea whether there were known CSAM images or not. They’d ONLY know that a large number of images in an account matched the hashes of known CSAM. So Apple, just like everyone else maintaining storage for users, would report accounts that have a high number of hash matches.
 
[QUOTE="I7guy, post: 31335119, member: 863841"]
Being silent probably means something is coming down the line.
[/QUOTE]
If Apple ever reports that iCloud is 100% encrypted, such that they don’t even have access to a user’s images, they’re absolutely doing what they announced earlier, scanning on the device. The government would not allow Apple’s cloud storage to be some safe haven for those who really MUST keep large numbers of CSAM images on their device and in their cloud storage.
 
I hope everyone who has CSAM in their iCloud photos gets what's coming to them, which is Bubba in a 5 x 7 cell.

For the ignorant people who don't know how hashing works, take five minutes out of your day and learn about it. Take another five minutes and read a few victim impact statements from children who are/were impacted by distribution of CSAM. If you're not victimizing children, then move along, you have nothing to worry about. Yeah, hashing iCloud albums will create a "back door". Ignorance at its best. More people who don't know how hashing works.

BTW, Facebook, Google, Dropbox and a host of others have been hashing images and videos for years now. And thank god, it's put a lot of pedophiles behind bars. Have you heard about any "back doors" from these providers?
Cambridge Analytica ring a bell?
 
One of the rumoured theories at the time was: Apple will realise what a terrible idea this is, and how terrible it makes them look (and PR image is EVERYTHING to Apple, as we all know), so they will drop it and never mention it again. I’m taking the current situation (Apple pretending as if this idea never existed) as evidence for that take.

A spokesperson saying none of their plans have changed is just preserving the status quo. That’s comfortable for them right now. If they make a change now, one way or the other, they’ll still open a can of worms they’ll hardly be able to close. It’s not quite all water under the bridge yet.

It's only terrible in the minds of those that don't understand the extensive technological process that Apple implemented to maintain privacy for end users.

It is a well-designed system that is worlds apart from how other companies handle photo gallery scanning.

As for false positives, if someone is intentionally 'planting' photos into someone's iCloud Photo Gallery, then there's another issue that needs to be addressed. The account is compromised, and that has nothing to do with this feature.

Regarding the possibility of back doors, this is just a matter of trust. Apple already clearly stated that they would not allow this feature to be abused by law enforcement agencies for any other purpose, and I choose to believe that. If they were going to allow that, they could do it in secret, and to date they never have.
 
I hope everyone who has CSAM in their iCloud photos gets what's coming to them, which is Bubba in a 5 x 7 cell.

For the ignorant people who don't know how hashing works, take five minutes out of your day and learn about it. Take another five minutes and read a few victim impact statements from children who are/were impacted by distribution of CSAM. If you're not victimizing children, then move along, you have nothing to worry about. Yeah, hashing iCloud albums will create a "back door". Ignorance at its best. More people who don't know how hashing works.

BTW, Facebook, Google, Dropbox and a host of others have been hashing images and videos for years now. And thank god, it's put a lot of pedophiles behind bars. Have you heard about any "back doors" from these providers?

There's a BIG difference between how Apple's feature is handling this and how the other providers you mentioned handle it. They all do it on the server. Apple's feature operates entirely ON DEVICE. The hashes never leave the device unless significant thresholds are crossed.

This was clearly explained by Apple, but those against this feature continue to compare it to how the other providers are doing it... on their servers, where privacy invasion can happen much more easily. That has never been Apple's approach with this feature.
 
Nope, I don't believe that is correct. Apple's website said "Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC."
Apple manually reviews a report to confirm that the “hash” matches, because it IS possible that the algorithm failed and yielded a false match. If they verify that the hash represented a positive match, then they disable the account and send the report. Apple does not have, and will not be provided access to, the actual images; they are only provided with the hashes.
 
Apple manually reviews a report to confirm that the “hash” matches, because it IS possible that the algorithm failed and yielded a false match. If they verify that the hash represented a positive match, then they disable the account and send the report. Apple does not have, and will not be provided access to, the actual images; they are only provided with the hashes.
I added the word "hashes" to that paragraph of the story to be abundantly clear.
 
Imagine if the Chinese government could one day use this system to check who has the Tank Man image in their cloud storage and deport that person to a murder camp the next day without question.

Apple can only enforce the local law. If the law is different in a different country, will it enforce that for its citizens? Say, everyone agrees that child abuse is bad. But what if in Russia, where homosexuality is pretty much a crime, anything labeled "LGBT propaganda aimed at minors" such as an informative book about an LGBT subject would be called "child abuse" for political reasons, and thus be illegal. Would Apple play international judge and pick and choose what it considers right and wrong based on its own morals, or would it strictly abide by the respective laws of each country, even if they go against Apple's initial "good intentions"? What happens when a government puts pressure on Apple to hand over control of this system to them "or else"? Will they do the right thing or will there come a point where money will matter more? (Hint: money eventually always takes priority over morals).

It sounds good but it gets messy the more questions you ask, which is not a good omen.
 
I think now is the perfect time for Macrumors to suspend comments. When this story gets some traction we know it’s going to be the race to the bottom and will get nasty like previous stories about this.

Meanwhile +1 to Apple for keeping this going. I hate paedophiles.
 
One of the rumoured theories at the time was: Apple will realise what a terrible idea this is, and how terrible it makes them look (and PR image is EVERYTHING to Apple, as we all know), so they will drop it and never mention it again. I’m taking the current situation (Apple pretending as if this idea never existed) as evidence for that take.
Legally, they’re absolutely doing something unless you feel that folks that want to maintain large repositories of CSAM images would be perfectly protected from prying eyes on an iCloud drive :) If they’ve changed anything, they’ve pushed back on the idea of encrypting everyone’s images.
 
Let’s hope this gets introduced. Harmful material and the individuals who share it could be held to account.

Those who think Apple will be spying on their photos need to learn how hashing works.
CSAM, tank man, Winnie the Pooh, pro/anti-Trump (whichever side you find yourself), etc…

It’s not the ACTUAL subject, it’s the fact that they CAN and are eager to do it. I’m less inclined to be pissed when it’s iCloud, since it’s their storage. But Apple wanted to search our PERSONAL device. You know if they are scanning our devices, any despot can knock on Apple’s local office door with many armed thugs and order the scanning of anything the despot desires. Apple has proven to bend over backwards for the CCP already, so it would only be a matter of time.

Screw that and anyone who supports on-device scanning.
 
I think there is potential for this system to go badly where innocent photos are viewed in a different context. For example…

Let’s say, you are a parent and you take completely innocent photos of your child.
Some parents take photos where the child may be nude (doesn’t everyone have the classic embarrassing “baby butt” shot in their childhood photo album?) but nobody is offended because everyone knows there is no malintent behind the image. It’s just a child in a state of undress.
So, you have a photo like this of your kid, or your kid in the bath, or your kid at the pool/beach, etc. And you post it to social media, and nobody thinks anything of it because to anyone with a properly working brain, there is nothing to think about it.


But THEN, some creeper trolls public social media accounts like Facebook and Instagram for pictures other people post of their children, sees a few that excite them for reasons of their own, saves the good ones to their computer and shares them online on some sicko forum, or trades them with other perverts, etc.

Now when one of them gets caught, or their website gets raided, etc. all their files get flagged as CSAM because of the context in which they were being distributed and viewed by these people, completely unbeknownst to you, the child’s parent, who now still has this original photo on their phone or in their iCloud. And the checksums match because it’s the same file. Do you see how this goes wrong?

I do not know nearly enough about the process in which material is determined to be CSAM but this scenario doesn’t seem implausible to me.
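That's the crux of the scenario: with an exact (cryptographic) checksum, a byte-for-byte identical file always produces an identical hash regardless of the context in which each copy circulated. A minimal illustration:

```python
import hashlib

# The parent's original photo (placeholder bytes for illustration).
original = b"innocent baby-bath photo, as posted by the parent"

# The copy saved and redistributed by a stranger is the exact same file.
stolen_copy = bytes(original)

parent_hash = hashlib.sha256(original).hexdigest()
flagged_hash = hashlib.sha256(stolen_copy).hexdigest()

# If the redistributed copy were ever added to a hash database, the
# parent's untouched original would match it exactly.
print(parent_hash == flagged_hash)  # True
```

Note that a perceptual hash, like the one Apple proposed, is designed to also match resaved or resized copies, so the match would survive even if the redistributed copy were re-encoded along the way. Whether this scenario could actually occur depends on how the database curators vet submissions, which the post rightly flags as the open question.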
 