I suspect you're missing one of the big concerns: It's not this specific implementation that's necessarily the problem, although there has been some evidence to suggest it is, or could become, problematic. It's also the precedent of doing any on-device scanning for the purpose of ferreting out illegal activity that's a problem.

If doing on-device scanning for CSAM is ok, then why not on-device scanning for prohibited <name your thing>? Weapons? Political gatherings? If scanning images that people plan to upload to cloud storage is ok, then why not scan images regardless of whether they're to be uploaded anywhere? If scanning for image matches is ok, then why not scanning for "hate speech?" (Some countries do have "hate speech" laws and there are people, right here in "the land of the free," that would like to see them here, too.) Or planned protests? Or...?

Yes, this is a slippery slope argument. But that doesn't necessarily make it fallacious, as some here are wont to claim.

Bottom line: Many people, including security researchers and privacy advocates, feel that this crosses a line that should not be crossed. I agree. Emphatically.

Besides: Viscerally, having some kind of scanner not under my direct control, on my devices, is... icky
This argument still fails to explain why anyone trusted Apple for the last X time units not to use your device for spying purposes. Take any of the prohibited <name your thing> ideas for any jurisdiction, and the (much more efficient) tech to accomplish it has already been present in your phone for many years now: all data on your phone has not only been freely scanned by iOS for processing purposes (make the picture pretty!), but has also been scanned for specific content tagging (Photos has object/face detection, Spotlight indexes the contents of all files, etc.).

The rhetoric that this sets some sort of precedent for on-device scanning simply doesn't follow; Apple has already been scanning all of your data, on your device. The "why we scan for it" (to "root out CSAM", or by your argument, to "ferret out illegal activity") doesn't matter, according to Ed Snowden at least (https://edwardsnowden.substack.com/p/all-seeing-i). That works both ways, and should apply to all scanning, not just this new method of scanning. The argument has been repeated endlessly: government X will compel Apple to scan for Y. What's stopped them from doing that already? The tech has already been there, so why wait for a harder-to-abuse (if Apple's documentation is to be believed) technique to be pushed?
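To make that concrete, here's a rough, purely hypothetical Python sketch of generic on-device content tagging plus indexing. The Photos/Spotlight internals aren't public and the classifier below is just a stand-in, but this is the kind of capability that has been sitting on the device for years:

```python
# Purely hypothetical sketch of on-device content tagging + indexing, in the
# spirit of what Photos / Spotlight already do. classify_image is a stand-in;
# Apple's actual models and APIs are not public.
from collections import defaultdict

def classify_image(path):
    """Stand-in for an on-device image classifier (object/face detection).
    A real implementation would return labels like ["dog", "beach", "firearm"]."""
    return []  # placeholder

def build_search_index(photo_paths):
    """Tag every photo and build a label -> photos index, so arbitrary
    content becomes searchable without any external hash database."""
    index = defaultdict(list)
    for path in photo_paths:
        for label in classify_image(path):
            index[label.lower()].append(path)
    return index

# e.g. index = build_search_index(all_photos_on_device)
#      hits  = index["firearm"]   # this kind of query already works in Photos
```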
 
Looks like they want to clear the path for iPhone season.
I'm late to this topic but I was NOT going to upgrade from an iPhone 7 this fall because I would be forced to use iOS 15 with this terrible CSAM feature that we've all chatted about the past few weeks.

With this announcement I am eager to see what is announced in a few weeks!! :)
 
I suspect you're missing one of the big concerns: It's not this specific implementation that's necessarily the problem, although there has been some evidence to suggest it is, or could become, problematic. It's also the precedent of doing any on-device scanning for the purpose of ferreting out illegal activity that's a problem.
That's a fair statement, but I keep coming back to the fact that Apple has been doing on-device scanning for things in your photos for at least five years. Granted, that's not for the purpose of ferreting out illegal activity, but the exact same slippery slope argument could be made for that.

Apple created the technology, and I'm sure there have been government agencies salivating at the idea of using it for their own nefarious purposes. We simply have to believe that Apple has already been saying "no" to these requests for years.

If doing on-device scanning for CSAM is ok, then why not on-device scanning for prohibited <name your thing>? Weapons? Political gatherings? If scanning images that people plan to upload to cloud storage is ok, then why not scan images regardless of whether they're to be uploaded anywhere? If scanning for image matches is ok, then why not scanning for "hate speech?" (Some countries do have "hate speech" laws and there are people, right here in "the land of the free," that would like to see them here, too.) Or planned protests? Or...?
Open your Photos app and do a search for something like "firearms." I don't know about you, but I get at least a few hits. The technology to do this is already there, and has been since Apple released iOS 10 back in 2016... and guess what? It runs on your iPhone whether you're using iCloud Photo Library or not, and there's no way to turn it off.

Compared to this, the CSAM Detection algorithm, which only runs when iCloud Photo Library is on, and can only compare photos to known images, is really nothing to get excited about.

Bottom line: Many people, including security researchers and privacy advocates, feel that this crosses a line that should not be crossed. I agree. Emphatically.
While guys like Snowden enjoy hearing the sound of their own voices far too much, I think at least part of the pushback from some of the more professional organizations like the EFF is simply a matter of not being sure that Apple has thought this one through — and they're probably right.

Apple is being too smart and cocky for its own good here. It did the same thing with the AirTag's anti-stalking features, where it clearly had to go on the defensive instead of consulting with domestic violence advocacy organizations in the first place. I'm sure if Apple had actually involved the EFF and other similar organizations when this thing was still on the drawing board, it could have come up with a solution that would have satisfied their concerns.

In fact, it's conceivable that the solution as it exists might have satisfied their concerns, if Apple had actually let them peek behind the curtain and confirm that all of the necessary precautions are in place to prevent it from being abused. Of course, that assumes they really are. I suspect there are things Apple hasn't thought of, which is why they should have involved others in the development of this whole thing in the first place, and likely exactly why they're finally backing off with their tail between their legs and realizing that this shouldn't have been created in a vacuum.

Besides: Viscerally, having some kind of scanner not under my direct control, on my devices, is... icky
That point I sort of get. I think I'd feel the same level of icky if I hadn't read all of the technical papers on it and had some understanding of what's going on — and most significantly the fact that it's directly and solely tied to iCloud Photo Library. Personally, I feel like it's only a semantic difference whether photos are scanned on my iPhone before they're uploaded to iCloud or after they arrive on Apple's servers. That's just me, of course, and I understand how some may feel differently about that.

I guess I'm also just more trusting of Apple, but that's at least partially because if the company really wanted to spy on us, that ship already would have sailed a long time ago.
 
But this inherently means that eventually non-infringing material will be sent to Apple for review.
Yes, one of the several flaws to which I alluded in my previous post. That, in turn, demands a certain degree of trust, not just in Apple (or whomever is doing the manual reviews), but also in the individuals doing the reviews. In a day and age where people with convictions for child abuse have been found to have been hired into positions of trust involving children, one might reasonably be excused for being a bit concerned about the possibilities. Far-fetched? Paranoid? Would you ever have believed a school district would be so careless as to hire a bus driver who was a registered sex offender? Yet it's happened.

(Though, admittedly, this speaks more to the flaw in doing any scanning of privately-owned material than it does Apple's plan, specifically.)
 
According to their documentation, the implementation is even more secure than this (i.e., this situation, if Apple's rhetoric is to be believed, is itself not even plausible): specifically, if China wanted to catch dissidents, they'd need to both 1) compel their local CSAM hash-database maintainers to add hashes of dissident photos (easy), and 2) compel at least one foreign jurisdiction's maintainer to add the same hashes (much harder).
Exactly, and then there's also the fact that Apple has no short-term plans to launch this outside of the U.S. in the first place. That makes it far easier to say "no" to foreign governments.
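To put the same point in pseudo-concrete terms: per the documentation quoted above, only hashes supplied independently by child-safety organizations in separate jurisdictions end up in the on-device database, which is conceptually just a set intersection. A minimal sketch, with made-up hash values:

```python
# Minimal conceptual sketch of the "two independent jurisdictions" rule
# described in Apple's documentation: only hashes present in BOTH source
# lists are included in the shipped on-device database. Values are made up.

us_org_hashes      = {"a1b2", "c3d4", "e5f6"}          # e.g., a U.S. organization
foreign_org_hashes = {"c3d4", "e5f6", "9f9f"}          # organization in another jurisdiction

shipped_database = us_org_hashes & foreign_org_hashes   # {"c3d4", "e5f6"}

# A hash inserted unilaterally by one government or organization ("a1b2" or
# "9f9f") never makes it into the database that devices actually receive.
```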

my bet is Apple spent more than a brief moment, unless their idea from the get-go was malevolent.
Personally, I think Apple's goals are noble here, but I also think that they made a huge tactical error by not getting the EFF and other agencies on board from the get-go. Apple may have put a lot of thought into this, but there may still be things that they haven't thought of... but more importantly, they haven't given anybody enough of a peek behind the curtain to really verify whether they've dotted all the i's and crossed all the t's.

It's pretty much exactly what Apple did with AirTags, where it clearly didn't consult with any meaningful advocacy groups or experts when it came to how the anti-stalking features were going to work.
 
For all the proud mommies and daddies taking pics of their child splash-splashing and cavorting in a tub of suds, welcome to pedophile-dom. This is dumb. It would be fun to see some progressive warriors get stung.
 
This is a partial win, but people need to keep the pressure on to scrap the whole thing. Apple is a technology company; it has absolutely no business dropping the core values that people literally base their purchase decisions on just to do a parent's job.
 
Really, that's weird, because the last few thousand of my photos are both on iCloud and on my device. It's only the older photos that have been moved to the cloud, but there are still thumbnails of them on my phone.

Also, the ability to scan a user's phone without them having an iCloud account is part of this.
Context. My post was in reply to someone trying to make fun of Apple’s “what happens on iPhone, stays on iPhone” slogan. When you upload something to iCloud, it no longer falls under the category of “what happens on iPhone”.

It’s idiotic to expect complete privacy for data you chose to upload to someone else’s server. They’re Apple’s servers, so it’s perfectly reasonable for them to care what’s stored on them. I’m just against the on-device scanning.
 
So - anyone disturbed just a tad that there is a whole group at Apple that is studying how to identify Child Porn and how to program some computer to recognize it? That means they have to have examples of it... that means they have to study it, that means they have to develop requirements for this SW, that means they have to develop algorithms to figure out that this picture is child porn vs. a kid taking a bath or in a swimming pool...

Then someone has to review these results to make sure they are correct and meet the requirements of the SW product.

What kind of staff are working this task?
By having and reviewing examples of the Kiddie Porn, they are breaking the very same laws.
Who is vetting these Apple employees?
This is making me queasy to think about.
Exactly this
 
@jhollington, I think we're kinda sorta on the same page, we just have different levels of trust.

I'm a paranoid SOB, and no mistake. Example: When I was still employed as an IT Admin I made it a habit to touch base with new employees to caution them: "I don't know what you're used to where you used to be employed, but here, we, I, take network security very seriously. I can almost guarantee there are things you're used to doing that will run afoul of our policies." (I was a bit more diplomatic than that, but you get the idea.) One new sales guy looked at me skeptically, so I told him "I guarantee you I'm the most anally-retentive security weenie you're likely ever to meet." "I doubt it," he replied, "they were pretty strict where I came from." "I'll get back to you in a month or so," I told him, with a smile. And I did. "Well?" I asked him. "You win," he replied, laughing.

Hell, I once suspended my manager's network access for violating security policy. Got away with it, too ;)
 
... but I also think that they made a huge tactical error by not getting the EFF and other agencies on board from the get-go. Apple may have put a lot of thought into this, but there may still be things that they haven't thought of... but more importantly, they haven't given anybody enough of a peek behind the curtain to really verify whether they've dotted all the i's and crossed all the t's.
I believe the less emotionally-charged pushback from the security community has been exactly this. This was a huge blunder on Apple's part IMO, and some who enjoy their fame (lookin' at you, Snowden), or their five minutes of it (Kulshrestha and Mayer), capitalized on it to ignite a frenzy of emotionally-driven rhetoric. Any technique like this with noble goals should, IMO, be open-sourced for as much widespread, openly invited scrutiny as possible.
 
Additionally, there is the verification step once the scanner triggers enough "matches" -- your "matched" data is sent to Apple for review. Are enterprises really OK with this? The scanner thinks it's a match, so it's going to send potentially proprietary company information to ... someone ... at Apple?
As the feature is currently laid out, the data being sent to Apple for review would only be photos that are already on Apple's servers, since it only scans photos being uploaded to iCloud Photo Library. If enterprises are already okay with employees uploading proprietary company information to iCloud, this doesn't really change anything, and if they're not, then iCloud Photo Library should be disabled, or employees shouldn't be using the Photos app for company information, in which case nothing gets scanned.

Obviously the theory is that it's only sending when there's high confidence it's CSAM and not something innocuous. But the existence of the review step indicates that this is not the same as certainty. And of course we all know there can't be certainty without human review. But this inherently means that eventually non-infringing material will be sent to Apple for review.
Apple claims there's a one in a trillion chance per year of incorrectly flagging a given account, and while the real odds may be somewhat higher than that, false positives are still going to be very uncommon. In the case of common hashing algorithms like MD5, the probability of two files accidentally having the same hash (known as a "hash collision") is estimated to be somewhere around 1.47×10⁻²⁹, although it's fairly trivial to generate files that compute to the same hash deliberately.
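For anyone unfamiliar with the terminology, here's a tiny illustration of what a "hash collision" means, using plain MD5 from Python's standard library. This is only an analogy; Apple's system uses a perceptual hash (NeuralHash), not MD5:

```python
# Illustration of hashing and collisions using MD5 (analogy only; the CSAM
# system uses a perceptual hash, not a cryptographic one like MD5).
import hashlib

def md5_digest(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

a = md5_digest(b"photo of a tractor")
b = md5_digest(b"photo of a cloud")

# Two different inputs almost always produce different digests. A "collision"
# is the rare case where two *different* inputs share a digest; in a
# match-against-a-database scheme, that's what a false positive looks like.
print(a)
print(b)
print("collision!" if a == b else "no collision")
```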

Although the probabilities for the CSAM hashes could be different, it's still going to be extremely rare that another photo triggers a false positive. Then there's the fact that there would need to be 30 such false positives before anybody at Apple would know about it.

A false positive would almost certainly look nothing like an actual CSAM photo. It could be a photo of a tractor, a cloud, or anything really, as it's just a mathematical coincidence that two photos compute to the same hash.

However, since we're talking about reporting somebody to a law enforcement organization, which is extremely serious, it stands to reason that Apple wants to be absolutely, indisputably certain before taking that step.
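Putting that together (per-photo matching against a fixed hash set, plus a threshold of roughly 30 matches before anything is surfaced for review), the account-level logic amounts to something like the sketch below. The helper names are mine, not Apple's, and the real system additionally wraps results in encrypted safety vouchers that Apple can't read until the threshold is crossed:

```python
# Conceptual sketch of threshold-based flagging: individual matches do
# nothing on their own; only an account that crosses the threshold is ever
# escalated for human review. Names and structure here are illustrative.

MATCH_THRESHOLD = 30  # per Apple's public description

def count_matches(uploaded_photo_hashes, known_csam_hashes):
    """How many of an account's uploaded photo hashes appear in the
    known-CSAM hash database."""
    return sum(1 for h in uploaded_photo_hashes if h in known_csam_hashes)

def should_escalate_for_review(uploaded_photo_hashes, known_csam_hashes):
    """A handful of accidental collisions stays below the threshold and is
    never seen by anyone; 30+ matches triggers the human-review step."""
    return count_matches(uploaded_photo_hashes, known_csam_hashes) >= MATCH_THRESHOLD
```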
 


Apple today announced that it has delayed the rollout of the Child Safety Features it announced last month, following negative feedback.

[Image: Child Safety Feature graphic]

The planned features include scanning users' iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.

Apple confirmed that feedback from customers, non-profit and advocacy groups, researchers, and others about the plans has prompted the delay to give the company time to make improvements. Apple issued the following statement about its decision:

Following their announcement, the features were criticized by a wide range of individuals and organizations, including security researchers, the privacy whistleblower Edward Snowden, the Electronic Frontier Foundation (EFF), Facebook's former security chief, politicians, policy groups, university researchers, and even some Apple employees. Apple has since endeavored to dispel misunderstandings and reassure users by releasing detailed information, sharing FAQs, various new documents, interviews with company executives, and more.

The suite of Child Safety Features was originally set to debut in the United States with an update to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. It is now unclear when Apple plans to roll out the "critically important" features, but the company still appears to be intent on releasing them.

Article Link: Apple Delays Rollout of Controversial Child Safety Features to Make Improvements

To make their voices heard, everyone should go to Apple's customer feedback page and leave their opinion on this feature:

 
... but I also think that they made a huge tactical error by not getting the EFF and other agencies on board from the get-go.
That was certainly a tactical and strategic error, but I'm not certain it would have mattered to some because, no matter what they did or how they implemented it, they'd still be crossing that line many feel should not be crossed. Their arguments then may have been purely philosophical, but they'd likely still have made them.
 
This is Apple's way of saying... "We will silently be scanning your photos and having our AI learn more."
 
Apple probably did design this feature with privacy in mind, but it sets a precedent for future scans for many different things (that we know and don’t know about), and it’s a slippery slope once it’s activated. Reason and technical understanding should not place full 100% trust in Apple, and there must be a mention or even an acknowledgement that this can lead to future breaches of privacy, both intentional and unintentional.

Can someone explain to me how this "sets a precedent for future scans"? I've been following this for weeks but still don't really understand this point.

I'm not gonna be one of those that say "I have nothing to hide so this doesn't bother me". Quite the contrary; I value my privacy and have photos on my device that I'd prefer to keep away from prying eyes - although nothing illegal, unless having nude photos of your 40 y/o wife is suddenly illegal ;)

As I understand it, and using my above example of personal photos, the only way my photos can be flagged by the system (and ultimately seen by other human eyes) is if those exact photos are in the CSAM database for the hashes to match. I don't see this as being a credible concern unless I'm missing something.

The idea of abuse is bandied around a lot and I guess this is what's meant by "future scans", but I still don't follow. If a regime change suddenly made it illegal for me to have my aforementioned personal photos and I was now at risk of persecution, then the point above still holds - my personal photos will not be on any database to give any matches. Even if the new nefarious regime manipulated the database to include hundreds of thousands of photos of nude women - none will be my wife, so there cannot be any matches (and if any are of my wife then I've got bigger problems).

The same applies to all the other examples mentioned - persecution of political activists is one that comes up a lot. If I were such an activist (or insert any other persecuted demographic) and I had photos of me at some demonstration or rally (or insert any other compromising activity), how can a match be flagged without the actual photos first being in the hashed database?

Clearly, legitimate comparison of image hashes to the CSAM database (even if the database content is compromised) is of little concern (at least to my understanding). So then where is the concern?

Is it the potential to compromise the hash comparison algorithm such that less and less exact matches can be garnered? This seems to me to be the only way to "trick" the system into reporting my unique photos as matching the CSAM (or other) database; however, I don't see this one articulated too clearly in any of the arguments presented. I also don't see this as being particularly efficient, since in order to ensure retrieval of any targeted photo the bar would have to be set so low that most likely all photos would be reported and sent for investigation.
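To make that last question concrete: perceptual hashes are generally compared by Hamming distance rather than strict equality, so "loosening" the system would just mean raising the allowed distance. Here's a rough sketch using a classic difference hash (dHash) built with Pillow; this is a generic textbook technique, not Apple's NeuralHash:

```python
# Generic perceptual-hash sketch (dHash) to show why loosening the match
# threshold is a blunt instrument: the wider the allowed Hamming distance,
# the more unrelated photos start to "match". This is NOT Apple's NeuralHash.
from PIL import Image  # pip install Pillow

def dhash(image_path, hash_size=8):
    """Classic difference hash: shrink, grayscale, compare adjacent pixels."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(1 if left > right else 0)
    return bits

def hamming_distance(h1, h2):
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def is_match(h1, h2, max_distance=0):
    """max_distance=0 demands a near-exact perceptual match; raising it
    ("setting the bar lower") rapidly pulls in unrelated images."""
    return hamming_distance(h1, h2) <= max_distance
```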
 
A decade is a bit of an exaggeration, but it's definitely not something that happens as quickly as most people think.

First, there has to be a critical mass of copies of a photo caught circulating, but modern social media and "dark web" channels have sped that process up dramatically in recent years.

More importantly, however, CSAM circulates for years, and the disturbed people who collect this stuff can't ever get enough of it. There's a very high probability that anybody in this situation will have enough photos in their collection that also happen to be in the CSAM database, which is also likely why the threshold is only set at 30. I haven't ever heard of a case where a consumer of CSAM had fewer than several hundred photos in their collection.

Sadly, you're partially right that it's not going to do anything to stop "active abusers" — at least not directly. The animals who are creating CSAM are usually smart enough to avoid public online services in the first place, but even if 30 of their photos strayed into iCloud, they'd be too new to be caught by the CSAM Detection algorithm.

However, this is where old-fashioned detective work and forensics come in, and from the law enforcement agents I've spoken to, more often than not a collector of CSAM provides invaluable leads to track down the distributors and creators.

I agree with everything you say.

I am mainly annoyed by people who pretend that this is going to magically detect and catch someone in the act of abusing a child, and that every day you delay its implementation is another day a child gets abused.

I assume in the not too distant future, AI will get good enough that your phone will be able to detect if you are photographing or filming child abuse (similar to how scanners can detect and block scanning currency now).
 
For all the proud mommies and daddies taking pics of their child splash-splashing and cavorting in a tub of suds, welcome to pedophile-dom. This is dumb. It would be fun to see some progressive warriors get stung.
It will not have any effect on those photos. The photos have to be known child porn in a database; the scan compares hashes from the database with hashes of the uploaded photos.
 
Can someone explain to me how this "sets a precedent for future scans"? I've been following this for weeks but still don't really understand this point.

I'm not gonna be one of those that say "I have nothing to hide so this doesn't bother me". Quite the contrary; I value my privacy and have photos on my device that I'd prefer to keep away from prying eyes - although nothing illegal, unless having nude photos of your 40 y/o wife is suddenly illegal ;)

As I understand it, and using my above example of personal photos, the only way my photos can be flagged by the system (and ultimately seen by other human eyes) is if those exact photos are in the CSAM database for the hashes to match. I don't see this as being a credible concern unless I'm missing something.

The idea of abuse is bandied around a lot and I guess this is what's meant by "future scans", but I still don't follow. If a regime change suddenly made it illegal for me to have my aforementioned personal photos and I was now at risk of persecution, then the point above still holds - my personal photos will not be on any database to give any matches. Even if the new nefarious regime manipulated the database to include hundreds of thousands of photos of nude women - none will be my wife, so there cannot be any matches (and if any are of my wife then I've got bigger problems).

The same applies to all the other examples mentioned - persecution of political activists is one that comes up a lot. If I were such an activist (or insert any other persecuted demographic) and I had photos of me at some demonstration or rally (or insert any other compromising activity), how can a match be flagged without the actual photos first being in the hashed database?

Clearly, legitimate comparison of image hashes to the CSAM database (even if the database content is compromised) is of little concern (at least to my understanding). So then where is the concern?

Is it the potential to compromise the hash comparison algorithm such that less and less exact matches can be garnered? This seems to me to be the only way to "trick" the system into reporting my unique photos as matching the CSAM (or other) database; however, I don't see this one articulated too clearly in any of the arguments presented. I also don't see this as being particularly efficient, since in order to ensure retrieval of any targeted photo the bar would have to be set so low that most likely all photos would be reported and sent for investigation.
The oft-repeated argument against this is "we don't care how it's supposed to work, we care that it's happening on our devices." But, to answer your question more directly: there are many who think it would be trivially easy to adjust the existing method to scan for arbitrary things (naked 40 y/o women), although my albeit anecdotal experience is that it would be much easier to just co-opt existing scanning algorithms in the Photos app (facial recognition, object detection, etc.) for nefarious purposes. Transfer learning in DNNs is certainly a thing, but it's not a magic button like some seem to think.
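For anyone curious what "transfer learning" actually involves, here's a bare-bones PyTorch/torchvision sketch of repurposing an existing pretrained image classifier for a new target category. It's illustrative only; assembling labeled training data and actually fine-tuning is the part that isn't a magic button:

```python
# Bare-bones transfer-learning sketch (PyTorch/torchvision): take a pretrained
# image classifier and retrain only a new final layer for a new target
# category. Illustrative only; the labeled data and training loop are the
# hard parts omitted here.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone...
for param in model.parameters():
    param.requires_grad = False

# ...and attach a new head for a binary "target content / not target" task.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are optimized during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# ...followed by a standard training loop over labeled example images.
```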
 
So - anyone disturbed just a tad that there is a whole group at Apple that is studying how to identify Child Porn and how to program some computer to recognize it? That means they have to have examples of it... that means they have to study it, that means they have to develop requirements for this SW, that means they have to develop algorithms to figure out that this picture is child porn vs. a kid taking a bath or in a swimming pool...

Then someone has to review these results to make sure they are correct and meet the requirements of the SW product.

What kind of staff are working this task?
By having and reviewing examples of the Kiddie Porn, they are breaking the very same laws.
Who is vetting these Apple employees?
This is making me queasy to think about.
Um, no. No one at Apple needs any actual CSAM to do their jobs. Just the hashes that have been provided by the appropriate organization.
 