Apple’s statement did not age well.

> So Apple cares about the screeching minority after all?
This argument still fails to assert why anyone trusted Apple for the last X time units not to use your device for spying purposes. Take any of the prohibited <name your thing> ideas for any jurisdiction, and the (much more efficient) tech to accomplish it has already been present in your phone for many years now --- all data on your phone has not only been freely scanned by iOS for processing purposes (make the picture pretty!), but has been scanned for specific content tagging (Photos has object/face detection, Spotlight indexes the contents of all files, etc.). The rhetoric that this sets some sort of precedent of on-device scanning simply doesn't follow; Apple has already been scanning all of your data, on your device. The "why we scan for it" (to "root out CSAM", or by your argument, to "ferret out illegal activity") doesn't matter, according to Ed Snowden at least (https://edwardsnowden.substack.com/p/all-seeing-i). That works both ways, and should apply to all scanning, not just this new method of scanning --- the argument has been repeated endlessly: government X will compel Apple to scan for Y. What's stopped them from doing that already? The tech has already been there, so why wait for a harder-to-abuse (if Apple's documentation is to be believed) technique to be pushed?

> I suspect you're missing one of the big concerns: It's not this specific implementation that's necessarily the problem, although there has been some evidence to suggest it is, or could become, problematical. It's also the precedent of doing any on-device scanning for the purpose of ferreting-out illegal activity that's a problem.
>
> If doing on-device scanning for CSAM is ok, then why not on-device scanning for prohibited <name your thing>? Weapons? Political gatherings? If scanning images that people plan to upload to cloud storage is ok, then why not scan images regardless of whether they're to be uploaded anywhere? If scanning for image matches is ok, then why not scanning for "hate speech?" (Some countries do have "hate speech" laws and there are people, right here in "the land of the free," that would like to see them here, too.) Or planned protests? Or...?
>
> Yes, this is a slippery slope argument. But that doesn't necessarily make it fallacious, as some here are wont to claim.
>
> Bottom line: It is felt by many, and by all security researchers and privacy advocates, that this crosses a line that should not be crossed. I agree. Emphatically.
>
> Besides: Viscerally, having some kind of scanner not under my direct control, on my devices, is... icky!
I'm late to this topic but I was NOT going to upgrade from an iPhone 7 this fall because I would be forced to use iOS 15 with this terrible CSAM feature that we've all chatted about the past few weeks.

> Looks like they want to clear the path for iPhone season.
That's a fair statement, but I keep coming back to the fact that Apple has been doing on-device scanning for things in your photos for at least five years. Granted, that's not for the purpose of ferreting out illegal activity, but the exact same slippery slope argument could be made for that.

> I suspect you're missing one of the big concerns: It's not this specific implementation that's necessarily the problem, although there has been some evidence to suggest it is, or could become, problematical. It's also the precedent of doing any on-device scanning for the purpose of ferreting-out illegal activity that's a problem.
Open your Photos app and do a search for something like "firearms." I don't know about you, but I get at least a few hits. The technology is already there to do this, and has been since Apple released iOS 10 back in 2016... and guess what? It runs on your iPhone whether you're using iCloud Photo Library or not, and there's no way to turn it off.

> If doing on-device scanning for CSAM is ok, then why not on-device scanning for prohibited <name your thing>? Weapons? Political gatherings? If scanning images that people plan to upload to cloud storage is ok, then why not scan images regardless of whether they're to be uploaded anywhere? If scanning for image matches is ok, then why not scanning for "hate speech?" (Some countries do have "hate speech" laws and there are people, right here in "the land of the free," that would like to see them here, too.) Or planned protests? Or...?
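For anyone curious what that kind of on-device classification looks like in code (the "firearms" search mentioned above): Apple's public Vision framework (iOS 13+ / macOS 10.15+) exposes a similar classifier that runs entirely on-device. This is a minimal sketch with a placeholder file path, an approximation of the idea rather than the private pipeline the Photos app actually uses:

```swift
import Foundation
import Vision

// Classify a local image entirely on-device with the public Vision API.
// The path below is a placeholder; point it at any image on disk.
let imageURL = URL(fileURLWithPath: "/path/to/photo.jpg")

let request = VNClassifyImageRequest()
let handler = VNImageRequestHandler(url: imageURL, options: [:])

do {
    try handler.perform([request])
    // Each observation is a label plus a confidence score; nothing leaves the device.
    let observations = (request.results as? [VNClassificationObservation]) ?? []
    for observation in observations.prefix(5) {
        print(observation.identifier, observation.confidence)
    }
} catch {
    print("Classification failed:", error)
}
```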
While guys like Snowden enjoy hearing the sound of their own voices far too much, I think at least part of the pushback from some of the more professional organizations like the EFF is simply a matter of not being sure that Apple has thought this one through — and they're probably right.

> Bottom line: It is felt by many, and by all security researchers and privacy advocates, that this crosses a line that should not be crossed. I agree. Emphatically.
That point I sort of get. I think I'd feel the same level of icky if I hadn't read all of the technical papers on it and had some understanding of what's going on — and most significantly the fact that it's directly and solely tied to iCloud Photo Library. Personally, I feel like it's only a semantic difference whether photos are scanned on my iPhone before they're uploaded to iCloud or after they arrive on Apple's servers. That's just me, of course, and I understand how some may feel differently about that.

> Besides: Viscerally, having some kind of scanner not under my direct control, on my devices, is... icky!
Yes, one of the several flaws to which I alluded in my previous post. That, in turn, demands a certain degree of trust, not just in Apple (or whomever is doing the manual reviews), but also in the individuals doing the reviews. In a day and age where people with convictions for child abuse have been found to have been hired into positions of trust involving children, one might reasonably be excused for being a bit concerned about the possibilities. Far-fetched? Paranoid? Would you ever have believed a school district would be so careless as to hire a bus driver who was a registered sex offender? Yet it's happened.

> But this inherently means that eventually non-infringing material will be sent to Apple for review.
If they do, the screeching voices of the minority will come back.

> They’re still going to release it at some point so this is all moot.
Exactly, and then there's also the fact that Apple has no short-term plans to launch this outside of the U.S. in the first place. That makes it far easier to say "no" to foreign governments.

> According to their documentation, the implementation is even more secure than this (i.e., this situation, if Apple's rhetoric is to be believed, is itself not even plausible) --- specifically, if China wants to catch dissidents, they'd need to both 1) compel their local CSAM maintainers to upload dissident photos (easy), and 2) compel at least 1 foreign jurisdiction's CSAM maintainer to upload the same dissident photos (much harder).
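Apple's threat-model document describes that cross-jurisdiction requirement as an intersection: only hashes supplied independently by at least two child-safety organizations operating under different governments make it into the database that ships with the OS. A toy sketch of the idea (the hash strings are made-up placeholders, not real data):

```swift
// Toy illustration of the "two independent sources" requirement described above.
// Only hashes vouched for by at least two separate child-safety organizations,
// operating under different governments, survive into the shipped database.
// All values below are placeholders.
let organizationA: Set<String> = ["hash1", "hash2", "hash3", "hashPlantedByOneGovernment"]
let organizationB: Set<String> = ["hash2", "hash3", "hash4"]

// An entry slipped into only one source never reaches devices.
let shippedDatabase = organizationA.intersection(organizationB)
print(shippedDatabase.sorted())  // ["hash2", "hash3"]
```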
Personally, I think Apple's goals are noble here, but I also think that they made a huge tactical error by not getting the EFF and other agencies on board from the get-go. Apple may have put a lot of thought into this, but there may still be things that they haven't thought of... but more importantly, they haven't given anybody enough of a peek behind the curtain to really verify whether they've dotted all the i's and crossed all the t's.

> my bet is Apple spent more than a brief moment, unless their idea from the get-go was malevolent.
Context. My post was in reply to someone trying to make fun of Apple’s “what happens on iPhone, stays on iPhone” slogan. When you upload something to iCloud, it no longer falls under the category of “what happens on iPhone”.

> Really, that's weird cause the last few thousand of my photos are both on iCloud and my device. It's only the older photos that have been moved to the cloud, but still there are thumbnails of them on my phone.
Also, the ability to scan a user's phone without them having an iCloud account is part of this.
Exactly this.

> So - anyone disturbed just a tad that there is a whole group at Apple that is studying how to identify Child Porn and how to program some computer to recognize it? That means they have to have examples of it... that means they have to study it, that means they have to develop requirements for this SW, that means they have to develop algorithms to figure out that this picture is child porn vs a kid taking a bath or in a swimming pool...
>
> Then someone has to review these results to make sure they are correct and meet the requirements of the SW product.
>
> What kind of staff are working this task?
>
> By having and reviewing examples of the Kiddie Porn, they are breaking the very same laws.
>
> Who is vetting these Apple employees?
>
> This is making me queasy to think about.
The very few dislikes to your good statements always come from the same people. That was fantastic: now I could easily update my ignore list. I thank MacRumors for this option.

> Oh God! Don’t just delay it. CANCEL THIS, Apple. Can’t you see… people won’t be ordering the new iPhone 13 if you launch this child safety crap.
Yes, because they were all dead set on using iCloud Photos to back up their illegal images, and couldn't figure out a workaround.

> Looks like the pedos can breathe a sigh of relief.
I believe the less emotionally-charged pushback from the security community has been exactly this. This was a huge blunder on Apple's part IMO, and some who enjoy their fame (lookin' at you, Snowden) (or five minutes of it [...])

> ... but I also think that they made a huge tactical error by not getting the EFF and other agencies on board from the get-go. Apple may have put a lot of thought into this, but there may still be things that they haven't thought of... but more importantly, they haven't given anybody enough of a peek behind the curtain to really verify whether they've dotted all the i's and crossed all the t's.
As the feature is currently laid out, the data being sent to Apple for review would only be photos that are already on Apple's servers, since it only scans photos being uploaded to iCloud Photo Library. If enterprises are already okay with employees uploading proprietary company information to iCloud, this doesn't really change anything, and if they're not, then iCloud Photo Library should be disabled, or employees shouldn't be using the Photos app for company information, in which case nothing gets scanned.

> Additionally, there is the verification step once the scanner triggers enough "matches" -- your "matched" data is sent to Apple for review. Are enterprises really OK with this? The scanner thinks it's a match, so it's going to send potentially proprietary company information to ... someone ... at Apple?
Apple claims there's a one-in-a-trillion chance of a false positive, and while the true odds may be a bit higher than that, it's still going to be very uncommon. In the case of the most common hashing algorithms like MD5, the probability of two files accidentally having the same hash (known as a "hash collision") is estimated to be somewhere around 1.47×10⁻²⁹, although it's fairly trivial to generate files that compute to the same hash deliberately.

> Obviously the theory is that it's only sending when there's high confidence it's CSAM and not something innocuous. But the existence of the review step indicates that this is not the same as certainty. And of course we all know there can't be certainty without human review. But this inherently means that eventually non-infringing material will be sent to Apple for review.
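For what it's worth, a number of the same order as that 1.47×10⁻²⁹ figure falls out of the standard birthday approximation for accidental collisions of a 128-bit hash like MD5, if you assume a set of roughly 100,000 files (the file count here is an illustrative assumption, not something from the post above):

```latex
% Probability of any accidental collision among n files under a b-bit hash,
% using the birthday approximation. With b = 128 (MD5) and n = 10^5:
P(\text{collision}) \approx \frac{n(n-1)}{2 \cdot 2^{b}}
                    \approx \frac{(10^{5})^{2}}{2^{129}}
                    \approx \frac{10^{10}}{6.8 \times 10^{38}}
                    \approx 1.5 \times 10^{-29}
```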
Apple today announced that it has delayed the rollout of the Child Safety Features it announced last month, following negative feedback.
The planned features include scanning users' iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.
Apple confirmed that feedback from customers, non-profit and advocacy groups, researchers, and others about the plans has prompted the delay to give the company time to make improvements. Apple issued the following statement about its decision:
Following their announcement, the features were criticized by a wide range of individuals and organizations, including security researchers, the privacy whistleblower Edward Snowden, the Electronic Frontier Foundation (EFF), Facebook's former security chief, politicians, policy groups, university researchers, and even some Apple employees. Apple has since endeavored to dispel misunderstandings and reassure users by releasing detailed information, sharing FAQs and various other documents, giving interviews with company executives, and more.
The suite of Child Safety Features was originally set to debut in the United States with an update to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. It is now unclear when Apple plans to roll out the "critically important" features, but the company still appears to be intent on releasing them.
Article Link: Apple Delays Rollout of Controversial Child Safety Features to Make Improvements
That was certainly a tactical and strategic error, but I'm not certain it would have mattered to some because, no matter what they did or how they implemented it, they'd still be crossing that line many feel should not be crossed. Their arguments then may have been purely philosophical, but they'd likely still have made them.

> ... but I also think that they made a huge tactical error by not getting the EFF and other agencies on board from the get-go.
Apple probably did design this feature with privacy in mind, but it sets a precedent for future scans for many different things (that we know and don’t know about), and it's a slippery slope once it's activated. Reason and technical understanding should not place FULL 100% trust in Apple, and there must be a mention or even an acknowledgement that this can lead to future breaches of privacy, both intentional and unintentional.
A decade is a bit of an exaggeration, but it's definitely not something that happens as quickly as most people think.
Firstly, a photo has to reach a critical mass of circulation before it gets caught and catalogued, but modern social media and "dark web" channels have sped that process up dramatically in recent years.
More importantly, however, CSAM circulates for years, and the disturbed people who collect this stuff can't ever get enough of it. There's a very high probability that anybody in this situation will have enough photos in their collection that are also in the CSAM database, which is also likely why the threshold is only set at 30. I haven't ever heard of a case where a consumer of CSAM had fewer than several hundred photos in their collection.
Sadly, you're partially right that it's not going to do anything to stop "active abusers" — at least not directly. The animals who are creating CSAM are usually smart enough to avoid public online services in the first place, but even if 30 of their photos strayed into iCloud, they'd be too new to be caught by the CSAM Detection algorithm.
However, this is where old-fashioned detective work and forensics come in, and from the law enforcement agents I've spoken to, more often than not a collector of CSAM provides invaluable leads to track down the distributors and creators.
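To put a rough number on why that threshold of 30 matters: if you assume a per-photo false-match rate and a library size (both figures below are illustrative assumptions, not Apple's published numbers), the odds of an innocent account ever reaching the threshold are vanishingly small:

```latex
% Illustrative only: assume a per-photo false-match probability p = 10^{-6}
% and a library of N = 20{,}000 photos. False matches are then roughly
% Poisson-distributed with mean \lambda = Np = 0.02, so the chance of
% reaching the threshold T = 30 is about
P(X \ge 30) \approx e^{-\lambda} \frac{\lambda^{30}}{30!}
            = e^{-0.02} \frac{(0.02)^{30}}{30!}
            \approx 4 \times 10^{-84}
```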
Will not have any effect on those photos; photos have to be known child porn in a database, and the scan compares hashes from that database against uploaded photos.

> For all the proud mommies and daddies taking pics of their child splash-splashing and cavorting in a tub of suds, welcome to pedophile-dom. This is dumb. It would be fun to see some progressive warriors get stung.
The oft-repeated argument against this is "we don't care how it's supposed to work, we care that it's happening on our devices." But, to answer your question more directly: there are many who think it would be trivially easy to adjust the existing method to scan for arbitrary things (naked 40 y/o women), although my albeit anecdotal experience is that it would be much easier to just co-opt existing scanning algorithms in the Photos app (facial recognition, object detection, etc.) for nefarious purposes. Transfer learning in DNNs is certainly a thing, but it's not a magic button like some seem to think.

> Can someone explain to me how this "sets a precedent for future scans"? I've been following this for weeks but still don't really understand this point.
>
> I'm not gonna be one of those that say "I have nothing to hide so this doesn't bother me". Quite the contrary; I value my privacy and have photos on my device that I'd prefer to keep away from prying eyes - although nothing illegal, unless having nude photos of your 40 y/o wife is suddenly illegal.
>
> As I understand it, and using my above example of personal photos, the only way my photos can be flagged by the system (and ultimately seen by other human eyes) is if those exact photos are in the CSAM database for the hashes to match. I don't see this as being a credible concern unless I'm missing something.
>
> The idea of abuse is bandied around a lot and I guess this is what's meant by "future scans", but I still don't follow. If a regime change suddenly made it illegal for me to have my aforementioned personal photos and I was now at risk of persecution, then the point above still holds - my personal photos will not be in any database to give any matches. Even if the new nefarious regime manipulated the database to include hundreds of thousands of photos of nude women - none will be my wife, so there cannot be any matches (and if any are of my wife then I've got bigger problems).
>
> The same applies to all the other examples mentioned - persecution of political activists is one that comes up a lot. If I were such an activist (or insert any other persecuted demographic) and I have photos of me at some demonstration or rally (or insert any other compromising activity), how can a match be flagged without the actual photos first being in the hashed database?
>
> Clearly, legitimate comparison of image hashes to the CSAM database (even if the database content is compromised) is of little concern (at least to my understanding). So then where is the concern?
>
> Is it the potential to compromise the hash comparison algorithm such that less and less exact matches can be garnered? This to me seems to be the only way to "trick" the system into reporting my unique photos as matching the CSAM (or other) database; however, I don't see this articulated too clearly in any of the arguments presented. Also, I don't see this as being particularly efficient, since in order to ensure retrieval of any targeted photo the bar would have to be set so low that all photos would most likely be reported and sent for investigation.
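On that last question: in a generic perceptual-hash scheme, "how exact is a match" comes down to a distance threshold between hashes, and that threshold is exactly the knob being asked about. A toy sketch (this is not Apple's NeuralHash pipeline, and every number in it is made up):

```swift
// Toy model of perceptual-hash matching, NOT Apple's actual system.
// Two images "match" when their 64-bit hashes differ in at most
// `maxHammingDistance` bits. At 0, only (near-)identical images match;
// raising it makes progressively more unrelated images collide.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

// Placeholder hash values for illustration only.
let databaseHashes: [UInt64] = [0x1F3A9C0288D14E77, 0xB04D11AA53F206C9]
let photoHash: UInt64 = 0x1F3A9C0288D14E76

let maxHammingDistance = 2  // 0 = exact match only; larger = looser matching
let isMatch = databaseHashes.contains { hammingDistance($0, photoHash) <= maxHammingDistance }
print(isMatch)  // true: the photo hash differs from the first database entry by a single bit
```

As I read Apple's documents, the shipped design compares NeuralHash values as exact matches (the tolerance to cropping and re-encoding is baked into the hash function itself), so the worry above amounts to that design being quietly changed.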
Um, no. No one at Apple needs any actual CSAM to do their jobs. Just the hashes that have been provided by the appropriate organization.

> So - anyone disturbed just a tad that there is a whole group at Apple that is studying how to identify Child Porn and how to program some computer to recognize it? That means they have to have examples of it... that means they have to study it, that means they have to develop requirements for this SW, that means they have to develop algorithms to figure out that this picture is child porn vs a kid taking a bath or in a swimming pool...
>
> Then someone has to review these results to make sure they are correct and meet the requirements of the SW product.
>
> What kind of staff are working this task?
>
> By having and reviewing examples of the Kiddie Porn, they are breaking the very same laws.
>
> Who is vetting these Apple employees?
>
> This is making me queasy to think about.
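To illustrate the "just the hashes" point above: a team building this kind of matching system only ever handles opaque digests supplied by an outside organization, never the underlying images. Here's a minimal sketch using an ordinary cryptographic hash (Apple's real system uses a perceptual NeuralHash rather than SHA-256, and the strings below are placeholders):

```swift
import CryptoKit
import Foundation

// The "database" is just a set of opaque digest strings supplied by an
// outside organization; the values here are placeholders, not real data.
let knownDigests: Set<String> = [
    "placeholder-digest-from-organization-a",
    "placeholder-digest-from-organization-b"
]

// Hash arbitrary bytes and check membership; no image content is ever inspected.
func digestHex(of data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

let photoData = Data("example photo bytes".utf8)
print(knownDigests.contains(digestHex(of: photoData)))  // false for this made-up example
```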