...and how often does it have to be repeated that the proposed system was designed to match similar-looking images because requiring an exact match would be too easy to fool? When you're scanning the images from a billion iPhone users, even a million-to-one chance of a false match will be too much to properly investigate.
Nope. It required a match.
 
That's nice for iCloud, but it doesn't address the CSAM hash-checking function running locally on machines with macOS 10.15 and newer. If your computer comes with a quiet little built-in snoop that doesn't alert you to the presence of illegal material, but instead just alerts the federal police and wrecks your life, that's probably something everyone with teenagers should consider very deeply.

As of a year ago, facing the pushback last time,
“December 2021, Apple removed the above update and all references to its CSAM detection plans from its Child Safety page, but an Apple spokesperson informed The Verge that Apple's plans for the feature had not changed.”

CSAM scanning was never implemented, so there's not a hash-checking function running on any machines right now.

https://www.macrumors.com/2022/12/07/apple-abandons-icloud-csam-detection/
 
Apple would be liable because they are expressly saying there is no need for them to carry out CSAM detection, and giving excuses as to why. Various vocal groups are saying Apple needs to implement CSAM detection on iCloud, and Apple have turned around and basically said no. Therefore, if CSAM material is found on iCloud by the police through the apprehension of criminals and their investigations, Apple could be held liable for knowing such a thing takes place and refusing to implement something that would have prevented the unlawful material from appearing on iCloud in the first place.
So far as I am aware Apple and other companies are required only to alert the authorities if they discover illegal images on the servers. They are not obliged to search for them. However, the iCloud user license precludes putting illegal content on their servers, so they do screen for that. That's fine by me - their servers, their rules.
This could have been a good thing, but a bunch of complainers who didn't even realize before that Apple is already scanning their iCloud email for child porn suddenly went "but my privacy!"
The issue was about installing on-device surveillance without the permission of the owner of the device. And such a system could be used not only to detect illegal CSAM images, but authoritarian regimes could use it to scan for faces of dissidents, forbidden flags and symbols, political slogans, religious text and pictures, and even faces of certain ethnic identities. To make matters worse, when this was pointed out, Apple published a technical paper outlining the system, giving authoritarian regimes an outline of how to do this even if it isn't a part of iOS. We'll see whether a scheme like this will get embedded into authoritarian regimes' surveillance of their citizens. It seems many failed to understand the ramifications of this.

Getting back to the news article:
First, those of us who objected were told we didn't understand the technical aspects of the system. We did.
Then we were told the system performance was excellent. Then, it became clear there would be false positives (hence the need for human review on Apple's end). Then it was established the system could be circumvented by minor modifications to images.
And now we are told 'sorry, bad idea because the system might be misused', which completely misses the point that the intended use of the system was to search without a warrant, without probable cause, without judicial review, and without explicit permission by the owner of the device to do so.

This is what happens when you let engineers run amok without ethical, legal and social review. Not Apple's finest hour.
 
That’s precisely the point. The AI is making some judgement calls. Visually similar images produce the same hash, not just the same picture cropped. So someone’s toddler COULD produce a false positive. Apple knew this and implemented a workflow for it.

Personally I wouldn’t store those pictures. But it is still a concern because some judgement is happening without your knowledge. And no possible way to clear it saying “no that’s my kid….here is proof” or “no that’s my wife she just looks young….see she is 22!”

If there was a way to clear out the false positives with proof, I would have been fine with it.
Similar images do not create the same hash.
 
That’s precisely the point. The AI is making some judgement calls. Visually similar images produce the same hash, not just the same picture cropped. So someone’s toddler COULD produce a false positive. Apple knew this and implemented a workflow for it.

Personally I wouldn’t store those pictures. But it is still a concern because some judgement is happening without your knowledge. And no possible way to clear it saying “no that’s my kid….here is proof” or “no that’s my wife she just looks young….see she is 22!”
That's not my understanding of how this was supposed to work. It's not AI image recognition in that sense... it's not looking for images that have the appearance of illicit images. It's not making judgement calls. Saying visually similar images produce the same hash is such a broad statement as to be useless in making an objective claim. Of course they do, that's the whole point, to identify tweaked versions of the same image. But that's very different from saying that your family beach pics are going to match some random hash from a CSAM database. We just don't (and may never) know.
 
From their perspective, not wanting to be hosting and storing illegal and despicable content on their servers,
...and, that's the root of the problem being caused by various governments - it shouldn't be Apple's responsibility to police what is on their servers, at least when they're effectively just acting as offsite storage for people's private data.

There are some photo and social media sites that want to monetise your photos - which is a different kettle of fish - but the point of Apple's CSAM scheme was to allow end-to-end encryption so Apple couldn't do anything other than store your data in sealed boxes. (Unfortunately, it opened other cans of worms in the process).

In old money, it's the difference between blaming a publisher who prints and distributes a book full of CSAM and blaming the Post Office for delivering your sealed envelopes. The problem is caused by governments who can't/don't want to see that distinction. There's also the problem with governments who want to backdoor all strong encryption (and are using child abuse as a bait-and-switch issue) but really don't get that that will make it useless for everybody - including industry, not just privacy nuts.
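To make the "sealed boxes" point concrete, here's a minimal sketch of client-side encryption using the third-party cryptography package: the key never leaves the device, so the server only ever stores ciphertext it cannot open. This illustrates the principle only; it is not how iCloud's Advanced Data Protection is actually implemented.

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# The key is generated and kept on the device; it is never sent to the server.
device_key = Fernet.generate_key()
box = Fernet(device_key)

photo_bytes = b"...raw photo data..."
sealed = box.encrypt(photo_bytes)   # this ciphertext is all the server ever stores

# Only the device, which holds the key, can open the sealed box again.
assert box.decrypt(sealed) == photo_bytes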
 
It is clear you do not understand how this feature works or even what hash-checking means.

These systems work like antivirus software. They scan files for matching hashes against a database of known child abuse material compiled by law enforcement agencies.

A child having explicit photos of a girlfriend or boyfriend is not going to be flagged, because it is not CSAM being circulated within known pedophile rings online.

Let’s at least get our facts straight before arguing pros and cons of systems such as these.
Some systems do use novel image detection at great peril (here’s an example) but you’re right that that’s not what Apple had proposed.
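For anyone unsure what "matching hashes against a database" means in practice, here's a minimal sketch of the antivirus-style approach described above, using ordinary cryptographic hashes and a made-up blocklist. (Real CSAM systems use perceptual hashes instead, for reasons that come up later in the thread.)

import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad SHA-256 digests (values made up for illustration).
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: Path) -> bool:
    """Exact match only: a file is flagged if it is byte-for-byte identical to a known item."""
    return sha256_of(path) in KNOWN_BAD_HASHES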
 
Nope. It required a match.
...between two hashes that were specifically designed so that two "similar" images produced the same hash. These are not the same hashes that you use to verify the authenticity of a download, they are the sort of hashes that get used in AI image recognition.
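A toy example of the difference (this is a simple "average hash", not Apple's NeuralHash): small changes to the pixels leave the perceptual hash identical, while the cryptographic hash of the same data changes completely.

import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

original = [10, 200, 30, 180, 20, 190, 40, 170]  # a tiny "image"
tweaked  = [12, 198, 28, 182, 22, 188, 38, 172]  # the same image, slightly altered

print(average_hash(original) == average_hash(tweaked))   # True: perceptual hashes match

sha_a = hashlib.sha256(bytes(original)).hexdigest()
sha_b = hashlib.sha256(bytes(tweaked)).hexdigest()
print(sha_a == sha_b)                                     # False: cryptographic hashes differ

NeuralHash derives its bits from features learned by a neural network rather than raw pixel averages, but the property being debated here, that similar-looking inputs hash to the same value, is the same idea.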
 
That's not my understanding of how this was supposed to work. It's not AI image recognition in that sense... it's not looking for images that have the appearance of illicit images. It's not making judgement calls. Saying visually similar images produce the same hash is such a broad statement as to be useless in making an objective claim. Of course they do, that's the whole point, to identify tweaked versions of the same image. But that's very different from saying that your family beach pics are going to match some random hash from a CSAM database. We just don't (and may never) know.
Look at the white paper. It’s explained there. Visually similar images produce the same hash. SIMILAR, meaning it’s making some judgement here.
 
...and, that's the root of the problem being caused by various governments - it shouldn't be Apple's responsibility to police what is on their servers, at least when they're effectively just acting as offsite storage for people's private data.
I mean I guess. That's not really what I was getting at.... I meant more like Apple (and the people running it) itself doesn't want to be enabling this stuff, regardless of what any particular government says about it. Given how out in front they were with this proposal, it seems like this was not something they were being forced into out of fear of liability.
 
...between two hashes that were specifically designed so that two "similar" images produced the same hash. These are not the same hashes that you use to verify the authenticity of a download, they are the sort of hashes that get used in AI image recognition.
Exactly. To me, saying it requires a match means it’s like how you validate your downloads, which you should always do to make sure the download is trustworthy and EXACT.
 
I understand why people were complaining about this CSAM detection, but I never did understand why they complained about that and not object detection via AI/ML too...which is far more powerful and makes their argument somewhat weak. (Object detection has been around for quite a while and is how Communication Safety in Messages works...detecting nudes, or how you can search for flowers, cars, animals, etc. in your photo library.)

The argument around CSAM was "Oh, someone could upload hashes so it'll detect guns or certain signs too...". Well, the "bad guys" would need to know what the picture looks like first in order to provide the hash. Your picture of a gun probably wouldn't be visually similar to their picture of a gun, so therefore would have a different hash and wouldn't get caught.

On the other hand, object detection via AI/ML could just look at the picture and detect what's in it. The "bad guys" don't need to know what the full image looks like. Who's to say that Apple isn't secretly counting the number of pictures of guns we have...or that they won't in the future? (i.e. "Dear gov't, as requested for national security, we see that Joe Schmoe's iPhone has detected 742 pictures of guns. Here's his info.") Heck, they could even use this method to detect CSAM content before it starts floating around the dark web to get a known hash.

So if you're going to complain about getting caught via a matching hash (visually similar photo), you should also be worried about object detection via AI (non-visually similar photos).
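To make the contrast concrete, here's a rough sketch of label-based counting. The classifier here is a hypothetical stand-in, not a real Apple API; the point is that, unlike hash matching, it needs no reference copy of the image it's looking for.

from collections import Counter
from typing import Callable, Iterable

def count_label(photos: Iterable[bytes],
                classify: Callable[[bytes], list[str]],
                label: str) -> int:
    """Count photos whose classifier output contains `label`.
    `classify` stands in for an on-device ML labeler; no known reference image is required."""
    counts = Counter()
    for img in photos:
        counts.update(set(classify(img)))
    return counts[label]

# Toy usage with a fake classifier, purely for illustration:
fake_classify = lambda img: ["dog"] if img.startswith(b"DOG") else ["car"]
print(count_label([b"DOG one", b"CAR one", b"DOG two"], fake_classify, "dog"))  # 2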
 
I applaud the courage to listen to third-party privacy and child abuse experts and recognize that any vectors they create for user surveillance can be abused by third parties.

The Advanced Data Protection feature took a long time to implement, but it demonstrates a commitment to protect their customers. Legal processes already exist to acquire data for law enforcement purposes.
 
Visually similar images produce the same hash
Again, yeah. But how similar? Similar in what ways? Is there an example of an especially egregious false positive found in testing? Was the hashing method updated as a result?
 
Pornography itself is a slippery slope. You crave more and more explicit content and eventually get to CP.
That's ridiculous. Pornography has pretty much existed since humans started drawing stick figures in caves...yet the vast majority of adults have never had, nor ever will have, an interest in child pornography. Given how much pornography is consumed every day across the internet alone, if what you said had even a grain of truth, child pornography would be everywhere. No matter how much porn they watch, mentally healthy grown adults are repulsed by child pornography.
 
I mean I guess. That's not really what I was getting at.... I meant more like Apple (and the people running it) itself doesn't want to be enabling this stuff, regardless of what any particular government says about it. Given how out in front they were with this proposal, it seems like this was not something they were being forced into out of fear of liability.
You’re on the right track, but it’s not fully out of the goodness of their hearts. They invested a lot of engineering effort in a solution they wound up canning. Part of it is that by enabling E2E encrypted iCloud Photos without some form of CSAM scanning they risk inviting bad PR (at best) and even regulation in the event that someone gets busted and Apple’s services wind up implicated.

See also Apple planning to pull services from the UK if their encryption legislation becomes law in its current form. Of course, they want to sell these services to as many people as possible. Designing themselves out of markets altogether doesn’t help with that, and eventually they could be forced to compromise if they push things too far, “for the children.” Supposedly, of course.
 
Uh huh. Suuurrree, Apple. "After meeting with experts, privacy advocates, etc." you decided to put the brakes on this spyware.

The complete outrage and pushback you got from people angry over the spyware you tried to secretly sneak into iOS had nothing to do with it.
 
Again, that isn’t how it works. Look up PhotoDNA.

Look it up yourself:
Most common forms of hashing technology are insufficient because once a digital image has been altered in any way, whether by resizing, resaving in a different format, or through digital editing, its original hash value is replaced by a new hash.
You're either looking for an exact match or you're looking for "similar" images. You can't have it both ways. They're claiming a false positive rate of "1 in 10 billion" (with no citation given) but that's less impressive if you're scanning hundreds of millions of photos a day - and that's likely based on the false assumption that the photos on someone's camera are going to be uncorrelated.

...and the text I linked to above is talking about simply rejecting or deleting offending photos, which sounds comforting, but in reality, anybody finding a match is going to be under huge pressure to report it to the authorities.
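For a sense of scale, a back-of-the-envelope calculation taking the quoted "1 in 10 billion" per-image rate at face value, with an illustrative daily photo volume (neither figure is an official Apple number):

false_positive_rate = 1 / 10_000_000_000   # "1 in 10 billion" per image, as quoted above
photos_per_day = 500_000_000               # "hundreds of millions" - illustrative only

expected_per_year = false_positive_rate * photos_per_day * 365
print(f"~{expected_per_year:.0f} expected false matches per year")   # ~18

And that treats each photo as an independent trial, which, as noted above, is probably not true of a real camera roll.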
 
They did, I’m sure. Purely a response to the backlash.

We'll never know for sure. It's very common and very human to be so focused on the feature that we don't foresee all the consequences. I've worked for big companies in the past where we made a new feature and only discovered some of the consequences or concerns when the public got their hands on it.

So the answer could be either way, or somewhere in the middle, as is more often the case.
 
We'll never know for sure. It's very common and very human to be so focused on the feature that we don't foresee all the consequences. I've worked for big companies in the past where we made a new feature and only discovered some of the consequences or concerns when the public got their hands on it.

So the answer could be either way, or somewhere in the middle, as is more often the case.
Given how much emphasis Apple puts on privacy, I can't believe they didn't think this one through. Apple also wants their ecosystem to be viewed as safe and kid friendly. Maybe Apple thought they could walk a fine line here. Maybe they thought a little less privacy wouldn't matter so long as they scored a ton of PR points for protecting kids. Whatever the reasoning, the one thing I doubt is that they simply missed the (obvious) (not-really-)unforeseen consequences.
 