But there are already algorithms on your iPhone and Mac that have access to your photo library. Why is this one any more susceptible to attack than the existing ones that do things like tag your photos and match faces to names?

Edit: Which BTW don't seem to be able to be turned off.
We chose to use them because we receive a benefit from them. This is just creepy.
 
Wait - I’m confused by this. Does this mean that everyone’s iCloud is going to be scanned without users’ authorization in the name of child welfare??

While I am sure people may agree with this, it seems like one step away from doctors/dentists submitting DNA samples of *every* patient because it is in the public interest.

This program seems just a small moral slip away from being an invasion of privacy on a monumental scale. Given what Snowden revealed, the US government has a huge thirst for data collection like this. It’s a short hop to scan for compromising photos of your political rivals, yes?
Leaked Photos
Ahem. $0.02

TimJ
 
You should keep giving them the benefit, as you clearly don’t understand how iCloud works. Apple never has access to your images. That’s why the hashing is done on your device, since only your device can view the image.
You should brush up on your iCloud knowledge. Apple absolutely has access to images in iCloud for anyone using iCloud Backup, which is NOT end-to-end encrypted.
 
I am sadly sure that this comment will be lost and buried, but the ongoing discussions on surveillance and privacy, both in general and in particular cases such as this, brought to mind Blackstone's ratio, which I had read a number of years ago. To wit:

It is better that ten guilty persons escape than that one innocent suffer.

John Adams, founding father of the United States, put it very eloquently:

It is more important that innocence should be protected, than it is, that guilt be punished; for guilt and crimes are so frequent in this world, that all of them cannot be punished.... when innocence itself, is brought to the bar and condemned, especially to die, the subject will exclaim, 'it is immaterial to me whether I behave well or ill, for virtue itself is no security.' And if such a sentiment as this were to take hold in the mind of the subject that would be the end of all security whatsoever.

In other words, as laudable as the ends may be, they cannot always justify the means. Moreover, a system that looks only for wrongdoing will invariably find it where none exists. And we all know the story of Jean Valjean...
 
iCloud Photos is automatically switched ON when a new device is set up and an Apple ID is entered for the first time.
A user has to dig through the settings to manually turn it off.
This. I don’t think enough users are aware of this automatic opt-in, especially whenever you sign out and sign back into iCloud. I have to remember to disable it every time. It’s just stunningly bad that Apple deliberately refuses to remember my last setting.

I like some iCloud features (like Find My and Notes) but never have I wanted my photos in the cloud. And for all the supposed ML on the device, it seems iOS is still incapable, after years, of respecting my preference.

“It just works …. Sometimes”
 
It's sad that Apple has betrayed their principled stand on user privacy. This is a back door plain and simple, and there is nothing preventing Apple from expanding the database to look for other images. This might sound alarmist, but could the same technique be used for facial recognition with governments passing laws that force Apple to turn over matches for certain individuals that the state deems undesirable?
 
I still have only skimmed their PDF, but this seems to be the first time I've heard there's additional matching, and the "Inner-Layer Unwrapping of Vouchers in iCloud" section doesn't seem to mention this?

The matching process is split into two parts, the first part of which the client does without knowing if the second part (done by the server) is a hit or not. My understanding is that the goal here is to provide additional protection against the device alone being compromised for the purposes of producing false positives.

This page talks about the client side of the process:

This page talks about the server side of the process:
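For what it's worth, here's a heavily simplified sketch of that split as I understand it. This is NOT Apple's actual PSI protocol (which uses elliptic-curve blinding), and every name in it is my own invention; it just shows how a device can do its half without ever learning whether the server's half is a hit: the voucher is wrapped under a key derived from the image's hash, so only a server that already holds that hash in its database can open it.

```python
# Heavily simplified sketch of the two-part match (hypothetical names; NOT
# Apple's real PSI protocol, which uses elliptic-curve blinding). The point:
# the device builds the voucher without ever learning whether it will match.
import hashlib

def derive_key(perceptual_hash: bytes) -> bytes:
    # In the real protocol this derivation is blinded by the server.
    return hashlib.sha256(b"voucher-key:" + perceptual_hash).digest()

def client_make_voucher(perceptual_hash: bytes, derivative: bytes) -> dict:
    """Device side: wrap the visual derivative under a key derived from the
    image's hash. The device holds no database, so it cannot tell whether
    this voucher will ever be openable."""
    key = derive_key(perceptual_hash)
    stream = hashlib.sha256(key + b"stream").digest()  # toy cipher, illustration only
    blob = bytes(b ^ stream[i % len(stream)] for i, b in enumerate(derivative))
    return {"tag": hashlib.sha256(key).hexdigest(), "blob": blob}

def server_try_open(voucher: dict, known_hashes: list[bytes]) -> bytes | None:
    """Server side: only a hash already in the database yields the right key;
    non-matching vouchers stay opaque and reveal nothing."""
    for h in known_hashes:
        key = derive_key(h)
        if hashlib.sha256(key).hexdigest() == voucher["tag"]:
            stream = hashlib.sha256(key + b"stream").digest()
            return bytes(b ^ stream[i % len(stream)]
                         for i, b in enumerate(voucher["blob"]))
    return None  # no match: the server learns nothing about the image
```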
 
Well, Tim Cook is a numbers guy and not a product guy.... I could see him saying "this saves us so much money if we use phone CPU cycles instead of server CPU cycles, I like it."

Partly I wonder if it's less about server cycles and more about shifting where the decryption keys exist, so that they can only decrypt certain images they suspect of being CSAM while everything else stays encrypted. AFAIK iCloud Photos isn't covered by end-to-end encryption and can be served up to law enforcement who suspect an account of containing CSAM. I could see this as a pathway for enabling iCloud Photos to be end-to-end encrypted whilst still providing a path for fulfilling requests from the FBI for access to content from these accounts via the vouchers.


Fine. Just roll over and give up your 4th amendment protections.

If you're uploading your photos to Apple, you've likely already lost those protections. Whilst Apple have been forceful about not putting a backdoor into their devices, they've been more than happy to spill any iCloud data they have available which, as noted above, already includes iCloud Photos data.
 
Wait - I’m confused by this. Does this mean that everyone’s iCloud is going to be scanned without users’ authorization in the name of child welfare??

Not really. They're going to create a fancy "fingerprint" of all your photos, and then see if any of those match the fancy "fingerprint" of known CSAM. That will all be done on your phone locally. The fancy "fingerprint" is made so someone couldn't just adjust the saturation or crop a picture a little to dodge the system. You have to hit about 30 matches before Apple is able to see anything, and then an actual human will verify (while only seeing the suspected matches, not your whole photo library).

Also, that database of known CSAM will basically be what two different organizations in two different countries BOTH contain. This should make sure that a "sneaky" government can't just add something like a picture with a BLM flag to the database.

And this is all way more private than the way almost every other cloud service (Google, Microsoft, Facebook, Flickr, Amazon, Twitter) goes about it. They just straight up scan the images on their own servers.
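If it helps, here's a rough sketch of what such a "fingerprint" match looks like in practice. It uses a crude 64-bit average hash as a stand-in for Apple's NeuralHash (the helper names and bit threshold are mine, purely for illustration): images are compared by how many bits of their hashes differ, which is why a small saturation tweak or crop usually still matches while unrelated pictures don't.

```python
# Crude perceptual "fingerprint" matching -- an average hash standing in for
# NeuralHash, just to show the principle of tolerant matching.
from PIL import Image  # pip install pillow

def average_hash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:  # one bit per pixel: above or below the average brightness
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def is_match(hash_a: int, hash_b: int, max_bit_diff: int = 5) -> bool:
    # Hamming distance: small edits (crop, saturation) flip only a few bits.
    return bin(hash_a ^ hash_b).count("1") <= max_bit_diff

# hypothetical usage:
# is_match(average_hash("original.jpg"), average_hash("slightly_cropped.jpg"))
```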
 
A pro-user implementation uses this hash simply to tell the user that the photo is ineligible for iCloud upload. An anti-user one secretly views the pictures and sends them to the cops.
Agreed, though as far as I understand there is a legal obligation to report CSAM when encountered. I think authorities would soon demand that the "pro-user implementation" report positive matches, and Apple could hardly respond that this is technically impossible. Which is why I think no such system should be on the device at all - it crosses a line and it is a beachhead for further surveillance demands.
 
Does MacRumors not bother to verify the claims of others before publishing them? There is no human oversight for reviewing false collisions.
If you meant "even singular false collisions", that's true. A single false collision resulting in a false match being recorded within the safety voucher that accompanies an image uploaded to iCloud Photos will not get reviewed by a human.

That singular false match will also not trigger any further response or reporting, however. The system even generates its own "false matches" (called "synthetic matches") to obfuscate the number of matches, whether true or false, discovered from the user's actual photos.

Once enough of these matches (say, 30) are recorded per Apple ID, Apple's human reviewers will be notified and gain access they didn't previously have to only those safety vouchers that have been marked as matched. Any synthetic matches generated by the system itself will be immediately obvious and discarded, as there's no visual derivative contained within the safety voucher, so nothing to review. For the rest of the matched safety vouchers, the reviewers get to see a visual derivative (call it a low-resolution thumbnail) of the user's image to determine whether it's CSAM or, say, tank man or Winnie the Pooh.

I seem to recall reading somewhere that while Apple's reviewers don't have access to the actual, original CSAM images (laws prohibit them from keeping copies), they would have access to the visual derivatives (thumbnails) of the CSAM supposedly matched, to compare against the visual derivatives (thumbnails) of the user's matched images. Since I couldn't relocate the source for that, don't take this paragraph as factual; let's say I feel like it would make sense, but don't know it for a fact, to be on the safe side.
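To make the threshold behaviour above a bit more concrete, here's how I picture the server-side logic, in rough Python. The structure, field names and the exact constant are my own illustration based on Apple's summary, not their spec:

```python
# Sketch of the threshold / synthetic-match behaviour described above.
# Field names and the constant are illustrative, not Apple's spec.
THRESHOLD = 30  # roughly the figure that has been cited

def vouchers_for_human_review(vouchers: list[dict]) -> list[dict]:
    """Each voucher: {"matched": bool, "synthetic": bool, "derivative": bytes or None}."""
    matched = [v for v in vouchers if v["matched"]]
    if len(matched) < THRESHOLD:
        return []  # below the threshold nothing is decryptable and nobody is notified
    # Above it, synthetic matches carry no visual derivative, so they are
    # discarded immediately; only real matches yield thumbnails to review.
    return [v for v in matched if not v["synthetic"] and v["derivative"]]
```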

The rest of it you can read about here:

 
Mr. Green is either misinformed about the specifics or talking about something else. It’s not a surprise that collisions can be found in a hash scheme that specifically seeks to maximize collisions for “similar looking” images. This is not, by design, a cryptographic hash.
Okay, if you say so.
I suggest you get in touch with him and teach him “How to teach cryptography properly!”. You definitely sound like you know it better.

I’ve searched the contact page for you:

/s
 
I don’t know about you specifically, but people here definitely don’t understand in general, and it’s the same on Reddit. There is so much misinformation about this it’s crazy. So yeah, the reason Apple keeps releasing more technical documentation really is because people don’t understand, as I see it.
People may get technical details wrong, but they get the basics right: Apple is installing an AI-watchdog on your phone, making your own phone look at your data in a way that goes against your interests.

Apple's technical documentation is mostly a diversion, trying to paint the outrage as "confusion". They go on and on about cryptography and safety vouchers - where it may indeed be easy to get details wrong, but those are irrelevant to the objections against the system. They gloss over the critical parts, like how does the image analysis work, or how do the reviewers make their decisions. They do not explain why pictures made by the iPhone's own camera are checked. And they provide little assurance as to why this system would not expand in the future, other than "trust us".
 
They're hashes. Basically checksums. They don't really contain the data, or even a portion of the data; they're a fraction of the size of the data, and all they're good for is you can take the original data and verify a match (or not).

Apple's OSes also contain XProtect, which includes hashes of known malware; that doesn't mean your devices have malware on them.
Perceptual hashes differ from checksum hashes in that they are somewhat reversible - you could recreate an approximation of an NCMEC picture if you had its perceptual hash. With a checksum hash, even the smallest change in the original should cause a drastic alteration in the hash, whereas perceptual hashing aims to create similar hashes for similar images. That is why Apple encrypts the hashes on the phone, so no one can jailbreak the phone, extract the database and recreate (blurry/warped) CSAM pictures.

So in a way the pictures are on the phone, but without the secret key they are just garbled data.
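A toy example of that "somewhat reversible" point, using an 8x8 average hash as a stand-in for the real perceptual hash (a genuine NeuralHash inversion is far more involved, so treat this as illustration only): each bit says whether one region was brighter or darker than average, which is already enough to sketch a blocky silhouette of the original. A checksum like SHA-256 gives you nothing of the kind.

```python
# Why perceptual hashes leak more than checksums: each bit of an 8x8 average
# hash says brighter/darker-than-average for one region, so the hash alone
# yields a crude silhouette. (Stand-in for the real perceptual hash; actual
# inversions are far more sophisticated.)
def hash_to_silhouette(bits: int, size: int = 8) -> list[str]:
    rows = []
    for r in range(size):
        row = ""
        for c in range(size):
            bit = (bits >> (size * size - 1 - (r * size + c))) & 1
            row += "#" if bit else "."
        rows.append(row)
    return rows

# A SHA-256 checksum of the same picture would reveal nothing like this.
for line in hash_to_silhouette(0x00183C7E7E3C1800):  # hash of a bright diamond shape
    print(line)
```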
 
Whether this is the final version or not, the scanning is going to happen. Apple is not backing out. At least it should be implemented properly.
 
With hashing you will always have a chance of collisions because it is inherently lossy by design.
But what are the statistical chances of a hash collision occurring randomly, 30 times at that, purely by accident and not through deliberate engineering? About as close to zero as anything gets, I reckon…
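For a back-of-the-envelope feel (the per-photo false-match rate below is purely an assumed figure for illustration; Apple has only published an overall one-in-a-trillion-per-account claim), a quick Poisson estimate shows how fast the odds of 30 accidental matches collapse:

```python
# Back-of-the-envelope odds of crossing the threshold purely by accident.
# The per-photo false-match rate is an assumed figure, not a published one.
from math import exp, factorial

def prob_at_least(k: int, n_photos: int, per_photo_rate: float) -> float:
    """P(at least k accidental matches): Poisson approximation to the binomial."""
    lam = n_photos * per_photo_rate  # expected number of false matches
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k, k + 100))

# e.g. a 100,000-photo library with an assumed 1-in-a-million false-match rate
print(prob_at_least(30, 100_000, 1e-6))  # ~3e-63: effectively zero
```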
 
Most agree it would not be a problem to find child abuse. The problem is what's next: this tech can lead to all kinds of searching inside your phone, like for politics in countries where you can get into deep trouble for venting your opinion.
Maybe you should have considered such issues before deciding to carry a multi-sensor tracking device on your person all day, every day, one that provides many more actual mechanisms for depriving you of your liberty than this. Do you have a Facebook account? Instagram? Twitter? Yeeeaaah…
 
I mentioned something similar on a previous thread, except the threat I pointed out is what happens if a hacker were able to hijack your Apple ID. Remember a few years ago when there were the celebrity iCloud breaches? Never mind that individuals and account security don't really go hand in hand (passwords and such).

Previously the worst-case scenario was that you lost access to your account and purchases. Now you'll have individuals hijacking accounts and holding people for ransom, with the threat of uploading kiddie porn via a VPN in their location in order to frame them, which would set off the triggers and, as you said, get them reported to the authorities: no recourse, and you'd end up in jail with your life ruined. It doesn't even have to be a hacker; it could be a revenge actor doing it, etc.

Apple can preach about the tech being sound till the cows come home, but the mechanism behind it stinks of bad actors, potential criminality, and interference.
Wouldn’t it be easier for a hacker to, say, take over your social media account and just start posting bad stuff? The scenarios people are coming up with that make them freak out about this are ludicrous compared to countless other scenarios we already accept without question.
 
Wait - I’m confused by this. Does this mean that everyone’s iCloud is going to be scanned without users’ authorization in the name of child welfare??

While I am sure people may agree with this, it seems like one step away from doctors/dentists submitting DNA samples of *every* patient because it is in the public interest.

This program seems just a small moral slip away from being an invasion of privacy on a monumental scale. Given what Snowden revealed, the US government has a huge thirst for data collection like this. It’s a short hop to scan for compromising photos of your political rivals, yes?

It's getting tiring seeing people jump to the wrong conclusion and label Apple as the bad guy. Is this scanning of photos? Yes! But only newly-added photos, with an iOS device doing the first half of the scanning and iCloud servers doing the second half. With that first half done (the voucher added) and the threshold not yet crossed, *nothing* happens on the server side of things. There is no mass scanning of images on the servers without the user's consent. The user consents by the very act of willfully uploading new questionable images from their device to Apple's servers.

Now, as for images received via various channels (e.g. social messaging apps) and then automatically added to iCloud Photos without any explicit action on the part of the user, well, the jury is still out on that one. But in that case, the user is still in control, since having a 3rd-party app add images to the Photos library is always a user choice, I believe. Correct me if I'm wrong.
 