1) On-server searches by cloud hosts are as much of a black box for us users as on-device searches, so I'm not sure about your point that it "hasn't been reviewed by anyone"; the process doing it is just like any processing Apple already does to your Photo Library, and it's not spyware if they make you agree to it.

2) Common sense suggests that on-device pre-labeling is less invasive than straight on-server searches (since negative matches aren't even "bothered" once they're in the cloud); that would be enough of a "why". Also, the very article you linked explains that these hash matches can nowadays sometimes be done on encrypted data as well (depending on the type of encryption), so I'm not sure what your point is about being able to encrypt before upload. The iPhone user already has one crucial defense against this: just disable iCloud Photos (formerly known as iCloud Photo Library) if you feel Apple is overreaching. If anything, this whole drama is giving us more awareness of all of this.

No, on-device is more invasive because you have a program running on your device that is scanning your files. The current implementation will only work if you have iCloud Photos switched on. In the version for other countries such as Saudi Arabia or China, the folk in charge will simply tell Apple that the file scanner has to run and post to a server whether iCloud Photos is activated or not.
With on-server scanning, the dissident in question simply doesn't upload to the Google cloud. Apple adding on-device scanning gives governments a way to monitor what is actually on the population's iPhones, iPads and Macs, whether they are connected to iCloud or not; all they have to do is tell Apple to do it, and Apple will do it if they want to continue to sell devices in that country (I don't see them giving up on China any time soon).

But I think that we're all arguing at cross-purposes. Folk ask what is to stop this system being abused, and other folk reply that it can't be abused because the chance of a false positive is one in a trillion. That's not answering the question that's being asked.

What is to stop this system from being abused? If anyone knows then they should drop a note to Apple, because they've already admitted that it's open to abuse.

Just for reference once again:

Apple did admit that there is no silver bullet answer as it relates to the potential of the system being abused, but the company said it is committed to using the system solely for known CSAM imagery detection.
 
Yes, I absolutely prefer that, because it leaves me the choice of not uploading anything, or encrypting it first where possible. Once they start snooping through my data on my own device I will no longer be in control, and the end-to-end encryption for services like iMessage that they market so proudly will be a farce.

Yes, that's what I was trying to say. That's exactly why I prefer server-side scanning.

Wow, and overnight, Android is the privacy platform. It's like waking up in an alternative universe.
 
With on-server scanning, the dissident in question simply doesn't upload to the Google cloud. Apple adding on-device scanning gives governments a way to monitor what is actually on the population's iPhones, iPads and Macs, whether they are connected to iCloud or not; all they have to do is tell Apple to do it, and Apple will do it if they want to continue to sell devices in that country (I don't see them giving up on China any time soon).

What's stopping these governments from telling Apple and others to do this today, without this new system?

China has the power to tell Apple to upload every photo taken or stored on an iPhone belonging to a Chinese user today. And it wouldn't be difficult for Apple to implement.
 
I'm no Edward Snowden fan but this quote and the one from the EFF are perfect:

"No matter how well-intentioned, Apple is rolling out mass surveillance to the entire world with this," said prominent whistleblower Edward Snowden, adding that "if they can scan for kiddie porn today, they can scan for anything tomorrow." The non-profit Electronic Frontier Foundation also criticized Apple's plans, stating that "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor."

I've read enough to understand this system won't false-flag personal photos on a phone etc. I do trust that the only people getting busted for child abuse photos will be true criminals. However, the bottom line is that Apple has stated, over and over, that backdoors are unacceptable to them because of the potential for abuse. They have even defended not making backdoors when pressed to help the government recover data from known terrorists planning to attack US citizens on our home soil. Now, suddenly, backdoors are acceptable, and despite our universal revulsion for child abusers this is the top of a very, very slippery slope where the protections so unique to Apple begin to slip away.

Still on Apple.com: "Apple has never created a backdoor or master key to any of our products or services. We have also never allowed any government direct access to Apple servers. And we never will."
It's not a backdoor. A backdoor would require it to happen without either Apple or the user's knowledge.
You might argue it's a front door, or maybe better, a small side door.
 
Well. Clearly you don't understand how hashes work. Knowing the hashes from the database, you could easily manipulate cat pics to be flagged. Get enough flags and you have cops at your door. It could be your son's photos or your gf's photos…

The hash database is different on every iPhone. It's blinded. So if you get the hashes from Apple, from the government database, or from an iPhone, they can't be used on any other iPhone.

Since the hashes are different for every iPhone, you would have to break the cryptography on every iPhone to transform a cat picture into a matching hash.
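To illustrate the blinding idea, here is a toy Python sketch. This is not Apple's actual construction (Apple's published design uses elliptic-curve blinding inside a private set intersection protocol); the HMAC approach and the names below are just illustrative assumptions.

import hmac, hashlib, os

blinding_key = os.urandom(32)  # secret held only by the party doing the blinding

def blind(image_hash: bytes, key: bytes) -> bytes:
    # Keyed transform: without the key, a raw database hash cannot be mapped
    # to its blinded form, so a leaked hash list is useless for crafting a
    # picture that "matches" on someone else's device.
    return hmac.new(key, image_hash, hashlib.sha256).digest()

raw_db_hash = hashlib.sha256(b"known-image").digest()
print(blind(raw_db_hash, blinding_key).hex())
print(blind(raw_db_hash, os.urandom(32)).hex())  # different key, completely different value

The only point is that knowing the raw hashes isn't enough; you would also need the blinding secret.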
 
Is anybody here good with statistics and Boolean logic?

Let's suppose the threshold to raise a red flag is 10 offences, and each hash has a one-in-a-million chance of a false positive (it's probably even smaller). What are the odds of getting 10 false positives by sheer bad luck?

Apple has said it is 1 in a trillion per account, not per picture, with the threshold they have chosen. After that, there are two manual reviews by humans before it maybe gets passed to law enforcement agencies.
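For the hypothetical numbers in the question, here is a quick back-of-the-envelope Python sketch. The 10,000-photo library size, the one-in-a-million per-photo rate and the threshold of 10 are assumptions taken from the question, not Apple's published figures, and the model is a plain binomial.

from math import comb

n = 10_000       # photos in the library (assumed)
p = 1e-6         # per-photo false-positive rate (assumed)
threshold = 10   # matches needed to raise the flag (assumed)

# P(at least 'threshold' false positives) under a binomial model.
# Terms above k = 60 are negligible, so the sum is truncated there.
p_flag = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold, 60))
print(p_flag)    # on the order of 1e-27, i.e. it effectively never happens by bad luck alone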
 
But I think that we're all arguing at cross-purposes. Folk ask what is to stop this system being abused, and other folk reply that it can't be abused because the chance of a false positive is one in a trillion. That's not answering the question that's being asked.

What is to stop this system from being abused? If anyone knows then they should drop a note to Apple, because they've already admitted that it's open to abuse.

My question is: what has changed compared to one week ago?

I'm sure dictators and agencies were already acutely aware that Apple could potentially do this, and that Apple already does all kinds of AI processing on photo libraries.

People keep saying the cat is now out of the bag, but I don't see the cat. So far I just see a super-specific use case of a pretty mundane technique.
 
It’s nearly impossible to escalate to human review if you don’t have multiple pedo pics on your device.
Not understanding this is some anti-vax level of failing at statistics and computer science.

Now, one could still argue that the system is a black box and not fully transparent, but is Apple really so incompetent that it would create a system that generates tons of false positives and unnecessary human reviews? With their track record of championing privacy?
Apple can’t even stop scam apps in their App Store. So to answer your question, yes, I don’t trust them to be less incompetent in managing this new “feature” than the way they handle the App Store.
 
So how would you encrypt the iCloud Photo Library today?

I can't really think of an easy way to do it and still have all the features intact.
You can't. I was speaking more in general terms. There are 3rd party tools and services that allow you to encrypt files before uploading them to the cloud (e.g. Cryptomator and various E2E encrypted file syncing services). For example, you could use Cryptomator to upload pictures to iCloud Drive in encrypted form. But if Apple starts scanning the files on our devices, that undercuts any kind of E2E encryption. That's what alarms many security experts about Apple's proposal.
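For what it's worth, the encrypt-before-upload idea itself is simple. Here is a minimal Python sketch using the third-party "cryptography" package's Fernet recipe as a stand-in; Cryptomator and the E2E sync services work differently, and the file names below are just placeholders.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key somewhere the cloud provider never sees

with open("IMG_0001.jpg", "rb") as src:            # placeholder photo
    ciphertext = Fernet(key).encrypt(src.read())

# Only the ciphertext goes into the synced folder; the provider (and any
# scanner running on its servers) sees opaque bytes, not the photo.
with open("IMG_0001.jpg.enc", "wb") as dst:
    dst.write(ciphertext)

And of course, once the scanning happens on the device before encryption, none of this helps, which is exactly the concern.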
 
No. They're putting the scanning ON your iPhone. Your iPhone, even when in airplane mode, is searching your photos for kiddie porn or whatever else the government wants Apple to look for.

This will go to the SCOTUS.

Apple says the scan happens right before you send the photo to iCloud.

Anyhow, it's just a technical detail. Let's say it scans independently of uploading to iCloud. If a match is found, a safety voucher is created with this information. But it's marked in such a way that you can't know, Apple can't know, and anyone who gets hold of your iPhone can't know either. So it's completely secret from everyone.

When these markers, called safety vouchers, are uploaded to iCloud, Apple can't read their contents. But can't they just count them? Yes, but that doesn't indicate anything. Apple also creates fake safety vouchers, even for people with no matches, so the presence and number of safety vouchers doesn't mean anything until the number of real safety vouchers reaches a threshold.

Only then does Apple have enough shares of the secret key to determine which safety vouchers are real or fake and to read which images were matched.
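The "threshold" part is essentially threshold secret sharing. Here is a bare-bones Shamir sketch in Python to show the idea; it is not Apple's actual voucher format, and the prime and the numbers are arbitrary assumptions.

import random

PRIME = 2**127 - 1  # a large prime field for the demo

def make_shares(secret, threshold, count):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

secret = 20210805
shares = make_shares(secret, threshold=10, count=30)
print(recover(shares[:10]) == secret)  # True: at the threshold the inner key comes out
print(recover(shares[:9]) == secret)   # False (overwhelmingly likely): below it, nothing

Below the threshold the vouchers are just noise to Apple; once the threshold is reached, the inner key can be reconstructed and the matched images reviewed.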
 
If it wasn't, how come you call the police when I walk into your house? I don't steal anything, I just watch and observe what you do. Just that! 24/7.

It's your house? Well, it's your iPhone too. I'm just watching you, just that; where's your problem?

Those are two different things.

Now, do you have a law directly stating that privacy is a right, like the right to bear arms, freedom of speech, or freedom of movement?
 
Apple can’t even stop scam apps in their App Store. So to answer your question, yes, I don’t trust them to be less incompetent in managing this new “feature” than the way they handle the App Store.
Yeah comparing a yearly flow of millions of apps to review to the number of instances of frickin’ child p0rn escalated to human review is so relevant and edgy.
 
Say that you're an official with the CCP. You have huge stacks of anti-CCP brochures. You've got them scanned and hashed. Next you call Apple and say, "please alert us if any similar imagery appears on your customers' devices". Apple would say, "Sure, we're just following your law"... Hence when a Chinese user photographs such a brochure "in the wild" using an iPhone, someone from "the government" will knock the next day and "strongly enquire" about yesterday's photo.

How are you so sure that the picture taken on the user's phone will generate the same hash as the scan made by the government?

If they used this system to catch scanned text, it would probably lead to a high number of false positives.

In fact, in iOS 15 Apple is introducing a way to recognise the text in images and turn it into actual text. That's a much better technology to misuse.
 
Yeah comparing a yearly flow of millions of apps to review to the number of instances of frickin’ child p0rn escalated to human review is so relevant and edgy.

Which is partly why Apple decided its users should pay the cost of doing the checks.

For my own modest photo library, checking 30,000 photos against a database of 200,000 will require 6 billion checks.

And both numbers will only grow in the future.
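For reference, the arithmetic behind that figure, and, for comparison, what it would look like as an exact-hash set lookup. This is only illustrative Python; Apple's actual private set intersection protocol is neither a naive pairwise comparison nor a plain set lookup.

library_size = 30_000
database_size = 200_000
print(library_size * database_size)   # 6,000,000,000 pairwise comparisons

# With exact hashes stored in a set, each photo needs one membership test:
photo_hashes = range(library_size)                               # stand-in photo hashes
known_hashes = set(range(1_000_000, 1_000_000 + database_size))  # stand-in database
matches = sum(1 for h in photo_hashes if h in known_hashes)
print(matches)   # 0 matches here, found with 30,000 lookups instead of 6 billion comparisons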
 
Yeah comparing a yearly flow of millions of apps to review to the number of instances of frickin’ child p0rn escalated to human review is so relevant and edgy.
Yeah, thanks for totally ignoring the fact that Apple sucks at their gatekeeping job. We haven't even gotten to how bad Apple is at fixing known security flaws.

Soon, billions, not millions, of comparisons will be made on users' phones. What could go wrong?
 
Apple says the scan happens right before you send the photo to iCloud.

Anyhow, it's just a technical detail. Let's say it scans independently of uploading to iCloud. If a match is found, a safety voucher is created with this information. But it's marked in such a way that you can't know, Apple can't know, and anyone who gets hold of your iPhone can't know either. So it's completely secret from everyone.

When these markers, called safety vouchers, are uploaded to iCloud, Apple can't read their contents. But can't they just count them? Yes, but that doesn't indicate anything. Apple also creates fake safety vouchers, even for people with no matches, so the presence and number of safety vouchers doesn't mean anything until the number of real safety vouchers reaches a threshold.

Only then does Apple have enough shares of the secret key to determine which safety vouchers are real or fake and to read which images were matched.
I read the article; thanks for the recap.

It's a very BIG technical detail, because there will be software on MY phone (not Apple's) scanning information from MY photos (not Apple's) to look for information that I am not privy to and can't know, which they can send to someone to review after an unknowable number of photos on MY phone end up resembling known CSAM photos.

And right now it's CSAM (abhorrent and untenable, for sure), but in other countries (or even here, in the next administration), what else might be considered abhorrent and untenable? Protesting? Adult pornography? Photos of same-sex relationships?

What else will Apple allow my phone to search for and use against me because the government doesn't like it?
 
How many times do we have to go down this road? If another company does it, nobody cares. If Apple does it, the internet is on fire. In this particular case: multiple tech companies already do image scanning for CSAM. Google, Twitter, Microsoft, Facebook, and others use image hashing methods to look for and report known images of child abuse.
Except Apple calls itself the beacon of user privacy. That's their entire business motto. "If others are doing it, then Apple should be able to as well" is hypocritical.
 
China: Hey Apple, we want you to tell us whenever someone stores a picture of protests in Hong Kong.
Apple: Err, we could do that, it’s as easy as adding a list of hashes to an index, but we’d rather not
China: <raises eyebrow>

You can't create a set of hashes which covers "looks like protests in Hong Kong", even if you have thousands of pictures of a particular protest in Hong Kong to work from. Photos from the same event create vastly different hashes.
 
You can't create a set of hashes which covers "looks like protests in Hong Kong", even if you have thousands of pictures of a particular protest in Hong Kong to work from. Photos from the same event create vastly different hashes.
But you can add hashes for media relating to the protests in HK that allow China to contextually identify the protesters in other ways. You can also do it in the messaging system, as Apple explained. "Apple, monitor hashes that match with X symbol or feature that is a common denominator in the photos, i.e. facial tracking and expression recognition (anger, happiness, sadness, all learned by the AI), or signposts, or locational data based on specific building frames, etc." There's ALWAYS a way. And it's probably already happening. If the AI can identify based on a database, it can also identify based on expressions curated from other protests to "teach" the AI what anger and inciting information look like. You know, like in the show Person of Interest.
 
Yeah, thanks for totally ignoring the fact that Apple sucks at their gatekeeping job. We haven't even gotten to how bad Apple is at fixing known security flaws.

Soon, billions, not millions, of comparisons will be made on users' phones. What could go wrong?

850 million iCloud users. For now Apple is only accusing its US customers of being perverts, but let's consider the inevitable application to worldwide users.

Let's assume an average photo library of 10,000 photos, and that the current CSAM database holds 200,000 hashes.

So an error rate of 1 in a trillion per comparison would actually mean Apple making about 2 million false accusations.
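Spelling that arithmetic out (the assumption here is that the 1-in-a-trillion figure is a per-comparison rate; Apple's own wording is 1 in a trillion per account per year):

users = 850_000_000
photos_per_user = 10_000
database_size = 200_000
per_comparison_error = 1e-12   # this post's reading of Apple's figure (assumption)

comparisons = users * photos_per_user * database_size
print(comparisons)                         # 1.7e18 comparisons worldwide
print(comparisons * per_comparison_error)  # ~1.7 million false hits under that reading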

Apple refuses to say how many accusations trigger a police call. Maybe 2. Maybe 200.

I hope anyone falsely accused sues Apple for all they're worth.

In the past, America literally fought a war of independence over less abuse than what Tim Cook is inflicting on free people.
 
This just seems like a major contradiction. How can Apple's system get a match on a cropped, color-adjusted, pixel-modified, distorted image, yet claim false positives will be so low? There must be some leeway in the hash matching if a unique Photoshop edit can be caught.

By having a threshold. One picture isn't enough. Maybe they have set the threshold to 50.

My impression is that many of the people who have such pictures have thousands of them, even hundreds of thousands.

I read that WhatsApp reported 400,000 cases last year, and someone here at MacRumors wrote that Facebook reported 20 million cases. If true, you can probably disregard those who just have a few photos.
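On the "leeway" question a few posts up: perceptual hashes are designed so that small edits keep the hash close while unrelated pictures land far apart, and the account threshold then sits on top of that. Here is a toy difference-hash sketch in Python; it is NOT NeuralHash, and the 8x9 brightness grid and the numbers are made up purely for illustration.

import random

def dhash_bits(pixels):   # pixels: 8 rows x 9 columns of brightness values
    # One bit per adjacent pair: is the left pixel brighter than its right neighbour?
    return [int(row[i] > row[i + 1]) for row in pixels for i in range(8)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

original  = [[(r * 9 + c) % 17 for c in range(9)] for r in range(8)]
brighter  = [[v + 20 for v in row] for row in original]   # global brightness/colour tweak
rng = random.Random(1)
unrelated = [[rng.randrange(256) for _ in range(9)] for _ in range(8)]

print(hamming(dhash_bits(original), dhash_bits(brighter)))   # 0: the edit keeps the pixel ordering, so the hash is unchanged
print(hamming(dhash_bits(original), dhash_bits(unrelated)))  # typically ~30 of the 64 bits differ: no match

A real matcher accepts small distances and rejects large ones, which is how edit tolerance and a low false-positive rate can coexist.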
 