After reading an exhaustive number of replies, for me it comes down to this:
  • Prior to launch: Police need probable cause and a judge-issued warrant to search through your personal things.
  • Post-launch: Apple will search your devices automatically and notify police, with no warrant required.
If you strike out the 'child' language and put in just about any other term, people would be outraged. But 'for the children' makes everything a-okay. Today's 'child photos' are tomorrow's 'extremist content'.

Bingo! ^^^

Great post; it summarizes the key change here and why so many people are rightfully not comfortable -- at all -- with this proposed change.
 
I can agree with you on having time to discuss and debate, for sure. But then, that is what’s happening now, a month before release.

It's not enough - turns out they'd already been eyeing this with a privacy policy change back in 2019


(attached: screenshot of the 2019 privacy policy change)
 
And you're right, if Apple has been forced, they should absolutely come out and say that.
If Apple was forced, they:

1) might be under a gag order
2) would completely undermine their argument that this won't be expanded because they will just say no to governments
 
Then they need to pause this rollout completely, announce that feature at the same time, and be way, way more open up front about all the questions and concerns.
The way I see it, this is a chicken-and-egg problem. Apple cannot do E2EE until they can ensure they comply with the law. Apple cannot comply with the law if they enable E2EE for iCloud Photos without doing an on-device check before a photo is uploaded to iCloud Photos. This solution will enable E2EE.
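To make the ordering concrete, here's a rough sketch of the flow being described (Swift, with made-up names and a plain SHA-256 standing in for a perceptual hash; this is not Apple's actual API or protocol):

```swift
import Foundation
import CryptoKit

// Hypothetical sketch only: the point is the ordering. The match check runs on the
// device *before* the photo is end-to-end encrypted, so the server never needs
// plaintext access to comply.

struct Photo {
    let id: UUID
    let pixels: Data
}

/// Stand-in for a perceptual hash; a real system would use something like NeuralHash.
func perceptualHash(of photo: Photo) -> Data {
    Data(SHA256.hash(data: photo.pixels))
}

/// Known-CSAM hash list shipped to the device (illustrative, empty here).
let blockedHashes: Set<Data> = []

func uploadToICloudPhotos(_ photo: Photo, key: SymmetricKey) throws {
    let matched = blockedHashes.contains(perceptualHash(of: photo))   // on-device check first
    let sealed = try AES.GCM.seal(photo.pixels, using: key)           // then E2EE for transit/storage
    // In a real protocol the match result would travel as an encrypted "safety voucher";
    // printing it here just shows where in the pipeline the check sits.
    print("uploading \(photo.id): matched=\(matched), ciphertext=\(sealed.ciphertext.count) bytes")
}
```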
 
He is saying that the technology can be misused. I am asking how, and I don't really get a detailed answer that takes into account how the system works.

It seems very few people know about the weaknesses of this system, which make it ill-suited for many applications of surveillance.

You're focusing on this specific system and technology being used.

The problem people have is the ideology and normalization of scanning user devices for ANYTHING.

edit: s/your/you're
 
The way I see it, this is a chicken-and-egg problem. Apple cannot do E2EE until they can ensure they comply with the law. Apple cannot comply with the law if they enable E2EE for iCloud Photos without doing an on-device check before a photo is uploaded to iCloud Photos. This solution will enable E2EE.

Not really chicken and egg, as one doesn't depend on the other to exist.
You can absolutely have a grand plan and lay it all out at once.

That doesn't even mean they'd have to do it all at once.

Guys - they aren't even TALKING about doing E2EE

We've let folks like Gruber inject that as a "theory"
 
The exact same methodology they are going to use on your device would/could/should be used on their servers.

Our iCloud data is already not E2EE - just scan on servers as every other host already does.

What most of us are objecting to is building the infrastructure to do this -- on our devices.
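For comparison, the server-side version being argued for here is conceptually simple, precisely because iCloud Photos is not E2EE today. A minimal sketch (illustrative names only; not how Apple's servers or PhotoDNA actually work):

```swift
import Foundation
import CryptoKit

// Server-side scanning sketch: the hash list stays on the server and is never shipped
// to devices, and the scan touches only what the user has already uploaded unencrypted.

struct StoredPhoto {
    let userID: String
    let plaintext: Data   // readable by the host because the data is not end-to-end encrypted
}

let knownBadHashes: Set<Data> = []   // held server-side only

func serverSideScan(_ library: [StoredPhoto]) -> [StoredPhoto] {
    // SHA-256 stands in for a perceptual hash such as PhotoDNA's.
    library.filter { knownBadHashes.contains(Data(SHA256.hash(data: $0.plaintext))) }
}
```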
I’m trying to understand your view.

I can see why people would be concerned about this idea of scanning taking place on their phone. What I don’t understand, though, is that machine learning and on-device scanning of photos have been taking place since the Neural Engine was introduced with the iPhone X.

The only difference that I can see is that if your phone detects CSAM when you save a photo (instead of a photo of say…a pet or a landmark) then it gets flagged and scanned a second time as a safeguard on upload to check it really was CSAM. Every time you save a photo it gets run through the neural engine already anyway.

What I’m asking is, help me understand why scanning on device for CSAM is different to the existing scanning on device for other things in photos. (And I don’t mean that in a condescending way, I’m interested in genuine discussion about the issue)
 
Yes but if iCloud is turned on, the photo would be scanned on device, even if it is not actually uploaded to iCloud.

Currently the photo is not scanned until it is actually on the cloud.
There’s no scanning of on-device photos for CSAM material. Photos are hashed and checked only if they’re uploaded to iCloud Photos. If no on-device photos are uploaded, no photos will be hashed. There’s no scanning.
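In other words, as described above, the hashing is tied to the upload path. A toy sketch of that gating (hypothetical names; SHA-256 standing in for the perceptual hash):

```swift
import Foundation
import CryptoKit

let knownHashes: Set<Data> = []   // illustrative, empty

func handleNewPhoto(_ pixels: Data, iCloudPhotosEnabled: Bool) {
    guard iCloudPhotosEnabled else {
        return   // iCloud Photos off: the photo stays local and no hash is ever computed
    }
    let hash = Data(SHA256.hash(data: pixels))   // perceptual-hash stand-in
    let matched = knownHashes.contains(hash)
    print("queued for upload, matched=\(matched)")
}
```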
 
What I’m asking is, help me understand why scanning on device for CSAM is different to the existing scanning on device for other things in photos.

Real simple - glad you asked

All other types of scanning have been for my benefit as the user of my device.
(faces of family members, etc)

What is proposed now is to specifically "look for things" in my content that are specified by a third party (via a hash database in this case).

Does that make sense?

It's turning the scanning/analyzing abilities of my own device against me as the user to go through my own content --- to look for things somebody else wants to find.


And not just "anything" but "illegal" things...

It's making my own device into a police officer - going through my own data to look for "bad things".

I'm using quotes because of the ambiguity and subjectivity that's possible when defining "bad things" or "illegal things".
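If it helps, the distinction being drawn can be sketched in code. Both functions would run on the device, but they differ in who supplies the criteria and where the result goes (everything below is illustrative, not Apple's implementation):

```swift
import Foundation
import CryptoKit

// 1) Existing on-device ML: the model ships with the OS, but the labels never leave
//    the phone and only power features the owner uses (search, Memories, face grouping).
func indexForLocalSearch(_ pixels: Data) -> [String] {
    // imagine a Core ML classifier here; hard-coded labels keep the sketch runnable
    return ["dog", "beach"]
}

// 2) What the post objects to: the criteria come from a third-party hash list, and a
//    match exists to be reported off the device, not to provide a feature to the owner.
let thirdPartyHashList: Set<Data> = []   // illustrative

func matchesExternalList(_ pixels: Data) -> Bool {
    thirdPartyHashList.contains(Data(SHA256.hash(data: pixels)))   // perceptual-hash stand-in
}
```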
 
It's not enough - turns out they'd already been eyeing this with a privacy policy change back in 2019


(attached: screenshot of the 2019 privacy policy change)
Great share!

But of course my question then is, why is the debate happening now? Why has it waited until Apple has chosen to implement a feature? This being buried in the T&Cs is, I suppose, a possible explanation, but if Apple has essentially set out since 2019 that they reserve the right to do this, why have these concerns never been debated previously?

Perhaps they have in a quieter way. And the feature announcement has just surfaced this in a vocal way. Interesting!
 
Great share!

But of course my question then is, why is the debate happening now? Why has it waited until Apple has chosen to implement a feature? This being buried in the T&Cs is, I suppose, a possible explanation, but if Apple has essentially set out since 2019 that they reserve the right to do this, why have these concerns never been debated previously?

Perhaps they have in a quieter way. And the feature announcement has just surfaced this in a vocal way. Interesting!

I don't think anyone really knew about it.

A simple T & C change doesn't mean anything until someone tries to act upon it in ways we are now seeing come to light.
 
If the scanning happens on servers, you can never have end-to-end encryption. Also, it's easier to find out what's happening on phones than on servers.
Look - as long as there are reasonable safeguards then the cloud can be good enough and private enough for most (not all, but most). Those that need 100% privacy should be able to keep their stuff on device and not have to worry that Apple has left a backdoor open for on-device scanning and reporting to some 3rd party.

When this whole thing was announced I applauded Apple for their stance. I couldn't understand why everyone was losing their mind over any effort to protect kids and prevent the distribution of CSAM. I personally don't mind giving up privacy if it could prevent even one child being abused.

However, I do respect that others feel differently and that, for example, my employer might feel less secure about letting me use corporate assets on a device that theoretically could be scanned externally. I don't see corporate restrictions happening yet, but there should be a way to keep a phone secure if the user chooses to have it that way.
 
Real simple - glad you asked

All other types of scanning have been for my benefit as the user of my device.
(faces of family members, etc)

What is proposed now is to specifically "look for things" in my content that are specified by a third party (via a hash database in this case).

Does that make sense?

It's turning the scanning/analyzing abilities of my own device against me as the user to go through my own content --- to look for things somebody else wants to find.


And not just "anything" but "illegal" things...

It's making my own device into a police officer - going through my own data to look for "bad things".

I'm using quotes because of the ambiguity and subjectivity that's possible when defining "bad things" or "illegal things".
Yeah that makes sense.

And when put like that I can definitely see your view.

On a personal level I’m not concerned about the privacy aspect of on device machine learning (which I see as preferable to seeing all of my photos in the cloud.)

But I can see where you’re coming from, in the respect that it would be nice if the process didn’t involve my own device being used as a police officer.

Ideally Apple would scan for neural hashes on the server, and then, and only then, be able to see the flagged images.
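That "then and only then" property can be sketched roughly as follows. Heavily simplified: Apple's actual design uses encrypted safety vouchers and threshold secret sharing rather than shipping keys alongside ciphertexts, and the names here are made up:

```swift
import Foundation
import CryptoKit

struct UploadedPhoto {
    let ciphertext: Data     // AES-GCM "combined" blob (nonce + ciphertext + tag)
    let hash: Data           // perceptual-hash stand-in
    let key: SymmetricKey    // a real protocol would never send this in the clear
}

/// Returns decrypted images only for matches, and only once a match threshold is reached;
/// everything else in the library stays unreadable to the reviewer.
func reviewableImages(_ uploads: [UploadedPhoto],
                      matching badHashes: Set<Data>,
                      threshold: Int) -> [Data] {
    let flagged = uploads.filter { badHashes.contains($0.hash) }
    guard flagged.count >= threshold else { return [] }   // below threshold: reviewer sees nothing
    return flagged.compactMap { photo in
        guard let box = try? AES.GCM.SealedBox(combined: photo.ciphertext) else { return nil }
        return try? AES.GCM.open(box, using: photo.key)
    }
}
```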
 
The way I see it, this is a chicken-and-egg problem. Apple cannot do E2EE until they can ensure they comply with the law. Apple cannot comply with the law if they enable E2EE for iCloud Photos without doing an on-device check before a photo is uploaded to iCloud Photos. This solution will enable E2EE.

Wrong. Corporations are legally only required to report CSAM upon encountering it. That means:
  1. They are under no legal obligation to look or scan for it in the first place (there's a reason Apple had record-low CSAM reporting figures in previous years: they practically never went looking for it via PhotoDNA or any technology like that)
  2. They are under no legal obligation to NOT implement full E2EE wherein the customer is the only person capable of viewing the hosted content
  3. They are under no legal obligation to implement any form of content scanning even if they were hosting a fully encrypted service (which means the provider, Apple, can't access anything)
There is literally nothing stopping them from implementing E2EE right now for every iCloud service -- backups, photos, and all -- without any of this client side scanning nonsense. They elected to do it themselves for God only knows what reason.
 
Ideally Apple would scan for neural hashes in the server but then and only then be able to see the flagged images.

Well - that's what they are telling us will be the case "on device" -- so I sure as hell would hope they can do it on THEIR own devices!! lol

The reality is -- nobody has signed up (truly knowingly) to have Apple start going through our own data looking for stuff external agencies are trying to find.

That's a dragnet and police state type of stuff.

Law enforcement needs to work with their traditional means to do their job - not be enabled by platforms in this way.
 
Because if they scan in the cloud then they will have to know all the details and generate hashes of your entire image library.

They only want to know about CSAM and don’t want to know the details of other photos. So they designed a way to scan on device and only upload hashes of potential CSAM matches to Apple so that Apple knows less about what else is in people’s photos.

I don't care about that. I mean, anyone who expects that their cloud images are 100% safe is misguided. If it's really sensitive then don't put it in the cloud.

Apple's choice is to compromise on-device privacy to maintain cloud privacy, and that's nonsense. The device should be sacred if the user so chooses. The cloud (it should be assumed) is riskier, and those who must have complete privacy can simply choose not to use the cloud.
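For reference, the trade-off the quoted post describes -- who ends up computing and seeing hashes of the whole library -- can be sketched like this (simplified; Apple's real protocol uses private set intersection, so even non-matches are uploaded as vouchers the server can't read):

```swift
import Foundation
import CryptoKit

let knownBadHashes: Set<Data> = []   // illustrative

/// What the server gets to see in each design, per the quoted explanation:
/// server-side scanning means hashing the entire library server-side; the on-device
/// design surfaces only the potential matches.
func hashesSeenByServer(library: [Data], scanOnDevice: Bool) -> [Data] {
    let allHashes = library.map { Data(SHA256.hash(data: $0)) }   // perceptual-hash stand-in
    return scanOnDevice
        ? allHashes.filter { knownBadHashes.contains($0) }
        : allHashes
}
```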
 
Craig does a great job at explaining — clear and concise, as always. I like the WSJ reporter too and the format of these short videos they produce.
 
Well - that's what they are telling us will be the case "on device" -- so I sure as hell would hope they can do it on THEIR own devices!! lol
It does make me ask the question why aren’t they just doing that?

It’s either:

1. Because they genuinely can’t. Or if they can, not cheaply. In which case they’re harnessing your own device’s processing power to save costs on server-side processing.

2. Or they just can. But aren’t. And I’d want to know why.

I wish Craig was asked in that interview why that isn’t the case.
 
The irony is, this is exactly the outcome that the intelligence agencies want.
I'm not so sure about that. Intelligence agencies have always wanted to be able to snoop on phones. Their best guys can probably already figure out a way of accessing stuff in the cloud.

Apple has now provided a way to snoop on a users device. I think that should be sacred. The user can decide if they want to use the cloud or not, but if you choose not to then the OS shouldn't provide any way for a 3rd-party to snoop on the phone.

Admittedly, Apple says that device scanning only occurs for cloud users, but since they have built the technology into the device OS, it could later be switched on even for non-cloud customers.
 