For me, it's the gateway to other invasions....

It probably started with Google: "we have AI scan your emails to better serve you ads." No human involvement, but I still don't want anyone doing that, nor do I trust that anyone, especially Google, isn't harvesting other data from my personal emails.

Now Apple wants to scan hashes of pictures that will be sent to iCloud because they don't want CSAM on their servers (nor does anyone). Next to no human involvement until they think you're a pedo; then someone looks through the flagged pictures to see if they are CSAM. There is a chance a picture wasn't CSAM, but now some stranger has already looked at YOUR PERSONAL PICTURES. Might have been a picture of a flower pot, might be a pic of a far more personal nature; no way to know until it's too late.
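(For anyone unfamiliar with how the matching step works, here's a rough Swift sketch. It uses an ordinary SHA-256 digest as a stand-in for Apple's perceptual NeuralHash, and the blocklist and function name are made up; the point is only that the device compares fingerprints against a list, and no one "looks" at the photo until a human reviewer does.)

```swift
import Foundation
import CryptoKit

// Illustrative only: a cryptographic SHA-256 digest stands in for Apple's
// perceptual NeuralHash, and this blocklist is a hypothetical placeholder.
let knownBadHashes: Set<String> = ["a3f1...", "9bc2..."]

// Returns true if the photo's fingerprint appears in the blocklist.
func isFlagged(_ photoData: Data) -> Bool {
    let digest = SHA256.hash(data: photoData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownBadHashes.contains(hex)
}

// A match only says the fingerprints are equal; a human reviewer still has
// to look at the flagged photo to confirm it -- which is the worry above.
```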

THEN...

Apple (or any other manufacturer) can decide at any moment that they want to compare hashes of pictures NOT being sent to cloud based services, just pictures on your device. Sure, there will be mass outrage but "think of the children".

THEN...

Once we are comparing picture hashes, then comes the question: "what about video?" Pedos could be uploading CSAM videos to iCloud, so now we need to use "AI" to scan your videos too. Much harder to do, much more invasive, and much less accurate. In order not to falsely accuse anyone, humans will need to review YOUR PERSONAL VIDEOS!

THEN...

What about your security cameras from Ring, Amazon, Eufy, Logitech, etc.? All of these folks store video on their cloud systems, and surely they don't want illegal material on their servers either, so now they start using "AI" to listen to the audio from your cameras for certain sounds, words, phrases, etc. Just to be sure no one is falsely accused, humans will need to review YOUR PRIVATE AUDIO AND VIDEO!

THEN...

The government(s) get involved and want to get info on dissidents/opponents.

See what can happen? It always starts small, with something that is difficult to come out against, like CSAM, but as the technology gets better (like hashing audio or video), the invasions into your privacy get deeper and deeper.
 
A case could be made, without the outrage, that even iCloud data shouldn't be scanned, but that's not what the outrage is about. The primary objection of pundits and some users is that the scan occurs on device rather than in iCloud.

So my question, and my confusion, is this:

If the only data available for Apple TO scan on device is exactly the same as the iCloud data, why does it matter to users where the scan occurs? Especially if the scan is done by on-device AI and Apple is only contacted when there is a BULK of hashes per device matching the ones in Apple's hash database.
Users are objecting that the scan isn't occurring on servers. Well, it would be the exact same data as what is available to scan on the iPhone. All the other data is still locked away from Apple.
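To be concrete about the "bulk of hashes" part, here's a toy Swift sketch of the gating as I understand it. The names and the threshold value are mine, and the real system reportedly uses cryptographic safety vouchers rather than a plain counter, but the idea is the same:

```swift
// Toy model of the threshold gate: nothing is reported until enough of the
// photos queued for iCloud upload match the on-device hash database.
// The names and the threshold value here are hypothetical.
let reportThreshold = 30

struct UploadBatch {
    var photoHashes: [String]         // hashes of photos about to be uploaded
    var csamDatabase: Set<String>     // the hash database shipped with iOS
}

// Below the threshold, the device reveals nothing to Apple at all.
func shouldContactApple(_ batch: UploadBatch) -> Bool {
    let matches = batch.photoHashes.filter { batch.csamDatabase.contains($0) }
    return matches.count >= reportThreshold
}
```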

Another question I would ask is why the outrage is against Apple specifically, for doing photo-hash-only scanning, when in fact Google scans private emails and all other online content for more categories than CSAM, Microsoft scans all online storage for more than CSAM, Amazon scans private data on all its online drives, and FB and Twitter scan private DMs and Facebook Messenger chats for CSAM. So why is the outrage fixated on Apple?
Especially when in fact Apple is not only scanning for a lot less, but has said they will not expand the categories of what they scan or who they give the results to. You could always argue you don't trust Apple, but if that's the case, if you don't trust Apple, then NO mainstream tech company is better for you.

The questions above are different from WHY they are scanning to begin with, but to that I have a question as well: if Apple is not allowed to scan anything, how does it stop CSAM, which is the worst of society?
Apple doing the scan on device is a big deal for several reasons.

First, ask why they don't just do this in the cloud. They are cheap. They want to leverage the local Neural Engine in the iOS device to do this workload. At scale it's a huge computational load, and they want their users to pay for it.

Further, consider how much goodwill they just nuked. They could have put distributed use of those A-series chips toward something positive, like protein-folding analysis to help with COVID. No, instead Apple chose to treat their entire userbase like suspected pedophiles. That burns a lot of charity.

The fact that the scan is done on the user's phone, without their consent, and *prior* to uploading makes this a warrantless search that Apple is conducting as a fishing expedition on behalf of law enforcement.

Law enforcement cannot do this without a warrant which requires probable cause.

NCMEC is a private foundation, but it is funded by the US Justice Department. Anything Apple refers to them will be reported to the FBI or other agencies. It's also run by longtime infomercial hawker John Walsh, father of Adam Walsh.

People thinking that Apple will not make a mistake really overestimate the level of care Apple will use. Likely, their employees will never actually see the CSAM photo. They will simply look at the match count and forward to NCMEC for review.

Comparisons to scanning of cloud-hosted data are simply not the same as what Apple is doing here, and the way they dropped this has been unbelievably badly handled.

This will keep building as a PR disaster, and we will see if Stella Low can handle it. She's from the UK, and maybe she just doesn't understand the Fourth Amendment landmine Apple just stepped on. She was likely on a team that signed off on this whole thing in advance. Jobs' longtime PR chief Katie Cotton (who left in 2014) would have seen this coming.

Then there is the mission creep of adding new hash tables of wrongthink to check for, "for the children" or to protect you against terrorists. The precedent that Apple can use our personal resources to incriminate us without cause is intolerable and destructive to the brand.
 
The fact that the scan is done on the user's phone, without their consent, and *prior* to uploading makes this a warrantless search that Apple is conducting as a fishing expedition on behalf of law enforcement.

Law enforcement cannot do this without a warrant which requires probable cause.

Bingo! Thank you for actually answering OP’s question. I came here to post the same thing but you beat me to it.

My phone is my property. If Apple wants to build surveillance functionality into their cloud servers that they own, that’s fair game. But Apple should not be installing surveillance functionality on our devices.
 
Why is it all right to scan in the cloud but not on your phone? It is still a warrantless search if done in the cloud, yet Facebook, Google, and Microsoft all do it. If you use any of their cloud services, you are being searched already. Worse, no online photo provider offers E2E encryption, because of this need to make sure they are not hosting CSAM.

I see this as a gateway for Apple to finally enable E2E encryption on their iCloud backups, because they can reassure authorities that they are not hosting any CSAM material, since the scan happens just before upload to the cloud. Then, once your photos are there, nobody can access them.

Also, the only material that Apple and authorities would have access to is the photos flagged as CSAM, and only after you meet their threshold can they be decrypted for review by Apple. With cloud scanning, Apple and other service providers have full access to ALL of your photos all of the time.
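To spell out that difference, here's a deliberately simplified Swift model. It is not Apple's actual protocol (the real design uses encrypted safety vouchers and threshold secret sharing), and names like `cloudScan` and `onDeviceFlow` are made up, but it shows what the provider can see in each case:

```swift
// Deliberately simplified contrast, not Apple's actual protocol.
struct Photo { let id: Int; let hash: String }

// Cloud scanning: the provider must hold plaintext of every photo to scan it.
func cloudScan(_ photos: [Photo], blocklist: Set<String>) -> (visibleToProvider: [Photo], flagged: [Photo]) {
    let flagged = photos.filter { blocklist.contains($0.hash) }
    return (photos, flagged)            // provider can see everything, always
}

// On-device flow: the provider receives only flagged items, and only once
// the match count crosses the threshold. Below it, nothing is decryptable.
func onDeviceFlow(_ photos: [Photo], blocklist: Set<String>, threshold: Int) -> [Photo] {
    let flagged = photos.filter { blocklist.contains($0.hash) }
    return flagged.count >= threshold ? flagged : []
}
```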

I feel there are just too many people who don't understand the technology spreading misleading or false information, be it on purpose or through ignorance.
 
Why is it all right to scan in the cloud but not on your phone? It is still a warrantless search if done in the cloud, yet Facebook, Google, and Microsoft all do it. If you use any of their cloud services, you are being searched already. Worse, no online photo provider offers E2E encryption, because of this need to make sure they are not hosting CSAM.

I see this as a gateway for Apple to finally enable E2E encryption on their iCloud backups, because they can reassure authorities that they are not hosting any CSAM material, since the scan happens just before upload to the cloud. Then, once your photos are there, nobody can access them.

Also, the only material that Apple and authorities would have access to is the photos flagged as CSAM, and only after you meet their threshold can they be decrypted for review by Apple. With cloud scanning, Apple and other service providers have full access to ALL of your photos all of the time.

I feel there are just too many people who don't understand the technology spreading misleading or false information, be it on purpose or through ignorance.
The justification on here is that if I upload to iCloud, I've already agreed to use their service, giving them that right, versus doing it on the phone that I own... but to your point, I still think doing it on the phone is okay for a couple of reasons.

* Think of it as nothing more than a diagnostic check...Apple is not "searching" your phone or looking at your private pics. They are simply analyzing what is on there. They do this all day long in a very similar manner for other items on your phone.

* Nothing even happens on their end unless you actually upload to iCloud. In a sense, they are still giving you the option to NOT use iCloud, and therefore nothing is ever reported, since the analysis never leaves your phone.

* The idea of this being some sort of hackable "backdoor" is totally ridiculous. This is hard-coded into an iOS update and is not some sort of software or app that can be hacked. Anyone claiming that governments or law enforcement can NOW use this to "spy" on iPhone users doesn't understand that this is controlled by Apple. If they can force Apple to provide access to a user's data, this hash programming doesn't make it any easier or more difficult for them to do so.

If a government can somehow infiltrate the hash database or force Apple to add their own hashes ("We want to know about all images that contain Trump in them"), how does this make that easier if it never leaves the phone? Governments and spy agencies are way smarter than that. They can hack into your phone or iCloud (so easy) without needing to use this tech. They wouldn't waste their time or risk it becoming public by forcing Apple to comply... they don't need Apple to do that.
 
Why is it all right to scan in the cloud but not on your phone? It is still a warrantless search if done in the cloud, yet Facebook, Google, and Microsoft all do it. If you use any of their cloud services, you are being searched already. Worse, no online photo provider offers E2E encryption, because of this need to make sure they are not hosting CSAM.
The simple answer is that the server the data is on is Apple's property and not the user's. They can inspect the data you choose to store in their service, as disclosed in their Terms & Conditions.

They can also search that data in response to valid warrants from law enforcement, in the context of an active investigation of an individual with an iCloud account.

I see this as a gateway for Apple to finally enable E2E encryption on their iCloud backups, because they can reassure authorities that they are not hosting any CSAM material, since the scan happens just before upload to the cloud. Then, once your photos are there, nobody can access them.

That is not what this is about. Apple is never going to allow you to hold a private encryption key for data stored in iCloud. That decision has been made; this is downstream of it, not a first step toward bringing you more privacy.

I feel there are just too many people who don't understand the technology spreading misleading or false information, be it on purpose or through ignorance.

That is how I feel about the legal principle involved. Apple chose to cross the Rubicon; now they have to deal with the consequences. Separately, this board is not all Americans, and the gravity of the Fourth Amendment issues this presents (and why Americans care deeply about them) may not be obvious to all posters.

The technology discussion is irrelevant, other than to point out that iOS is not secure and this presents unbelievable corporate risk due to false charges. The location of the search is the issue.
 
* The idea of this being some sort of hackable "backdoor" is totally ridiculous. This is hard-coded into an iOS update and is not some sort of software or app that can be hacked. Anyone claiming that governments or law enforcement can NOW use this to "spy" on iPhone users doesn't understand that this is controlled by Apple. If they can force Apple to provide access to a user's data, this hash programming doesn't make it any easier or more difficult for them to do so.

Apple has unpublished methods (private APIs) to use the CSAM functionality, just like it has private APIs to do COVID-19 exposure tracking through Bluetooth. Unpublished APIs can leak or be discovered and their functionality included in malicious apps. Uber was caught doing this.

People think every app accepted into the App Store has passed some rigorous code review. A scan for malware will not find these violations. Fears that these methods can be exploited either by malicious App Store apps or by exploiting iOS in another way are not ridiculous. They may be unlikely, but if you stop and think how the pieces fit together, it may make more sense. Normalizing the use of the Neural Engine to spy on you, on devices you paid for, is a violation of every principle Apple has previously espoused about Privacy. Analyzing this as a completely isolated event ignores the tangible threats this functionality brings.
 
Apple has unpublished methods (private APIs) to use the CSAM functionality, just like it has private APIs to do COVID-19 exposure tracking through Bluetooth. Unpublished APIs can leak or be discovered and their functionality included in malicious apps. Uber was caught doing this.

People think every app accepted into the App Store has passed some rigorous code review. A scan for malware will not find these violations. Fears that these methods can be exploited either by malicious App Store apps or by exploiting iOS in another way are not ridiculous. They may be unlikely, but if you stop and think how the pieces fit together, it may make more sense. Normalizing the use of the Neural Engine to spy on you, on devices you paid for, is a violation of every principle Apple has previously espoused about Privacy. Analyzing this as a completely isolated event ignores the tangible threats this functionality brings.

The point is they already have this with other onboard analytics that according to you can be hacked by a rogue app.

How does doing the analysis this way make it any easier to do that?

Breaking into a phone is not easy, but certainly not impossible with a very advanced rogue app or messaged link that may accidentally be followed by the user.

The question is, how does on-phone analysis of hashed images from a set database affect that security? Even if new hashes could be introduced to the system to look for something other than child porn, how is that info tagged and transmitted to the hacker? When it's uploaded to iCloud, which they could hack much more easily than going through all the steps and expense needed to add hashes?

It’s all spy movie fantasy…hackers and governments don’t need this to make their spying easier. They have a dozen other ways to get the info they want without involving Apple.
 
If “on-device” hashing can be abused then can’t “on-server” hashing also be expanded to other categories and be abused? What’s the difference here?
 
I do use Google services as well, because I think they are the best. I'm not nearly as anti-Google as most here, but at least I know what I'm getting with them. I don't think there is a way around this for me; switching to a Google device running Android isn't any more private.
Personally, I choose not to give my business to one who pontificates from a pedestal, then quickly proves to be a liar, and in the bargain provides lower-quality services than the competition. Sorry, Cook, no more Euros from me.
 
If “on-device” hashing can be abused then can’t “on-server” hashing also be expanded to other categories and be abused? What’s the difference here?
Seriously... someone give me one realistic example of how on-device hashing can be abused. Not sci-fi fantasy... step by step, how someone could abuse it.

I'll even let you assume that someone can break into the phone and alter the hashes and what they can tag. What next?

And again, remember that Apple already does on-device analytics that are way more open than this....
 
Seriously... someone give me one realistic example of how on-device hashing can be abused. Not sci-fi fantasy... step by step, how someone could abuse it.

I'll even let you assume that someone can break into the phone and alter the hashes and what they can tag. What next?

And again, remember that Apple already does on-device analytics that are way more open than this....
Exactly. I have photos on my iPhone marked as having cats in them, and I never tagged those photos as cats. Hmmm. Some spyware on my phone, eh?
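That kind of tagging is done on device with Apple's public Vision framework. Something like the sketch below (simplified, error handling omitted, and the file path is hypothetical):

```swift
import Foundation
import Vision

// Sketch of the kind of on-device scene classification Photos already does
// (e.g. tagging "cat"). Public Vision API; the file path is made up.
let url = URL(fileURLWithPath: "/tmp/example.jpg")
let handler = VNImageRequestHandler(url: url, options: [:])
let request = VNClassifyImageRequest()

try? handler.perform([request])
let observations = request.results?.compactMap { $0 as? VNClassificationObservation } ?? []
let labels = observations
    .filter { $0.confidence > 0.8 }     // keep only confident labels
    .map { $0.identifier }              // e.g. ["cat", "animal"]
print(labels)
```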
 
The point is they already have this with other onboard analytics that according to you can be hacked by a rogue app.

How does doing the analysis this way make it any easier to do that?

Breaking into a phone is not easy, but certainly not impossible with a very advanced rogue app or messaged link that may accidentally be followed by the user.

The question is, how does on-phone analysis of hashed images from a set database affect that security? Even if new hashes could be introduced to the system to look for something other than child porn, how is that info tagged and transmitted to the hacker? When it's uploaded to iCloud, which they could hack much more easily than going through all the steps and expense needed to add hashes?

It’s all spy movie fantasy…hackers and governments don’t need this to make their spying easier. They have a dozen other ways to get the info they want without involving Apple.
Breaking into a phone is closer to easy than impossible. That is not according to me. Look up Pegasus to get started.

I don't take issue with most of what people are asserting about the security of the implementation, but this is not an implementation issue, it is a legal domain issue.

If it doesn't bother you, it doesn't bother you.
 
Breaking into a phone is closer to easy than impossible. That is not according to me. Look up Pegasus to get started.

I don't take issue with most of what people are asserting about the security of the implementation, but this is not an implementation issue, it is a legal domain issue.

If it doesn't bother you, it doesn't bother you.
I already know about Pegasus... and even Pegasus requires the user to click on a link sent via message. It isn't just placed on a phone without the user's knowledge.

I also think one of the largest corporations in the world did their due diligence from a legal perspective. All the "MY PHONE IS MY PROPERTY AND YOU CAN'T LOOK AT MY PICS!" conjecture is nonsense from people not understanding the tech and what it is actually doing. Apple has actually found a way to maintain the user's privacy while preventing their servers from being used as a hub for child pornography, no different from the way they protect a user's privacy when using the phone to check traffic data, improve Maps, perform onboard system diagnostics, etc.
 
I already know about Pegasus... and even Pegasus requires the user to click on a link sent via message. It isn't just placed on a phone without the user's knowledge.

I also think one of the largest corporations in the world did their due diligence from a legal perspective. All the "MY PHONE IS MY PROPERTY AND YOU CAN'T LOOK AT MY PICS!" conjecture is nonsense from people not understanding the tech and what it is actually doing. Apple has actually found a way to maintain the user's privacy while preventing their servers from being used as a hub for child pornography, no different from the way they protect a user's privacy when using the phone to check traffic data, improve Maps, perform onboard system diagnostics, etc.
Of course they have.... but most likely not.
 
I think people just don’t know how things work on either a high level or a technical level. I had some concerns about their claims about the false positive rate. But a few people on this forum helped me understand and mitigate those concerns.

This really isn’t that big of a deal. I do however understand the argument of treating everyone as a suspect. I don’t have anything to hide so I don’t care. And the last time I took a picture or saved a picture on my phone was 3 months ago.
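On the false positive worry, the threshold is what does the heavy lifting. Here's a back-of-the-envelope sketch in Swift, with every number made up for illustration (these are not Apple's published figures): even if each photo had a one-in-a-million chance of a false hash match, the odds of a 10,000-photo library racking up 30 or more false matches are astronomically small.

```swift
import Foundation

// Back-of-the-envelope only: p, n, and the threshold are illustrative numbers,
// not Apple's published parameters.
let p = 1e-6           // assumed per-photo false-match probability
let n = 10_000         // photos in the library
let threshold = 30     // matches required before anything becomes reviewable

// log of "n choose k" via log-gamma, to avoid overflow for large n
func logChoose(_ n: Int, _ k: Int) -> Double {
    lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
}

// P(X >= threshold) for X ~ Binomial(n, p): sum the (tiny) upper-tail terms.
var tail = 0.0
for k in threshold...n {
    let logTerm = logChoose(n, k) + Double(k) * log(p) + Double(n - k) * log(1 - p)
    tail += exp(logTerm)
    if logTerm < -250 { break }   // later terms only get smaller; stop early
}
print(tail)   // on the order of 1e-93 with these made-up numbers: effectively zero
```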
 
I think people just don’t know how things work on either a high level or a technical level. I had some concerns about their claims about the false positive rate. But a few people on this forum helped me understand and mitigate those concerns.

This really isn’t that big of a deal. I do however understand the argument of treating everyone as a suspect. I don’t have anything to hide so I don’t care. And the last time I took a picture or saved a picture on my phone was 3 months ago.
The chances that you'll have anything that they're looking for are effectively zero.
 