Okay..that’s what I thought you meant…and subverted is an okay word to describe that actually.

To be clear, you feel that just because it IS possible, Apple will take direction from a government entity to add an altered hash database to iOS that will “scan” for matching pics (btw, it never checks for “content”; it has to match an identical picture, but okay), Apple will then have those images tagged, “see” them somehow (let’s assume the user uploads them to iCloud), be notified, and in turn tell the government that asked for them. Does that sum up what you are saying could happen?

You are not the only one who has inferred this could happen, so I just want to be clear before I reply with a response that is appropriate.
Yes. Just because they built it into the client side, it will be very hard to refuse a government order to search for something else client-side, since the capability will already be built in.
 
Again... all the big cloud services have been scanning photos for YEARS yet nobody had a problem with it then. ;)
And that's to be expected. If you are uploading to a cloud service, I would expect it to be scanned. I would expect it to be checked for not only CSAM, but for viruses, etc.

The difference is, it's scanned on THEIR equipment and THEIR servers. Not on MY device. I own my iPhone... I just paid it off, in fact. It's really a matter of principle. Apple has decided to "take the law into their own hands". It's not about "discovering" something that has been sent to iCloud, it's actively LOOKING for stuff.

And if you guys are so blind as to think that Apple couldn't, with a flick of a switch, turn on hash reporting for all photos, whether uploaded or not, and just send a little flag to Apple that images are on your device, I have a bridge to sell you.

This is voluntary surveillance. THAT is what I have an issue with. And the opportunity for abuse is HUGE on this one.

It's like I said before; would you be comfortable with someone putting a camera in your home, with the promise that they'll only watch for child abuse, and that nobody will EVER look at your spouse walking around in their underwear... we promise!
 
Yes. Just because they built it into the client side, it will be very hard to refuse a government order to search for something else client-side, since the capability will already be built in.
But search for what?

EDIT: To clarify: hashes only work with identical pictures or very close variants of them. What could they search for via this method?
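For what it's worth, here's a toy sketch of how perceptual hash matching works in general — this uses a simple average hash as an illustration, which is an assumption about the general technique; Apple's actual NeuralHash is a neural network and differs in detail. The point is that a re-compressed copy of a picture lands close in hash space, while an unrelated picture doesn't:

```python
# Toy average-hash sketch (illustrative assumption; NOT Apple's NeuralHash).

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grayscale grid.

    `pixels` is a list of 64 brightness values (0-255), i.e. an image
    already downscaled to 8x8 -- real systems do the resizing themselves.
    """
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    return [1 if p > avg else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two "images": the second is the first with slight brightness noise,
# as you'd get after re-compression; the third is unrelated.
original = [(i * 4) % 256 for i in range(64)]
recompressed = [min(255, p + 3) for p in original]
unrelated = [(255 - i * 7) % 256 for i in range(64)]

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(recompressed)))  # small: near-match
print(hamming_distance(h_orig, average_hash(unrelated)))     # large: no match
```

So "identical picture" isn't quite right — near-duplicates match too — but the hash can't describe arbitrary "content" the way a classifier can.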
 
To be clear, you feel that just because it IS possible, Apple will take direction from a government entity to add an altered hash database to iOS that will “scan” for matching pics (btw, it never checks for “content”; it has to match an identical picture, but okay), Apple will then have those images tagged, “see” them somehow (let’s assume the user uploads them to iCloud), be notified, and in turn tell the government that asked for them. Does that sum up what you are saying could happen?
I cannot speak for @bobcomer, but, yes: That is the crux of many arguments. (It is but one of my objections.)

You are not the only one who has inferred this could happen, so I just want to be clear before I reply with a response that is appropriate.
Grammar Nazi, here, with a nitpick: The speaker or writer implies. The listener or reader infers. (With thanks and a tip o' the hat to Sheldon Cooper :))
 
But search for what?
BLM? Klan involvement? Pictures of setting cars on fire (already done, when Apple gave the FBI access to someone's iCloud photos... encrypted? Didn't matter... private? Not with a warrant...)

Terrorism? Confederate flag? The list goes on and on....

I'd be willing to bet that the new EULA doesn't mention CSAM specifically when it comes out... just "Scanning for illegal content".
 
*Anything* in image form.
And... with the new technology being rolled out for iMessage (which seems to be the elephant in the room), the AI could be used to search (for example) for images of the Confederate flag or KKK stuff. It doesn't even have to be a hash match with the new AI. If it can tell whether an image is a nude, it could tell whether an image is a Confederate flag or a BLM sign.
 
And if you guys are so blind as to think that Apple couldn't, with a flick of a switch, turn on hash reporting for all photos, whether uploaded or not, and just send a little flag to Apple that images are on your device, I have a bridge to sell you.

To be fair... Google, Samsung, and others could flip that same switch, too.

Maybe they have already? Who knows!

We need Little Snitch for Android!

It's like I said before; would you be comfortable with someone putting a camera in your home, with the promise that they'll only watch for child abuse, and that nobody will EVER look at your spouse walking around in their underwear... we promise!

Isn't that why people have been covering their webcams for years? Maybe they've been spying on us all along!

Hey... at least Apple is only comparing hashes (if you believe them)

No one looks at your *actual* photos unless the hash-matches get escalated to human review. And there are quite a few steps before that happens as has been explained in earlier comments.

Could they scan for more content eventually someday? Sure.

But like I said before... any company could.

I'm not gonna switch to a burner flip-phone and/or stay off the internet.

¯\_(ツ)_/¯
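On the "quite a few steps before human review" point, here's a toy sketch of threshold-gated escalation. Apple has said it uses cryptographic threshold secret sharing, so match data isn't even decryptable until enough matches accumulate; this plain counter (using the roughly 30-match threshold Apple publicly cited — treat that number as an assumption here) only illustrates the control flow, not the cryptography:

```python
# Toy threshold-gated escalation (control flow only; the real system
# reportedly uses threshold secret sharing, not a plaintext counter).

REVIEW_THRESHOLD = 30  # figure Apple publicly cited; an assumption here

class MatchTracker:
    def __init__(self, threshold=REVIEW_THRESHOLD):
        self.threshold = threshold
        self.matches = []

    def record_match(self, image_id):
        """Record one hash match; return True once escalation would trigger."""
        self.matches.append(image_id)
        return len(self.matches) >= self.threshold

tracker = MatchTracker()
escalated = [tracker.record_match(f"img-{i}") for i in range(35)]
print(escalated.index(True))  # first True at index 29, i.e. the 30th match
```

Below the threshold, nothing is flagged for review at all — that's the gate being described in the earlier comments.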
 
And... with the new technology being rolled out for iMessage (which seems to be the elephant in the room), the AI could be used to search (for example) for images of the Confederate flag or KKK stuff. It doesn't even have to be a hash match with the new AI. If it can tell whether an image is a nude, it could tell whether an image is a Confederate flag or a BLM sign.
I really don't know about that feature, as I have never had children so can't really say if it's okay, but I don't think it reports to anyone other than the kid and the parent if opted in. Of course, since it's there on device, that could change.
 
*Anything* in image form.
Got it…and that’s what I thought (and others have “implied”;) )

Next question: how does THIS change anything? What stopped them from doing this before, and what stops them in the future? Where was the outrage when Apple added hashed comparisons years ago?

Some will argue that those were for the user benefit and not “policing” us, BUT you specifically state you are worried about someone (government or other entity) MAKING Apple look for other “stuff”.

They could ALWAYS have done that (hell, maybe they are already doing that or have done it).

Why does this specific way of comparing picture hashes, so that ONLY those pics are tagged if being uploaded to iCloud, have everyone so up in arms when this is already being done?

They could “tag” ANY pics they want to today without you knowing. No one disagrees with that. The question from me is: why are people so concerned now, and not with the fact that they have had this capability for years? They CAN scan every pic one uploads to iCloud to look for ANYTHING (you have actually given them that right by using iCloud).

They CAN put it in iOS to scan for ANYTHING on your device if they wanted to…yes, illegally in most countries, but they COULD.

The disagreement here seems to be that their efforts here are to NOT scan photos in iCloud. To NOT compare ANY pics on your device if you do not have iCloud sharing turned on (you don’t have to believe it, but until it is proven to the contrary, I choose to believe what they are saying). To only “compare” hashed database images and mark them so that IF AND WHEN they are uploaded to iCloud, Apple could know about those specific images once they are already in iCloud, and NOT every single image you might upload.

I’ve said it before: I can’t make anyone believe what Apple is saying is true, but this specific addition to iOS doesn’t change anything else, in my opinion, about what they are saying, nor does it break any laws (particularly in the US).

If Apple chooses to drop this “on-device” method of marking photos that are being uploaded to iCloud, they will just actually scan every single photo in iCloud. The outcome will be the same, but now they will be accessing every single thing I upload to iCloud, or, for the conspiracy theorists out there, giving “someone” the ability to alter that scan to “see” anything they want. Versus Apple’s way, which at least has a much tighter tollgate with iOS versus iCloud.

iCloud - Easy to break into with a password for any hacker to view or even upload images, as well as access a ton of other data. They can do that today.

Hard coded in iOS - Unless you accidentally click on a super advanced Trojan horse link that opens the door, or someone within Apple accesses the data, hackers have extremely limited capability to get into your phone unless they actually have it, as well as the means to unlock it.

I prefer the on-phone comparison method for my own protection. It doesn’t make iCloud any more secure, but adding this type of “check” to iCloud is much more open to abuse by anyone than on-phone analysis.
 
If Apple chooses to drop this “on-device” method of marking photos that are being uploaded to iCloud, they will just actually scan every single photo in iCloud. The outcome will be the same, but now they will be accessing every single thing I upload to iCloud, or, for the conspiracy theorists out there, giving “someone” the ability to alter that scan to “see” anything they want. Versus Apple’s way, which at least has a much tighter tollgate with iOS versus iCloud.
They already have plans to scan iCloud on-server, just in case you drop those images in from your iCloud web interface and not from your phone (bypassing the hash scanning). So why is it necessary to do both?

And once again, this is Apple's CHOICE to go "actively snooping", and NOT in the law. In fact, the law specifically says that providers should NOT view it as carte blanche to go "digging" for evidence.
 
I am amazed at how many people are OK with on-device scanning/hashing, using the excuse "Google / Facebook / whoever is already doing it!" No... they aren't... NOT ON YOUR DEVICE.

And actively scanning/hashing your personal device is a HUGE LEAP from posting something on Facebook for people to see....
 
I am amazed at how many people are OK with on-device scanning/hashing, using the excuse "Google / Facebook / whoever is already doing it!" No... they aren't... NOT ON YOUR DEVICE.

And actively scanning/hashing your personal device is a HUGE LEAP from posting something on Facebook for people to see....

Some people just don't care about this stuff.

There's a lot going on in the world... and on-device photo scanning doesn't even crack the Top 10.

I'm guilty of arguing in these comments... but I'm all out of F's to give.

I will continue to use an iPhone because I like it. And I'm looking forward to getting an M1 Macbook Air... my first Mac! :)

You all can fight the good fight... let me know how it turns out.
 
They already have plans to scan iCloud on-server, just in case you drop those images in from your iCloud web interface and not from your phone (bypassing the hash scanning). So why is it necessary to do both?

And once again, this is Apple's CHOICE to go "actively snooping", and NOT in the law. In fact, the law specifically says that providers should NOT view it as carte blanche to go "digging" for evidence.
Apple can’t control how you upload though…and I’m guessing uploading from an Apple device is by FAR the vast majority of what is uploaded to iCloud.
 
Next question: how does THIS change anything? What stopped them from doing this before, and what stops them in the future? Where was the outrage when Apple added hashed comparisons years ago?
I don't disagree with scanning server-side, only client-side, and as far as I know, this is the first client-side "scanning". (I know the term scanning might be a problem, and it's not like scanning an exact image with a copier, but for want of a better word for extremely-low-res scanning that looks for similar features, I'll use scanning.)

Some will argue that those were for the user benefit and not “policing” us, BUT you specifically state you are worried about someone (government or other entity) MAKING Apple look for other “stuff”.
Aye, but client-side scanning whose goal is to report to authorities is the only thing I'm concerned with on this issue. Yes, Spotlight could have been changed to report to police if it was set to look for certain types of images, but given what Apple has said, that never happened, until now, and I trusted them when they said that because I saw no evidence to the contrary.

What about this specific way of comparing picture hashes so that ONLY those pics or tagged if being uploaded to iCloud has everyone so up in arms when this is already being done?
Because server-side scanning and client-side scanning aren't even remotely the same, even if the end goal is the same. Client-side has a lot more potential access to our personal information than what gets uploaded to iCloud.


They CAN put it in iOS to scan for ANYTHING on your device if they wanted to…yes, illegally in most countries, but they COULD.
Not just CAN anymore; it's a done deal in iOS 15 (by including client-side scanning).

I’ve said it before: I can’t make anyone believe what Apple is saying is true, but this specific addition to iOS doesn’t change anything else, in my opinion, about what they are saying, nor does it break any laws (particularly in the US).
What can I say, it changes everything for me, simply because of what of my info it has access to, and when. It means I can no longer trust them to keep my private info private -- no better than any of the companies that are so eager to mine (and report) data.

If Apple chooses to drop this “on-device” method of marking photos that are being upload to iCloud, they will just actually scan every single photo in iCloud. The outcome will be the same, but now they will have be accessing every single thing I upload to iCloud, or for the conspiracy theorists out there, giving “someone” the ability to alter that scan to “see” anything they want. Versus Apple’s way which at least has them with a much tighter toll gate with iOS versus iCloud.
And that would be perfectly fine by me. I don't object to them scanning their own servers. And it's not "someone"
Hard coded on iOS - Unless using a super advanced Trojan horse link that you accidentally click on opening the door or someone within Apple accessing the data, hackers have extremely limited capability to get into your phone unless they actually have it as well as access to unlock it.
There is that potential to frame someone, but it's pretty much the same threat with server-side scanning. (Assuming Apple still does server-side scanning, which I would think they'd have to.)
 
I prefer the on phone comparison method for my own protection. Doesn’t make iCloud anymore secure, but adding this type of ”check” to iCloud is much more open to abuse by anyone than on phone analysis.
You and I are total opposites on this. For me, iCloud does not mirror my phone, it only has a subset of my phone, and on-device scanning is wide open, and nothing can be as open as wide open.
 
You and I are total opposites on this. For me, iCloud does not mirror my phone, it only has a subset of my phone, and on-device scanning is wide open, and nothing can be as open as wide open.

Works for me…but all I know is, everything on my phone is on iCloud at some point, whether it is a back-up or actual separated data accessible by me on iCloud.com. iCloud is, in essence, my phone on the web/cloud (and I’m guessing that’s true for most).

Will continue to live my life in the hopes no one tries to F with me…hah! #IgnoranceIsBliss
 
Works for me…but all I know is, everything on my phone is on iCloud at some point, whether it is a back-up or actual separated data accessible by me on iCloud.com. iCloud is, in essence, my phone on the web/cloud (and I’m guessing that’s true for most).

Will continue to live my life in the hopes no one tries to F with me…hah! #IgnoranceIsBliss
I always did backups on my local storage (not because of a trust issue to begin with, but a cost and speed issue). Now I'm glad I went that way. :)

I understand your position, and for now, you're no worse or better off than I am. I'm a worrier, so I'm always thinking ahead (right or wrong). I also remember way too much of what I read, and on-device scanning reminds me too much of 1984.
 
iCloud is, in essence, my phone on the web/cloud (and I’m guessing that’s true for most).
You're not really helping your arguments, you know
 
The irony is, almost everybody would have a problem if their parents, partner, children, neighbors, employer, people somewhere abroad, or other nations could check ALL of what is on their phones.

Think about it: a random guy on the street takes your phone and wants to fully check it for illegal content.

But they say they have no problem when a private company does it!
 
You're not really helping your arguments, you know

Regarding what?

I’m not arguing that one SHOULDN’T use iCloud because it is easier to hack…I’m simply stating that it is easier to hack than on-device data.

I’m just not “hack-worthy”, so I don’t live my life worrying about it, and I take the normal steps via my crazy passwords and not sharing common info so hackers COULDN’T get into my account even if they wanted to.
 
Regarding what?
Ok, but it's kind of a cheap shot, which is why I added the lol.gif.


One argument on the part of the Defenders Of On-Device CSAM-Scanning (hereinafter DOODCSAMS [hey, an acronym w/in an acronym--what fun!]) has been, in essence "What remains on your device is private, but, once you put it in the cloud, there can be no reasonable expectation of privacy."

But you wrote, in essence, that, for your data, "on the device is in the cloud and vice-versa," and that most iThings users probably felt and acted the same way.

Now, as a retired security professional (yes, I was regarded as something of a security professional whilst employed), I know that a reasonable expectation of data security can only be assumed if two conditions obtain: 1. The data remains only on hardware under your physical control and 2. That data does not leave that hardware. However, Apple has been touting personal information and data privacy in a big way. (I'll assume I don't have to demonstrate, again, in what ways they've been doing this?) So I would argue that a goodly number of iThings owners naturally assumed on-iDevice == on iCloud wrt privacy.

Particularly since (and it's been a while since I set up either of my iThings, so I could be mis-remembering), Apple tends to enable iDevice <-> iCloud "mirroring" by default?

The point here being (this is an edit--I forgot to make my point :D): I wonder how many Apple customers would be surprised to learn Apple's touting of their being a conscientious custodian of customers' data and privacy did not extend to that stuff in iCloud?

Like I said: A bit of a cheap shot :)

I’m not arguing that one SHOULDN’T use iCloud because it is easier to hack…I’m simply stating that it is easier to hack than on-device data.
That does not necessarily have to be true.

I’m just not “hack-worthy” so ...
Yeah.... about that: No competent network security pro would ever state "I'm not going to worry about security because <thing> isn't worth hacking." Trust me: If it's network-accessible, somebody, somewhere, will find it worth hacking--if for nothing other than S&G's or "because it's there" or "because I can" or...
 