I think they ARE respecting that dynamic. Again, every single user has the choice to disable iCloud for photos. Isn't that control?

I guess it is - maybe I just don't like that I don't get a say in this being implemented. Then, obviously, it becomes a subjectively acceptable level of control.

Oh, it's not a "few" people attempting to upload CSAM. It's a pandemic and has been for a long time. Believe me, Apple (and other cloud services) would not be spending all these resources and time on CSAM detection if it were some small problem.

You're 1,000% correct and I definitely didn't mean to minimize that - I was just trying to illustrate a point, but you are absolutely correct about this.

Yes, it will be on their servers, but in effect "quarantined" for review and not able to be distributed. As for the "exact same amount of time" - I don't know. Was the server-side scan happening in real time as the photos were uploaded or just periodically? In any case, the point of this change isn't to reduce time on server or anything but to rather to continue to do what they've always been doing in a more private manner. That's it! I don't think Apple ever claimed that the method is more effective when done on the device itself - just more private.

I'm trying to shift my mindset on this to make it more acceptable, b/c I prefer my iPhone over others, and want to do anything possible to help stop CSAM, CP, pedophilia etc. (though I still wonder how much this is truly helping with that).

I'm just having a hard time shaking the fact that, again, it's being done on my device without my say. And I believe there are trade-offs to how Apple is implementing this that offset what I view as a small privacy benefit.
 
Just FYI:



yeah.. languages must evolve or they die (Latin). It's still jarring to the eyes to see it. As a (now apparently) old fart, I guess that's where generational lines are drawn... "get off mah lawn wit' yer "and" and "but"!"

I've been generally undecided on "however." I try to use "also" and "likewise" in a phrase after semicolons. I remember in my later school years hearing that some rules I learned in elementary school were being changed. That was a little surreal for 15-year-old me after just learning "the way" 6 years prior!

I read all of Abraham Lincoln's speeches and noticed even he broke those rules! Wow. Language continues to evolve....or devolve, in some cases. Maybe we go back to just grunts and growls. :)
 
Okay, put everyone in solitary confinement. Problem solved:oops:.

Or, as Ben Franklin put it: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
Please, to clothe, feed, and harbor everyone in a prison without forcing them to be productive would just end in collapse.

Now, on the other hand, it’s easier to keep a populace producing and docile by making them fear repercussions. Say, by threatening them with loss of livelihood or forfeiture of property if they don’t comply.

I.E. “do what we want or you will lose your job”

Or blackmail is always effective. I doubt many people want others to know of their “habits”
 
Okay, put everyone in solitary confinement. Problem solved:oops:.

Or, as Ben Franklin put it: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

And he was referring to paying taxes, not privacy.

It’s amazing how often the aforementioned quote continues to be used entirely out of context.
 

Apple is full of BS!

 
Count me in as one of the people totally against all of this. I follow what Apple's saying, and yes - it's subjectively "better/less intrusive" than many other ways a company might try to police your photo collection. But that doesn't make it acceptable.

I mean:

- They're doing this via a hash table getting stored on your phone as part of iOS. So this works a lot like PC antivirus software, where it's going to try to match this data to your photos and flag any that are, in their estimation, too close a match. (We know it doesn't use exact CRC values, looking for 100% proof a photo of yours is identical to the source material, because Apple has spoken of this tech allowing them to match a cropped photo, or one that was changed from color to black and white, etc.) They're talking about all of this working with iCloud Photos, yet this hash table and the initial check happens locally on the phone itself. So that would imply they're flagging/tagging the suspect photos on your phone but doing a human verification and the reporting to authorities based on the flagged/tagged versions of the photos getting uploaded to the cloud.

- The above makes me question things like what performance impact it might have on my phone, for starters? Another question is if it incorrectly flags any of my local photos as matches, are those originals now edited with that "watermark" or "branding"? What impact could that have if the photos get exported for use in other apps, etc.?

- In a purely practical sense, this doesn't sound very effective? The people involved in this illegal activity will surely be among the first to know Apple has done this, so they'll avoid keeping any photo collections of existing content on their devices. (Remember, this tech won't do a thing to flag a new photo they take of a naked kid. It's only going to know about relatively old and shared-around content that they were able to add to the hash database.) This will just encourage them to save such photos in other formats that are far less obvious than sitting in an iPhoto gallery.....

- Lastly (and this point is, clearly, more controversial)? I'm not even sure it's a good way to go about prosecuting pedophiles to focus on shared photo collections of that kind of content? A number of studies have indicated the people able to relatively freely look at that stuff are more often satisfied by it, preventing them from assaulting a kid themselves. (This behavior isn't something we've been able to train people to change.) I'd view possession of it as more of an indirect clue the individual might be doing other illegal things than a major crime in and of itself.
 
He definitely seems to skirt most of those questions. Which is interesting. I still trust Apple to do the right thing for privacy, and I’m glad it’s checking hashes and not doing image scanning… but we’re definitely in troubled waters with the mob mentality going around tech forums and the general media right now. Going to be an interesting next few weeks/months.

Yah, I'm not convinced. I still trust Apple (innocent until proven guilty), but the description of their system suggests that they do scan the photos (on-device) for "features". They are not using the CSAM hashes directly, but a variant (i.e. proprietary technology) that allows for image manipulations to still result in a match. Translation: any manipulation of an image would result in a different hash in the CSAM list, but the same hash in Apple's list.

Once this feature goes live, will everyone's existing iCloud Photo libraries be scanned, or just newly-added photos that originate at the device? Given their explanation so far, it sounds like the encrypted device is part of the equation, so any existing photo libraries won't result in a mass of arrests around the US. Darn! 🤣
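For what it's worth, here's a rough toy sketch of how a "perceptual" hash can survive an edit that would completely change an ordinary cryptographic hash. This is just the general idea (a classic "difference hash"); Apple's NeuralHash is a much more sophisticated, machine-learning-based system, and nothing below is Apple's actual code.

```python
# Illustrative toy only: a classic "difference hash" (dHash). The hash encodes
# the image's coarse structure (which pixels are brighter than their right-hand
# neighbour), not its exact bytes, so mild edits leave it unchanged.
import hashlib

def dhash(pixels):
    """pixels: 8 rows of 9 grayscale values -> 64-bit perceptual hash."""
    bits = ""
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits += "1" if left > right else "0"
    return hex(int(bits, 2))

# A tiny synthetic "image": a bright square on a dark background.
img = [[200 if 2 <= r <= 5 and 2 <= c <= 6 else 30 for c in range(9)]
       for r in range(8)]
# The "same" picture, slightly brightened (as a filter or re-encode might do).
brighter = [[p + 20 for p in row] for row in img]

flatten = lambda im: bytes(v for row in im for v in row)
print(dhash(img) == dhash(brighter))          # True: the perceptual hash survives the edit
print(hashlib.sha256(flatten(img)).hexdigest()
      == hashlib.sha256(flatten(brighter)).hexdigest())  # False: an exact hash does not
```

In other words, "same hash" in a scheme like this means "visually the same picture", not "byte-for-byte the same file".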
 
It begins with child protection, then porn will be banned, then something else, whatever any government demands to seek or ban. And finally, personal computers will no longer be personal.

But the CSAM detection system is very poor at banning a general category of pictures.

The new AI stuff in Messages is perfect for this, though. And yet, most of the issue people have is with the CSAM detection system.
 
Is this how this technology/process works, in a nutshell?

A totalitarian government produces and spreads 5 anti-government memes (as bait).

After a few weeks they tell Apple to add 5 new hashes to the search and report back on anyone with more than 1 anti-government meme on their phone.

Someone sent you one of the memes, so your account gets flagged.

If you’re lucky, the auditor at Apple agrees with your political views and reports it as a “mistake”.

If you’re not so lucky, the auditor is a big fan of the government and reports your thought crime to the authorities.

You get a knock on the door and a free trip to a labor camp.

That’s it?
 
Bottom-line: They are scanning your iPhone. Whatever you store in iCloud.

You don't know that for sure. Apple is describing it as "Before an image is stored in iCloud Photos, an on-device matching process is performed [...]"

It could mean it only scans the photo just a few seconds or minutes before being uploaded.

What happens if iCloud Photo Library is off? AFAIK, no statement from Apple is explicitly saying that they will scan the photo anyway and create a security voucher but it will not be uploaded.

We just don't know what the exact implementation will be.
 
Is this how this technology/process works, in a nutshell?

A totalitarian government produces and spreads 5 anti-government memes (as bait).

After a few weeks they tell Apple to add 5 new hashes to the search and report back on anyone with more than 1 anti-government meme on their phone.

Someone sent you one of the memes, so your account gets flagged.

If you’re lucky, the auditor at Apple agrees with your political views and reports it as a “mistake”.

If you’re not so lucky, the auditor is a big fan of the government and reports your thought crime to the authorities.

You get a knock on the door and a free trip to a labor camp.

That’s it?

It's not stored in the Photo Library if you just share with Messages or something else. It's just stored in Messages or whatever other app you used to receive those memes.

But why can't such a powerful government do even more? Say every photo is uploaded to government servers unencrypted?

Why use the cumbersome CSAM detection system?
 
It still puts Apple where they must not be. Investigating is done with warrants.
I am sure Apple was given a choice: do this, the way you like it, or we will make a law that forces you to do it, in a way you don’t like.

But Apple isn't doing anything like a legal investigation, just the looking-into-a-matter type of investigation that companies do all the time in many areas.
 
I'm still quite confused why they chose the on-device implementation instead of a server-sided one. They must have been aware that the former would cross the line for a lot more people than the latter.

By doing it locally they don't have to break encryption. It also makes it easier to offer end-to-end encryption for iCloud Photo Library since they can say that they scan for CSAM.
 
The new AI stuff in Messages is perfect for this, though. And yet, most of the issue people have is with the CSAM detection system.

Issues with both, actually.

Absolutely, 100%, GTFO out of my Messages - for damned sure.
Zero chance that stays as "just for parental controls"

This is all such an incredible script flip from Apple and their marketing the last few years.

I'm simply blown away by how complicit so much of their customer base seems to be.
 
"if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity."

How exactly are they going to do this part? Are they just going to manually compare hash numbers? How is manually comparing hash numbers going to "make sure it is a correct match"?

Surely they will not be looking at actual photos of CSAM material.

When a photo is labeled as CSAM, the system will also create a small, less detailed version of the image from the local copy and include it with the safety voucher.
 
What happens if iCloud Photo Library is off? AFAIK, no statement from Apple is explicitly saying that they will scan the photo anyway and create a security voucher but it will not be uploaded.

This was addressed in the article explicitly with a quote from Erik Neuenschwander:

And the last point that I would just add is that it does still preserve user choice, if a user does not like this kind of functionality, they can choose not to use iCloud Photos and if iCloud Photos is not enabled, no part of the system is functional.
 
Correct. So if Apple finds only one CSAM photo, it gets uploaded to iCloud Photos. They will tolerate one piece of child porn per account, that is their actual stated position. Because engineering reasons.

All photos will be uploaded to iCloud if you have iCloud Photo Library turned on. Same as today.
The new thing is that if a hash of the photo (short string of characters) matches a CSAM hash (another short string of characters), a safety voucher will be created and also uploaded.

Those safety vouchers can't be read by Apple or by you. They can't be counted (in a meaningful way) by Apple. Only when there are enough safety vouchers in iCloud can Apple read them and check whether the photos are indeed CSAM material.

So what is the threshold? We don't know. I would guess it's bigger than 10, maybe as large as 50.
 
First scanning pictures, next scanning texts, web searches, any activity Apple deems inappropriate at that time. Like the abusive captor telling you it's for your own good as he beats you.

You do know that every major browser checks every URL you browse against a list of known bad URLs, usually provided by Google?

If Google determines the URL's content is harmful, your browser won't go there, or at least shows a stern warning.
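For the curious, the general idea behind that URL checking can be sketched roughly like this. It is heavily simplified and the names are made up; the real Safe Browsing protocol canonicalizes URLs, checks several host/path expressions, and asks Google's servers for full hashes before showing any warning.

```python
# A heavily simplified sketch of the hash-prefix idea behind URL blocklists.
import hashlib

def url_prefix(url, length=4):
    """First `length` bytes of the SHA-256 hash of the URL."""
    return hashlib.sha256(url.encode()).digest()[:length]

# The browser stores only short prefixes of known-bad URLs, not the URLs themselves.
bad_prefixes = {url_prefix("http://malware.example/payload")}

def looks_suspicious(url):
    # A prefix match is not proof; the browser would fetch the full hashes
    # behind this prefix before actually blocking or warning.
    return url_prefix(url) in bad_prefixes

print(looks_suspicious("http://malware.example/payload"))  # True -> needs a full-hash check
print(looks_suspicious("https://www.apple.com/"))          # almost certainly False
```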
 
So when iOS 15 is released to the world, does the US get a different version of the OS to enable this scanning? Or does every iOS 15 phone worldwide start getting scanned against US-set hashes? What happens when this is expanded to other countries? Do those phones then get a local hash set, or does the world stay pegged to the US hash set? Would people in the US accept a hash database originating from a country like Iran or China? Why should other countries accept a US hash database? If it's a local database, how much does it diverge from the US hash database?

I'm being deliberately obtuse here, but this is the Pandora's box that is being opened.
 
- The above makes me question things like what performance impact it might have on my phone, for starters? Another question is if it incorrectly flags any of my local photos as matches, are those originals now edited with that "watermark" or "branding"? What impact could that have if the photos get exported for use in other apps, etc.?

I sincerely doubt this is going to have any noticeable performance impact, especially since it's not even scanning everything on your phone but just your iCloud-synced photos. And my understanding is that it's a "safety voucher" being created if a matched image is on your phone. Nothing is done to the actual image itself. And based on the accuracy of the system as described by Apple, I highly doubt that false flags are going to be a common issue.

In a purely practical sense, this doesn't sound very effective? The people involved in this illegal activity will surely be among the first to know Apple has done this, so they'll avoid keeping any photo collections of existing content on their devices. (Remember, this tech won't do a thing to flag a new photo they take of a naked kid. It's only going to know about relatively old and shared-around content that they were able to add to the hash database.) This will just encourage them to save such photos in other formats that are far less obvious than sitting in an iPhoto gallery.....

No, they can keep all the CSAM on their phone as they please (to be blunt), as long as they don't upload it to iCloud. The whole point of CSAM detection (whether scanned on the device or on the cloud) is to keep people from using iCloud as an archival or distribution tool for illegal imagery. ALL that's happening now is that the scanning process is more private.

Lastly (and this point is, clearly, more controversial)? I'm not even sure it's a good way to go about prosecuting pedophiles to focus on shared photo collections of that kind of content? A number of studies have indicated the people able to relatively freely look at that stuff are more often satisfied by it, preventing them from assaulting a kid themselves. (This behavior isn't something we've been able to train people to change.) I'd view possession of it as more of an indirect clue the individual might be doing other illegal things than a major crime in and of itself.

You've probably heard the counter-argument to that, though - that those people are fueling the demand for CSAM and thus the producers have motivation to continue increasing the supply, which involves abuse/exploitation.
 
Have no idea what these hashtags are or how it works but it feels gross thinking that my new OS may contain the data for a database of child porn to scan against. Just weird.

A hash is just a string of characters, usually not very long.

SHA-256 hash of apple: 3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b
SHA-256 hash of Apple: f223faa96f22916294922b171a2696d868fd1f9129302eb41a45b2a2ea2ebbfd

Changing one character to uppercase gives a completely different hash.

SHA-256 of "A hash is just a string of characters usually not very long":
e0a915f1b0a5f32f2cf5421f95f31fe5c18730090bec088e1c1be75e601803c1

SHA-256 of "A hash is just a string of Characters usually not very long":
e4660423e96cd42b6b404953d2add783aa2094d5e25d5957d1d6cf2a1a5fe4eb

Changing one character to uppercase in a sentence also produces something quite different.

A hash of a photo will look similar to these.
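If you want to see this yourself, the values above come straight from a standard SHA-256 implementation; assuming the exact strings shown (no surrounding quotes, no trailing newline), a few lines of Python should reproduce them:

```python
# Reproducing the hashes above with the standard library (the exact strings
# matter; any extra punctuation or whitespace changes the output completely).
import hashlib

for text in ("apple",
             "Apple",
             "A hash is just a string of characters usually not very long",
             "A hash is just a string of Characters usually not very long"):
    print(text, "->", hashlib.sha256(text.encode()).hexdigest())
```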
 
I think the more important question is the odds of a hash collision or coincidence. Or in other words, what are the odds that an innocent photo could be falsely flagged?

I know some about hashing, but not enough to answer that question. I realize the chances are low, but since there are 113 million Apple users, if the chances are only "one in a million," that means Apple will "only" accuse 113 innocent people of a heinous crime. (Actually more than 113 because it's really based on the number of photos, not the number of customers.)

However, "one is a million" is not based on anything. If someone has a real number, I'd like to hear it.
 
This is a contradiction. You cannot know what something is and also not know. It can't be both my private data and searchable. It's not a massive achievement; it's hand-waving and side talk.

With private set intersection and threshold secret sharing, you can keep things secret from Apple until a threshold is reached. If the threshold is never reached, Apple can't read the data even if it is in its possession.
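Here's a minimal textbook sketch of the "threshold" half of that (Shamir secret sharing), just to show how "Apple holds the data but can't read it below a threshold" is possible in principle. This is a classroom toy with made-up parameters, not Apple's actual construction (which also combines it with private set intersection):

```python
# Minimal textbook Shamir secret sharing: fewer shares than the threshold
# reveal essentially nothing; at the threshold the secret can be reconstructed.
# A toy demo only; the threshold and share counts below are made up.
import random

PRIME = 2**127 - 1  # a prime field large enough for a demo

def make_shares(secret, threshold, count):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789                                   # stand-in for a voucher key
shares = make_shares(key, threshold=3, count=10)  # e.g. one share per matched photo
print(reconstruct(shares[:3]) == key)             # True: threshold reached
print(reconstruct(shares[:2]) == key)             # False: below threshold, nothing useful
```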
 