Human review will filter out the bugs and get the actual pedos.
Stop with this “cops at your door” nonsense.
You need multiple offenses to even raise a red flag for human review.
These reactions show that people are really bad at statistics; the chance of MULTIPLE false matches from the same user is infinitesimal.

I’ll continue with the “cops at your door” argument all I want, because the facts are the facts. There will be bugs, and human review sure helped stop post-9/11 privacy breaches by US officials… oh wait… it didn’t…
 
What is child porn? Can you define it?

Is taking a picture of your child playing in a bathtub child porn?

Is taking a picture of your child’s private parts to send it to the doctor (because there’s a problem) child porn?

Will a doctor now be considered a serial child porn dealer?

You see how this becomes a problem very quickly?

Apple wouldn’t unlock the iPhone of a terrorist who killed scores of people, but they would now spy on hundreds of millions of people to try to find child porn on their Apple devices?

AI won’t be able to properly identify a picture as child porn. It will flag a bunch of suspicious pictures, and then a HUMAN will be looking at your private pictures to decide whether or not to report you to the police. That is the ultimate Big Brother.

Fortunately, Windows 11 is a nice OS, and as soon as it runs on ARM, and there’s a decent non-Apple ARM laptop, I’m gone - after being an Apple fan for two decades. My next smartphone will also not be an iPhone. Until then, no more OS upgrades for me.

Now I just need to figure out the best time to divest my investments from AAPL. I already reduced them from several million to just over a million, but it’s now time to completely divest.
Disclaimer: Personally, I am not aligned with Apple's decision, and I clearly recognize the privacy issue and the potential for misuse.
End of Disclaimer.

I understand the sentiment - which I share to a large degree - but the technology Apple put in place for this is, at least from a design and cryptology point of view, very sound:

(1) User images will be compared against a database of known CP pictures. An Apple-trained neural network creates a signature ('a hash') of the user image, which is then compared to signatures (which have been further obfuscated) from the known CP material.
(2) Each image uploaded to iCloud is accompanied by a safety voucher that's doubly encrypted: only vouchers with a known CP match can have their outer layer decrypted. Inside the outer layer is a partial key to decrypt the inner layer, which contains the user's encrypted image signature.
(3) Only once a threshold (unknown) number of CP-matching vouchers have been amassed for images uploaded to iCloud can Apple decrypt any of the inner layers and gain access to the signatures allegedly matching CP to start their manual review.

Very, very elegant solution IMHO for a decentralized algorithm. It reveals nothing to the client beyond the size of the CP database; only known CP material will be tested against; it's quite difficult for a malicious third party to meaningfully tamper with the on-device CP-signatures database; Apple cannot access signatures from non-CP-matching images; and the partial key plus the threshold further enhance user safety against random false positives.
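To make the threshold idea in (3) concrete, here's a minimal Python sketch of just the threshold part, using Shamir secret sharing: each CP-matching voucher carries one share of an account-level key, and only once enough matching vouchers have arrived can the server reconstruct that key and open the inner layers. This is an illustration of the principle only; Apple's real protocol combines this with NeuralHash and private set intersection, the actual threshold is not public, and none of the names or numbers below are Apple's.

```python
# Toy threshold secret sharing (Shamir) to illustrate the "threshold of matching
# vouchers" idea. Parameters are made up; this is not Apple's implementation.
import secrets

PRIME = 2**127 - 1        # prime field for the toy scheme
THRESHOLD = 3             # pretend the (undisclosed) review threshold is 3

def make_shares(secret: int, n_shares: int, threshold: int):
    """Split `secret` into n_shares; any `threshold` of them reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from >= threshold shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Device side: an account-level key, one share embedded in each CP-matching voucher.
account_key = secrets.randbelow(PRIME)
voucher_shares = make_shares(account_key, n_shares=10, threshold=THRESHOLD)

# Server side: below the threshold the shares are useless; once the threshold is
# reached, the inner voucher layers can finally be decrypted for human review.
assert reconstruct(voucher_shares[:THRESHOLD]) == account_key
print("threshold reached: inner voucher layers can now be opened for review")
```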

Further technical details and articles are available from Apple online.

For Apple's iMessage child-protection feature, we're certainly looking at a neural network (or similar) trained to recognize general nudity, a scan that acts on-device at the decrypted endpoint. But we can't investigate this further, as Apple has released no details as of yet.
 
Correct me if I’m wrong, but this will only trigger if someone
1. saves already existing CP material in their photo library. Does one just keep it in their “hidden” album? The one everybody has access to and will look inside if you hand them your phone and look away for 5 seconds?
2. has new CP material that they save in their library, which gets shared around and eventually flagged as CP.
 
Correct me if I’m wrong, but this will only trigger if someone
1. saves already existing CP material in their photo library. Does one just keep it in their “hidden” album? The one everybody has access to and will look inside if you hand them your phone and look away for 5 seconds?
2. has new CP material that they save in their library, which gets shared around and eventually flagged as CP.
Almost correct:

re 1: it will only trigger a CP-matching safety voucher being sent to Apple once you upload the potentially infringing image to iCloud. Local images are not - per Apple's current admission - scanned at all.

re 2: that would certainly be a scenario for anyone possessing and sharing such material, if they later uploaded it to iCloud and the on-device database of signatures for known CP material had been updated accordingly.
 
Human review will filter out the bugs and get the actual pedos.
Stop with this “cops at your door” nonsense.
You need multiple offenses to even raise a red flag for human review.
These reactions show that people are really bad at statistics; the chance of MULTIPLE false matches from the same user is infinitesimal.
So I should let the cops search my home without a warrant 20 times, but they only actually arrest me if they find something 11 out of those 20 times?

Most of us aren't worried about being arrested; we're worried about the searching. This feature essentially deputizes Apple, who can get away with the search as part of a EULA or ToS. This warrantless search, performed by the company who made my phone, can find evidence that can be used against me. Now it's for CSAM, so I'm safe. But what's next? It's a can of worms.
 
Anyone know how this will play with the EU's GDPR?
Difficult. While they can ask for your consent to scan - which is probably in line with GDPR - they probably fail to meet GDPR transparency requirements.
That is, they have to tell you what data they have collected about you if you request it. GDPR also says it is mandatory that they delete all data collected about you if you request it.
From what I know so far they do not offer any of this. Therefore I’d assume it does not play well with GDPR.

But then again, nobody really cares. It’s dead legislation to some degree.
 
So I should let the cops search my home without a warrant 20 times, but they only actually arrest me if they find something 11 out of those 20 times?

Most of us aren't worried about being arrested; we're worried about the searching. This feature essentially deputizes Apple, who can get away with the search as part of a EULA or ToS. This warrantless search, performed by the company who made my phone, can find evidence that can be used against me. Now it's for CSAM, so I'm safe. But what's next? It's a can of worms.

Should we ban explosive sniffers at airports?

Automated-anonymous-until-positive mass-search is sometimes acceptable.

Technology is allowing stuff that wasn’t even imaginable decades ago to tackle some long standing problems (like child abuse).

It’s up to societies to decide (by backlash, uproar and legislation) whether something should be done just because it can be done, of course. But there’s no silver bullet, no 100% wrong or 100% right, no line in the sand (like this, to me, completely moot uproar about local pre-labeling of CSAM you already agreed to upload to the cloud vs. the industry-standard mass search for CSAM on data that’s already in the cloud).

Apple sniffs pictures when they’re already in the boarding area of the airport, just before they board the plane. It’s not a house search. If Apple ever does it to my offline-for-real data (not destined to be uploaded), I will be the first to march with pitchforks to the nearest Apple Store. The mere fact that this can be done doesn’t mean they will do it, or that it is any more probable that they will.
 
I’ve nothing to hide, but this just doesn’t seem right to me.

I’m not updating any existing device to iOS 15 until this roll-out is stopped. I don’t want my photos scanned and I don’t want it to happen to my children’s messages. I ensure my children are safe myself. There’s a level of trust, and these sorts of forced policies just don’t agree with me.
I admit I have not read most of this comment thread so it's probable this has been addressed already, but no one's photo libraries are being scanned, if by "scanned" you mean evaluated for content. Photos ARE being scanned in Messages, IF the phone's user is 12 or under AND IF they are part of an iCloud Family Plan AND IF the parents opt in.
 
Apple wouldn’t unlock the iPhone of a terrorist who killed scores of people, but they would now spy on hundreds of millions of people to try to find child porn on their Apple devices?
This is not at all an accurate description of the feature.

(It DOES sort of describe the Messages feature, but that is only turned on IF the user is 12 or younger AND IF they are a part of an iCloud Family Plan AND IF the parents explicitly opt in. Even then it's done on-device, and the warning is sent to the parent and not Apple. Since iMessage is end-to-end encrypted, Apple couldn't even see the warning.)

What is happening is that a photo you upload to iCloud Photo Library is (on the device, not on the server) having its digital fingerprint compared to a list of digital fingerprints of known child pornography images. The fingerprint itself couldn't be used to recreate the photo. If you are really concerned, you should turn off iCloud Photo Library; then it won't even be happening at all.
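To make "digital fingerprint" a bit more concrete, here is a toy perceptual hash (a "difference hash") in Python. It is emphatically not Apple's NeuralHash (which is a learned model), and in the real system the comparison happens against a blinded database rather than a plain set, but it shows the relevant properties: a near-duplicate lands on (almost) the same 64-bit fingerprint, an unrelated image doesn't, and 64 bits obviously can't be used to reconstruct the photo.

```python
# Toy "difference hash" (dHash) sketch of fingerprint matching. Illustration only;
# Apple uses NeuralHash plus a blinded/private matching protocol, not this.
from PIL import Image  # pip install Pillow

def dhash(img: Image.Image, size: int = 8) -> int:
    """64-bit fingerprint: compare neighbouring pixel brightness in a 9x8 thumbnail."""
    small = img.convert("L").resize((size + 1, size))
    px = list(small.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def gradient(offset: int = 0, reverse: bool = False, side: int = 64) -> Image.Image:
    """Synthetic test images so the example runs without any real photo files."""
    values = [min(x + y + offset, 255) for y in range(side) for x in range(side)]
    if reverse:
        values = [255 - v for v in values]
    img = Image.new("L", (side, side))
    img.putdata(values)
    return img

known_fingerprints = {dhash(gradient())}   # stands in for the on-device list of known hashes
near_copy = gradient(offset=5)             # e.g. a slightly brightened re-save of the same image
unrelated = gradient(reverse=True)         # a genuinely different picture

for name, img in [("near copy", near_copy), ("unrelated", unrelated)]:
    dist = min(hamming(dhash(img), k) for k in known_fingerprints)
    print(f"{name}: hamming distance {dist} -> {'match' if dist <= 4 else 'no match'}")
```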
 
Lots of folk have been saying that Google does the same thing?

But I can’t find a link to any article that says Google is carrying out CSAM checks on-device. From what I understand, everyone else seems to carry out the checks on the server.

On-server warrantless mass search is worse, because it means your pics are decrypted on the server at some point.

On-device-but-only-for-data-you-already-agreed-to-upload is, for all intents and purposes, the same as on-server privacy-wise, just more respectful of non-pedos (since only positive matches above a certain threshold will actually be sent to Apple unencrypted for human review).
 
Taken from El Reg: "Governments in the West and authoritarian regions alike will be delighted by this initiative, Green feared. What's to stop China (or some other censorious regime such as Russia or the UK) from feeding images of wanted fugitives into this technology and using that to physically locate them?

"That is the horror scenario of this technology," said Green. "Apple is the only service that still operates a major E2EE service in China, in iMessage. With this technology public, will China demand that Apple add scanning capability to iMessage? I don't know. But I'm sure a lot more worried about it than I was two days ago."

According to Green, who said he had spoken to people who had been briefed about the scheme, the scanning tech will be implemented in a "two party" design. As he explained it: "Apple will hold the unencrypted database of photos (really the training data for the neural matching function) and your phone will hold the photos themselves. The two will communicate to scan the photos on your phone. Alerts will be sent to Apple if *multiple* photos in your library match, it can't just be a single one."

The privacy-busting scanning tech will be deployed against America-based iThing users first, with the idea being to gradually expand it around the world as time passes. Green said it would be initially deployed against photos backed up in iCloud before expanding to full handset scanning.

If this is the future of using Apple devices, it might not only be sex offenders who question Apple's previously-stated commitment to protecting user privacy."

The feature isn't privacy-busting, as I understand it. They aren't using AI to scan your images.
 
The feature isn't privacy-busting, as I understand it. They aren't using AI to scan your images.

Unfortunately people are conflating 2 completely separate features
1) CSAM hash matching for soon-to-be-uploaded iCloud Photos
2) AI guessing of nudes in iMessage on child iPhones

The latter is completely optional, like any parental lock, and only that feature involves AI “looking” at local pictures (and telling the parents about it, not Apple or the authorities).

Also people fail to understand that once the OS is booted and logged in, local data is decrypted in order for the user to use it, including a picture on the receiving end of an iMessage chat. So they’re weirded out that on-device AI can look at received iMessage pics.
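To make the distinction concrete, here's a deliberately crude Python sketch of the two separate paths. Every name here is made up (the hash function is a stand-in, not NeuralHash; the nudity classifier is a stub); it only illustrates that the pipelines don't overlap and that the Messages warning goes to parents, not Apple.

```python
# Illustrative only: none of these names or structures come from Apple's APIs.
import hashlib

def toy_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as NeuralHash (which this is NOT);
    # an exact digest is enough to show the control flow.
    return hashlib.sha256(image_bytes).hexdigest()

KNOWN_CSAM_HASHES = {toy_hash(b"placeholder-known-bad-image")}   # blinded in the real system

def on_icloud_photo_upload(image_bytes: bytes) -> dict:
    """Feature 1: hash matching, applied only to photos queued for iCloud upload.
    A single voucher tells Apple nothing; the threshold logic lives server-side."""
    matched = toy_hash(image_bytes) in KNOWN_CSAM_HASHES
    return {"photo": image_bytes, "safety_voucher": {"match": matched}}   # encrypted in reality

def looks_like_nudity(image_bytes: bytes) -> bool:
    """Stub for the on-device Messages classifier; always 'no' in this sketch."""
    return False

def on_imessage_receive(image_bytes: bytes, user_is_child: bool, parents_opted_in: bool) -> None:
    """Feature 2: optional parental control. The warning goes to the parents, not Apple."""
    if user_is_child and parents_opted_in and looks_like_nudity(image_bytes):
        print("blur the image, warn the child, notify the parents; nothing goes to Apple")

# The two paths never interact: disabling iCloud Photos switches off the first,
# and the second only exists on opted-in child accounts.
print(on_icloud_photo_upload(b"holiday-photo")["safety_voucher"])   # {'match': False}
on_imessage_receive(b"received-picture", user_is_child=True, parents_opted_in=True)
```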
 
Understood, as I have stated in previous posts. I was also responding to the question/statement asking where privacy was established as a right.

But what if the government compels these tech companies to do this (else be regulated or whatever other bureaucratic shenanigans)? That is, a private company can become a “state actor” for Fourth Amendment purposes if the search occurred at the government’s behest, not because the private company chose to do it. In that case, the Fourth Amendment’s warrant requirement kicks in; if there was no warrant for the search, that makes the search “unreasonable.” It’s a Fourth Amendment violation.

So yes, Apple is seemingly doing this of their own will. Yes, we are bound by the terms and conditions. That doesn’t mean we can’t have opinions about it or point out that it could go bad at some point. It’s not far-fetched. I’d say Apple’s temporarily restricting this to the US acknowledges this to a degree.
From a Fourth Amendment perspective, the starting point is always whether the challenged conduct is governmental or not. You are certainly correct that the government could not use a private entity as an agent to accomplish what it alone could not, and that a government-directed warrantless search by a private company would be impermissible. But the government would have to be closely involved in the search itself for that to be the case (e.g., by directing it, paying someone to do it, etc.). If, on the other hand, a private company conducts the challenged search of its own accord, then provides the results to the government without involving the government in the search, there is no Fourth Amendment issue, even if the result to the person being searched is essentially the same.

If this works the way Apple says it will (and assuming for the sake of argument that this is a "search," which is not obviously the case within the meaning of the Fourth Amendment), it -- not the government -- will be conducting the "search," and there is no Fourth Amendment issue. Of course, different facts could yield a different result.

Apple also can rely on another prong of Fourth Amendment law, which is that its protections apply only where there is a reasonable expectation of privacy. The law is quite clear that an individual has the ability to waive that expectation, and can do so by contract. Here, Apple almost certainly will require iCloud users to agree that there is no expectation of privacy with respect to hashed, encrypted information -- at least where the hashes match those in the database Apple will be using. If we agree that Apple can do this (e.g., by continuing to use iCloud once informed that this is occurring), we effectively will be saying that we do not expect that this hashed information is private. If so, it's fair game under the Fourth Amendment.

None of the above is intended as a defense of what Apple is doing, and I haven't yet formed an opinion on whether this is a good idea or not. (I tend to think it is not.) But Apple appears to be on solid footing from a Fourth Amendment perspective.
 
Can you point to any law that directly states privacy is a right?
If it wasn't, how come you call the police when I walk into your house? I don't steal anything, I just watch and observe what you do. Just that! 24/7.

It's your house? Well, it's your iPhone too. I'm just watching you, just that; where's your problem?
 
Unfortunately people are conflating 2 completely separate features
1) CSAM hash matching for soon-to-be-uploaded iCloud Photos
2) AI guessing of nudes in iMessage on child iPhones

The latter is completely optional, like any parental lock, and only that feature involves AI “looking” at local pictures (and telling the parents about it, not Apple or the authorities).

Also people fail to understand that once the OS is booted and logged in, local data is decrypted in order for the user to use it, including a picture on the receiving end of an iMessage chat. So they’re weirded out that on-device AI can look at received iMessage pics.

The semantics of the “how” don’t really matter here. The point is that a supposedly privacy-oriented company just openly said that they will process your images in the cloud for a purpose other than what users expected. That is the epitome of a contradiction, which further tarnishes whatever privacy pitch they have.
 
The semantics of the “how” don’t really matter here. The point is that a supposedly privacy-oriented company just openly said that they will process your images in the cloud for a purpose other than what users expected. That is the epitome of a contradiction, which further tarnishes whatever privacy pitch they have.

Nope, they won’t process your images in the cloud.

They will pre-label your images BEFORE they leave your iPhone in order to actually process as FEW AS POSSIBLE of them in the cloud and still comply with US child-abuse due-diligence requirements for cloud hosts.

Last year Apple reported 256 CSAM cases to authorities.
Facebook reported 20M. There’s gotta be at least some false positives in those 20M.
Apple is not trigger-happy when it comes to reporting you.

Pre-labelling locally on device is far better privacy-wise than processing-in-the-cloud.
And yes, it’s important to stress this part about the “how”: Apple is doing this to pics you ALREADY agreed to UPLOAD to the cloud. So it’s like they’re in the cloud already, for all intents and purposes.
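As a back-of-envelope illustration of why the match threshold matters for false positives: if false matches were rare and independent, the chance of one innocent account racking up many of them collapses extremely fast. All numbers below are assumptions made up for the arithmetic; Apple has not published its per-image false-match rate or its threshold.

```python
# Back-of-envelope only: per-image false-match rate, library size and threshold
# are assumed values, since Apple has published none of them.
from math import comb

p = 1e-6          # assumed chance that one innocent photo falsely matches the hash list
n = 20_000        # assumed number of photos one user uploads
threshold = 10    # assumed number of matches required before any human review

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(at least k false matches among n independent photos), summed from the binomial upper tail."""
    total, i = 0.0, k
    while i <= n:
        term = comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
        total += term
        if i > k and term < total * 1e-15:   # remaining terms are negligible
            break
        i += 1
    return total

print(f"P(at least 1 false match):  {prob_at_least(1, n, p):.2e}")                      # ~2e-02
print(f"P(at least {threshold} false matches): {prob_at_least(threshold, n, p):.2e}")   # astronomically small
```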
 
On-server warrantless mass search is worse, because it means your pics are decrypted on the server at some point.

On-device-but-only-for-data-you-already-agreed-to-upload is, for all intents and purposes, the same as on-server privacy-wise, just more respectful of non-pedos (since only positive matches above a certain threshold will actually be sent to Apple unencrypted for human review).

Well, I actually prefer the search on the server, to be honest. For one thing, it means I don't have a lump of spyware running on my phone that I didn't install myself and that hasn't been reviewed by anyone apart from the company that gave you software that crashed your phone when it received a dodgy string in a text message.

The other point is that Apple has been scanning photos on the server for quite some time anyway.


So the question is, why did they decide to put the scan on the device? My guess is that the server-side searches, like those run by Google, were a bit too private for whoever is holding Apple's leash: Google's users have the opportunity to encrypt the photos they send to the cloud. Running the scan on the device makes it much harder to encrypt them before they are scanned and the match data sent to Apple's server. This is why I was curious as to what Google was doing. I still don't know if they are running a device-side scan.

And this is really worrying:

Apple did admit that there is no silver bullet answer as it relates to the potential of the system being abused, but the company said it is committed to using the system solely for known CSAM imagery detection.

Apple is loading hashes onto your phone, and Apple itself has no idea what images those hashes correspond to, so when they expand it to other countries they'll have no idea what they're searching for. Just tell Apple that it's a database of escaped prisoners, when the hashes are really pictures of dissidents or campaigners for gay rights. Hey presto, your totalitarian government now has a ready-made surveillance network.
 
This system is ripe for abuse and privacy creep over time.

Anyone it would catch will just turn off iCloud Photos anyway, defeating the purpose.

Apple should admit that they made a mistake and cancel the rollout.

WhatsApp reported 400,000 cases last year, so it seems that a lot of people exchanging/collecting these kinds of photos don't know how to evade detection.

It's quite difficult to abuse the system, even by trying to add other kinds of pictures to the database, since almost every photo in the world would produce a different hash.
 
Well, I actually prefer the search on the server, to be honest. For one thing, it means I don't have a lump of spyware running on my phone that I didn't install myself and that hasn't been reviewed by anyone apart from the company that gave you software that crashed your phone when it received a dodgy string in a text message.

The other point is that Apple has been scanning photos on the server for quite some time anyway.


So the question is, why did they decide to put the scan on the device? My guess is that the server-side searches run by Google were a bit too private for whoever is holding Apple's leash: Google's users have the opportunity to encrypt the photos they send to the cloud. Running the scan on the device makes it much harder to encrypt them before they are scanned and the match data sent to Apple's servers.

1) On-server searches by cloud hosts are as much of a black box for us users as on-device searches, so I'm not sure about your point about "hasn’t been reviewed by anyone"; the process doing it is just like any processing Apple already does on your Photo Library, and it's not spyware if they make you agree to it.

2) Common sense suggests that on-device pre-labeling is less invasive than straight on-server searches (since negative matches aren’t even “bothered” once they’re in the cloud); that would be enough of a “why”. Also, the very article you linked explains that these hash matches can nowadays sometimes be done on encrypted data as well (depending on the type of encryption), so I'm not sure what your point is about being able to encrypt before upload. The iPhone user already has one crucial defense against this: just disable iCloud Photos (formerly known as iCloud Photo Library) if you feel Apple is overreaching. If anything, this whole drama is giving us more awareness about all of this.
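On the point that hash matching can sometimes be done without either side handing over its raw data, here's a toy Python sketch of Diffie-Hellman-style blinded matching (a simple form of private set intersection). It is not hardened, the parameters are merely convenient, and it is not Apple's actual construction; it only shows the principle that two parties can discover common hashes while each list stays blinded to the other.

```python
# Toy DH-style private set intersection. Illustration only: not secure parameters,
# not Apple's protocol, and both "parties" live in one script for brevity.
import hashlib
import secrets

P = 2**255 - 19   # a convenient large prime (the Curve25519 prime), toy use only

def to_group(item: str) -> int:
    """Hash an item (e.g. an image-hash string) into the group."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

server_secret = secrets.randbelow(P - 2) + 1   # server-side blinding exponent
device_secret = secrets.randbelow(P - 2) + 1   # device-side blinding exponent

server_list = {"hash-aaa", "hash-bbb"}         # placeholder "known CSAM" hashes
device_list = {"hash-bbb", "hash-ccc"}         # placeholder hashes of the user's uploads

# Server ships a once-blinded copy of its list; the device can't unblind it.
server_blinded = {pow(to_group(x), server_secret, P) for x in server_list}

# Device blinds the server's values again with its own secret...
double_from_server = {pow(v, device_secret, P) for v in server_blinded}

# ...and blinds its own hashes once, which the server then blinds with its secret.
device_blinded = {x: pow(to_group(x), device_secret, P) for x in device_list}
double_from_device = {x: pow(v, server_secret, P) for x, v in device_blinded.items()}

# Exponentiation commutes, so only genuinely shared hashes end up equal.
matches = {x for x, v in double_from_device.items() if v in double_from_server}
print(matches)   # {'hash-bbb'}
```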
 
Can't wait until they roll out updates to scan for gay porn in Saudi Arabia.
Or Anti-CCP material in China.

The system isn't good for anti-CCP material unless a lot of people share almost the exact same photo.

Let's say there is an illegal demonstration in China. Many of the demonstrators take pictures. An undercover police agent does the same.

They can't use the photos taken by the undercover agent to find the other photos. If they arrest some of the demonstrators and get access to their photos, they can't use those photos to match the photos taken by the demonstrators who weren't arrested.
 
Yes, I absolutely prefer that, because it leaves me a choice of not uploading anything or encrypting it first where possible. Once they start snooping through my data on my own device I will no longer be in control, and the end-to-end encryption for services like iMessage that they market so proudly will be a farce.

So how would you encrypt the iCloud Photo Library today?

I can't really think of an easy way to do it and still have all the features intact.
 