Well, as I have already posted, the proverbial cat is out of the bag... Now that Apple has announced the ability, governments are salivating at the prospect, albeit limited for now, of scanning personal devices.

Apple has admitted they "geeked more" and were able to create a way to peek into its phones: something TC used to stand on a soapbox and claim was impossible. Even if Apple doesn't move forward, governments will use existing laws that bar Apple from announcing the result to force Apple to implement the technology. For example, in the US it will be via NSLs (National Security Letters).

I have now changed my opinion: this is not the beginning of a slippery slope; we are now racing downhill at breakneck, runaway speed.
 
When some of the leading research figures in a specific field collaborate on a paper, it's pretty natural that a lot of the previous research cited will be their own. Some of the authors are quite well known and prolific in the field.



Which is pretty obvious, since the matter discussed is the hypothetical implementation of client-side scanning on electronic devices for law-enforcement purposes, and its risks. When treating something hypothetical it's pretty inevitable to make assumptions, and that's a perfectly fine practice as long as the assumptions are well reasoned and supported by compelling arguments.
So basically this conspiracy theory gets a pass because the social court has deemed it in keeping with their agenda.
 
This is really easy to defeat. The scanner computes a "fingerprint" or "hash" of the image, then compares it to a list of hashes. The solution is to modify every image on the iPhone so the hash is different. You can do this without making the picture look much different.

Someone needs to write a free, open-source image modifier that makes random changes to selected images. Once this thing is out in the wild and in use, Apple's idea fails to work.

Scanning only works because the same image is stored on many devices, but if each copy were changed in some random way, the hashing method fails.
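To make that fragility concrete, here is a toy Python sketch. SHA-256 stands in for whatever exact hash a naive scanner might use, and the `fingerprint` name and the byte-string "image" are made up for illustration, not Apple's actual pipeline:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact 'fingerprint' of an image's bytes, as a naive scanner might compute it."""
    return hashlib.sha256(data).hexdigest()

# Toy stand-in for raw image bytes.
image = bytes(range(256)) * 4

# Flip one low-order bit -- visually imperceptible in a real photo.
modified = bytearray(image)
modified[0] ^= 0x01

print(fingerprint(image) == fingerprint(bytes(modified)))  # False: the digests differ completely
```

Note, though, that a later reply in this thread makes the counterpoint: a perceptual hash like NeuralHash is designed precisely so that single-bit tweaks like this do not change the match result.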
 
So basically this conspiracy theory gets a pass because the social court has deemed it in keeping with their agenda.

It's a paper: if you disagree with it you are free to review it and refute it with sound arguments and reasoning. If you think the sources cited are flawed, explain why. If you think the assumptions made are flawed, explain why.
 
However, it is a system that, if operated honestly, will be effective.
No, I do not think that should be taken for granted. For one thing, the system is supposed to trigger only when there are around 30 pictures with hash matches, to reduce false positives. That means moderately careful CSAM-consumers are safe even when they occasionally do not pay attention and let such a picture into their iCloud. (How long until governments demand a lowering of the threshold? "Apple allows 29 CSAM-pics per user!" Bad PR.)
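For a rough sense of why a high threshold suppresses false positives, here is a back-of-the-envelope Poisson calculation in Python. The per-image false-match rate and library size below are invented for illustration; Apple has not published these exact figures:

```python
from math import exp, factorial

def poisson_tail(lam: float, t: int, terms: int = 60) -> float:
    """P(X >= t) for X ~ Poisson(lam), the standard approximation to a
    Binomial(n, p) count of false matches when n is large and p is small."""
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(t, t + terms))

p_false = 1e-6           # assumed chance one innocent photo matches a database hash
library = 20_000         # assumed number of photos in one user's library
lam = p_false * library  # expected number of false matches per library

print(poisson_tail(lam, 1))   # ~0.02: one stray false match per library is plausible
print(poisson_tail(lam, 30))  # astronomically small (< 1e-80) with threshold 30
```

Which is also why the pressure to lower the threshold matters: under this model the false-flag probability climbs steeply as the threshold drops.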

Then after triggering it is when things get really murky. Apple's reviewers are supposed to provide another safeguard against false positives, but they only get to see degraded variants of the user pics, and they do not get to see the CSAM-originals from the database. How will they decide when their review-version is not blatantly obvious? Are these pixels a teenager or a young adult? Discard, or forward to NCMEC and let them sort it out?

Which leads to NCMEC, the big unknown. A semi-private organization, shielded from transparency requirements and oversight, yet legally the sole US authority over its database of alleged CSAM pictures, and in practice the leading authority worldwide. To my knowledge there is no independent auditing regarding the "quality" of that database. However, when NCMEC gets pictures from companies, they decide whether to forward them to the relevant authorities, and from the Swiss federal police we know that 90% of these NCMEC notifications are not actually CSAM and are discarded. So NCMEC is apparently quite clueless about CSAM, which is not a good sign for their database. They just like to spam, spam, spam police agencies. The police have to review every such notification, tying up their resources. Investigating actual CSAM cases takes time, and arguably NCMEC's spam may hamper actual investigations.

Unfortunately NCMEC's spam is useful for agencies that demand further erosion of privacy. E.g. the German federal police regularly touts the inflated numbers from NCMEC as a reason why lawmakers should mandate generous data retention for ISPs. Basically: "We get so many reports, often ISPs have deleted critical data by the time we start investigating a case!"

TLDR: Even when ignoring concerns about privacy and future expansions, I remain unconvinced that the proposed system would not do more harm than good when it comes to fighting CSAM.
 
As for the "on device" part, the files that go to the cloud don't change, so it doesn't actually matter from a legal or rights perspective.
What it changes is that now they only need to be ordered to call the scan with a different path specified, which is much harder for Apple to claim is too hard/expensive.
So what kind of images could a government force into such a database to catch unwanted elements?

It would have to be images which are shared directly on a large number of devices to be useful.
How about inserting pictures of abused animals on farms that have been published online, to see who already has them? That would give a good idea of who was breaking ag-gag laws.
 
If one participates in these vile activities, I imagine they are smart enough to store material on hard drives or SD cards and only plug them into cameras/devices with no Bluetooth, Wi-Fi or mobile data antennas; plus, isn't the dark web where they lurk online? I call BS on CSAM. Governments want to remove our right to privacy. Turn the spotlight on your governments, as they have some nasty secrets that they don't want made common knowledge.
 
So another point here is that the solution is not only privacy-invasive but also ineffective.
After all, no one in their right mind would upload illegal pictures to cloud services. Supporters of the CSAM plan tend to ignore this.

Which I consider an effective result.

Apple can say that virtually no child pornography is stored in iCloud Photo Library, and they can offer end-to-end encryption.

Those of us who use iCloud Photo Library can pride ourselves that we are using a service free of such material.
 
The worry is not about the technology itself; it's about Apple's decision to introduce the technology into its ecosystem and onto its consumers' devices, and the consequences this decision might lead to.

For a government it would be a much lower hurdle to compel Apple to abuse such technology if the technology is already in place. If the technology is not in place, a government wanting to abuse it would need to compel Apple to implement it first.

Such technology has been in place for many years on devices: anti-malware scanning, asset management systems, etc.

You even have backup services which make a copy of all the data on the phone and send it to third parties!
 
And while the software as it is right now would be incapable of matching the terrorist's face at any angle, actual photo recognition could well be part of the next upgrade. And if we're adding photo recognition, might as well add GPS tracking too, so we know where the terrorist was when his face was captured. None of which would be unlikely, if they want to expand the idea of CSAM detection to detect terrorists in the first place.

It's not necessarily about the software in its current state (even though I could instantly see ways to circumvent and misuse it when it was announced, and I'm not even a security specialist), but the infinite possibilities for expansion.

People have been criticising the CSAM Detection system as it is right now without fully understanding the intricate details of the algorithms.

Why are people doing that, when the algorithms are as ill-equipped as you describe?
They should be worried about the Photos app, which contains exactly the algorithms you are describing. It has had photo recognition and GPS tagging built in for many years!

Here is a much better system which Apple could implement with small changes to iOS:
* Government supplies a database of images containing faces of unwanted elements (or their families, friends and associates)
* Apple secretly makes a small change to the Photos app to compare the faces in photos taken by the user against the database photos. The app is already doing face scanning, so almost no changes are necessary
* The photos are already marked with location using among other things GPS
* If a few matches are detected, iOS secretly turns on iCloud backup
* No changes to iCloud Backup are necessary, only turning the feature on and hiding it in the user interface
* iCloud Backup isn't end-to-end encrypted and can be read by Apple and by extension a government
* If a few matches are detected, iOS secretly turns on location sharing with Apple or even the government

Voila, a much better system.

No new technology needed to do this.
 
Well, as I have already posted, the proverbial cat is out of the bag... Now that Apple has announced the ability, governments are salivating at the prospect, albeit limited for now, of scanning personal devices.

Here is a much better system which Apple could implement with just small changes to iOS:
* Government supplies a database of images containing faces of unwanted elements (or their families, friends and associates)
* Apple secretly makes a small change to the Photos app to compare the faces in photos taken by the user against the database photos. The app is already doing face scanning, so almost no changes are necessary
* The photos are already marked with location using among other things GPS
* If a few matches are detected, iOS secretly turns on iCloud backup
* No changes to iCloud Backup are necessary, only turning the feature on and hiding it in the user interface
* iCloud Backup isn't end-to-end encrypted and can be read by Apple and by extension a government
* If a few matches are detected, iOS secretly turns on location sharing with Apple or even the government

Voila, a much better system.

No new technology needed to do this. The cat has been out of the bag for over a decade.
 
What it changes is that now they only need to be ordered to call the scan with a different path specified, which is much harder for Apple to claim is too hard/expensive.

How about inserting pictures of abused animals on farms that have been published online, to see who already has them? That would give a good idea of who was breaking ag-gag laws.

Most ag-gag laws are about producing such material, not about saving a copy of it. Also, most of them have been found unconstitutional in the US.

Are you saying people fighting animal abuse store iconic images of abused animals in their iCloud Photo Libraries in large numbers?
 
Most ag-gag laws are about producing such material, not about saving a copy of it. Also, most of them have been found unconstitutional in the US.

Are you saying people fighting animal abuse store iconic images of abused animals in their iCloud Photo Libraries in large numbers?
I admire your effort to help educate on the tech Apple is proposing, and I'm also of the opinion that arguments should be made with logic instead of being based on feelings, hidden agendas, conspiracy theories, etc. Unfortunately I'm not educated enough in the proposed tech to provide better counterarguments. Keep it up!

If any government is salivating about using the CSAM tech to catch undesirables, said government is probably not too bright IMHO, and will probably fail. Your points in previous posts also point out that existing tech in modern smartphones has been able to do this with ease for quite a while, so hopefully folks better understand the issue at hand.

My take is that Apple is using this tech. to gauge feasibility of E2EE for iCloud Photos.
 
People have been criticising the CSAM Detection system as it is right now without fully understanding the intricate details of the algorithms.

Why are people doing that, when the algorithms are as ill-equipped as you describe?
They should be worried about the Photos app, which contains exactly the algorithms you are describing. It has had photo recognition and GPS tagging built in for many years!

Here is a much better system which Apple could implement with small changes to iOS:
* Government supplies a database of images containing faces of unwanted elements (or their families, friends and associates)
* Apple secretly makes a small change to the Photos app to compare the faces in photos taken by the user against the database photos. The app is already doing face scanning, so almost no changes are necessary
* The photos are already marked with location using among other things GPS
* If a few matches are detected, iOS secretly turns on iCloud backup
* No changes to iCloud Backup are necessary, only turning the feature on and hiding it in the user interface
* iCloud Backup isn't end-to-end encrypted and can be read by Apple and by extension a government
* If a few matches are detected, iOS secretly turns on location sharing with Apple or even the government

Voila, a much better system.

No new technology needed to do this.

I agree that most critics are laypeople who get scared without understanding how it works, just like with the paraben scare, when a reporter quoted a paper she didn't understand, and now every cosmetics company needs to replace the most effective, harmless and battle-tested preservative despite reassurances from the authors of the paper themselves. But this time, unlike the baseless paraben scare, there are also many highly qualified critics who do understand how it works.

I'd think there's an even better way of doing this though: just scan the photos in iCloud. No detectable change necessary on the client side, and the photos are not end-to-end encrypted on the servers, so they could theoretically do anything they want there, should they have malicious intent. They could even forward every photo they receive to the government, just as they're required to do with emails.

But I don't think Apple wanted to do anything malicious. I expect they had the best of intentions when coming up with this system, probably meant to combat this very issue: what if the government does start asking them for photos? With a system like this in place, they could also offer end-to-end encryption, and then they would have no photos to give; they could severely limit what private user data they can be required to provide.

The problem is that the system is so flawed that the flaws make it both ineffective and easy to abuse (as detailed in a number of papers written by a number of security experts). It's so badly designed that it took mere days to make it ineffective and easy to abuse, and not by security experts, but by regular programmers. It seems to me they haven't thought this through, and now this idea is being used as basis for other similar ideas by factions who do have malicious plans and the power to implement them at scale.
 
Such technology has been in place for many years on devices: anti-malware scanning, asset management systems, etc.

You even have backup services which make a copy of all the data on the phone and send it to third parties!

Let me quote Apple's own words, emphasis mine:

Another important concern is the spread of Child Sexual Abuse Material (CSAM) online. CSAM refers to content that depicts sexually explicit activities involving a child.

To help address this, new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos.

So no, the technology has not already been in place for many years; it's a new technology with a specific, novel design that Apple wanted to introduce and is delaying due to the backlash. Of course it is based in part on already-existing technologies and concepts, but that goes for basically all new technologies in general.
 
They’re talking about scanning the photos that are about to be uploaded to iCloud Photos (and apparently it’s disabled if you disable the iCloud Photo Library). Would you be happier if exactly the same photos were scanned, but the scanning was happening on Apple’s servers after the photos were uploaded?

The impression I get is that every service is already scanning, or will soon be required to scan, photos that end up on their servers for CSAM, and Apple figured it improved users' privacy if that required scanning happened client-side before the photos were uploaded rather than server-side after they're uploaded.

Baking it into the OS makes it harder for a government to come along and ask for additions - it’d mean changing iOS rather than just changing a script that runs on the server. Seems like Google and others that do it all server-side would be easier for a government to arm-twist into scanning more, since they can just change one script that customers never see - and security researchers / privacy advocates can’t analyze - rather than having to add code to the OS that then needs to be updated on every user device.
YES
 
This is really easy to defeat. The scanner computes a "fingerprint" or "hash" of the image, then compares it to a list of hashes. The solution is to modify every image on the iPhone so the hash is different. You can do this without making the picture look much different.

Someone needs to write a free, open-source image modifier that makes random changes to selected images. Once this thing is out in the wild and in use, Apple's idea fails to work.

Scanning only works because the same image is stored on many devices, but if each copy were changed in some random way, the hashing method fails.
That's why they use NeuralHash: it doesn't have to be an exact copy, just similar enough. (And that's also why it can match a non-CSAM image.)
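The difference is easy to see with the simplest perceptual hash, the "average hash" (aHash). This is a toy illustration, far simpler than NeuralHash, using a made-up 4x4 grayscale "image":

```python
def average_hash(pixels):
    """1 bit per pixel: set when the pixel is brighter than the image mean.
    A toy perceptual hash (aHash); NeuralHash is far more sophisticated."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A 4x4 grayscale "image" and a slightly brightened copy of it.
img = [[10, 200, 30, 220],
       [15, 210, 25, 215],
       [12, 205, 35, 225],
       [18, 195, 28, 230]]
tweaked = [[p + 3 for p in row] for row in img]

print(hamming(average_hash(img), average_hash(tweaked)))  # 0: the small tweak leaves the hash unchanged
```

An exact hash like SHA-256 would differ completely between `img` and `tweaked`; a perceptual hash tolerates small edits, which is what defeats the "randomly modify every image" countermeasure, and also what makes accidental matches on unrelated images possible.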
 
That is possibly the most ironic/lamentable statement I have read in 2021. I would be hard pressed to squeeze so much inaccuracy into so few words.
Ha ha, really :) There were only 3 points, so which one(s) are incorrect, then? Are you saying there is no confusion at all? Yes/no. Are you saying there is no scaremongering? Yes/no. Or is it that they are scanning your phone? Yes/no. And I do apologise if I have misunderstood, but I got the impression they were only analysing pictures that had been uploaded to the iCloud Photo Library. Do you think they are actually looking at the contents of my phone, whether uploaded or not?
I'm only trying to grasp the situation correctly :-(
 
I think this was mentioned in one of these articles, but it's worth posting again...

Apple accused of censoring Quran app at request of Chinese government
https://nypost.com/2021/10/15/apple-accused-of-censoring-quran-app-at-request-of-chinese-government/
Yeah, those pesky governments get in the way of things.

I see the cat is out of the bag already, and it's just a matter of time before all computing devices with commercial software have this type of intrusive scanning.
 
Lot of chatter back and forth. It's nice that you are all so involved and energized by this. But don't be naive.

Apple's announcement, weak-ass rationalization and eventual retraction were merely trial balloons sent up to gauge public sentiment. It was NOT to check if kidd1e-p0rn is abhorrent - OF COURSE IT'S ABHORRENT, DUH! It was NOT to see if a free nation would find invasive surveillance objectionable - OF COURSE IT'S OBJECTIONABLE, DUH!

It was to evaluate how deeply the program had to stay buried.

Would it need to remain a secret at all? Could such a thing be kept on the down-low? Must it remain classified Top Secret/SCI or Yankee White, under penalty of treason for leakers?

Apple's side door hash-evaluate-report program is very probably running already. Since when has Apple so willingly, officially and proudly disclosed systems BEFORE gold release? And how could anyone reliably report it anyway? Any really reliable source could be squelched by the NSA like any animal in a zoo.

Other technology outfits are already known to be engaged. Any approach to legislative transparency must assume this is an operational system, wherein the distinction between on-device vs. on-cloud is not really relevant (Sorta like Texas granting every fringe nutball busybody automatic standing to sue over abortion. F that. Infringement is infringement, and it won't stop at abortion. I bet California will do that over running small gas motors. I digress.)

In WWII, the biggest secret was that the Allied Forces had cracked Enigma. That single secret was way more important than 95% of the decrypted tactical messages (which might have saved a ship here, or a bomber sortie there). The priceless 5% contributed to defeating the Nazis STRATEGICALLY, only because keeping the secret meant that the Nazis wouldn't strengthen their cipher.

Anyway, we're all in the teeth of the thing, now; have been for years. So, just move along. Those weren't the droids we were looking for.
 
It was to evaluate how deeply the program had to stay buried.
Eh, I don't know. There would have been less damaging ways to gauge this. Apply for some patents, release some research papers, and the tech press would pick it up. If it results in outrage, Apple could claim it was just doing research, looking at all options, nothing to be actually implemented. Instead they went all in: we so care about child safety, this is what we'll do in iOS 15, and here's why it's so genius. No good way to backpedal from that.

Add the "screeching-minority" quote from that NCMEC-zealot and Federighi's "people-are-just-confused" red herring, and it all looks more like a mismanaged miscalculation. I think Apple's leadership honestly thought these features would be a good idea and received as such, or at worst accepted with moderate reluctance, and that Apple could be quite open about the introduction.
 
Lot of chatter back and forth. It's nice that you are all so involved and energized by this. But don't be naive.

Apple's announcement, weak-ass rationalization and eventual retraction were merely trial balloons sent up to gauge public sentiment. It was NOT to check if kidd1e-p0rn is abhorrent - OF COURSE IT'S ABHORRENT, DUH! It was NOT to see if a free nation would find invasive surveillance objectionable - OF COURSE IT'S OBJECTIONABLE, DUH!

It was to evaluate how deeply the program had to stay buried.

Would it need to remain a secret at all? Could such a thing be kept on the down-low? Must it remain classified Top Secret/SCI or Yankee White, under penalty of treason for leakers?

Apple's side door hash-evaluate-report program is very probably running already. Since when has Apple so willingly, officially and proudly disclosed systems BEFORE gold release? And how could anyone reliably report it anyway? Any really reliable source could be squelched by the NSA like any animal in a zoo.

Other technology outfits are already known to be engaged. Any approach to legislative transparency must assume this is an operational system, wherein the distinction between on-device vs. on-cloud is not really relevant (Sorta like Texas granting every fringe nutball busybody automatic standing to sue over abortion. F that. Infringement is infringement, and it won't stop at abortion. I bet California will do that over running small gas motors. I digress.)

In WWII, the biggest secret was that the Allied Forces had cracked Enigma. That single secret was way more important than 95% of the decrypted tactical messages (which might have saved a ship here, or a bomber sortie there). The priceless 5% contributed to defeating the Nazis STRATEGICALLY, only because keeping the secret meant that the Nazis wouldn't strengthen their cipher.

Anyway, we're all in the teeth of the thing, now; have been for years. So, just move along. Those weren't the droids we were looking for.

You make a lot of assumptions in your post. Other than your opinion, do you have anything to back these up?
 