
I did some digging in the iOS 16 IPSW, as due diligence before potentially upgrading, to see whether Apple is still serious about its CSAM scanning. As a long-time Apple user, I find it sad that I have so little trust in Apple that I feel the need to do this. Anyway, it seems Apple is still actively working on its CSAM surveillance tool.

iOS 15.6 shows the same "NeuralHash" neural net files as all previous versions of iOS that have it.

Bash:
➜  ~ ls -la /Volumes/SkyG19G71.D63OS/System/Library/Frameworks/Vision.framework/NeuralHashv3b*                 
-rw-r--r--  1 ------  staff     6519 Jul 13 02:35 /Volumes/SkyG19G71.D63OS/System/Library/Frameworks/Vision.framework/NeuralHashv3b-current.espresso.net
-rw-r--r--  1 ------  staff     2199 Jul 13 02:35 /Volumes/SkyG19G71.D63OS/System/Library/Frameworks/Vision.framework/NeuralHashv3b-current.espresso.shape
-rw-r--r--  1 ------  staff  7241280 Jul 13 02:35 /Volumes/SkyG19G71.D63OS/System/Library/Frameworks/Vision.framework/NeuralHashv3b-current.espresso.weights

iOS 16.0, on the other hand, shows new, updated files.

Bash:
➜  ~ ls -la /Volumes/Sydney20A362.D16OS/System/Library/Frameworks/Vision.framework/NeuralHashv*               
-rw-r--r--  1 ------  staff   109763 Sep  2 17:37 /Volumes/Sydney20A362.D16OS/System/Library/Frameworks/Vision.framework/NeuralHashv3b_fp16-current.espresso.net
-rw-r--r--  1 ------  staff    33861 Sep  2 17:37 /Volumes/Sydney20A362.D16OS/System/Library/Frameworks/Vision.framework/NeuralHashv3b_fp16-current.espresso.shape
-rw-r--r--  1 ------  staff  3668288 Sep  2 17:37 /Volumes/Sydney20A362.D16OS/System/Library/Frameworks/Vision.framework/NeuralHashv3b_fp16-current.espresso.weights

Interestingly, the net and shape files have changed from a binary format to a text format, so their contents are easily viewable.
I don't know how much development activity this change represents (I haven't tried to compare the networks themselves; maybe someone more knowledgeable than I can do that), but it does confirm that Apple hasn't removed the functionality outright and that it is still under active development.
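
For anyone who wants to poke at this themselves, here's a rough sketch of a first-pass comparison of the model files (assuming both IPSW system volumes are mounted at the paths shown above). It only checks sizes, checksums, and whether the files are text or binary; an actual comparison of the networks would need more work.

Python:
# Rough first-pass comparison of the NeuralHash model files between the two
# mounted IPSW system volumes from the listings above. This only looks at
# sizes, checksums, and text-vs-binary; it does not diff the networks.
import hashlib
from pathlib import Path

OLD = Path("/Volumes/SkyG19G71.D63OS/System/Library/Frameworks/Vision.framework")
NEW = Path("/Volumes/Sydney20A362.D16OS/System/Library/Frameworks/Vision.framework")

def summarize(path: Path) -> str:
    data = path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()[:16]
    try:
        data.decode("utf-8")
        kind = "text"
    except UnicodeDecodeError:
        kind = "binary"
    return f"{path.name}: {len(data)} bytes, {kind}, sha256 {digest}..."

for model_file in sorted(OLD.glob("NeuralHashv3b*")) + sorted(NEW.glob("NeuralHashv*")):
    print(summarize(model_file))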

This silent change is worrisome to me. I'm not upgrading to iOS 16.
 
Why is it worrying?

If you have nothing to hide, then you have nothing to hide.
Because it’s a violation of personal privacy and it sets a dangerous precedent for the future. Today we let Apple scan our photos and tomorrow what?

It’s funny, years ago Apple was firmly standing against the US Government asking for a back door into a locked iPhone but today they’re proposing passive surveillance of individuals’ photo libraries.

I’m all for the identification and prosecution of people breaking laws, but the whole “if you have nothing to hide don’t worry about it” argument is so dismissive of such a fundamental right.
 
Because it’s a violation of personal privacy and it sets a dangerous precedent for the future. Today we let Apple scan our photos and tomorrow what?

Whatever they need to scan to protect children is fine by me.

It’s funny, years ago Apple was firmly standing against the US Government asking for a back door into a locked iPhone but today they’re proposing passive surveillance of individuals’ photo libraries.

Scanning for illegal images of abused children is not surveillance.

I’m all for the identification and prosecution of people breaking laws, but the whole “if you have nothing to hide don’t worry about it” argument is so dismissive of such a fundamental right.

CSAM detection is all about identifying and prosecuting the worst people in society.
 
Because it’s a violation of personal privacy and it sets a dangerous precedent for the future. Today we let Apple scan our photos and tomorrow what?

They already scan your photos. That's how face recognition in "People" works. That's how it's able to figure out what "trees" or "birds" or "food" are when you do a search. Google/Android does it too, and much, much more.
 
Why is it worrying?

If you have nothing to hide, then you have nothing to hide.
if you have nothing to say, then you have nothing to say.

To contribute some substance: the problem lies in the on-device implementation and the possibility of using it, e.g., with a different set of hashes to single out persons with a certain circle of contacts. Very generally, it’s a backdoor into the iOS/iPadOS data storage system and its messaging system.
»Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor. …
All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content.«

You can inform yourself by reading, e.g., the full EFF commentary from which I quoted above.
 
I see it as the wrong priority for a company to want to inspect the private devices of all its customers just in case, because "they might store child porn". What incredible suspicion, what a self-declared job outside of their business, and what a breach of basic legal rules about privacy! Apple should stay out of end-user devices entirely. Let the police and authorities fight child porn.

After all those repeated privacy speeches, Apple must make it formally clear whether or not this is now terminated. I appreciate that people dig into the code to verify what is really going on. Apple's credibility is eroding.
 
They already scan your photos. That's how face recognition in "People" works. That's how it's able to figure out what "trees" or "birds" or "food" are when you do a search. Google/Android does it too, and much, much more.
Exactly! Where was the uproar when object recognition was implemented? That’s what I don’t get. AI/object recognition is much more powerful than the way CSAM detection works. Why aren’t these people worried or upset about that?

At a high level, CSAM detection works something like “find photos that are the same as this photo”. They’d need to know what the original photo is that they’re looking for in the first place. They would not be able to say “find me photos that have this in it”. The pictures of your baby’s first bath are not going to get flagged, nor will the pictures you took at that protest, or whatever.

On the other hand, object recognition works something like “find photos that contain this”. So theoretically, they could just say “find photos that contain guns, or this flag/banner, or this text, etc.” They don’t need to know what the original photo is. They are already using this method (AI) to check whether photos contain any nudity for the ‘Communication Safety in Messages’ feature, which is not to be confused with CSAM detection. How do we know they wouldn’t keep a tally of how many questionable photos we have?
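
To make the distinction concrete, here's a toy sketch of the "same photo" idea. It is nothing like Apple's actual NeuralHash (just a generic average-hash on made-up pixel data), but it shows why a slightly edited copy of a known image matches while an unrelated image doesn't:

Python:
# Toy perceptual "average hash": NOT Apple's NeuralHash, just an illustration of
# why near-duplicates of a *known* image match while unrelated images do not.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (a stand-in for a downscaled photo)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the average or not.
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

known = [[10, 200], [220, 30]]            # the "known" image, hashed ahead of time
slightly_edited = [[12, 190], [215, 35]]  # a recompressed / color-shifted copy
unrelated = [[200, 10], [30, 220]]        # a completely different picture

THRESHOLD = 1  # max differing bits to still count as "the same photo"
for name, img in [("edited copy", slightly_edited), ("unrelated", unrelated)]:
    distance = hamming(average_hash(known), average_hash(img))
    print(name, "matches" if distance <= THRESHOLD else "no match", f"(distance {distance})")

An object-recognition query ("find photos that contain X") has no equivalent here; without a known image to hash against, there's nothing to match.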
 

You do not seem to know how CSAM detection and NCMEC work.
 
I don’t mind them scanning photos I knowingly upload to their servers. I do have a problem with them scanning photos on my private device. And no, I don’t have anything to hide; I just have no obligation to allow my photos to be examined by another party.
 
You do not seem to know how CSAM detection and NCMEC work.
I know how CSAM detection works. I said "at a high level" to keep things simple.

CSAM detection works by hashing your photos to see if they match any known hashes in the NCMEC database. So they'd need a known photo of what they're looking for to generate the hash to put in the database. The photos on your device don't need to be exactly identical; they could be slightly cropped or have their colors changed, but they do need to look very visually similar for the hashes to match. I read Apple's CSAM Detection Technical Summary last year when it was announced.
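
Very roughly, the on-device part boils down to something like the sketch below. This is a toy illustration only, not Apple's actual protocol (which, as I read it, uses NeuralHash plus blinded hashes and threshold secret sharing so the device never even learns which photos matched), and all names and values here are made up.

Python:
# Toy sketch of "match against a database of known hashes, act only past a
# threshold". Not Apple's real implementation; the hash here is a plain
# cryptographic hash, and all names and values are made up.
import hashlib

def toy_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; a real system needs near-duplicates to
    # collide, which SHA-256 deliberately does not do.
    return hashlib.sha256(image_bytes).hexdigest()

KNOWN_HASHES = {toy_hash(b"known-image-1"), toy_hash(b"known-image-2")}
MATCH_THRESHOLD = 30  # example threshold; nothing is surfaced for review below this

def count_matches(photo_library: list) -> int:
    return sum(1 for img in photo_library if toy_hash(img) in KNOWN_HASHES)

library = [b"vacation photo bytes", b"known-image-1", b"cat photo bytes"]
matches = count_matches(library)
print(f"{matches} match(es); review triggered: {matches >= MATCH_THRESHOLD}")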

Yes, the database could be used as a backdoor to find other content, but they'd need to have something to create the hash from. They cannot simply create a hash to find protesters, for example. That would be better done using AI/object recognition, and that isn't how Apple's proposed CSAM implementation works. If I'm still way off base, please enlighten me.

if you have nothing to say, then you have nothing to say.

To contribute some substance: the problem lies in the on-device implementation and the possibility of using it, e.g., with a different set of hashes to single out persons with a certain circle of contacts. Very generally, it’s a backdoor into the iOS/iPadOS data storage system and its messaging system.
»Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor. …
All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content.«

You can inform yourself by reading, e.g., the full EFF commentary from which I quoted above.
By the way, the "explain at length" hyperlink in the article you quoted/cited is going to the wrong document. It should be going to the one I posted above, not to 'Communication Safety in Messages', which is not related to CSAM detection and works completely differently, using AI rather than hashing. If you're going to quote/cite other sources instead of using your own words, please make sure they're correct first.
 
The tech to scan what’s in photos is already here. Can someone explain why CSAM scanning is so worrisome and why it opens a backdoor? I’ve seen lots of opinions about other governments potentially asking for special uses of this software, but hasn’t this software already been there for years? If Apple were going to comply with such demands, they would have already. Hash matching is not new and neither is object/text recognition. It’s likely I’m missing something.
 
Just playing devil's advocate here, but I don't think it's safe to assume that on-device CSAM detection is still being worked on just because NeuralHash is still present. NeuralHash is Apple's hashing model, which converts a photo into a number (a hash). Besides being potentially used for CSAM detection, it could be useful for other things too, like finding duplicate photos...which is actually a new feature in iOS 16.

It wouldn't surprise me if Apple is using NeuralHash for that (don't see why they wouldn't, actually). If that's the case, NeuralHash probably isn't going to be removed whether or not Apple abandons on-device CSAM detection.
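
If NeuralHash (or something like it) really is behind the new duplicate finder, the logic would presumably be in the spirit of grouping photos whose hashes collide. Purely a speculative sketch, not Apple's code:

Python:
# Toy sketch of duplicate grouping by (perceptual) hash. Purely illustrative;
# we don't know whether iOS 16's duplicate finder actually uses NeuralHash.
from collections import defaultdict

def find_duplicates(photo_hashes: dict) -> list:
    """photo_hashes maps a photo name to its (perceptual) hash string."""
    groups = defaultdict(list)
    for name, h in photo_hashes.items():
        groups[h].append(name)
    # Any hash shared by two or more photos is a duplicate group.
    return [names for names in groups.values() if len(names) > 1]

print(find_duplicates({
    "IMG_0001.HEIC": "a1b2",
    "IMG_0001 copy.HEIC": "a1b2",  # a re-saved copy hashes the same
    "IMG_0042.HEIC": "9f3c",
}))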
 
This is a good point. It is possible that this is being used for something else, like the duplicate finder. The problem is that, the last time Apple officially commented, it said it would continue the CSAM work. We know what NeuralHash was originally there for, work on it continues, and Apple can’t be trusted on this matter until it completely disavows the effort.
 
Then don't trust Apple. Don't update to iOS 16. Do what you feel is best, and if that means Android, you should think about it.
 
For anyone who says "If you don't have anything to hide", give this video a watch about exactly how automated CSAM detection can ruin innocent people's lives.

 
Yes, the database could be used as a backdoor to find other content, but they'd need to have something to create the hash from. They cannot simply create a hash to find protesters, for example. That would be better done using AI/object recognition, and that isn't how Apple's proposed CSAM implementation works. If I'm still way off base, please enlighten me.

I don't think this is a backdoor. Really, the only thing I want to know is how the list of hashes makes its way to your phone. The proposed system checks on-device for hashes of photos that match a known list of hashes of CSAM photos. So far so good. But if one is to consider this a true "backdoor", then we should know how that list can be edited, and how that edited list can make its way to your iPhone. My (very basic) understanding is that the list is baked into iOS; it's not a file inserted onto the phone that iOS accesses. So the only way an edit to the list can occur (such as putting in a photo of a political dissident) is by installing a new and altered version of iOS. If some entity has inserted itself into THAT process...well, the problems then are already way too big.

(I also recall that Apple has committed to letting watchdogs see the list in the iOS code, to confirm that it matches the official list and doesn't contain other images.)
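
If I remember right, Apple's follow-up threat-model document also proposed letting users and auditors compare a root hash of the encrypted on-device database against a published value. Conceptually that check is nothing more than the sketch below; the path and expected value are entirely made up.

Python:
# Conceptual only: compare a hash of the on-device hash database against a
# published root hash. The file path and EXPECTED value are hypothetical.
import hashlib
from pathlib import Path

DB_PATH = Path("/path/to/on-device/hash-database")  # hypothetical location
EXPECTED_ROOT_HASH = "value-published-by-apple"      # hypothetical value

if not DB_PATH.exists():
    print("database not found at the (hypothetical) path")
else:
    digest = hashlib.sha256(DB_PATH.read_bytes()).hexdigest()
    print("matches published hash" if digest == EXPECTED_ROOT_HASH else "MISMATCH")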
 
I also recall that Apple has committed to letting watchdogs see the list in the iOS code, to confirm that it matches the official list and doesn't contain other images.
Here's some more information for you. https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

A big concern a lot of people have is that this seems to be a way to circumvent protections against government surveillance and could be abused. Apple notes in the document that they're using quasi-governmental sources for the list, like NCMEC "and others." NCMEC and others provide the list to Apple; Apple puts the list on people's devices and has the devices scan for matching hashes; positive matches are reported back after reaching a certain threshold; and if Apple agrees that a match is real, it passes personal information and details to a third party for reporting to law enforcement. That's an incredibly simplified summary; you can read the above document for the details.

NCMEC is technically not a governmental organization, so Apple's invasion of privacy might be considered constitutional in the US, depending on the terms people agree to. Apple's technically not looking through the private contents of people's phones at the behest of the government. Apple's technically not reporting their customers to law enforcement. But the government does fund NCMEC's activities to the tune of $40 million per year. So it becomes a question of ethics and human nature. Would NCMEC and others agree to pass non-CSAM images the government doesn't like through to Apple to scan for? Or would they take a stand and risk losing $40 million in revenue per year by refusing such a request?
 
That is not CSAM detection. That is Google using AI to identify CSAM. Apple's system is not AI-based; it is hash-based. Please don't spread FUD.
The point stands that if even one report is false, that person’s life is at best severely disrupted for weeks to months and at worst ruined. The person might spend their savings on lawyers or lose their job from the numerous meetings with lawyers and police needed to clear their name before the matter becomes public — journalists are often happy to report on someone being investigated or charged with a crime, but not always so willing to follow up when a suspect is cleared. Apple would also, much like in the Google story, presumably permanently disable and delete the Apple ID in question, so all the money that person spent on iTunes and App Store purchases would become useless.

This might be mitigated by Apple's system supposedly requiring a threshold of matched photos before the account goes to manual review, but we don’t know much in the way of specifics about how the rights of the innocent are protected before things get handed to NCMEC or other such organizations; the proposed system might have changed since then; and so on.
 
I also want to add that no one should buy into the Law & Order-style ********* that police are always on the side of the innocent/victim. I don’t want to get too far into details, but I have personal experience to back up my statement, resulting from my reporting a traumatic crime against me when I was 18.

It’s not that they did nothing with my report — oh, they most certainly did. They actively tried to turn it around on me, fully upending my life for about a month.

In retrospect I'm still glad I reported because my report — and my victim impact statement — was used years later to help put the guy in prison for doing the same thing to others, where he remains today. But until the day he was convicted, if I could go back in time, it would have been a different story...
 
I don’t mind them scanning photos I knowingly upload to their servers. I do have a problem with them scanning photos on my private device. And no, I don’t have anything to hide; I just have no obligation to allow my photos to be examined by another party.

They don't scan anything if you're not using iCloud Photos.

That said, they are not currently scanning anything, and it doesn't sound like they ever will.
 
The point is that Google caught this in the cloud and there were no procedures in place. Don't want scanning on your Apple device? Disable iCloud. Apple has several layers to ensure false reporting is minimized.
The road to hell is paved with good intentions, whether that's procedural failures or an oppressive government mucking up the planned implementation by demanding that Apple add the government's own non-CSAM hash database under penalty of an iPhone import ban, other sanctions against Apple and/or its executives, and so on.

After all, Apple does famously follow all local laws. It's a matter of when, not if.
 