Amazing how many people now think a closed iOS is a bad thing. *sigh*

You still need the cloud for this system to work, which is why no scanning will take place unless iCloud Photos is enabled. All this is documented in the technical implementation document.

I've not seen anything on the framework, so I can't comment on that yet, but I doubt it'll be as nefarious as people make it out to be - unless you're referring to the child protection tools in iMessage - that's completely different to the CSAM -> iCloud Photos system.
You know that this is all about being trustworthy? Hard to gain and easy to lose - once lost, gone forever.

CSAM and the iMessage system are just two sides of the same coin. It is all about surveillance on your local phone and informing a third party if the surveillance system detects something suspicious…
 
Uh no. Apple has to scan NOTHING. Implement full end-to-end encryption for all data between the phone and iCloud and be done with this nanny-state nonsense.

Yes, I agree with the sentiment, but if it is indeed already a legal requirement that all images stored on corporate servers must be scanned for CSAM, then perhaps this is the best way of going about it? I don't know for sure. I'm still against the whole idea in general, but I accept there may be more to it than I first gave it credit for. And reconsidering a position isn't the same as changing your mind 👍
 
I was talking about apps like Facebook, Instagram that have access to your photos gallery. Not Apple's implementation of CSAM.

Gotcha - though I would have thought that was the iMessage child-protection API rather than a CSAM scan API. I've not been able to find anything about either, so I can't really comment further for now.
 
You know that this is all about being trustworthy? Hard to gain and easy to lose - once lost, gone forever.

CSAM and the iMessage system are just two sides of the same coin. It is all about surveillance on your local phone and informing a third party if the surveillance system detects something suspicious…

Definitely. As I've said, I think this is really more about the US government than it is about Apple or the technology itself. But in both cases, iMessage and the CSAM scans can be disabled. If you upload photos anywhere, you're more than likely to encounter CSAM scanning regardless.

And what's to say the other smart device manufacturers, who are building devices with ever-increasing capability for machine learning, won't go the same way as Apple? Ifs, buts, maybes - never happened yet.
 
I just thought of a way to make this whole thing moot. The problem will just go away...

I can write software that changes an image's hash. The software would randomly modify an image while still keeping it mostly looking like the original. Yes, there are well-known ways to do this. Then the user runs this on all stored photos every few weeks or hours so that the hashes change constantly. This would completely defeat Apple's spyware.

The key that makes this work is that images are already JPEG-compressed with a lossy kind of compression. We can decompress, make the change to the image, and recompress.

Please, post as many ways you can think of to defeat this on many forums and get the process started to make this issue completely moot.
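
To make the idea concrete, here's a rough sketch of that approach (it leans on Pillow and the third-party imagehash package purely for illustration, and the file names are made up - this is not Apple's NeuralHash):

```python
# Sketch only: pip install pillow imagehash
import hashlib
import random

from PIL import Image
import imagehash

def perturb_and_resave(src: str, dst: str, strength: int = 2) -> None:
    """Add faint random noise to ~1% of pixels and re-encode as JPEG."""
    img = Image.open(src).convert("RGB")
    pixels = img.load()
    w, h = img.size
    for _ in range((w * h) // 100):
        x, y = random.randrange(w), random.randrange(h)
        r, g, b = pixels[x, y]
        d = random.randint(-strength, strength)
        pixels[x, y] = (
            max(0, min(255, r + d)),
            max(0, min(255, g + d)),
            max(0, min(255, b + d)),
        )
    img.save(dst, "JPEG", quality=90)  # lossy re-compression

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

perturb_and_resave("photo.jpg", "photo_tweaked.jpg")

# The cryptographic (file) hash changes completely...
print(sha256_of("photo.jpg") != sha256_of("photo_tweaked.jpg"))  # True

# ...but a perceptual hash is designed to survive edits like this.
a = imagehash.average_hash(Image.open("photo.jpg"))
b = imagehash.average_hash(Image.open("photo_tweaked.jpg"))
print(a - b)  # Hamming distance - typically still very small
```

The catch, of course, is that a perceptual hash like Apple's is built precisely so that this kind of re-encoding barely moves it, so the perturbation would have to be much more aggressive than a simple noise-and-recompress pass.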
 
No. It is pretty clear he has a good technical understanding. There is, or will be, software on the phone that has access to cleartext versions of the data, which it can hash.

The good news is that as a member of parliament he has some ability to outlaw this kind of thing; then Apple would not be able to sell phones in Germany, or maybe the whole EU. That would force Apple to change.

The only REAL solution is to open up the phone so people can run whatever software they like. Make the phone more like a Mac. Yes, Apple will argue about malware on the phone, but this is not a big problem with Macs. This could be done by law too.

Open up the phone, and be prepared for corporate networks to ban it. Or even non-corporate ones, as it'd then allow a free-for-all in terms of potential abuse. Damned if you do, damned if you don't.
 
People keep raising this as though it makes the template-matching somehow independent of the perceptual features of the image. I do not see how that can be if the template-matching is fuzzy. The hash will convey information about the content of the image, even if it is in a digested form; the only way this can work is if the hash somehow captures the essence of the image's content. My concerns with the implementation are whether the false positives will be sensitive pictures of people, whether people's tendency to take multiple pictures of the same scene increases the odds of a false positive, and whether Apple's per-account criterion accounts for the fact that the more pictures in a library, the higher the expected number of false positives overall.
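
As a toy illustration of what "fuzzy" matching means here (the hashes, the threshold, and the false-positive rate below are all invented for the example - Apple hasn't published NeuralHash's matching rule at this level of detail):

```python
# Toy threshold-based hash matching - all values are made up.

def hamming(a: int, b: int) -> int:
    """Number of bits in which two hashes differ."""
    return bin(a ^ b).count("1")

KNOWN_HASHES = {0b1011_0010_1100_0101, 0b0110_1110_0001_1010}
THRESHOLD = 3  # anything within 3 bits of a known hash counts as a match

def matches(photo_hash: int) -> bool:
    return any(hamming(photo_hash, h) <= THRESHOLD for h in KNOWN_HASHES)

print(matches(0b1011_0010_1100_0110))  # True: only 2 bits from the first entry
print(matches(0b0000_1111_0000_1111))  # False: far from both entries

# Fuzziness is also why false positives scale with library size: if each photo
# independently has a small false-positive probability p, a library of N photos
# expects roughly N * p accidental matches (numbers below are hypothetical).
p, N = 1e-6, 50_000
print(N * p)  # ~0.05 expected false matches for this library
```

Whether Apple's per-account threshold takes library size into account is exactly the kind of detail that isn't public.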

As for the principle of this, Apple's goal is laudable, but the price is ushering in distributed AI, including on people's own machines, for surveillance. This was an ominous Rubicon to cross, and Apple was crazy to do so. There is a reason why Google hires behavioural/social scientists - so they don't make stupid decisions like this.
And I'm probably being oversensitive, but I think there's something nauseatingly distasteful about downloading hash codes of child pornography onto everyone's phone – even if I'll never see it.
 
The only way Apple will abandon this is if China tells them to, and we already know that this is not a feature that China will oppose.
China doesn't care about this feature. They can already see everything their citizens do on their devices.
 
According to another site, Reddit users have found the hash algorithm in iOS 14.3, and:

"For example, one user, dxoigmn, found that if you knew the resulting hash found in the CSAM database, one could create a fake image that produced the same hash. If true, someone could make fake images that resembled anything but produced a desired CSAM hash match. Theoretically, a nefarious user could then send these images to Apple users to attempt to trigger the algorithm."

If this is true then I hope Apple finds a solution for this before rollout.

But even if this one is not true, we can probably expect a number of other problems to be discovered and perhaps lives ruined after rollout before they are fixed.
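
To make the quoted claim concrete, here's a toy version of what forcing a hash collision means. The 8-bit "average hash" below is invented for this example and is deliberately weak - it is NOT NeuralHash, and the reported attack worked against the actual model rather than by blind search - but it shows the principle: you can push an unrelated image toward any target hash you like.

```python
# Toy collision forcing against a deliberately weak, made-up 8-bit hash.
import random

SIZE = 8  # our "images" are just 8 grayscale pixel values (0-255)

def toy_hash(img: list[int]) -> int:
    """Average hash: bit i is set if pixel i is above the image's mean."""
    mean = sum(img) / len(img)
    h = 0
    for i, px in enumerate(img):
        if px > mean:
            h |= 1 << i
    return h

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

target = [200, 30, 180, 40, 220, 10, 190, 60]          # "database" image
forged = [random.randrange(256) for _ in range(SIZE)]  # unrelated image

# Hill-climb: randomly nudge pixels and keep any change that doesn't move
# the forged image's hash further from the target hash.
goal = toy_hash(target)
while toy_hash(forged) != goal:
    candidate = forged[:]
    i = random.randrange(SIZE)
    candidate[i] = max(0, min(255, candidate[i] + random.randint(-40, 40)))
    if hamming(toy_hash(candidate), goal) <= hamming(toy_hash(forged), goal):
        forged = candidate

print(bin(goal))              # target hash
print(bin(toy_hash(forged)))  # identical hash
print(forged)                 # yet the pixel values are entirely different
```

Against the real hash you obviously can't brute-force like this; what made the iOS 14.3 discovery significant is that, with the model extracted, an image can reportedly be optimized directly toward a chosen hash.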
This is exactly the scenario that most concerned me about the feature. And then there's the potential for mistaken image identification, which could falsely flag a user or image as inappropriate or illegal.

And one day hashes won’t be sufficient. What happens when they just do direct image recognition on the device?

Imagine taking a photo of your own children in a family photo at the beach, and because the image recognition sees something it thinks is inappropriate, it flags it - or perhaps one day it blocks you from even taking the photo.

This is a very slippery slope. There is a possibility they may enable you to "opt out" of having your phone scanned. But then that too would flag you as a potential deviant. LOL. Honestly I don't care for this feature at all. Smh
 
Definitely. As I've said, I think this is really more about the US government than it is about Apple or the technology itself. But in both cases, iMessage and the CSAM scans can be disabled. If you upload photos anywhere, you're more than likely to encounter CSAM scanning regardless.

And what's to say the other smart device manufacturers, who are building devices with ever-increasing capability for machine learning, won't go the same way as Apple? Ifs, buts, maybes - never happened yet.
Well, I started to transition a lot of my work to a Linux workstation over the last few years, and I guess a Linux phone might be the only way to keep companies and authorities out of your home in the future.
 
I just thought of a way to make this whole thing moot. The problem will just go away...

I can write software that changes an image's hash. The software would randomly modify an image while still keeping it mostly looking like the original. Yes, there are well-known ways to do this. Then the user runs this on all stored photos every few weeks or hours so that the hashes change constantly. This would completely defeat Apple's spyware.

The key that makes this work is that images are already JPEG-compressed with a lossy kind of compression. We can decompress, make the change to the image, and recompress.

Please, post as many ways you can think of to defeat this on many forums and get the process started to make this issue completely moot.

This wouldn't work because:
- You don't know which hashes trigger a match.
- Most probably, the hash algorithm Apple uses contains some pattern check to see whether it's legitimate (remember, security researchers can only see the root hash of the dataset, which is encrypted on the phone).
- The image itself is already uploaded on iCloud, so it's not rechecked.
- Upload speeds are a bottleneck. You'd need coordination across tens of thousands of people. At this point, it's much easier to do a DoS attack.
 
I am deep into the Apple ecosystem, but this would be enough for me to exit stage right.
Is your concern with Apple's specific implementation of this CSAM scanning, or with any CSAM scanning? AFAIK all of the other providers are already scanning photos on their systems; Apple is just the last one to comply. The others are just doing it directly on the servers.
 
People keep raising this as though it makes the template-matching somehow independent of the perceptual features of the image. I do not see how that can be if the template-matching is fuzzy. The hash will convey information about the content of the image, even if it is in a digested form; the only way this can work is if the hash somehow captures the essence of the image's content. My concerns with the implementation are whether the false positives will be sensitive pictures of people, whether people's tendency to take multiple pictures of the same scene increases the odds of a false positive, and whether Apple's per-account criterion accounts for the fact that the more pictures in a library, the higher the expected number of false positives overall.

As for the principle of this, Apple's goal is laudable, but the price is ushering in distributed AI, including on people's own machines, for surveillance. This was an ominous Rubicon to cross, and Apple was crazy to do so. There is a reason why Google hires behavioural/social scientists - so they don't make stupid decisions like this.
Hard agree on both counts: this is a stupid decision by Apple, and Google doesn't make these kinds of stupid decisions - they make other kinds. Truly, I've been ditching Google services for the past couple of years because of how creepy they've gotten, and that's not a good look. All that's left is Gmail, because it takes so much to migrate from one e-mail address to another these days.
 
I can't believe all this misunderstanding and baseless paranoia! Does he not understand that Apple, Google, Microsoft, etc. are already scanning for CSAM? So if they wanted to search for other types of images instead, they could already do that. The only thing this new method does is make things MORE private, by hiding all scanning data from Apple except that related to a sizable collection of CSAM being uploaded to their servers. If people are still paranoid about that and don't trust Apple, then they should immediately disable iCloud for photos.
 
“A serious problem that must be solved”

Yes and no.

Murder is a serious problem but it cannot be solved. Corrupt politicians are a serious problem that cannot be solved. And on and on and on.

Most problems that arise from human failings can be mitigated but not solved. We already mitigate child pornography in many ways. The question is whether this new way is worth the trade-offs for the gains.
You caught me talking in absolutes instead of shades of grey. How about "must continue to be addressed"? Truly, if ever we called any of these issues "solved" we'd be fooling ourselves 😝.
 
Chinese authorities will infiltrate child-safety organisations in two different jurisdictions and insert a hash for that specific photo. And then someone is gonna pass the 30-image threshold, and somehow the manual review process will not notice that those images are not actually child pornography 🤷🏻‍♂️
Apple said they would consider different countries' requests on a per-country basis. This means the system can be customized. This is not about CP. This is about a mass scanning system that is rolled out globally.

And do you think Tim Cook or Craig will live forever to keep their pinky promise? Apple is a publicly traded company. Even the CEO can be replaced. And when management changes, policy will also change. The fact that such a system with huge abuse potential is being implemented is the big problem. Apple's pinky promise to only scan for CP is irrelevant.
 
They are not silly. There are various advantages to doing this on device rather than on a server:

- Security researchers can audit the hash database being used, and the accuracy of the matching process
- No need to decrypt photos on the server to run the matching algorithm
- Only matching photos can be seen by Apple. If a server-based approach were used, all photos could in principle be accessed
- Targeted attacks are much less likely since it's much harder to tamper with an encrypted phone

These are factual advantages of this system. Are there disadvantages? Yes... Potential for government overreach and tampering with the initial database. But these disadvantages also apply to a server-side approach.

Well, the disadvantages don’t apply to the server side.

Folk are concerned about future changes in a country’s law that will compel Apple to scan for documents and images stored on the phone regardless of whether they’re going to iCloud or not. The same laws will apply to servers, but can be avoided by not uploading to the servers.

Apple will continue to protest that they will resist any call to extend the system to non-CSAM images. But if it’s the law they won’t resist, they’ll roll over.
 
Apple said they would consider different countries' requests on a per-country basis. This means the system can be customized. This is not about CP. This is about a mass scanning system that is rolled out globally.

Apple said they would activate this feature on a per-country basis depending on local regulations. The database is the same globally; the hash dataset is embedded in iOS. Apple ships the same version of iOS in all countries, and you can verify that because each version is signed. So no... the way this is implemented cannot be used to perform country-specific matches.
 
What are you on about? I simply said they scan for CSAM, I didn't say they reported more than 4chan or anyone else.
So you're saying they break the law? Because the law says they have to report it if they find it. Or are you claiming there's virtually no CSAM on iCloud? (Personally, I find that as plausible as Apple's privacy claims, i.e. not at all.)
 
Well, the disadvantages don’t apply to the server side.

Folk are concerned about future changes in a country’s law that will compel Apple to scan for documents and images stored on the phone regardless of whether they’re going to iCloud or not. The same laws will apply to servers, but can be avoided by not uploading to the servers.

Apple will continue to protest that they will resist any call to extend the system to non-CSAM images. But if it’s the law they won’t resist, they’ll roll over.

Yes, they do. Apple still needs a source dataset to perform the match on the server, so adding non-CSAM hashes to the dataset and possible government overreach still apply. The only difference is that the server is a black box for security researchers -- which makes it easier to do any kind of surveillance.

Regarding the bold part: people who say this have no idea how software development works. The code implementation would need to be ported to other parts of the OS and implemented on the server side. Plus, they would need another source dataset, which would need to be discussed with other agencies. This is not a toggle they can change in settings. This requires coordination between multiple teams. They would need to implement this basically from scratch.

And all that effort for what? For a "surveillance software" that needs a dataset and runs a hash algorithm locally. It would take a couple of hours between a software update and someone raising a red flag.
 