None of that reports to the government, which is what the CSAM stuff does and hence the difference. One is for your personal benefit, the other is to turn you in.
CSAM detection reports to Apple, not the government. Apple already reports CSAM violations on iCloud to the government, and would do the same with this, after verifying it actually is CSAM.

The rhetoric I am speaking to is specifically that Apple will change its policy and alter the CSAM technology for spying purposes. But if they would change their policy for that, then why not also change their policy regarding all the much more valuable information already on your device?
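For what it's worth, the flow Apple has described is roughly the sketch below (Python, purely illustrative; the names and the human-review stub are my own placeholders, not Apple's code): nothing becomes decryptable or reportable until the match threshold is crossed, and a manual review sits between the threshold and any report to NCMEC.

```python
from dataclasses import dataclass

THRESHOLD = 30  # Apple's stated number of matches before anything can be revealed

@dataclass
class SafetyVoucher:
    matched: bool             # on-device hash matched the known-CSAM hash set
    encrypted_payload: bytes  # visual derivative, only decryptable past the threshold

def human_review_confirms(vouchers):
    """Hypothetical stand-in for Apple's manual verification step."""
    return all(v.matched for v in vouchers)

def server_side_flow(vouchers):
    matches = [v for v in vouchers if v.matched]
    if len(matches) < THRESHOLD:
        return "below threshold: nothing decryptable, nothing reported"
    if human_review_confirms(matches):
        return "report filed with NCMEC, which in turn works with law enforcement"
    return "false positives: no report"

# A library full of ordinary photos never produces a reportable event:
print(server_side_flow([SafetyVoucher(False, b"") for _ in range(1000)]))
```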
 
The question is surely why Apple did not consult more widely in the first place about an issue with this much societal impact. Even as an Apple fanboy, I am getting quite cross about Apple's arrogance; it is opening itself up to all sorts of criticism when, with a little deference to people outside the glass donut, it might make better, more harmonious decisions.

I can't help thinking Apple is going down this road in order to appease the FBI or NSA, so that it can keep its encryption in place but slowly assist more with dynamic filtering of potentially illegal content. Chilling either way.

Or pressure from China. China has been clamping down on tech (forcing companies like Apple to keep Chinese user data in China, stifling Jack Ma's IPO, passing a new data privacy law).
 
[…]? I mean, people completely trusted Apple with Siri until it came out that people were actually listening in on your conversations, so…
This was in the TOS for Siri. And it came out that contractors were doing the listening. Maybe the optics were bad, but contractors are usually bound to the same NDAs as employees.

The current suit notwithstanding…which IMO will just go away.
 
None of that reports to the government, which is what the CSAM stuff does and hence the difference. One is for your personal benefit, the other is to turn you in.
And again, you know that…how? Given Apple’s track record with things like Siri, it’s entirely plausible that they are using analytics and information from your iPhone for nefarious purposes.
 
CSAM detection reports to Apple, not the government. Apple already reports CSAM violations on iCloud to the government, and would do the same with this, after verifying it actually is CSAM.

The rhetoric I am speaking to is specifically that Apple will change its policy and alter the CSAM technology for spying purposes. But if they would change their policy for that, then why not also change their policy regarding all the much more valuable information already on your device?
And Apple reports to the CSAM board, which reports to law enforcement (government!). The fact that it doesn't report directly to the government makes absolutely no difference.

And reporting photos scanned in iCloud is fine by me; I only object to on-device scanning.

As for your last question: because they would be caught at it, and if you think this is a **** storm now, it would be a hundred times worse.
 
This was in the TOS for Siri. And it came out that contractors were doing the listening. Maybe the optics were bad, but contractors are usually bound to the same NDAs as employees.
I fail to see how that’s any different than what they’ve done here. They’ve essentially made the “terms” of this issue clear on multiple occasions. Use iCloud Photos and be subjected to scans, or turn it off and decline. You trust Apple isn’t doing anything else nefarious now with Siri, so what makes you think that they aren’t telling the truth here?
 
And Apple reports to the CSAM board, which reports to law enforcement (government!). The fact that it doesn't report directly to the government makes absolutely no difference.

And reporting photos scanned in iCloud is fine by me; I only object to on-device scanning.
Reporting CSAM that was scanned on iCloud is semantically identical to reporting CSAM that was scanned on-device and then verified on iCloud. This is assuming Apple does exactly what they said they would, without changing any policies.

On that note...
As for your last question: because they would be caught at it, and if you think this is a **** storm now, it would be a hundred times worse.
...you're making no sense. You're arguing that Apple would be caught if they were using your data for nefarious purposes now, but that they wouldn't be caught if they used the CSAM system for this?
 
I'll admit that you're more creative than I in contriving a what-if.

A few issues (according to their documentation; again, up to the reader to believe it or not) -- the process of revealing the >= 30 hits against the CSAM database relies on a handshake between Apple's servers and the vouchers from your phone -- i.e., China would also require access to Apple's proprietary server-side code, which I'm guessing Apple wouldn't be keen to fork over.
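To make the "handshake" concrete: threshold schemes like this are typically built on secret sharing, where the decryption key only comes into existence on the server once enough shares (one per matching voucher) have arrived. Here's a minimal Shamir-style sketch in Python -- a toy illustration of the idea only, not Apple's actual protocol, which additionally blinds the hash database and runs a private set intersection on top:

```python
import random

PRIME = 2**61 - 1  # Mersenne prime; all arithmetic happens in the field GF(PRIME)

def make_shares(secret, threshold, n):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Each matching voucher would carry one share of the account's decryption key;
# the server can only rebuild that key once >= 30 matching vouchers have arrived.
key = random.randrange(PRIME)
shares = make_shares(key, threshold=30, n=100)
assert reconstruct(random.sample(shares, 30)) == key  # any 30 shares: key recovered
print(reconstruct(shares[:29]) == key)                # almost surely False: 29 reveal nothing
```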

To your booby-trapped CSAM images argument, I see at least three possible issues:
  1. Assuming they are able to do this (see (2)), there is still the issue that you would need near-exact replicas of the images they want to flag -- i.e., you could attend the same protest, take pictures of the same people (perhaps from a slightly different angle), and still, with very high likelihood, produce a different hash (see the toy sketch after this list).
  2. Talking about an adversarial government that would have no problem generating new CSAM images does not immediately make the problem of training a GAN to do this easy. I would argue that they would have quite a hard time doing this without many millions of novel CSAM images. They could likely make semantically meaningless images matching the hashes of the images they want to flag easily enough, but creating novel CSAM with a particular hash is a much harder problem. Not impossible, but I gather very difficult without an absurd amount of data.
  3. Why the heck would China (or any other repressive country) choose this as the best spying vector? Aside from the absurd cost involved, it's just about the least efficient way to get the job done. Your phone already semantically tags nearly everything on it --- it would be much easier to require Apple to simply report whenever a user has content tagged with anything in <set of objectionable things>. If China has the ability to require Apple to bend to its every demand, then there's no way it would choose the CSAM hashing vector for spying.
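On point (1), a toy 64-bit average hash makes the distinction clear: perceptual hashes are built to survive re-encoding of the same image, not to group different photographs of the same scene, so a fresh shot from a slightly different angle is effectively a new roll of the 64 bits. (This is my own toy hash for illustration; NeuralHash is a different, learned function.)

```python
import numpy as np

def average_hash(gray_8x8):
    """Toy 64-bit perceptual hash: one bit per pixel, set if above the image mean."""
    return (gray_8x8 > gray_8x8.mean()).astype(np.uint8).flatten()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(0)
photo = rng.random((8, 8))                       # stand-in for a downscaled photo
reencoded = photo + rng.normal(0, 0.01, (8, 8))  # same image, recompressed/resized
new_shot = rng.random((8, 8))                    # a *different* photo of the same scene

print(hamming(average_hash(photo), average_hash(reencoded)))  # near 0: hash survives
print(hamming(average_hash(photo), average_hash(new_shot)))   # ~32: essentially random
```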
The images they want to flag would be shared images, like memes or other popular images; they don't have to be novel. Or they can plant these kinds of protest images to see who picks them up.

They don't need to train a GAN to do it; they have to game the PhotoDNA hashing algorithm, which has already been shown to be insecure, to get images into the NCMEC database. Or they can go after the NeuralHash system that was found in iOS 14. Since a lone hacker has already broken that system, it's not infallible.

Neither of those requires a state-level budget, though it sure helps. And yes, just forcing Apple by law to point their system at non-CSAM local scanning is the easiest way, but this shows that even if Apple refused, there are technical hacks to get into the system.
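And to illustrate what "gaming" a hash means in the abstract: if the hash is weak or its output space is small, you can simply search for an unrelated image that lands on a target value. Below is a deliberately tiny 16-bit toy in Python -- it has nothing to do with the internals of PhotoDNA or NeuralHash, it just shows why a perceptual hash that can be collided stops being trustworthy as evidence.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_hash(gray_4x4):
    """Deliberately weak 16-bit perceptual hash (one bit per pixel vs. the mean)."""
    return tuple((gray_4x4 > gray_4x4.mean()).astype(int).flatten())

target = toy_hash(rng.random((4, 4)))  # hash of the image we want to collide with

# Brute-force a second preimage: an unrelated image carrying the same hash.
for tries in range(1, 1_000_000):
    candidate = rng.random((4, 4))
    if toy_hash(candidate) == target:
        print(f"unrelated image collides after {tries} random tries")
        break
# With only 16 bits this takes on the order of 2**16 tries; real perceptual hashes
# are longer, but gradient-based attacks on the extracted NeuralHash model have
# shown that collisions can still be constructed deliberately.
```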
 
Each country has its own standards. I'm not going to criticize other countries, since my own has bizarre moral laws. The important thing to me is that a tech company shouldn't be the moral police for anyone.

I don't know about that. If you look back on all my comments, you will see that I'm pretty inclusive of different ideas. My bar is set really low here: if society as a whole does not value the well-being of its people and allows exploitative behaviour to run rampant without repercussion, that is a crime that transcends all political differences.

Notice that I never used the words human rights or universal suffrage, etc. I only spoke of the well-being of the people. If people have a good life and are able to live and progress, then it's good.

Selling child prostitution as a service for foreign soldiers, or making ladyboys for freak shows, clearly subtracts from this very low bar I set.

It's the same girls that make child porn. And I'm not against child porn just because. Child porn that is done by teens, not children, without monetary incentives, i.e., just to have fun or being exhibitionists, may be okay, but a legal line must be drawn somewhere, and it's hard to distinguish which is which. Technically, most people have done it while in school, sexting, etc. I don't know at which point a nude picture of an underage teenager would be considered CSAM.
 
I understand why this is a slippery slope but I don’t like the idea of child predators breathing a sigh of relief.
Do you really think child predators are going to turn on the feature that lets Apple scan their photos? Of course they aren’t. Instead, Apple will just be scanning everyone’s photos, invading everyone’s privacy, which they claim they don’t do.
 
Right???? Who wants child safety? What’s wrong with these people?
Wow dude. A month of articles about this, millions of comments, and you still don't understand that people's beef with this has nothing to do with child safety. People fought for our liberties, and others around the world are losing their lives trying to gain some. It's shocking to me that there are people such as yourself who are so willing to give them up under the guise that this is about child safety.

This is essentially like having someone in your home who pops up every time you bring in some new groceries to tell you that everything is okay, carry on. Sorry, I don't need or want that. You want to video or check everything I do while I am outside my home (i.e., the cloud)? Fine, I am okay with that. But I am not going to pay money for a home (i.e., the phone) that comes with someone patting down my pockets every time I come in.
 
The images they want to flag would be shared images, like memes or other popular images; they don't have to be novel. Or they can plant these kinds of protest images to see who picks them up.

They don't need to train a GAN to do it; they have to game the PhotoDNA hashing algorithm, which has already been shown to be insecure, to get images into the NCMEC database. Or they can go after the NeuralHash system that was found in iOS 14. Since a lone hacker has already broken that system, it's not infallible.

Neither of those requires a state-level budget, though it sure helps.
I'd be interested in some good peer-reviewed studies looking into exactly these types of scenarios.

And yes, just forcing Apple by law to point their system at non-CSAM local scanning is the easiest way, but this shows that even if Apple refused, there are technical hacks to get into the system.
The fact remains that if Apple is going to concede to government bullying, then there's absolutely no reason to worry about the CSAM system being gamed --- the existing data stores are much more valuable.
 
Err, Apple is already doing the scans on the server side like everyone else. They shouldn't be scanning people's phones to begin with.

Can you source this? You keep claiming it, but I don’t believe it to be true -- which would explain why Apple reported such an incredibly tiny amount of CSAM compared to other companies who ARE scanning their servers.
 
I fail to see how that’s any different than what they’ve done here. They’ve essentially made the “terms” of this issue clear on multiple occasions. Use iCloud Photos and be subjected to scans, or turn it off and decline. You trust Apple isn’t doing anything else nefarious now with Siri, so what makes you think that they aren’t telling the truth here?
If Apple is lying, it doesn't change anything. Speculating that they are lying is not a productive conversation, as it's all whataboutism.
 
So - anyone disturbed just a tad that there is a whole group at Apple that is studying how to identify Child Porn and how to program some computer to recognize it? That means they have to have examples of it....that means they have to study it, that means they have to develop requirements for this SW, that means they have to develop algorithms to figure out that this picture is child porn vs. a kid taking a bath or in a swimming pool....

Then someone has to review these results to make sure they are correct and meet the requirements of the SW product.

What kind of staff are working this task?
By having and reviewing examples of the Kiddie Porn, they are breaking the very same laws.
Who is vetting these Apple employees?
This is making me queasy to think about.

You have just told everyone you don't understand the system.

Apple doesn't have examples of such material.
Apple doesn't use it for training the CSAM Detection System.
Apple doesn't have to develop algorithms to recognise child pornography, because the system doesn't do that.
Apple isn't breaking the law since they aren't in possession of examples of child pornography.
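To be concrete about what the system does do: the on-device step is a lookup of a perceptual hash against a fixed, blinded set of hashes of already-known images supplied by NCMEC and other child-safety organisations. There is no classifier judging what a photo depicts, so Apple never needs to possess or train on such material. A minimal sketch in Python (the hash function and database values here are toy placeholders, not the real NeuralHash or the real database):

```python
# Stand-ins for the hashes of already-known images supplied by child-safety orgs.
KNOWN_HASHES = {0x3F9A12C4, 0x77BD0651}

def perceptual_hash(image_bytes: bytes) -> int:
    """Toy placeholder for NeuralHash; NOT the real function."""
    return hash(image_bytes) & 0xFFFFFFFF

def on_device_check(image_bytes: bytes) -> bool:
    # Pure set membership -- no model ever looks at what the photo depicts.
    # (In the real protocol the set is blinded, so the device itself cannot
    # even tell whether a given photo matched.)
    return perceptual_hash(image_bytes) in KNOWN_HASHES

print(on_device_check(b"an ordinary holiday photo"))  # False: not in the known set
```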
 
Sounds like you're the one wetting yourself.

"No idea what they are yelling about" - Are you referring to "security researchers, the privacy whistleblower Edward Snowden, the Electronic Frontier Foundation (EFF), Facebook's former security chief, politicians, policy groups, university researchers, and even some Apple employees?"

You must know more than them, right?

I would say I know more about the CSAM detection system than they did when they issued those statements.
 
People really need to stop insisting that just because we object to something, we must not understand it. One more time for the people in the back: we understand how it works, and we don’t want our devices spying on us.

I'm pretty sure most people don't understand the system in detail.
It involves math, even simple math like trigonometry.
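For anyone curious what the "trigonometry" refers to: the final hashing step is reported to be a locality-sensitive hash, which just records which side of a set of random hyperplanes an embedding vector falls on -- the sign of a dot product, i.e. the cosine of the angle between vectors. A small Python illustration with made-up dimensions (my own toy, not Apple's parameters):

```python
import numpy as np

rng = np.random.default_rng(42)
hyperplanes = rng.normal(size=(96, 128))  # 96 random hyperplanes in a 128-dim space

def lsh_bits(embedding):
    # Sign of each dot product = which side of each hyperplane the vector lies on;
    # two vectors separated by a small angle tend to get the same bits.
    return (hyperplanes @ embedding > 0).astype(int)

v = rng.normal(size=128)
v_similar = v + 0.05 * rng.normal(size=128)  # nearly the same direction (small angle)
v_other = rng.normal(size=128)               # unrelated vector

print(np.sum(lsh_bits(v) != lsh_bits(v_similar)))  # few differing bits
print(np.sum(lsh_bits(v) != lsh_bits(v_other)))    # roughly half the bits differ
```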
 
It certainly looks like "cancel this for freedom" is the new "I watch it for the plot".
Do you recall when “the gays” were after your kids? Perhaps you are too young for that. Every repressive government uses “the children” as a rallying cry to erode freedoms that people still fight and die for every day. If you think this is to save children, do you also think the earth is flat?
 