Poor analogy, since your photos aren't uploaded and nobody looks at them. (Or even stores them!)

That's another poor analogy, since the list doesn't go back to a central database. It's like your house itself does that comparison with a list it was given, if your house could be trusted to keep secrets.

Look, these poor analogies just aren't helping anyone understand. Questions are good, but poor analogies are just unnecessarily alarming.
Poor thought. Apple has stated they can even detect edited versions of existing CSAM: not just a single changed pixel, but serious changes added to it.
But I get it. Human nature as always: there are people who share this exact thought and believe there's nothing to worry about, until it's too late and there is no way to escape.
 
Basically, no more Apple OS upgrades for me on any of my devices. It’s probably time to go back to Linux again and my next phone will surely be an Android.

Privacy!
You should look into a Linux phone rather than Android. Google has most likely done the same thing already.
 
  • Like
Reactions: decafjava
That just makes it worse! We now have no recourse for alternative devices if we don’t like this move?
 
I think it's likely that given the horrible potential for abuse, Apple may shelve the feature for iOS 15.0 and iPadOS 15.0. They run the risk of getting into legal trouble with the European Union, for starters.
 
  • Like
Reactions: Havoc035
This is wrong. It's a perceptual hash that does not only match bit-identical files. From Apple's own summary:

"NeuralHash is a perceptual hashing function that maps images to numbers. Perceptual hashing bases this number on features of the image instead of the precise values of pixels in the image. "

It gets worse: the hashing function uses a proprietary ML network trained by Apple. This means that it is very difficult or impossible to audit the algorithm's accuracy.
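For anyone unfamiliar with the concept, here is a toy "average hash" in Python. To be clear, this is not Apple's NeuralHash (that one is a trained neural network); it's just an illustration of why a slightly edited or re-encoded copy of an image can still match, where a cryptographic hash would change completely. The file names and the 8x8 size are made up for the example, and it assumes the Pillow library.

from PIL import Image  # requires the Pillow package

def average_hash(path, size=8):
    # Shrink to a tiny grayscale thumbnail so only coarse image features survive.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: brighter than the average or not.
    bits = "".join("1" if p > avg else "0" for p in pixels)
    return int(bits, 2)

def hamming_distance(h1, h2):
    # Number of differing bits; a small distance means "perceptually similar".
    return bin(h1 ^ h2).count("1")

# Hypothetical files: the second is the same photo, resized and recompressed.
h1 = average_hash("photo.jpg")
h2 = average_hash("photo_recompressed.jpg")
print(hamming_distance(h1, h2))  # typically only a few bits out of 64

The point is that "matching" means "close enough in hash space", not "byte-identical", which is exactly why the inability to audit the false-positive behaviour matters.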
I stand corrected then - this is definitely an overreach. I need better sources for my info.

If and when this goes live in Canada, I will be discontinuing my use of iCloud for photos.
 
This is BAD no matter how you look at it. There's absolutely no justification for looking into someone's private pictures. Child abuse prevention should not give government/companies a right to do this.

Already I wasn't using iCloud, but now I would never use it.

I doubt this will do anything to deter serious abusers, while it will open the gates to mass surveillance.

As an example, I have naked pics of my kids on my phone. So what? Now, what would happen if I upload them to iCloud?

Governments will also abuse this.
 
  • Like
Reactions: 09872738 and -DMN-
Apple should add scanning for:

1. Photos of the confederate flag.
2. Photos of people not wearing Covid masks.
3. Photos of Chinese people disrespecting the Chinese government.
4. Photos of Middle Eastern women not wearing burkas.
5. Photos of a group of people with too many whites, not enough blacks.
I don’t understand why they would do that. Apple is comparing photo uploads with KNOWN IMAGES OF CHILD ABUSE, which none of your examples are.
 
Your photo has a hash collision with a known CSAM image. It then gets uploaded to Apple for review. Their server is already compromised, or gets compromised later, and a hacker takes your photo. They then distribute it to the world.

Great intentions, but this is just bad.
It does not. It just has to be close enough.

Also, remember the FBI phones? Who's to say the government won't inject other hashes into the CSAM database to look for other 'bad' guys?
 
I doubt this will do anything to deter serious abusers, while it will open the gates to mass surveillance
Serious abusers realised long ago not to use such devices for this, way before machine learning became a thing. Implementing this tech won’t catch even a single extra offender, but all hell has broken loose.
 
  • Like
Reactions: DanTSX and ssgbryan
Serious abusers realised long ago not to use such devices for this, way before machine learning became a thing. Implementing this tech won’t catch even a single extra offender, but all hell has broken loose.
Criminals tend to be incredibly stupid. You give them far too much credit. Most of these weirdos who get caught have it in plain sight all over their houses, with things literally labeled for what they are ... like Epstein. He literally labeled all of his crap with what he was doing.
 
As long as the "friend" isn't a minor child, I don't see the problem between consenting adults, although Dr. Phil may have some questions for both hairless parties.
I have two words for you (if you are old enough to remember).

Traci Lords.
 
Criminals tend to be incredibly stupid. You give them far too much credit. Most of these weirdos who get caught have it in plain sight all over their houses, with things literally labeled for what they are ... like Epstein. He literally labeled all of his crap with what he was doing.
Just like some sane people are idiots, so are some insane people. He chose to flex his criminal activities. That’s on him. And btw, smarter criminals will know how to hide themselves well, like building their own cellphone tower.
 
It does not. It just has to be close enough.

Also, remember the FBI phones? Who's to say the government won't inject other hashes into the CSAM database to look for other 'bad' guys?
Indeed. Apple's system is designed specifically so that the owner of the device has no way of finding out what images it is looking for. The owner also has no way of knowing if any of their images generated a hit. It's even possible that individual devices receive different databases, so Apple or some 3-letter agency could target individuals with specific databases. It's all completely opaque, with zero accountability.
 
“But the practice is already widespread…” sounds like a cop-out. How many evils throughout history have been “widespread”? And how much blood, sweat, toil and tears did it take to fight them?

I don’t ever buy the logic that just because something is in place means it’s right and we can’t change it. We are humans dealing with human-made problems. Let’s not hide behind the illusion of powerlessness.
 
  • Like
Reactions: mr_jomo and DanTSX
I think it's likely that given the horrible potential for abuse, Apple may shelve the feature for iOS 15.0 and iPadOS 15.0. They run the risk of getting into legal trouble with the European Union, for starters.
That could be a reason why they are only deploying this in the US initially. On the other end of the spectrum we have countries like China, which will probably immediately start pressuring Apple to scan for other "interesting" images once they deploy it ...
 
Allow me to introduce you to Mr Slippery Slope. If they can scan for this.....


I’m betting that we are within two years of not being able to type “no no words” in iMessage.

Then they start doing things like crawling for identified badthink images using this technology (say anti-vaccine memes for a handy example) and deleting them from your iCloud.


For the children of course!
 
  • Haha
  • Like
Reactions: -DMN- and Shirasaki
Somebody that is good with statistics tell me if I’m right or wrong.

In Apple’s document here https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf they claim an account has only a one in one trillion chance of being falsely flagged. Because there are nowhere near one trillion iCloud accounts, we can probably assume there has never been, and probably never will be, an account incorrectly flagged. But by the same token, how would they know the one in one trillion figure is accurate, given that the situation has never occurred?
How do we know it's never occurred?
 
  • Like
Reactions: DanTSX
How do we know it's never occurred?
We don’t. That’s why I said ‘probably’. But I’d like somebody who knows stats to weigh in and explain how Apple can say one in one trillion with any confidence.
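In the meantime, here is my rough guess at how such a figure gets produced (happy to be corrected by someone who actually knows stats): you don't measure it, you derive it from a model. If each photo has some small, independent probability p of falsely matching a database hash, and an account is only flagged after at least t matches, then the account-level false-flag probability is a binomial tail, and that collapses to absurdly small values very quickly. Every number below is made up for illustration; these are not Apple's actual parameters.

from math import lgamma, log, log1p, exp

def log_binom_pmf(n, k, p):
    # log of P(exactly k false matches among n photos), Binomial(n, p)
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def account_false_flag_prob(n, p, t):
    # P(at least t false matches), summed in log space to avoid overflow
    return sum(exp(log_binom_pmf(n, k, p)) for k in range(t, n + 1))

p = 1e-6     # assumed per-photo false-match rate (made up)
n = 20_000   # assumed number of photos in one account's library (made up)
t = 30       # assumed threshold of matches before an account is flagged (made up)

print(account_false_flag_prob(n, p, t))  # astronomically small, far below 1e-12

So the one-in-a-trillion claim would be a calculation, not an observation, and it's only as trustworthy as the assumed per-image rate and the independence assumption. Which, I think, is exactly the gap you're pointing at.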
 