It would be quite a leap to go from that to "that screenshot contains a racist joke, you should feel terrible".

Not everything created with good intentions is destined to be weaponised.
Not really.

The slippery slope is a real thing.

Breaking Points on YouTube just posted a video about an ongoing story: the expansion of DNA collection by the FBI in the US over the last few decades.

It started out with something 99% of people could get on board with: only collecting DNA from convicted murderers and rapists...

Expanded to all felonies...

Then all misdemeanors....

Now in some states, you only need to be accused of a crime or under investigation to be compelled to submit a DNA sample. Something that a lot fewer people would be on board with.

Yes, good things could come out of it, but at a huge cost to civil liberties.

It looks like there is even DNA sampling from wastewater, and now from the air. Reminds me of the movie Gattaca (1997).

Here is the Breaking Points clip:


The slippery slope commences. Child abuse: such an easy justification for putting the infrastructure for censoring in place.
Then came the clamor for "hate speech". Then Thought Crime.
Yup.

Of course there will be plenty of people who will say that would never happen, but it doesn't start at the bottom of the slope: it starts with something almost everyone can agree with, and slides farther and farther down.

Child abuse is the very top of the slope.
 
What worried me about all this was the scenario where you shoot a photo of your toddlers in the bath (or something else similarly innocuous) and ten minutes later your pad is being raided by cops. AI isn't smart enough yet to know the difference.
Yes. That was my concern as well. Along with adults having fun by sharing content. There are definitely young looking adults.
 
So you never bothered to read how this actually works and just worried about something for the wrong reasons 🧐
We all read it. It tolerated some level of photo manipulation and Photoshop work, so there is some leeway involved. It’s not a perfect pixel-by-pixel comparison.
 
Yes. That was my concern as well. Along with adults having fun by sharing content. There are definitely young looking adults.
Young-looking adults wouldn't appear in a CSAM database. I don't know how many times we have to repeat that your private pictures wouldn't be exposed.
 
Young-looking adults wouldn't appear in a CSAM database. I don't know how many times we have to repeat that your private pictures wouldn't be exposed.
Read my other comment. I looked into this heavily. It still detects images through manipulations/distortions/cropping/photoshop/etc. It is NOT a perfect pixel-by-pixel comparison: even though those changes would alter an exact hash, it could still flag the image.

“The purpose of the NeuralHash is to ensure that identical and visually similar images produce the same hash”

Straight up from the white paper. I can’t copy it verbatim from my phone for some reason. But check the NeuralHash section.
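To make “identical and visually similar images produce the same hash” concrete, here is a rough sketch using a classic difference hash (dHash). This is not Apple's NeuralHash (that is a neural-network-based perceptual hash); it is just the simplest illustration of why a perceptual hash survives recompression or resizing while an ordinary cryptographic hash does not. The file names are made up.

```python
# Toy perceptual hash (dHash) next to a cryptographic hash.
# Illustrative only: this is NOT Apple's NeuralHash, just the same general idea,
# i.e. visually similar images should map to the same (or a very close) hash.
import hashlib
from PIL import Image

def dhash(path, hash_size=8):
    """Difference hash: compare adjacent pixel brightness on a tiny grayscale thumbnail."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(1 if left < right else 0)
    return sum(bit << i for i, bit in enumerate(bits))

def sha256_file(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# "photo.jpg" and "photo_recompressed.jpg" are hypothetical files: the same picture,
# one of them re-saved at a different JPEG quality.
print(sha256_file("photo.jpg") == sha256_file("photo_recompressed.jpg"))  # almost certainly False
print(dhash("photo.jpg") == dhash("photo_recompressed.jpg"))              # typically True
```

That robustness to edits is the property the white paper describes; the worry in this thread is that “visually similar” is a judgement call, and that is exactly where false positives can creep in.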
 
That’s not how it would have worked
No, but it might have reported you because an AI algorithm hallucinated that the pattern on your shower curtain matched something in a known, nasty CSAM image.

Or, more accurately, Apple were effusive about how their AI hashing system couldn't be fooled by cropping, scaling or re-colouring "bad" images but offered no convincing explanation as to why that didn't create a risk of false positives when applied to - probably - tens of billions of images, without any way for a human to check that the actual images matched... and there are a lot of dangerous misconceptions about low-probability events circulating where they shouldn't.

AI image detection is quirky and hard to predict.
 
without any way for a human to check that the actual images matched
First you would have needed to reach an unspecified threshold of hash matches. Let's say it's 10 pictures. Even if you have 2-3 false positives, it's very unlikely you would reach this threshold without having illegal content in your library. So nothing would have been flagged to Apple until you reached this threshold. Then there was a human review of only the flagged pictures, to ensure that they did indeed match the CSAM database. Only then would they have been reported to law enforcement. The process was far from perfect, but Apple designed it with a few safeguards to avoid false positives being reported to law enforcement.
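For anyone who finds it easier to read as code, here is a minimal sketch of that flow. The threshold value and the helper names are illustrative, not Apple's actual parameters, and in the real design the threshold was enforced cryptographically (threshold secret sharing), so nothing below the threshold was even visible server-side; this only captures the logical order of the steps.

```python
# Minimal sketch of the threshold-then-human-review flow described above.
# The threshold value and helper names are illustrative, not Apple's actual parameters.

THRESHOLD = 10  # made-up number, echoing the "let's say it's 10" above

def process_account(match_flags):
    """match_flags: one bool per uploaded photo, True if its hash matched the CSAM list."""
    matches = sum(match_flags)
    if matches < THRESHOLD:
        return "below threshold: nothing is visible to Apple"
    if not human_review_confirms(match_flags):
        return "flagged, but reviewer found false positives: discarded"
    return "confirmed matches: reported to law enforcement"

def human_review_confirms(match_flags):
    # Hypothetical stand-in for the manual review of the flagged image derivatives.
    return False
```

The whole argument in this sub-thread is about how often the first branch would wrongly be skipped, which is a question about the false-positive rate, not about the flow itself.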
 
No, but it might have reported you because an AI algorithm hallucinated that the pattern on your shower curtain matched something in a known, nasty CSAM image.

Or, more accurately, Apple were effusive about how their AI hashing system couldn't be fooled by cropping, scaling or re-colouring "bad" images but offered no convincing explanation as to why that didn't create a risk of false positives when applied to - probably - tens of billions of images, without any way for a human to check that the actual images matched... and there are a lot of dangerous misconceptions about low-probability events circulating where they shouldn't.

AI image detection is quirky and hard to predict.
Yes! Exactly! Their white paper alluded to “Good luck trying to fool it with Photoshop!” And some of us get spoken down to as though we didn’t look into it.
 
Young-looking adults wouldn't appear in a CSAM database. I don't know how many times we have to repeat that your private pictures wouldn't be exposed.
...and how often does it have to be repeated that the proposed system was designed to match similar-looking images, because requiring an exact match would be too easy to fool? When you're scanning the images from a billion iPhone users, even a million-to-one chance of a false match will be too much to properly investigate.
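Putting rough numbers on that scale argument (every figure below is an illustrative assumption, not a published rate):

```python
# Back-of-the-envelope arithmetic for the scale argument above.
# Every number here is an illustrative assumption, not a published figure.
users = 1_000_000_000        # "a billion iPhone users"
photos_per_user = 2_000      # guess at an average photo library
false_match_rate = 1e-6      # the "million-to-one chance of a false match" per image

expected_false_matches = users * photos_per_user * false_match_rate
print(f"{expected_false_matches:,.0f}")  # 2,000,000 images to sift through
```

Whether that pile is manageable depends entirely on the per-account threshold discussed elsewhere in the thread.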
 
Not often a company listens, changes their mind, and explains why. Good on Apple here.

Most decisions have trade-offs, and Apple thought the privacy-preserving method they invented struck enough of a balance that the slippery-slope trade-offs were acceptable. Once more security researchers got involved, they decided the trade-offs were not worth it.

It's also now clear that Apple invented this new method knowing that E2EE was coming. Now that E2EE is here, Apple has punted back to the governments to decide what to do, if anything.
 
First you would have needed to reach an unspecified threshold of hash matches. Let's say it's 10 pictures. Even if you have 2-3 false positives, it's very unlikely you would reach this threshold without having illegal content in your library. So nothing would have been flagged to Apple until you reached this threshold. Then there was a human review of only the flagged pictures, to ensure that they did indeed match the CSAM database. Only then would they have been reported to law enforcement. The process was far from perfect, but Apple designed it with a few safeguards to avoid false positives being reported to law enforcement.
If everyone is saying there is NO CHANCE of false positives, why was there a plan in place to address just that? That’s what some of us are saying. Yet we have others saying it’s just NOT POSSIBLE, but Apple themselves knew it was possible, which is why they implemented such a threshold.
 
If everyone is saying there is NO CHANCE of false positives, why was there a plan in place to address just that? That’s what some of us are saying. Yet we have others saying it’s just NOT POSSIBLE, but Apple themselves knew it was possible, which is why they implemented such a threshold.
I'm not saying that this process was perfect, nor that there was no chance of a false positive being reported. But if your concern is the two pictures of your toddler in a bath that you have in your library, the chances of even being flagged to Apple were very, very close to zero. The chances of being reported to law enforcement were even smaller.

For the record, I'm glad this feature is dead. I just think that many people misunderstood how it was designed to work.
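To put a number on “very, very close to zero”, here is the same kind of back-of-the-envelope calculation for a single account, assuming (purely for illustration) a 20,000-photo library, a one-in-a-million per-image false-match rate, a threshold of 10, and independent errors:

```python
# Probability that an innocent library reaches the match threshold by chance alone.
# All inputs are illustrative assumptions; independence between photos is also assumed.
from math import comb

n = 20_000      # photos in the library
p = 1e-6        # per-image false-match probability
threshold = 10  # echoing the "let's say it's 10" earlier in the thread

# Sum the binomial tail P(X >= threshold) directly; summing the tail avoids the
# rounding problem of computing 1 - (a number indistinguishable from 1.0).
p_flagged = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold, 50))
print(f"{p_flagged:.2e}")   # roughly 3e-24: effectively zero for any single account
```

Even multiplied by a billion accounts that stays negligible, but only under these assumptions; if the real per-image rate were much worse, or the errors were correlated (the same shower curtain fooling the model everywhere), the arithmetic changes quickly.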
 
Read my other comment. I looked into this heavily. It still detects images through manipulations/distortions/cropping/photoshop/etc. It is NOT a perfect pixel-by-pixel comparison: even though those changes would alter an exact hash, it could still flag the image.

“The purpose of the NeuralHash is to ensure that identical and visually similar images produce the same hash”

Straight up from the white paper. I can’t copy it verbatim from my phone for some reason. But check the NeuralHash section.

A pixel-perfect matching system would be utterly useless for obvious reasons. The truth is that we simply have no idea how well Apple's hash matching would have worked. From their perspective, not wanting to host and store illegal and despicable content on their servers, I totally understand how they arrived at this solution, even if they did botch the PR face of it. All the various distributed anonymous storage platforms that have launched over the years (FreeNet, IPFS, TOR)... the thought of hosting a node myself on my own gear makes me queasy considering all the lowlife filth I'd certainly be unknowingly serving, no matter how fragmented or encrypted it is.
 
In the same sense that a straw man is a "real thing", i.e., a logical fallacy.
"Slippery slope" is not a fallacy:
A leads to B​
B risks leading to C​
Therefore A risks leading to C​

"The Slippery Slope Fallacy" is a fallacy (or, rather, a common weak argument trope) because it misses that middle step:
A leads to B​
(Wave hands)​
Therefore A will inevitably lead to Z and we'll all be murdered in our beds.​

Look at your favourite fallacy meme site (e.g. https://yourlogicalfallacyis.com/slippery-slope) and and read beyond the headline.
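One way to make that missing middle step precise is to treat “risks leading to” as a conditional probability; this framing is mine, not either poster's:

```latex
% Valid "risk" form: chain the conditional probabilities.
% By the law of total probability, and assuming the risk from B does not depend
% on how we arrived at B (i.e. P(C | B, A) = P(C | B)):
\[
  P(C \mid A) \;=\; P(C \mid B, A)\,P(B \mid A) \;+\; P(C \mid \neg B, A)\,P(\neg B \mid A)
  \;\ge\; P(C \mid B)\,P(B \mid A).
\]
% The compound risk shrinks multiplicatively with each extra step, but it never
% becomes zero, which is all that "A risks leading to C" claims.
%
% Fallacious form: assert the endpoint with certainty,
\[
  P(Z \mid A) \approx 1,
\]
% while never estimating any of the intermediate conditionals. That unestimated
% gap is the "(Wave hands)" step above.
```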
 
Not really.

Given the real example I posted, the same concept can be applied to the CSAM issue.
Yes really. That's literally what it is.

Because A led to B, X must lead to Y. It's not a persuasive argument because it relies on fundamentally flawed reasoning.
 
"Slippery slope" is not a fallacy:
A leads to B​
B risks leading to C​
Therefore A risks leading to C​

"The Slippery Slope Fallacy" is a fallacy (or, rather, a common weak argument trope) because it misses that middle step:
A leads to B​
(Wave hands)​
Therefore A will inevitably lead to Z and we'll all be murdered in our beds.​

Look at your favourite fallacy meme site (e.g. https://yourlogicalfallacyis.com/slippery-slope) and and read beyond the headline.
Causality exists, no doubt.
 
Because A led to B, X must lead to Y.
X could lead to Y, based on the A-to-B example.

The argument is not that it would happen, but that it could happen. My post was a reply to someone saying it was a leap to suggest that CSAM scanning could lead to non-child-abuse-related scans.
 
A pixel-perfect matching system would be utterly useless for obvious reasons. The truth is that we simply have no idea how well Apple's hash matching would have worked. From their perspective, not wanting to host and store illegal and despicable content on their servers, I totally understand how they arrived at this solution, even if they did botch the PR face of it. All the various distributed anonymous storage platforms that have launched over the years (FreeNet, IPFS, TOR)... the thought of hosting a node myself on my own gear makes me queasy considering all the lowlife filth I'd certainly be unknowingly serving, no matter how fragmented or encrypted it is.
That’s precisely the point. The AI is making some judgement calls. Visually similar images produce the same hash, not just the exact same picture cropped. So someone’s toddler COULD produce a false positive. Apple knew so and implemented a workflow for it.

Personally I wouldn’t store those pictures. But it is still a concern, because some judgement is happening without your knowledge, and there's no way to clear it by saying “no, that’s my kid….here is proof” or “no, that’s my wife, she just looks young….see, she is 22!”

If there was a way to clear out the false positives with proof, I would have been fine with it.
 
No, but it might have reported you because an AI algorithm hallucinated that the pattern on your shower curtain matched something in a known, nasty CSAM image.

Or, more accurately, Apple were effusive about how their AI hashing system couldn't be fooled by cropping, scaling or re-colouring "bad" images but offered no convincing explanation as to why that didn't create a risk of false positives when applied to - probably - tens of billions of images, without any way for a human to check that the actual images matched... and there are a lot of dangerous misconceptions about low-probability events circulating where they shouldn't.

AI image detection is quirky and hard to predict.
Again, that isn’t how it works. Look up PhotoDNA.
 