You guys are missing the point.

It's not about scanning for CSAM.

It's about scanning AT ALL.

The opportunity for abuse is HUGE. All it takes is changing the hash database, and now you can scan for political messages, terrorism, BLM, Antifa... you name it.
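To make that concern concrete, here is a minimal sketch (hypothetical code; Apple's actual system uses a perceptual NeuralHash, not SHA-256) of why a hash-matching scanner is only as neutral as the database it is handed:

```python
import hashlib

def is_flagged(image_bytes: bytes, hash_db: set) -> bool:
    """Flag an image if its hash appears in the supplied database."""
    return hashlib.sha256(image_bytes).hexdigest() in hash_db

# The scanning code never changes; only the database does.
csam_db = {hashlib.sha256(b"known-abuse-image").hexdigest()}
dissent_db = {hashlib.sha256(b"protest-flyer").hexdigest()}

photo = b"protest-flyer"
print(is_flagged(photo, csam_db))     # False
print(is_flagged(photo, dissent_db))  # True
```

Whoever controls the database controls what gets flagged; the matching code itself never has to change.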

Do you have a problem with Facebook, Google, Microsoft, and others scanning currently?

"Facebook reported more than 20 million child sexual abuse images on its platform in 2020..."

Do you think this keeps these platforms busy enough? Or will they also add scanning for BLM, Antifa, etc?

Look... I get your argument. But in terms of a slippery slope... I'm afraid we're already rolling down the hill.

The only cure is to use a flip-phone and store your vacation photos on a local NAS.

 
But it wouldn't get to the courts.

Once the Apple employee verifies that it's *not* a known NCMEC CSAM photo... and instead it's a picture of a mermaid at Disneyland... it's done.

Besides... there have to be 30 flagged photos before this manual review is even triggered. Do you think the hashes will create false positives 30 times in a row for a single user?
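For a back-of-envelope sense of those odds (the per-photo false-match rate below is an assumed number; Apple only published an aggregate one-in-a-trillion-per-account figure), the chance of accidentally crossing a 30-match threshold is a binomial tail:

```python
from math import comb

def p_at_least_k(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p), summed term by term with an
    iterative ratio to avoid converting huge binomial coefficients."""
    term = comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
    total = 0.0
    for i in range(k, n + 1):
        total += term
        if term == 0.0:
            break
        term *= (n - i) / (i + 1) * p / (1 - p)
    return total

# 10,000 photos, assumed 1-in-a-million false match per photo:
print(p_at_least_k(10_000, 1e-6, 30))  # astronomically small
```

Even with a generous false-match rate, 30 independent hits for one account is effectively impossible by chance; the real risk discussed later in the thread is deliberate gaming, not accidents.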

And let's be clear... this sort of scanning is ALREADY happening across billions of photos on Facebook, Google Drive, OneDrive, etc. This didn't start with Apple and their recent announcement.

Is the court system clogged with phony accusations?

No.

Look... I know a guy who is currently serving a 7-year sentence for possessing and distributing child porn.

He did have this objectionable material on his Android phone and in his Google Drive and GMail. And he was investigated for 6 months by a child-safety task-force before local law enforcement was even called. The court case took two years. He was first arrested in April 2019 and he went to jail in May 2021.

The system works at catching the *real* bad guys.

It's not gonna send you to jail accidentally.
No system is perfect; that is the point. And this is a case where I don't want to be the guy they make the mistake on. Once accused of these crimes, you can be as innocent as the driven snow and you will never get your reputation back. Never, no matter what the authorities do to make you whole.
 

All I'm saying is... there are a lot of steps that have to take place before the cops come to kick in your door, drag you to jail, and falsely ruin your reputation.

If we believe the "one in a trillion" claim... your pictures won't even get flagged in the first place.

But if, for some reason, one of your innocent pictures accidentally gets flagged by automation... it will be reviewed by a human and dismissed.

That's as far as it will go.

Look... let's try to remember that child porn scans have been happening for YEARS by all the big platforms. This is nothing new.

And yet... I don't remember anyone worrying about being the one guy who gets his life ruined by a false claim.

Can someone have a problem with Apple's on-device scanning versus cloud scanning? Sure!

But neither technique will mistakenly send cops to your house.

Honestly... did people worry about hashes and databases and mistaken photos three weeks ago?

:p
 
Look... let's try to remember that child porn scans have been happening for YEARS by all the big platforms. ... But neither technique will mistakenly send cops to your house.
This ^^^^^

Let us not get all wrapped around the axle over something that, near as I've been able to discern, has so little a probability of happening as to be effectively zero. There's enough wrong with this plan of Apple's without having to Make Stuff Up.

Mind you: There's a distinct possibility the system can be gamed so as to register false positives on the first pass. But, given the safeguards Apple claims will be in place, those should never result in the FBI busting down your door.
 
The above being said, I don't want to downplay the potential problem with false positives, either. While my understanding has it that false positives are exceedingly unlikely to result in a problem for iThings users, if enough people set out to game the system, it could, conceivably, render it of questionable utility. Consider:

Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a couple of hours.”
[Emphasis added]

Full article: Apple’s CSAM detection tech is under fire — again (N.B.: Article is three days old. So this isn't exactly new news.)

Now, if one researcher was able to produce a hash collision in just a couple hours, imagine what a group of hacktivists could accomplish if they wished to demonstrate their annoyance with what Apple's doing. Or 'net vandals, out simply to have a little fun at Apple's expense?
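To see why collisions get easier as the effective hash space shrinks, here is a toy brute-force search (illustrative only; NeuralHash is a perceptual hash with a far larger output space, but the principle is the same):

```python
import hashlib

def tiny_hash(data: bytes) -> int:
    # Keep only the first byte of SHA-256: a deliberately tiny 8-bit "hash".
    return hashlib.sha256(data).digest()[0]

collision = None
seen = {}
for i in range(300):            # pigeonhole: 257 distinct inputs force a repeat
    candidate = f"image-{i}".encode()
    h = tiny_hash(candidate)
    if h in seen:
        collision = (seen[h], candidate)
        break
    seen[h] = candidate

print(collision)  # two different inputs that land on the same hash value
```

With only 256 possible outputs a collision is guaranteed quickly; the point of the quoted tweet is that NeuralHash's much larger space still fell to a deliberate search within hours.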
 

True... but that's why the human review is so important.

Apple did say that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year. So I don't know what that means in the long run.

I'm glad people are testing as much as they can today. We'll have to see what further testing can be done once it gets rolled out fully.

Though if people decide to game the system... and Apple starts getting thousands of flags a day of innocent birthday party images... I guess they'll have to go back to the drawing board.

Question... what technology are the Facebooks, Googles, and Microsofts of the world using right now?

Is it more or less accurate? Can it be gamed?
 
Sure, OK, no argument. The simple reason I am leaving is that I am going to build my own cloud and manage my own media and files, something I should have done a long time ago. I'll continue to use my iPhone on iOS 14 until I get my new hearing aids next year, and then I'll drop the iPhone and be gone. The idea of being accused, tried, and convicted is what bothers me. But to each his own; we all make our own calls.
 
the idea of being accused, tried, and convicted that bothers me...

Again... all the big cloud services have been scanning photos for YEARS yet nobody had a problem with it then. ;)

i am going to build my own cloud and manage my own media and files

I applaud your plan to build your own cloud now.

I'm fond of using a Synology NAS for personal storage. They have tons of great apps to host your own photo storage, have your own PLEX server, etc.

You'll love it. 👍
 
Yeah, getting ready to pick a Synology now. I have never been comfortable with cloud storage from the get-go, so this is a perfect wake-up call for me. The only real bummer is that I have become a huge fan of Apple Notes and don't see anything like it on Windows... yet, anyway.
 
Yeah, getting ready to pick a Synology now. I have never been comfortable with cloud storage ...
We already have a Synology NAS, but it's currently 60% full and not backed up. (There's nothing on it that would be a disastrous loss. It's used only for a networked DVR and for the surveillance system.) There's the main Linux server, but its filesystems are even fuller.

I imagine once I upgrade the main fileserver I'll look to putting a private cloud storage solution on it.

I used DropBox a bit while we were still on Android. Never quite trusted it, so I never put anything at all sensitive on it. I was lulled into complacency by my trust in Apple's claims of dedication to preserving privacy. What a wake-up call, eh? Won't get fooled again.
 
If any of you think anything is truly private and "yours" these days... well, it isn't. Look around: how many cameras are there, all feeding a cloud? Your phone knows exactly where you are at all times; that info goes to a cloud too. We are tracked in everything; nothing is yours anymore. Get used to it, or live in a cave with nothing.
 
Do you have a problem with Facebook, Google, Microsoft, and others scanning currently? ...

That's Facebook scanning their own stuff on their own server.

That's not Facebook scanning your PHONE before you even post it.
 
Nope! You would not! Because there are plenty of people who would not believe the wrongly accused. If one gets accused of child porn or rape, something stays behind in terms of reputation, even if the court decides he was innocent.
Exactly. It's like the old joke "Headlines go on the front page at 72 point font... retractions go on page 23 in 8 point font..."
 
I'm hearing a lot of conflicting stuff in this thread.
"Apple only scans for CSAM for stuff that uploads to iCloud".
"Apple has E2EE for pictures"
"Apple will review your pictures if there is a CSAM Hit."
"Apple can't review your encrypted stuff."
"But Apple will review your encrypted stuff"
So, what is it? Is the stuff encrypted when sent to iCloud? (I think the answer to that is "no.") And if that is the case, why doesn't Apple scan for it on iCloud? And if they ARE scanning for it in iCloud, then what is the point of scanning our devices? If stuff IS encrypted when sent to iCloud, then how does Apple review it in person when there are CSAM hits? Does Apple have a master decryption key? (I think the answer to that one is "yes.") And if they have a master decryption key, why do they need to scan it on our devices if they can decrypt it anyway in iCloud?

There's a lot of conflicting stuff. And, once again, this is a CHOICE by Apple to proactively scan our devices, not a requirement of the law.

It's very much like the COVID meme that I saw recently... you have a 99.997% chance you won't get cancer... but we're going to give you chemo anyway. Well, there's a 99.997% (probably greater) chance that people aren't going to upload CSAM to iCloud.... BUT... we're going to install software on your phone anyway.
 
No. Because Apple is only scanning it when you try to send it to their servers. They can't scan it on the server side because they promised you the data would be encrypted and they would never be able to see it for themselves.
This is Apple KEEPING their promise of privacy.
... until they get a warrant.

 
"Apple only scans for CSAM for stuff that uploads to iCloud".
That is what has been announced (on-device scanning before it's uploaded). Some of us especially don't like the on-device scanning and feel it's both a breach of trust and something that could easily be subverted into scanning for other data at the request of governments. Apple is the only actor that has announced device-side scanning.

"Apple has E2EE for pictures"
I'm pretty sure no, and it hasn't been announced as a future feature.

"Apple will review your pictures if there is a CSAM Hit."
A series of 30 hits supposedly.

"Apple can't review your encrypted stuff."
If you encrypt it yourself, I suppose it's possible. For normal stuff, yes, they have the master keys for iCloud and they have full access to your iPhone, so they can see everything.

"But Apple will review your encrypted stuff"
They send a low-res copy of your photo (it's only photos for now) with a report to Apple; they review it to see if it's CSAM material, and report it if it is.

I think it is encrypted in iCloud, but I'm not sure. If it is, Apple has the keys and can decrypt it, so to Apple it's decryptable and viewable.
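The practical difference between "encrypted in iCloud" and end-to-end encrypted comes down to who holds the key. A toy sketch (XOR here is purely illustrative, emphatically not real cryptography, and the key name is hypothetical):

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric toy cipher: applying it twice with the same key round-trips.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

provider_key = b"held-by-provider"   # hypothetical server-side master key
stored = xor_cipher(b"vacation.jpg", provider_key)

# Encrypted at rest, so an outside attacker sees only ciphertext...
assert stored != b"vacation.jpg"
# ...but the provider holds the key, so the provider can always look:
assert xor_cipher(stored, provider_key) == b"vacation.jpg"
```

End-to-end encryption would mean only the user holds the key; provider-held keys mean "encrypted, but still readable by the provider," which is the situation described above.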
 
I'm hearing a lot of conflicting stuff in this thread. ... Why do they need to scan it on our devices if they can decrypt it anyway in iCloud?
Repeating incorrect info that Apple has clearly pointed out answers to in multiple documents and interviews won’t make conspiracy theories any more true.

* Apple does not “scan” anything on device. They do a hash comparison if, and only if, one has iCloud Photos turned on.
* If iCloud Photos is NOT turned on, no hash comparison takes place for anything.
* If iCloud Photos IS turned on, hash comparison takes place, but no one, including Apple, knows about it or “sees” anything. People don’t have to believe that, but it is a fact. It is nothing more than an on-device comparison that will mark photos if they match on-device hashes.
* Apple does NOT review every pic that may have a hash match. As reported, upwards of 30 matches have to take place before Apple reviews low-resolution versions of pics to see if they match what is in the database.
* Apple does NOT report to law enforcement/governments. If all of the above criteria have been met and an account is positively shown to be uploading known child pornography to iCloud, Apple suspends the account and reports the relevant information to the NCMEC, a non-governmental, not-for-profit organization set up specifically to protect children from being exploited. Yes, the NCMEC may involve law enforcement at this point, but only if its research further proves that the account/person in question is in fact in possession of child pornography.
* No one said Apple wasn’t doing this by choice. But, by knowing about child pornography on their servers, they are required by law to report that.
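The flow in the list above can be sketched roughly as follows (hypothetical function and variable names; only the 30-match threshold and the NCMEC hand-off come from the reported details):

```python
REVIEW_THRESHOLD = 30  # reported number of matches before any human review

def process_photo(photo_hash: str, csam_hashes: set,
                  account_matches: list, icloud_photos_on: bool) -> str:
    """Illustrative sketch of the reported decision flow, not Apple's code."""
    if not icloud_photos_on:
        return "no comparison performed"
    if photo_hash in csam_hashes:
        account_matches.append(photo_hash)
    if len(account_matches) < REVIEW_THRESHOLD:
        return "nothing visible to Apple yet"
    # Only past the threshold: human review of low-res copies, and a report
    # to NCMEC (not directly to law enforcement) only if confirmed.
    return "human review; NCMEC report if confirmed"

matches = []
print(process_photo("abc123", {"abc123"}, matches, icloud_photos_on=False))
# "no comparison performed"
```

Note how the first branch short-circuits everything: with iCloud Photos off, no comparison happens at all, which is the crux of the list above.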

So, why is this BETTER than what every other company mentioned is doing?

Facebook, Google, etc. actively compare every single one of your personal photos uploaded to their servers. Every single one!

Apple will NOT compare any photo if you are not uploading them to iCloud. If you are uploading to iCloud, they are only tagging those specific photos and never review/see/tag any of your other photos, even those uploaded to iCloud. Again, Facebook, Google and others scan every single thing you send or upload to your accounts with them.

If your concern then is that the hash database hard-coded in iOS, which is 100% controlled by Apple (no different than any other hash database already on your phone… yes, there are others that have been there for years), can somehow be “hacked” and used against you to wrongly convict you of being a child pornographer, I’m not sure what I can say to convince you of the one-in-one-trillion number that they have clearly presented.

My only comment on that is that there are HUNDREDS of easier ways for any hacker to frame you or access info on your phone. They wouldn’t waste their time on a system set up with so many checks and balances on all sides.

Inserting “alternative” hash info into iOS is impossible unless you are the person at Apple who actually does that programming. And even then, what happens next? If you want to jump to the conclusion that Apple would then work with a government to not only add this but also share that info, again, I’m not sure what I can say to convince a person who believes in that level of conspiracy; it has an even LOWER chance than one in one trillion of happening.

I think if people would take less than 15 minutes to read the FAQ Apple released, while it doesn’t answer ALL of the questions or go into the details of the tech, it does answer the vast majority of “real” questions, as opposed to the “what ifs” people will always have for any tech that is introduced. Practically ANYTHING can happen… but “what are the odds?” is the question people either don’t want to ask or simply choose to avoid, so they can keep believing what they want, or choose, to believe is true. Nothing I or Apple or anyone else can say will change that.
 
...something that could easily be subverted into scanning for other data at the request of governments.
I’m curious, can you provide an example of how it may be subverted? Give us a “real world” example as it applies to this specific technology being added to iOS.
 
Perhaps subverted isn't a good word for what I mean.

Apple themselves will do the changing as requested by some government, and the subverting is just putting pic hashes in the DB that have nothing to do with CSAM, but something else. It's basically a scanner for any picture-like data that they, as in foreign governments, might want searched for. And that's not even expanding the scanning algorithms or when/where it scans, which could also be changed in any old Apple update.
 
Okay..that’s what I thought you meant…and subverted is an okay word to describe that actually.

To be clear, you feel that just because it IS possible, Apple will take direction from a government entity to add an altered hash database to iOS, which will “scan” for matching pics (btw, it never checks for “content”; it has to match an identical picture, but okay), and then Apple will have those images tagged, then “see” them somehow (let’s assume the user uploads them to iCloud), at which point Apple will be notified and in turn tell the government that asked for them. Does that sum up what you are saying could happen?

You are not the only one who has inferred this could happen, so I just want to be clear before I reply with a response that is appropriate.
 
Repeating incorrect info that Apple has clearly pointed out answers to in multiple documents and interviews won’t make conspiracy theories any more true. ...
This message paid for by Apple, makers of the new spiPhone.
 