If Apple is already scanning on their end on iCloud servers, why scan on device too?

This has been covered ad nauseam, but I guess another time won't hurt. They are MOVING the scanning process TO the device starting with iOS 15, which restricts Apple's access to ALL scanning data unless a user uploads 30 or more detected CSAM images to iCloud. So it's not a "too" but an "instead." And the reason is privacy.
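(As a rough illustration of that threshold gate, here is a toy sketch with made-up names. The real design enforces the threshold cryptographically, via threshold secret sharing, so the server mathematically cannot decrypt anything below it; a counter only mimics the observable behaviour.)

```swift
import Foundation

// Toy model of the 30-match threshold described above. All names are
// illustrative; Apple's actual system uses threshold secret sharing,
// not a counter.
struct SafetyVoucherStore {
    let threshold = 30
    private var matchingVouchers: [Data] = []

    mutating func receive(_ voucher: Data, isMatch: Bool) {
        if isMatch { matchingVouchers.append(voucher) }
    }

    // Below the threshold, nothing is revealed for human review.
    var revealedForReview: [Data]? {
        matchingVouchers.count >= threshold ? matchingVouchers : nil
    }
}
```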
 
Well if it won't happen, what is the point of the manual review process from Apple in the first place? Especially if they cannot actively compare it with the CSAM images. Why not just send it to review by those that DO have access to those images?
I didn't say it won't happen. It theoretically could happen. In which case should Apple really be blindly sending an innocent person's private data to any 3rd party and accusing them of child porn offenses? Would that policy reassure anyone? Also, the review is a defence against one of their considered attack vectors - where a sinister agency manages to insert phoney hashes into the DB to catch images other than CSAM.
 
What are you talking about? Apple confirmed that disabling iCloud photos disables the scanning completely. Also, the “scanning” ONLY TAKES PLACE DURING THE UPLOAD PROCESS.
Not trying to be pedantic here, but from what I read, there will be no 'scanning' of photos on device. Only photos that are to be uploaded to iCloud Photos will be hash checked. Otherwise there will not be any hash checking done. This to me is not the definition of scanning.
 
I didn't say it won't happen. It theoretically could happen. In which case should Apple really be blindly sending an innocent person's private data to any 3rd party and accusing them of child porn offenses? Would that policy reassure anyone? Also, the review is a defence against one of their considered attack vectors - where a sinister agency manages to insert phoney hashes into the DB to catch images other than CSAM.
What we still don't know is WHAT Apple is doing during the review process. Do they have the raw images of the CSAM and can actively compare flagged images to CSAM images? How else can they determine false positives? Are they just sending ANY pictures that have kids in them for further review to those that DO have access to CSAM? Again, if it's the latter, would they be making judgement calls on 15-year-olds or 25-year-olds?
 
Not trying to be pedantic here, but from what I read, there will be no 'scanning' of photos on device. Only photos that are to be uploaded to iCloud Photos will be hash checked. Otherwise there will not be any hash checking done. This to me is not the definition of scanning.
That’s why I put “scanning” in quotes since that’s what so many people are fixated on. Sounds like you agree with me. The hashing is done on a per-file-being-uploaded basis.
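(Roughly, the flow being described looks like this. This is only a sketch with hypothetical names: the real system uses NeuralHash, a perceptual hash, plus private set intersection, and hides the match result inside an encrypted safety voucher rather than a boolean.)

```swift
import CryptoKit
import Foundation

// Stand-in for the hash database that ships inside the OS.
let knownCSAMHashes: Set<Data> = []

func uploadToICloudPhotos(_ imageData: Data) {
    // The hash check lives inside the upload path and nowhere else.
    // (SHA-256 is used here purely for illustration.)
    let digest = Data(SHA256.hash(data: imageData))
    let matches = knownCSAMHashes.contains(digest)
    sendToServer(imageData, voucherIndicatesMatch: matches)
}

func viewPhotoLocally(_ imageData: Data) {
    // iCloud Photos off, or just browsing: no hashing happens at all.
}

func sendToServer(_ data: Data, voucherIndicatesMatch: Bool) {
    // Placeholder for the actual upload.
}
```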
 
Not trying to be pedantic here, but from what I read, there will be no 'scanning' of photos on device. Only photos that are to be uploaded to iCloud Photos will be hash checked. Otherwise there will not be any hash checking done. This to me is not the definition of scanning.
right, the only time any photos will be scanned is IF you have icloud photos turned on and you add an image to photos

turn off icloud photos and use another cloud photo service and apple will know nothing about the photos on your device

and if doing this allows apple to implement e2ee then your whole device will be behind a wall and nobody including apple will see anything on your device
 
right, the only time any photos will be scanned is IF you have icloud photos turned on and you add an image to photos

turn off icloud photos and use another cloud photo service and apple will know nothing about the photos on your device

and if doing this allows apple to implement e2ee then your whole device will be behind a wall and nobody including apple will see anything on your device
But what about some hypotheticals about the future!!!!
 
What we still don't know is WHAT Apple is doing during the review process. Do they have the raw images of the CSAM and can actively compare flagged images to CSAM images? How else can they determine false positives? Are they just sending ANY pictures that have kids in them for further review to those that DO have access to CSAM? Again, if it's the latter, would they be making judgement calls on 15-year-olds or 25-year-olds?
The chances of a review are zero so why are you so worried about this? They are only getting reviewed after they find 30 pictures with a hash that matches the hash of already known images of child abuse. They aren’t looking for new pictures. Assuming you don’t have 30 already known images of child porn on your phone you have nothing to worry about. The chance of mistakenly identifying one of your pics is like one in a trillion... remember, the hashes have to match, not the contents of the picture. They don’t know what the pictures are. Look at it this way: They’re comparing a bunch of numbers on your phone that represent your pictures to a list of numbers that represent known images of child porn, looking for matches. If they find 30 matches someone will turn those numbers into a picture to see what it is. The chance that one of your pics is flagged in error is astronomically small. The chance of 30 mistakes is zero.
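(The arithmetic behind that can be sanity-checked. A minimal sketch, assuming a hypothetical per-image false-match rate of one in a million and a 10,000-photo library; both numbers are assumptions, and Apple's own stated figure of one in a trillion per account per year is far lower, so this is deliberately pessimistic.)

```swift
import Foundation

// Back-of-envelope check of the "30 mistakes is zero" argument.
// p and n are illustrative assumptions, not Apple's numbers.
let p = 1e-6                 // assumed per-image false-match probability
let n = 10_000.0             // photos in the library
let lambda = n * p           // expected false matches: 0.01

// Poisson tail: P(X >= 30) is dominated by its first term,
// e^(-lambda) * lambda^30 / 30!. Work in logs to avoid underflow.
let log10Tail = (-lambda + 30 * log(lambda) - lgamma(31.0)) / log(10.0)
print("P(30+ false matches) ≈ 10^\(log10Tail)")   // ≈ 10^-92
```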
 
yeah, totally, does facebook or google lay out in detail how they are scanning for csam ?

i don't think so, their system of server side scanning is infinitely more opaque, dangerous and open to considerably more abuse, they may only be using 1 database of csam which could be corrupted, they are not detailing how they are getting images or using and comparing them

apple is using only those hashes which appear in 2 separate databases which ensures that the material being compared against is actual verified csam

apple is putting the database in plain sight and providing the hash for all to see and allowing 3rd party audit, we the users can check the hash on our device
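as a sketch, that check could look something like this (the file path and published value here are placeholders, not real apple artifacts):

```swift
import CryptoKit
import Foundation

// Sketch of the audit step: recompute the hash of the on-device database
// blob and compare it against the value Apple publishes. The URL and the
// published hex string are hypothetical placeholders.
func databaseMatchesPublishedHash(dbURL: URL, publishedHex: String) throws -> Bool {
    let blob = try Data(contentsOf: dbURL)
    let digest = SHA256.hash(data: blob)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return hex == publishedHex.lowercased()
}
```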

if the nsa really wants to use a backdoor via the fisa they can damn sure do it anytime they want, this attempt by apple to deal with csam neither stops them nor does it make it any easier for the nsa to do what it wants, they'll just do it

i am afraid we are seeing a river of paranoia here along the lines, as you say, of "anything is possible", yeah anything is possible, tim cook may be the manchurian candidate, an nsa stooge

i don't think people are taking into account just what it would take for governments to pass legislation permanently breaking encryption and security on phones

we need to find a new way forward that balances the needs of government and the rights of users and from what i see apple is trying to do just that

in the end it all comes down to trust, if you really don't trust apple then go somewhere else, how about google :)

i think we have no choice but to trust and take the tradeoff of transparent, verifiable on device scanning, for the e2ee encryption that i believe apple is working toward
Apologies for the delayed reply - issues with the OS I call BS.... Have had nothing but issues since upgrading to Big Sur....

For me, as you noted above, one of the many issues is that Apple is given the database of hashes, as I understand it. There is no public oversight, no monitoring of the database. As part of the security, as all seem to admit, only NCMEC and other law enforcement know what is contained in the database.....

To me, this reeks of the still ongoing infamous "No Fly Lists". Think of all the mistakes, errors and just utter "f-ups" associated with the No Fly Lists. I think this has the potential for similar mistakes, however well intentioned.
 
I am asking about the human review process, not the matching process. Have any details about how that process works been publicly stated? Will they be checking driver's licenses for the subject's age?
Of course, for me this is another issue. If the system is as flawless and perfect as Apple claims it to be, why must Apple review? Why is the file/information not automatically forwarded to NCMEC, the de facto law enforcement agency? Apple is a private citizen in this example.
 
well yes, we are dealing with a lot of trust here

apple creates the database by merging images from 2 different agencies and they only use matching images so that the images in the database are deemed child porn by 2 different agencies
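in code terms that merge is just a set intersection, something like this sketch (the loader and source names are made up for illustration):

```swift
import Foundation

// Only hashes present in BOTH agencies' lists make it into the shipped
// database, so a single agency cannot slip extra entries in alone.
// `loadHashes` and the source names are hypothetical.
func loadHashes(from source: String) -> Set<Data> {
    // Placeholder: in reality the lists come from the child safety agencies.
    return []
}

let agencyA = loadHashes(from: "agency-1")
let agencyB = loadHashes(from: "agency-2")
let shippedDatabase = agencyA.intersection(agencyB)
```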

they then put the database on every phone in the world that runs ios15 and the database is signed with the operating system

apple publishes a hash of the database which users and third parties can inspect, that database will sit there unaltered until presumably apple releases another os and loads a new database with a new hash

it makes changing the database impossible until they replace it with a completely new one

but yeah we are placing a lot of trust in apple

to me it's better than facebook or google, where i have zero knowledge of the database they are using on their servers
As I understand it, Apple has absolutely no idea of what is in the database. All they receive is the database of hashes....? So, in a worst case scenario, anything could be in the database.

And when Apple reviews a flagged phone, I assume they are only confirming the hashes are the exact match?
 
I believe Apple is under legal guidelines to make sure CSAM content is not on their servers - thus scanning iCloud data (which it does currently). I honestly do believe Apple is doing this with as much focus on privacy as possible.

A good comparison would be you have a lot of guns at your house and you do have permits/licenses for those guns. So everything is legal, right? But you cannot bring them with you into the airport.

This is ONLY applicable to images being uploaded to iCloud anyway. Its part of the iCloud pipeline.
To use your example, however, Airport Security is coming to your house unannounced, without your permission to rifle, no pun intended, through all your belongings BEFORE you even purchase a ticket.
 
right, the only time any photos will be scanned is IF you have icloud photos turned on and you add an image to photos

turn off icloud photos and use another cloud photo service and apple will know nothing about the photos on your device

and if doing this allows apple to implement e2ee then your whole device will be behind a wall and nobody including apple will see anything on your device
My read is that if you agree to use iCloud, it scans all photos. I assume initially to create a safe database, then after that only the new images....?
 
My read is that if you agree to use iCloud, it scans all photos. I assume initially to create a safe database, then after that only the new images....?
i don’t think so, it starts with your first upload after you install os15, the csam database sits on the phone and when a photo is added to photos and then uploaded it is hashed and compared against the csam database, it doesn’t work on photos already in the cloud since it is only used upon upload

like, if you now have csam sitting in icloud, even when you download and install os15, they won’t be seen or hashed and even if you upload brand new csam that ncmec or other agencies haven’t ever seen, those won’t be flagged either
 
well, hmmm, surely we can find some way to be paranoid about the future 🤔😊

Heh... that's pretty easy. Just toss a bucket of Apple chum in the water having anything to do with how Apple is improving security/privacy for its customers. That'll be worth another 1,000+ paranoia posts.
 
To use your example, however, Airport Security is coming to your house unannounced, without your permission to rifle, no pun intended, through all your belongings BEFORE you even purchase a ticket.
Uhm Apple’s “scan” only happens if you’ve already purchased tickets to the iCloud and your pics are already in the boarding area.
 
Because that’s the whole point: to not decrypt innocent users’ pics on-server. And what you just described is exactly what makes the system “less invasive”? 🤔
Sorry, but it's clear you have no idea what even Apple says this procedure will do or how it works. The software on YOUR device is operational whether you intend to go to iCloud or not, so your 'innocent' pictures still go through the process, and the same thing would happen if it were on iCloud. There's no additional safety in having it on your hardware at all, no extra anonymity, as Apple states it's anonymised either way. So having it on your system confers no benefits but much more potential risk.
 
did you read the technical docs? the database is signed with os15 and the hash of the database will be published by apple in a knowledge base article, there is no way they can alter the database, plus like the blockchain, it is in full view of everyone including users

the same system in the cloud is completely opaque, we have no idea what is being matched or where the images came from or whether they have been added to or subtracted from

it would be infinitely easier for a bad actor to insert non-csam materials in cloud scanning and we wouldn't even know it

apple has chosen to put the database in every phone so it can be seen and not altered or messed with in any way
Perhaps it's you that should revisit Apple's docs. Have you heard of System Integrity Protection? It's what stops others from modifying your system, but Apple is the signatory and so bypasses it, which allows them to modify software at will without any checks and balances.

Who said they will alter the database? That is a complete non-issue. But with unlimited access to your hardware, they can virtually change the proposed tools to do anything.

It's not just going to be in every phone, is it? That is only the start.

The same NeuralHash could run on iCloud, so arguments about iCloud etc. are rather ridiculous in suggesting there is any benefit to OUR hardware containing tools for surveilling our material, with the potential for much more serious problems.

I'm so pleased Apple employees get it, even if quite a few on this bb don't!
 
A scan is not a scan if its output is cryptographic gibberish and it sits on my phone till the end of time without being relayed to Apple, sorry. It’s a courtesy pre-scan. It’s half-a-scan. Locally, it’s nothing.
The best description of gibberish I've seen: "A scan is not a scan if its output is cryptographic gibberish"
A scan is a scan ...simples.
 
All iPhones globally will contain the same hash database. Users will be able to verify this on their devices. The hash db will contain only entries which are provided by child safety organisations in at least two different countries. Only images flagged in both countries will appear in the DB. The DB is a part of the OS, not a separate download. This is their system against manipulated DBs and coercion. Apple are insisting that there is only one version of iOS because they don't have the means to create multiple different versions. This is the defence they used in the San Bernardino case against the FBI - there is one iOS and you can't make us develop a separate one for your projects.
"Apple are insisting that there's is only one version of iOS because they don't have the means to create multiple different versions."

And yet instead of putting one piece of code in iCloud that will do the same thing, just as anonymously, that doesn't infect your hardware, that doesn't set a dangerous precedent, that doesn't require 1,000,000,000+ users to download said software, that doesn't require server time to download that software, doesn't require the hardware of the user to use its processing power, its electricity, etc. etc. Doesn't require Apple to go against its own eco credentials, where the overhead of not having it on the server is immense?

And people wonder why the trade is suspicious, why Apple employees are suspicious, why the media in general is suspicious?
 
For those of you who like Germany's heise magazine

"Comment: Apple's CSAM scans - breaking a taboo that leads to total surveillance":

It's not clear whether Apple's checks for CSAM are allowed in Europe under the GDPR. Perhaps yes, due to the EULA and exceptions to the rules. But it would be very interesting to see the EU's reaction when Apple tries to deploy this technology there.
 
"Apple are insisting that there's is only one version of iOS because they don't have the means to create multiple different versions."

And yet instead of putting one piece of code in iCloud that will do the same thing, just as anonymously, that doesn't infect your hardware, that doesn't set a dangerous precedent, that doesn't require 1,000,000,000+ users to download said software, that doesn't require server time to download that software, doesn't require the hardware of the user to use its processing power, its electricity, etc. etc. Doesn't require Apple to go against its own eco credentials, where the overhead of not having it on the server is immense?

And people wonder why the trade is suspicious, why Apple employees are suspicious, why the media in general is suspicious?
While throwing out your scary trigger words ('Infect!' :rolleyes:) you continue to minimise the obvious problem with having that software on Apple's servers - it requires Apple's servers to be able to see the contents of your Photos. That means you need 100% trust in them not to scan them for all kinds of material, or store your photos in perpetuity to be scanned at some future year when the political winds change, and to prevent other agencies hacking into them and doing those things (again). A server with 1,000,000,000+ users' private data is a tempting target for spies, and when it's breached to target one user we are all exposed.

If the software runs on our phones, security researchers can see the code running and raise the alarm if it appears to do something it's not supposed to be doing, or if it appears to be changing suspiciously, or if it's different on some phones in China or Hungary, or if bad actors appear to be targeting it with viruses or trojans. If somebody wants to corrupt the code to spy on us all, they would need to do it out in the open and succeed 1,000,000,000+ times and be caught 0 times. I haven't seen any credible researcher who doesn't concede that advantage.

And please, the idea Apple is doing it to save electricity is just silly. You could make the same complaint right now about how Photos generates thumbnails and low-res previews, or any other process happening 'on device' for privacy reasons.
 
No thanks, I'll pass. If you had some credentials regarding infectious disease/epidemiology research to lay out, I might be interested in investigating further. Without that, I have no idea if your assessment questioning measurements is even valid to begin with. Even if I had the time, I don't have the background to make or judge that assessment.

It would be like me making generalizations regarding analog superheterodyne radios being better or worse than digital radios; i.e., digitally sampling an analog IF and then using digital downconversion and filtering techniques.

I wouldn't expect you to believe my assessment, or even related measurements (phase noise, noise figure, sensitivity, spur free dynamic range, strong signal third order intermodulation, filter shape factor and ultimate rejection, and on and on) and the equipment used to measure those parameters being adequate for the task, unless I was able to demonstrate competence in digital radio design.

Getting back on track, with respect to infectious diseases and pandemics, I'll rely on those who have many decades of experience and a demonstrated track record dealing with them. Simple.

If you want to call them out because you believe measurements are faulty, feel free. I suspect most people will simply ignore your assertions.
What I know is irrelevant. As I wrote, there is specific data readily available on the limits of what humans can know.

It would be really embarrassing if, instead of responding ad nauseam to me several times, you had googled something and found a variety of completely mainstream virology and microbiology materials that explicitly state, or very obviously imply, that “human science still lacks a reliable method for viral observation”.

Are you familiar with Fort Detrick and its reputation for virology? Its relationship with the NIH/CDC? Wonder what kind of nuts they are trying to crack, maybe to improve viral TEM & STEM observations and techniques, at scales even magnitudes larger than those of current popular human concern?

Maybe not…. because if I understand correctly from your reply, you have faith and that’s good enough for you. It is indeed a popular belief among humans that there are benefits just from the power of belief alone.

Well like they say: “Don’t eat that apple, Eve!”
 