It is not open source, but anyone enrolled in Apple's Security Research Device Program can audit all of these processes. This is the third time I've said this and somehow you're ignoring it. Apple's documentation on CSAM even states it multiple times:

> The perceptual CSAM hash database is included, in an encrypted form, as part of the signed operating system. It is never downloaded or updated separately over the Internet or through any other mechanism. This claim is subject to code inspection by security researchers like all other iOS device-side security claims.

> That the calculation of the root hash shown to the user in Settings is accurate is subject to code inspection by security researchers like all other iOS device-side security claims.
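Neither the exact digest algorithm behind that root hash nor the database's on-disk location is spelled out here, but the verification idea is simple to sketch: compute a cryptographic digest of the shipped (encrypted) database blob and compare it against the value Apple surfaces in Settings. A minimal sketch, with the hash function and file path as assumptions:

```python
# Hypothetical sketch of the root-hash check; the real digest algorithm,
# file path, and encoding are not public here and are assumed.
import hashlib

def root_hash(database_path: str) -> str:
    """SHA-256 digest of the shipped (encrypted) database blob."""
    digest = hashlib.sha256()
    with open(database_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A researcher would compare this against the root hash shown in Settings:
# assert root_hash("/path/to/encrypted_csam.db") == value_shown_in_settings
```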

1. Apple routinely updates its OS, so regularly updating this database and pushing it to users is trivial.

2. Apple does not have access to the original database, only the hashes. If the hash for a photo of BLM were added, Apple wouldn't have a clue and would just push it to users.
 
The argument from future scenarios is entirely illogical: those scenarios didn't apply to Apple when it came to iCloud scanning, so it's illogical to think they will magically apply to on-device scanning, when iCloud scanning is more intrusive than device scanning.

Also, these scenarios are worst case. They were worst case even before Apple announced CSAM scanning, but for some reason scanning for pedophiles starts these conspiracy theories when nothing prior did.
You don't remember E. Snowden? You must be very young. Sorry, I addressed the wrong person, then.
 
1. Apple routinely updates its OS, so regularly updating this database and pushing it to users is trivial.

2. Apple does not have access to the original database, only the hashes. If the hash for a photo of BLM were added, Apple wouldn't have a clue and would just push it to users.

All of these updates are signed so security researchers can audit each of them individually. If these checks happen in the cloud, there is no way to know when they update the database. By doing this locally, you can.

Regarding your second point, I leave you with @TheToolGuide's answer:
You obviously did not read the material. First, they only take CSAM that has been provided by multiple agencies in different countries, precisely to reduce that kind of attack. Then the hashes have to be on the device, and because of how hashes work, a match has to be EXACT. Not close or similar, but exact. So if a hash was generated from a confederate flag image and you are wearing a confederate flag t-shirt in some of your photos, they won't match and won't get flagged. Then there need to be 30 of those matches before anything is flagged for review, at which point an Apple employee has to actually look at the image derivatives; they don't look at any other images in the account, just the flagged ones. So if they are all political images and not CP, then nothing gets reported. They covered all of this in their documentation.

I'm not saying there shouldn't be discussion; however, before putting your opinion on the screen, please try to be educated on what you're talking about. If your issue is that you don't trust them to be honest, at least say that. What you described is not how it works and could not function like that. You don't add value to either side of the argument when you aren't educated.
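To make the flow in that answer concrete, here is a minimal sketch of the logic as described: exact hash matches only, a 30-match threshold before anything is flagged, and human review of the flagged derivatives only. The names and types are illustrative, not Apple's implementation.

```python
# Illustrative sketch of the described flow; not Apple's actual code.
MATCH_THRESHOLD = 30  # matches required before anything is flagged

def matches(photo_hashes: set[bytes], csam_hashes: set[bytes]) -> set[bytes]:
    # Only exact hash matches count; visually similar images do not match.
    return photo_hashes & csam_hashes

def flagged_for_review(photo_hashes: set[bytes], csam_hashes: set[bytes]) -> bool:
    # Below the threshold, nothing is surfaced at all.
    return len(matches(photo_hashes, csam_hashes)) >= MATCH_THRESHOLD

def report(flagged_derivatives, reviewer_confirms_csam) -> bool:
    # A reviewer sees only the flagged image derivatives; false positives
    # (e.g. political images) are dropped rather than reported.
    return any(reviewer_confirms_csam(d) for d in flagged_derivatives)
```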
 
You obviously did not read the material. First, they only take CSAM that has been provided by multiple agencies in different countries, precisely to reduce that kind of attack. Then the hashes have to be on the device, and because of how hashes work, a match has to be EXACT. Not close or similar, but exact. So if a hash was generated from a confederate flag image and you are wearing a confederate flag t-shirt in some of your photos, they won't match and won't get flagged. Then there need to be 30 of those matches before anything is flagged for review, at which point an Apple employee has to actually look at the images; they don't look at any other images in the account, just the flagged ones. So if they are all political images and not CP, then nothing gets reported. They covered all of this in their documentation.

I'm not saying there shouldn't be discussion; however, before putting your opinion on the screen, please try to be educated on what you're talking about. If your issue is that you don't trust them to be honest, at least say that. What you described is not how it works and could not function like that. You don't add value to either side of the argument when you aren't educated.
If I'm not mistaken, the 30-match threshold is because in Apple's internal testing the on-device model falsely flagged about 3 images out of 1 million. At that rate, a user would need roughly 10 million photos on device (implausible) before false matches alone could cause their information to be sent to Apple.
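Taking the quoted rate at face value, the arithmetic is straightforward: expected false matches grow linearly with library size, so hitting a 30-match threshold on false positives alone needs on the order of 10 million photos. A back-of-envelope check (the 3-in-1,000,000 rate is the poster's figure, not an official one):

```python
# Back-of-envelope check; the false-positive rate is the poster's figure.
false_positive_rate = 3 / 1_000_000   # false matches per photo
threshold = 30                        # matches needed before review

photos_needed = threshold / false_positive_rate
print(f"{photos_needed:,.0f}")        # 10,000,000 photos, on average
```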
 
Google security researchers will find out in ten seconds if iOS starts scanning private user data beyond CSAM. They often report system flaws to Apple and to the public at the same time, so it's really silly to freak out about Apple doing something behind the scenes without users knowing.
The Google guys are as blind as anybody, because they only see hashes, not the injected images of, e.g., politically persecuted people.
 
All of these updates are signed so security researchers can audit each of them individually. If these checks happen in the cloud, there is no way to know when they update the database. By doing this locally, you can.

Regarding your second point, I leave you with @TheToolGuide's answer:

This means nothing. Non-CSAM image hashes can be added to the database and no one but the original database editor will know.
 
All the answers are here:

Interviewer: So what started all this? Why now?
Apple: uhhhh. We just thought it was the right thing to do.



Great. So what else do they feel like "is the right thing" to do? Slippery slope.
 
You obviously did not read the material. First, they only take CSAM that has been provided by multiple agencies in different countries, precisely to reduce that kind of attack. Then the hashes have to be on the device, and because of how hashes work, a match has to be EXACT. Not close or similar, but exact. So if a hash was generated from a confederate flag image and you are wearing a confederate flag t-shirt in some of your photos, they won't match and won't get flagged. Then there need to be 30 of those matches before anything is flagged for review, at which point an Apple employee has to actually look at the images; they don't look at any other images in the account, just the flagged ones. So if they are all political images and not CP, then nothing gets reported. They covered all of this in their documentation.

I'm not saying there shouldn't be discussion; however, before putting your opinion on the screen, please try to be educated on what you're talking about. If your issue is that you don't trust them to be honest, at least say that. What you described is not how it works and could not function like that. You don't add value to either side of the argument when you aren't educated.
been provided by multiple agencies (…)

Read what you wrote. Then proceed.
 
This means nothing. Non-CSAM image hashes can be added to the database and no one but the original database editor will know.

Again, you missed the point:

- There's no single editor of the database. The shipped database is the intersection of databases provided by multiple agencies in different jurisdictions, so no single agency can slip a hash in on its own (see the sketch below).
- If you happen to have 30 matches, then there's a manual review. If the image derivatives are non-CSAM, Apple does not report them (it treats them as false positives).
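A rough sketch of that first point, with made-up agency data: a hash only ships if every participating organization supplies it independently, so a single agency cannot insert a non-CSAM hash on its own.

```python
# Sketch of the "intersection of jurisdictions" idea; agencies and hashes
# here are made up for illustration.
from functools import reduce

def shipped_database(agency_databases: list[set[bytes]]) -> set[bytes]:
    # A hash contributed by only one agency never makes it into the result.
    return reduce(set.intersection, agency_databases)

agency_a = {b"hash1", b"hash2", b"hash3"}
agency_b = {b"hash2", b"hash3", b"hash4"}
print(shipped_database([agency_a, agency_b]))  # {b'hash2', b'hash3'} (order may vary)
```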
 
How many press releases and FAQs do we need to polish this turd?

Apple designed a system so that an external authority can gain control of your phone to scan your private files and report the results to the police. End of.
Worse, Apple set the system up so that they have plausible deniability, since Apple doesn't even know what's in the database; it's all just hashes. Really smart of Apple: they satisfy their virtue signaling while covering their liability.
 
This means nothing. Non-CSAM image hashes can be added to the database and no one but the original database editor will know.
I think the reverse is also just as dangerous… Apple is going to retain all of the user-side hashes until the end of time. So not only could they determine what is in my library now, but also well into the future, once they scrape the entirety of Google, Instagram and the rest of the internet for images and digitally fingerprint us that way. They will have one side of the comparison now and forever.

It's definitely time for me to read the ToU!
 
This means nothing. Non-CSAM image hashes can be added to the database and no one but the original database editor will know.
Where was this concern about a compromised DB when Google, Microsoft, Amazon, FB, Twitter, Dropbox and Flickr were all scanning for CSAM against the same databases? Where was your thread about it? Where was ANY major news push for that?
 
Interviewer: So what started all this? Why now?
Apple: uhhhh. We just thought it was the right thing to do.



Great. So what else do they feel like "is the right thing" to do? Slippery slope.
Craig does not lie well. Like a deer caught in the headlights. So deplorable.
 
I think the reverse is also just as dangerous… Apple is going to retain all of the user-side hashes until the end of time. So not only could they determine what is in my library now, but also well into the future, once they scrape the entirety of Google, Instagram and the rest of the internet for images and digitally fingerprint us that way. They will have one side of the comparison now and forever.

It's definitely time for me to read the ToU!
Apple only retains hashes of pedophile users. They do not access non-pedophile user data. Apple never receives ANY user hashes or other info unless the user is a pedophile.
 
If I'm not mistaken, the 30-match threshold is because in Apple's internal testing the on-device model falsely flagged about 3 images out of 1 million. At that rate, a user would need roughly 10 million photos on device (implausible) before false matches alone could cause their information to be sent to Apple.
I haven't read the 3-in-1-million figure, but even if that were true, that's a pretty high bar. Not to mention a human still reviews it before anything gets sent to a government agency. Personally, and this is just an opinion, I like those odds.
 
I haven't read the 3-in-1-million figure, but even if that were true, that's a pretty high bar. Not to mention a human still reviews it before anything gets sent to a government agency. Personally, and this is just an opinion, I like those odds.

According to Apple, the probability of an account being incorrectly flagged (i.e., crossing the 30-match threshold on false positives) is less than one in a trillion per year.
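That figure is a per-account claim, not a per-image one: with a tiny per-image false-positive rate, the chance that a single account accumulates 30 or more false matches collapses to practically nothing. A hedged illustration, with the per-image rate and library size as assumptions (Apple's actual modelling parameters aren't public):

```python
# Illustration of how a 30-match threshold drives the per-account false-flag
# probability down; the rate and library size are assumptions, not Apple's.
from math import comb

def prob_falsely_flagged(p_image: float, n_photos: int, threshold: int = 30) -> float:
    # With an expected count far below the threshold, the binomial tail is
    # dominated by its first term, so we use k = threshold as the estimate.
    k = threshold
    return comb(n_photos, k) * p_image**k * (1 - p_image)**(n_photos - k)

print(prob_falsely_flagged(p_image=3e-6, n_photos=50_000))
# about 6e-58 — vastly below one in a trillion (1e-12)
```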
 
Apple only retains hashes of pedophile users. They do not access non-pedophile user data
No… they're not. They're going to retain ***ALL*** hashes, since new hashes can be added to the other side at any time and they'll need to be able to compare against those. This won't be use-once-and-delete.

And to ask… where did they write that they will not retain hash data?
 
You obviously did not read the material. First, they only take CSAM that has been provided by multiple agencies in different countries, precisely to reduce that kind of attack. Then the hashes have to be on the device, and because of how hashes work, a match has to be EXACT. Not close or similar, but exact. So if a hash was generated from a confederate flag image and you are wearing a confederate flag t-shirt in some of your photos, they won't match and won't get flagged. Then there need to be 30 of those matches before anything is flagged for review, at which point an Apple employee has to actually look at the images; they don't look at any other images in the account, just the flagged ones. So if they are all political images and not CP, then nothing gets reported. They covered all of this in their documentation.

I'm not saying there shouldn't be discussion; however, before putting your opinion on the screen, please try to be educated on what you're talking about. If your issue is that you don't trust them to be honest, at least say that. What you described is not how it works and could not function like that. You don't add value to either side of the argument when you aren't educated.
I did see the article, but many of the replies are pretty naive, as if humans were robots that follow the rules. Snowden covered this stuff multiple times: his talks about the NSA, the CIA's involvement in vaccination programs run through international organizations, the list goes on. Many of those so-called agencies still operate under the same governments Snowden talked about, or have no means to defend against high-level cyber interference.
 
1. Apple routinely updates its OS, so regularly updating this database and pushing it to users is trivial.

2. Apple does not have access to the original database, only the hashes. If the hash for a photo of BLM were added, Apple wouldn't have a clue and would just push it to users.
Number 2 is intentional to protect Apple themselves. They basically have plausible deniability when things go south as all they have are hashes. Really smart of Apple.
 
I think the reverse is also just as dangerous… Apple is going to retain all of the user-side hashes until the end of time. So not only could they determine what is in my library now, but also well into the future, once they scrape the entirety of Google, Instagram and the rest of the internet for images and digitally fingerprint us that way. They will have one side of the comparison now and forever.

It's definitely time for me to read the ToU!

Vote with your wallet and find both happiness and a phone that meets your requirements.
 
Does anyone know how long the hash is? Hopefully more than 8 characters, but I doubt it. It's going to be a huge database we have to store on our iOS devices. How will that affect battery life, storage and performance if we keep 50k photos on our devices? Will it have to scan every photo again when the database is updated? What about the environmental impact of all these unnecessary CPU cycles? Will people be encouraged to jailbreak their devices in order to disable this "feature"?
It comes from subtle cues in their wording, but my understanding is that CSAM scanning will apply only to new photos after it launches. It's part of the upload process (a "safety voucher" is attached to each photo for upload), so I don't see why photos would be scanned again after they've been uploaded.

They're always careful to say that this will catch people who are "starting collections" of known CSAM, which is what led me to this belief, along with it being part of the upload process.
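On the storage side of the question above: the database holds fixed-size hash entries, not images, so even a generous entry count stays small next to a 50k-photo library. A back-of-envelope estimate with assumed numbers (neither the entry count nor the per-entry size is published here):

```python
# Rough storage estimate; both figures below are assumptions for
# illustration, not numbers published by Apple.
entries = 1_000_000        # assumed count of known-CSAM hash entries
bytes_per_entry = 32       # assumed size of one (blinded) hash entry

total_mb = entries * bytes_per_entry / 1_000_000
print(f"{total_mb:.0f} MB")  # 32 MB, versus tens of GB for 50k photos
```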
 