I have to ask how you know this to be true. What's your sample size? Exactly how many people "into child porn" do you normally associate with?

The chances that someone into child porn has only 29 images is slim to none.
Not if you want to be able to add new images to the database when new CSAM arrives.

Hm. Not sure, but since the network is not supposed to generalize, but to find images in the training set only, wouldn't overfitting be kind of a desired feature?
I am deeply rooted in the AI/ML world, and their language suggests they have embedding layers in the network (see word2vec and word embeddings) in order to assess semantic and perceptual likeness --- this is not simply face detection or object detection, they are looking for (and only looking for) images that nearly identically match.

We don't actually know that, because we don't have details of the proprietary neural hashing. We know that it involves determining if images are "similar", because it supposedly works even if images are cropped or individual pixels are changed. Were that not the case, it would be ridiculously easy for even the densest of CSAM aficionados to circumvent.
The question of what "similarity" means is a big issue in the AI/ML world, where facial recognition systems have proven to have significant biases (for example, they do very poorly with members of some ethnic groups). Images that some AI/ML system "thinks" are similar may not be similar at all to a human being (hence the human verification step before Apple risks considerable legal liability by reporting an innocent image to the authorities in the US).
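For what it's worth, here is a minimal sketch of what "similar" usually means to an ML system: a small distance between embedding vectors, not a human judgment about content. The embed() function below is a hypothetical stand-in, since Apple has not published the NeuralHash network.

```python
# Sketch: "similarity" as distance in an embedding space.
# embed() is a made-up placeholder, NOT Apple's actual network.
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    """Hypothetical network mapping an image to a fixed-length feature vector."""
    # Deterministic pseudo-embedding so the example runs without a real model.
    seed = abs(hash(image_pixels.tobytes())) % (2**32)
    return np.random.default_rng(seed).standard_normal(128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two images are "similar" to the system if their vectors land close together;
# nothing in that definition guarantees a human would call them similar.
img_a = np.zeros((8, 8), dtype=np.uint8)
img_b = np.ones((8, 8), dtype=np.uint8)
print(cosine_similarity(embed(img_a), embed(img_b)))
```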
Sure, but the existing system would be a lot better at being a surveillance system than NeuralHash, which seems nearly impossible to turn into one.

There's a significant difference between "identifying pictures of people and collecting them in an album for your convenience" and "flagging material in order to report you to the authorities".
I don't disagree with this. Then again, it's been a policy decision to not always use the existing facial scanning tech against us (at least we've been led to believe this).

That's a policy decision, not a hard technical limitation. Policies change.
You do not know this for sure, because you do not know how the neural hash is calculated. Since it is based on "similarity" as determined by the hashing process, and "similarity" to an ML algorithm is not necessarily the same thing as "similarity" determined by a human being who knows the image context, it's impossible to say without running the image through the hashing process whether or not it "matches" the hash of some random piece of CSAM in the database. The match has to be somewhat loose, or it wouldn't be able to detect cropped or reprocessed images, which Apple claims it can.

Facepalm.
Man, people really need to read what's in this. This is why so many are confused.
No, it will not flag your photo of your daughter taking a bath, because that image's HASH is NOT in the DB. It DOES NOT SCAN FOR NAKED PEOPLE, FOLKS. Understand this. It scans the HASHES: the back end of the file, not the front end.
You are explicitly assuming that you are dealing with a simple bitwise or cryptographic hash, when Apple has explicitly stated this is not the case. If it were the case, then cropped versions of the same CSAM image would be undetectable, as would images that had one or a few pixels altered.

The hashes are generated from images from the National Center for Missing & Exploited Children (NCMEC). Their DB doesn't have IMAGES, but hashes.
For example:
IMAGE A is one an innocent user took of his dick and sent to his girlfriend; its hash, for example, is 01ASEFH901JAO.
IMAGE B is one from the NCMEC database, of a man's dick as well: exact same angle, same lighting, same EXIF data. Image B was taken at the same time, in the same apartment building, on the floor above IMAGE A. It has a hash of 07B6TF431AFGA.
Hash scanning will NOT scan for dicks or naked bodies.
It will scan for the HASH, in this case 07B6TF431AFGA.
Do you get it now?
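Here is a minimal sketch of the lookup model that post describes, reusing its made-up example hashes. Note that other posts in this thread argue the real matching is looser than an exact lookup.

```python
# Sketch: flagging as a pure hash lookup, per the post above.
# The hash strings and database contents are the post's made-up examples.
KNOWN_HASHES = {"07B6TF431AFGA"}  # hashes derived from NCMEC-supplied material

def is_flagged(photo_hash: str) -> bool:
    # Under this model, only an exact database hit matches; the photo's
    # actual content is never inspected.
    return photo_hash in KNOWN_HASHES

print(is_flagged("01ASEFH901JAO"))  # IMAGE A, the innocent photo -> False
print(is_flagged("07B6TF431AFGA"))  # IMAGE B, the database entry -> True
```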
Or what the algorithm "thinks" are naked people, more to the point.

Facepalm.
It scans the image to generate a hash; if that hash is close to a kid porn hash, you get flagged.
Just because it doesn't compare images like a human would doesn't mean it's not looking for naked people.
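To make "close to" concrete, here is a minimal sketch assuming the perceptual hash is a short bit string and closeness is Hamming distance under some threshold. The bit values and the threshold are illustrative assumptions, not published Apple parameters.

```python
# Sketch: fuzzy matching via Hamming distance on perceptual hashes.
def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

THRESHOLD = 4  # assumed: how many differing bits still count as a match

def is_match(photo_hash: int, db_hash: int) -> bool:
    return hamming_distance(photo_hash, db_hash) <= THRESHOLD

db_hash   = 0b101100111010            # stand-in for a database entry
reposted  = db_hash ^ 0b000000000010  # slightly reprocessed copy: one bit flipped
unrelated = 0b010011000101

print(is_match(reposted, db_hash))   # True  -- survives small perturbations
print(is_match(unrelated, db_hash))  # False -- far away in hash space
```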
That is a policy decision, not a hard technical limitation. I suspect it will remain policy in the US. I suspect that policy will be different in the PRC or Saudi Arabia, when this "feature" is inevitably rolled out there.

I believe Apple has said that all the databases are verified by multiple sources to prevent this from happening. They’ve designed it so that no government can hijack the system.
They're making a bet here. They're betting that the slogan "Apple protects children" will generate more revenue than "Apple thinks you're a pervert and spies on you" will cost them. That's all.

Nobody:
Nobody at all:
Apple: We are going to start scanning your iPhone for child porn
I was surprised Apple dropped the ball on this and didn’t internally realise this would be widely panned and would fly in the face of their privacy crusade.
Now I’m CONCERNED. Concerned that in the face of widespread global condemnation they’ve chosen to double down. That’s a step past incompetent. That’s not the Apple I know and love. Tone deaf.
I feel the need to point out that "Apple protects children from sexual exploitation" is every bit as much virtue-signalling PR as "Apple respects privacy".

Hmm, interesting observation. Cook is indeed the PR man when it comes to Apple privacy. Have they figured out that the more they discuss this in public, the worse it'll get for them? I think they might want to bury the PR on this and just proceed with it, hoping the public noise dies down.
Probably because Apple knows people are excited for the OS update and new phones and products, and this would play against their dislike of the new CSAM scanning routine.

What I find interesting is the timing of Apple's decision to implement this software. It comes shortly before the release of iOS 15, and a month before the official announcement of the iPhone 13.
That's really bad timing for ticking off a lot of users.
It's curious as to what might have prompted their decision...
This!

Don't take our word for it. Here's Bruce Schneier's (one of the foremost information security experts on the planet) take: https://www.schneier.com/blog/archi...backdoor-to-imesssage-and-icloud-storage.html
That is a policy restriction, not a technical limitation. While I fully expect that policy to be maintained for the foreseeable future in the United States (incurable optimist that I am), I expect Apple to fold like a cheap suit when they're pressured by the CCP, Saudi Arabia, or repressive-regime-of-your-choice. This policy will be different there. We know this is true based on Apple's past capitulations to such regimes.

You're misunderstanding. The foreign jurisdiction doesn't review your photos --- they agree with other jurisdictions on what is the set of CSAM photos to check against. This way, China, Russia, the US, etc., can't target their citizens for some local crime.
Thank you. Yes, exactly. Final point: just because the code is currently configured to send matched hashes to Apple only if iCloud Photos is turned on, that is not a technical dependency. A simple code change could send data constantly.
The silliness of these arguments you replied to is mind-boggling.

There's a significant difference between "identifying pictures of people and collecting them in an album for your convenience" and "flagging material in order to report you to the authorities".
They have said explicitly that they are going to fight it. If you think they are lying, or are going to be forced to do this by a government without notifying users, then all bets are off with regard to your relationship with Apple and it's time to go elsewhere.

But the FBI does.
What will Apple do when they receive a national security letter with a gag order ordering them to report certain pictures?
Have they promised not to scan your phone or MacBook for anything but CSAM in the future? (or even the next x years?)
No!
And they're not going to tell us that because they know we won't like the answer.
Nah, I believe they made a very cynical business decision that claiming to protect kids from perverts sells better than claiming to protect privacy.

I feel like some people have no desire to understand or seek out the larger context, and instead believe Apple execs (apparently believing they're not very smart) just decided to do this on a whim one morning, feeling the public would be fine and there would be no adverse company consequences.
Sort of. Technically, it’s the Hamming Distance. The distinction between nearly identical and merely similar is most likely a few bits at best. Totally configurable by Apple and not likely very accurate based on:

I am deeply rooted in the AI/ML world, and their language suggests they have embedding layers in the network (see word2vec and word embeddings) in order to assess semantic and perceptual likeness --- this is not simply face detection or object detection, they are looking for (and only looking for) images that nearly identically match.
You do not know this, because you do not know exactly how the perceptual/neural hash is derived from the image file.

It is a hash match, not a scan, against a child porn database from NCMEC. There's no way this activates any other way unless you're a pedophile.
If that's true, then why are they shipping the hash DB with iOS (which they've explicitly said they are)? There would be no reason to do this unless the test is performed on the phone.

This analogy is completely incorrect. What’s actually happening is that before you leave your house, you create an inventory of all the items you’re gonna bring across state lines and deliver that list to the FBI agent once you get to the border. That way, the agent does not need to go through your actual items.
Let’s make this clear: the iPhone creates the hash. The matching process happens in the cloud.
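A minimal sketch of the split that post describes: hashing on the device, comparison in the cloud. This ignores the cryptographic blinding and private set intersection in Apple's descriptions (and other posts dispute where the comparison actually happens); neural_hash() and the database contents are hypothetical.

```python
# Sketch: on-device hashing, server-side matching (simplified, no cryptography).
def neural_hash(image_bytes: bytes) -> int:
    """Hypothetical stand-in for the on-device perceptual hash."""
    return int.from_bytes(image_bytes[:12].ljust(12, b"\0"), "big")

def device_prepare_upload(image_bytes: bytes) -> dict:
    # Runs on the phone: the hash travels alongside the upload.
    return {"payload": image_bytes, "hash": neural_hash(image_bytes)}

SERVER_DB = {neural_hash(b"known-bad-image-bytes")}  # made-up database

def server_check(upload: dict) -> bool:
    # Runs in iCloud: compare the submitted hash against the database.
    return upload["hash"] in SERVER_DB

print(server_check(device_prepare_upload(b"holiday-photo.jpg bytes")))  # False
```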
We don't know if it's completely policy -- it may be built-in. In fact, their documentation suggests they take a set intersection of hashes from the various databases, and only treat a hash as true CSAM when it appears in at least two of them. This suggests that it's not a policy decision to upload or not based on some manual review of agreements between countries, but a technical implementation, which would require edits to iOS (they only ship 1 version of iOS worldwide).

That is a policy restriction, not a technical limitation. While I fully expect that policy to be maintained for the foreseeable future in the United States (incurable optimist that I am), I expect Apple to fold like a cheap suit when they're pressured by the CCP, Saudi Arabia, or repressive-regime-of-your-choice. This policy will be different there. We know this is true based on Apple's past capitulations to such regimes.
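Here is a minimal sketch of the intersection safeguard described at the start of that post: only hashes present in the databases of at least two independent organizations would ever be eligible for matching. The database contents are made up, and whether this check lives in code or only in policy is exactly the question being debated.

```python
# Sketch: requiring a hash to appear in at least two independent databases.
ncmec_db  = {"h1", "h2", "h3"}   # hypothetical NCMEC-derived hash set
other_org = {"h2", "h3", "h4"}   # hypothetical second child-safety organization

eligible_hashes = ncmec_db & other_org  # set intersection
print(eligible_hashes)                  # only h2 and h3 could ever be flagged
```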
And again, that's a policy decision, not a strict technical limitation. I'm reasonably sure that things will stay this way in the US. Other places, not so much.

As far as I understood it from Craig’s WSJ interview, this is an opt-in child protection iMessage feature that alerts the respective parents.