The chances that someone into child porn has only 29 images is slim to none.
I have to ask how you know this to be true. What's your sample size? Exactly how many people "into child porn" do you normally associate with?
 
So am I correct to assume that if you don't want this, then you can't upgrade to iOS 15? Or are they just turning it on in iCloud...? I currently do pay for family iCloud.
 
Hm. Not sure, but since the network is not supposed to generalize, only to find images in the training set, wouldn't overfitting be kind of a desired feature?
Not if you want to be able to add new images to the database when new CSAM arrives.
 
We don't actually know that, because we don't have details of the proprietary neural hashing. We know that it involves determining if images are "similar", because it supposedly works even if images are cropped or individual pixels are changed. Were that not the case, it would be ridiculously easy for even the densest of CSAM aficionados to circumvent.

The question of what "similarity" means is a big issue in the AI/ML world, where facial recognition systems have proven to have significant biases (for example, they do very poorly with members of some ethnic groups). Images that some AI/ML system "thinks" are similar may not be similar at all to a human being (hence the human verification step before Apple risks considerable legal liability by reporting an innocent image to the authorities in the US).
I am deeply rooted in the AI/ML world, and their language suggests they have embedding layers in the network (see word2vec and word embeddings) in order to assess semantic and perceptual likeness --- this is not simply face detection or object detection, they are looking for (and only looking for) images that nearly identically match.
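To make the embedding point concrete, here's a toy sketch (nothing like Apple's actual NeuralHash network; the average-pooling stand-in below is purely illustrative) of how an embedding maps an image to a vector so that near-duplicates land close together while unrelated images do not:

```
import numpy as np

def toy_embedding(img: np.ndarray) -> np.ndarray:
    """Stand-in for a learned embedding network (illustrative only):
    average-pool a grayscale image into an 8x8 grid, mean-center, and
    L2-normalize, so small edits move the vector only slightly."""
    h, w = img.shape
    pooled = img[: h - h % 8, : w - w % 8].reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    vec = pooled.flatten() - pooled.mean()
    return vec / np.linalg.norm(vec)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b)

rng = np.random.default_rng(0)
original = rng.random((64, 64))
tweaked = original.copy()
tweaked[0, 0] = 0.0                 # "change a pixel"
unrelated = rng.random((64, 64))

print(cosine(toy_embedding(original), toy_embedding(tweaked)))    # close to 1.0: still "the same" image
print(cosine(toy_embedding(original), toy_embedding(unrelated)))  # near 0: not a match
```

A real perceptual-hash system quantizes a vector like that into bits; the point is that it measures likeness to specific known images, not "does this contain a face/body".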
 
There's a significant difference between "identifying pictures of people and collecting them in an album for your convenience" and "flagging material in order to report you to the authorities".
Sure, but the existing system would be a lot better at being a surveillance system than NeuralHash, which seems nearly impossible to turn into one.
 
That's a policy decision, not a hard technical limitation. Policies change.
I don't disagree with this. Then again, it's been a policy decision to not always use the existing facial scanning tech against us (at least we've been led to believe this).
 
Facepalm.


Man, people really need to read what's in this. This is why so many are confused.

No, it will not flag your photo of your daughter taking a bath, because that image's HASH is NOT in the DB. It DOES NOT SCAN FOR NAKED PEOPLE, FOLKS. Understand this. It scans the HASHES, the back end of the file, not the front end.
You do not know this for sure, because you do not know how the neural hash is calculated. Since it is based on "similarity" as determined by the hashing process, and "similarity" to an ML algorithm is not necessarily the same thing as "similarity" determined by a human being who knows the image context, it's impossible to say without running the image through the hashing process whether or not it "matches" the hash of some random piece of CSAM in the database. The match has to be somewhat loose, or it wouldn't be able to detect cropped or reprocessed images, which Apple claims it can.

This is not some simple cryptographic hash where you can say "it's 32 bits long, so the odds of a collision are 1/(2^32 - 1)". That's why I don't believe they can back up their "1 in a trillion" claim.
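For scale, here's the back-of-the-envelope arithmetic for the hypothetical 32-bit exact-hash case (the library and database sizes below are numbers I made up for illustration, not published figures). Even in that simple case, "1 in a trillion" doesn't fall out of naive collision odds:

```
bits = 32
space = 2 ** bits          # 4,294,967,296 possible values for a 32-bit hash
library = 20_000           # assumed size of an ordinary photo library
db_entries = 200_000       # assumed size of the match list (not a published figure)

# Chance that at least one innocent photo collides with at least one database entry,
# assuming uniform, independent hash values (a birthday-style estimate).
p_collision = 1 - (1 - db_entries / space) ** library
print(p_collision)         # roughly 0.6 with these assumptions, nowhere near 1 in a trillion
```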
 
The hashes are generated from images held by the National Center for Missing & Exploited Children (NCMEC). Their DB doesn't have IMAGES, but hashes.

For example:


IMAGE A is one an innocent user took of his dick and sent to his girlfriend; its hash is, for example, 01ASEFH901JAO.

IMAGE B is one from the NCMEC database of a man's dick as well: exact same angle, same lighting, same EXIF data. This IMAGE B was taken at the same time, in the same apartment building, on the floor above IMAGE A. It has a hash of 07B6TF431AFGA.

Hash scanning will NOT scan for dicks or naked bodies.

It will scan for the HASH, in this case 07B6TF431AFGA.


Do you get it now?
You are explicitly assuming that you are dealing with a simple bitwise or cryptographic hash, when Apple has explicitly stated this is not the case. If that were the case, then cropped versions of the same CSAM image would be undetectable, as would images that had one or a few pixels altered.
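A quick way to see why an exact cryptographic hash couldn't survive even a one-pixel edit (a minimal sketch using synthetic pixel data, nothing to do with Apple's actual pipeline):

```
import hashlib
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

edited = image.copy()
edited[0, 0] ^= 1                      # flip a single bit of a single pixel

print(hashlib.sha256(image.tobytes()).hexdigest()[:16])
print(hashlib.sha256(edited.tobytes()).hexdigest()[:16])
# The two digests have essentially nothing in common, so an exact-hash blocklist would
# miss the edited copy entirely; that is why Apple describes NeuralHash as a perceptual
# ("similarity") hash rather than a cryptographic one.
```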
 
Facepalm.

It scans the image to generate a hash; if that hash is close to a kid porn hash, you get flagged.

Just because it doesn't compare images like a human would doesn't mean it's not looking for naked people.
Or what the algorithm "thinks" are naked people, more to the point.
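In code terms, the flagging logic being described in this thread boils down to a distance threshold. The hash values and threshold below are invented for illustration; Apple has not published how loose its matching actually is:

```
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hash values."""
    return bin(a ^ b).count("1")

# Made-up tolerance: Apple has not published its matching threshold.
THRESHOLD = 8

def is_flagged(photo_hash: int, known_csam_hashes: set[int]) -> bool:
    """Flag a photo if its hash is 'close to' any known hash, per the posts above."""
    return any(hamming(photo_hash, h) <= THRESHOLD for h in known_csam_hashes)
```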
 
I believe Apple has said that all the databases are verified by multiple sources to prevent this from happening. They’ve designed it so that no government can hijack the system.
That is a policy decision, not a hard technical limitation. I suspect it will remain policy in the US. I suspect that policy will be different in the PRC or Saudi Arabia, when this "feature" is inevitably rolled out there.
 
Nobody:

Nobody at all:

Apple  : We are going to start scanning your iPhone for child porn

I was surprised Apple dropped the ball on this and didn't internally realise this would be widely panned and fly in the face of their Privacy crusade.

Now I’m CONCERNED. Concerned that in the face of widespread global condemnation they’ve chosen to double down. That’s a step past incompetent. That’s not the Apple I know and love. Tone deaf.
They're making a bet here. They're betting that the slogan "Apple protects children" will generate more revenue than "Apple thinks you're a pervert and spies on you" will cost them. That's all.

As far as "knowing and loving" Apple is concerned:

1) Apple is a corporation, not a human being, and unless you're in the C-suite you don't "know Apple" (as all the Apple employees who've been complaining about this on Slack have suddenly realized), and

2) Apple is a corporation, not a human being. It is incapable of "love", and as a very wise man told me about 40 years ago, "don't never love nothin' what can't love you back"
 
BS. Stage 1 of a possible backtracking initiated by Apple due to backlash.
I believe this anti-privacy feature, as per usual, is for USA customers only, where there are little to no consumer privacy laws to start with. Same with Google, Facebook, WhatsApp, etc. ... there are certain things they can't do or implement worldwide, but they can and do for their local customers in the USA.
 
Hmm, interesting observation; Cook is indeed the PR man when it comes to Apple privacy. Have they figured out that the more they discuss this in public, the worse it'll get for them? I think they might want to bury the PR on this and just proceed with it, hoping the public noise dies down.
I feel the need to point out that "Apple protects children from sexual exploitation" is every bit as much virtue-signalling PR as "Apple respects privacy".
 
What I find interesting, is the timing of Apple's decision to implement this software. It comes shortly before the release of iOS 15, and a month before the official announcement of the iPhone 13.

That's really bad timing for ticking off a lot of users.

It's curious as to what might have prompted their decision...
Probably because Apple knows people are excited for the OS update and the new phones and products, and that excitement would offset their dislike of the new CSAM scanning routine.

Also, I bet they are getting ready to roll out E2E encryption, and the CSAM scanning is designed to mollify law enforcement.
 
Don't take our word for it. Here's Bruce Schneier's (one of the foremost information security experts on the planet) take: https://www.schneier.com/blog/archi...backdoor-to-imesssage-and-icloud-storage.html
This!

I feel like, for so many people, this is the first time they’ve had to learn and deal with this topic. The functional architecture here is spyware; full stop. This isn’t new. We’ve been having these debates since the dawn of the Internet. The problem existed before that, but we’d mostly worked out some reasonable human rights protections. Probably the first real attempt at giving away the keys to the kingdom was the Clipper Chip, but there have been many attempts/discussions over the years. On device spyware was always a bridge too far and not compatible with a free society.
 
You're misunderstanding. The foreign jurisdiction doesn't review your photos --- they agree with other jurisdictions on what is the set of CSAM photos to check against. This way, China, Russia, the US, etc., can't try to target their citizens, for some local crime.
That is a policy restriction, not a technical limitation. While I fully expect that policy to be maintained for the foreseeable future in the United States (incurable optimist that I am), I expect Apple to fold like a cheap suit when they're pressured by the CCP, Saudi Arabia, or repressive-regime-of-your-choice. This policy will be different there. We know this is true based on Apple's past capitulations to such regimes.
 
Final point: just because the code is currently configured to send matched hashes to Apple if iCloud Photos is turned on, that is not a technical dependency. A simple code change could send data constantly.
Thank you. Yes, exactly.
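To spell that out (hypothetical names; this is just the shape of the dependency being described, not Apple's code), the iCloud Photos condition is a runtime gate, not something the matching step structurally needs:

```
def upload_safety_vouchers(vouchers) -> None:
    """Hypothetical stand-in for the real upload path."""
    print(f"uploading {len(vouchers)} vouchers")

def maybe_report(matched_vouchers: list, icloud_photos_enabled: bool) -> None:
    # Today's behavior: only send if iCloud Photos is on. The gate is a boolean check,
    # and removing it is a one-line change; nothing in the matching step requires it.
    if icloud_photos_enabled:
        upload_safety_vouchers(matched_vouchers)
```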
 
There's a significant difference between "identifying pictures of people and collecting them in an album for your convenience" and "flagging material in order to report you to the authorities".
The silliness of these arguments you replied to is mind-boggling.

"My device needs my encryption keys to function" is not at all similar to Apple giving my encryption keys to law enforcement or any third party, including Apple itself.

Scanning my photos or indexing my files is not at all similar to Apple giving them to law enforcement or any third party, etc.

Keep up the good fight. This is probably the most important privacy fight of this decade.
 
But the FBI does.
What will Apple do when they receive a national security letter with a gag order ordering them to report certain pictures?
Have they promised not to scan your phone or MacBook for anything but CSAM in the future? (or even the next x years?)
No!
And they're not going to tell us that because they know we won't like the answer.
They have said explicitly that they are going to fight it. If you think they are lying, or are going to be forced to do this by government without notifying users, then all bets are off with regard to your relationship with Apple and it's time to go elsewhere.

This is definitely an issue of trust between the company and its customers; they/we need each other, since there is no real alternative to Apple. Forget Google, as they are likely to do the same thing soon, not to mention all the other ways they surveil their customers.

You need to decide to make the leap of trust or go to one of the more privacy-focused OSes and phones.

Personally, I think it's better to just conclude that any real privacy when using technology in daily life is over; it's finished, and so I now act accordingly.
 
I feel like some people have no desire to understand or seek out the larger context, and instead believe Apple execs (apparently thinking they're not very smart) just decided to do this on a whim one morning, feeling the public would be fine and there would be no adverse company consequences.
Nah, I believe they made a very cynical business decision that claiming to protect kids from perverts sells better than claiming to protect privacy.
 
I am deeply rooted in the AI/ML world, and their language suggests they have embedding layers in the network (see word2vec and word embeddings) in order to assess semantic and perceptual likeness --- this is not simply face detection or object detection, they are looking for (and only looking for) images that nearly identically match.
Sort of. Technically, it’s the Hamming Distance. The distinction between nearly identical and merely similar is most likely a few bits at best. Totally configurable by Apple and not likely very accurate based on:

1) Mobile device processing/battery limitations. Apple has a good neural engine, but it’s still edge processing.

2) The fact they need to open it up to 30 matches to meet their false positive requirements. I ran some quick probability estimates (the standard setup is sketched after this list), but there are still too many assumptions, and statistical estimation of perceptual hashes is not my area of expertise, so I'm likely missing something. I'd like to see the real math direct from Apple, but the performance does not appear confidence-inspiring.

3) Because of the backlash, and multiple AI experts calling (most likely) BS on the 1-in-a-trillion claim, Apple added a middle step between on-device hash matching and human review. There's now a larger, higher-performing, independent (at least so they claim) perceptual hash algorithm that verifies the edge hashing and runs in the cloud (i.e. big, traditional servers). Only if that also matches will a human review it. If your edge processing is good enough, you don't need that; it should be obvious Apple's is not.
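For anyone who wants to redo the rough estimate from point 2, the standard setup is a binomial tail: assume each photo independently false-matches with some per-image probability p, and ask how likely 30 or more false matches are in a library of N photos. Both numbers below are assumptions of mine for illustration, not figures Apple has published:

```
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p), computed via the short complementary sum."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# Assumed inputs: a 10,000-photo library and a per-image false-match rate of 1 in 1,000.
print(p_at_least(30, 10_000, 1e-3))
# Tiny (on the order of 1e-7) with these made-up numbers, but the answer swings wildly
# with p, which is exactly why the real math from Apple would matter.
```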
 
This analogy is completely incorrect. What's actually happening is that before you leave your house, you create an inventory of all the items you're going to bring across state lines and deliver that list to the FBI agent once you get to the border. That way, the agent does not need to go through your actual items.

Let's make this clear: the iPhone creates the hash. The matching process happens in the cloud.
If that's true, then why are they shipping the hash DB with IOS (which they've explicitly said they are)? There would be no reason to do this unless the test is performed on the phone.
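Setting aside the blinding/private-set-intersection layer Apple describes (which hides from the phone which entries matched), the basic reason to ship the database is that the lookup itself happens on the device. A heavily simplified, hypothetical sketch:

```
# Heavily simplified: the real on-device table is blinded, so the phone itself cannot
# tell which entries (if any) a photo matched; only Apple's server can learn that.
ON_DEVICE_HASH_DB = {"07B6TF431AFGA", "01ZZZ999EXAMPLE"}   # shipped with the OS

def make_safety_voucher(photo_id: str, photo_hash: str) -> dict:
    """Hypothetical packaging of the on-device match result for later upload."""
    return {"photo": photo_id, "matched": photo_hash in ON_DEVICE_HASH_DB}

# If matching happened purely in the cloud, the phone would only need to send a hash;
# there would be no reason to push the whole database down to every handset.
```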
 
That is a policy restriction, not a technical limitation. While I fully expect that policy to be maintained for the foreseeable future in the United States (incurable optimist that I am), I expect Apple to fold like a cheap suit when they're pressured by the CCP, Saudi Arabia, or repressive-regime-of-your-choice. This policy will be different there. We know this is true based on Apple's past capitulations to such regimes.
We don't know that it's completely policy -- it may be built in. In fact, their documentation suggests they take a set intersection of hashes from the various databases, and only treat a hash as true CSAM when it appears in at least two of them. This suggests that it's not a policy decision to upload or not based on some manual review of agreements between countries, but a technical implementation, which would require edits to iOS (they only ship one version of iOS worldwide).
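Here's the intersection rule being described, as a minimal sketch (hash values invented): a hash only counts if at least two independently supplied databases both contain it, so no single government can unilaterally slip its own entries in:

```
# Invented example hashes; only entries supplied by at least two independent
# jurisdictions are eligible for matching.
us_db = {"07B6TF431AFGA", "0AA111EXAMPLE"}
other_db = {"07B6TF431AFGA", "0CC333EXAMPLE"}

eligible = us_db & other_db      # set intersection
print(eligible)                  # {'07B6TF431AFGA'}
```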
 
As far as I understood it from Craig's WSJ interview, this is an opt-in child protection iMessage feature that will alert the respective parents.
And again, that's a policy decision, not a strict technical limitation. I'm reasonably sure that things will stay this way in the US. Other places, not so much.
 