Here it is; it's the first time I've heard it explained like that.
Thanks for posting. I've watched it now, and while it was sensational viewing, I think you guys would do well to apply a liberal dose of that good old skepticism of yours.
Rob Braxman states (with assumed authority) that Apple and Google don't care at all about user privacy, only profit. That's a cynical position, and a little ironic coming from someone who appears to make his money selling products that promise to protect you from them. (i.e. he has a commercial incentive to convince you of this message.) Apple may or may not genuinely care, but keep in mind that Apple makes its money from selling hardware and services, not from selling advertising and user data. What commercial incentive does Apple have to compromise the privacy of its users?
Now, regarding the AI component of Apple's NeuralHash technology, which is really the meat of Rob's presentation… Yes, Apple uses AI on your device as part of the hashing process, which anyone can verify by reading Apple's CSAM Detection Technical Summary (something I really ought to have done before engaging seriously in this discussion). But here's where he seems to go off the rails. He asserts (starting around 15:40) that Apple isn't really using a single hash for each image at all; rather, they are using the AI to identify a whole collection of individual characteristics for each image and hashing them separately. He supposes these characteristics would include things like the faces of abused children (through facial recognition), body positions, nudity, and environmental context. Where is his evidence for this? Towards the start of the video, he makes it sound like his suspicions were confirmed by reading Apple's technical documentation, but later on (around 10:30) he talks like it's more of a conspiracy, postulating about 'something else at work here, something that's underneath the covers that we are not seeing, and they're not telling'. It honestly sounds to me like he's just making stuff up.
Here's what Apple's Technical Summary actually says about the AI component of NeuralHash:
The neural network that generates the descriptor is trained through a self-supervised training scheme. Images are perturbed with transformations that keep them perceptually identical to the original, creating an original/perturbed pair. The neural network is taught to generate descriptors that are close to one another for the original/perturbed pair. Similarly, the network is also taught to generate descriptors that are farther away from one another for an original/distractor pair. A distractor is any image that is not considered identical to the original.
In other words, machine learning was used to train the network so that 'perceptually identical' images (images which are essentially the same, but which might have different resolutions, minor cropping, colour differences, etc.) produce very similar descriptors, and substantively different images produce very different ones. (While the whole system is very complex, that much is conceptually pretty simple and logical, right?) Finally, those descriptors are passed to the hash function, which is designed to output an identical hash for perceptually identical images.
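To make that idea concrete, here's a toy sketch of a classic perceptual hash (an 'average hash'). To be clear, this is NOT Apple's NeuralHash: Apple uses a trained neural network to produce the descriptor, while this toy just thresholds raw pixels. But the final step, where small perturbations leave the hash unchanged and different images produce different hashes, works on the same principle described above.

```python
def average_hash(pixels):
    """Hash a grayscale image (a 2D list of 0-255 values) to a bit string.

    Each pixel becomes '1' if it is brighter than the image's mean, else '0'.
    Small perturbations (a slight brightness shift, minor noise) rarely flip
    any bits, so 'perceptually identical' images yield identical hashes.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

original = [[ 10,  20, 200],
            [ 30, 220, 210],
            [ 15,  25, 230]]

# The same image, uniformly brightened a little: 'perceptually identical'.
perturbed = [[p + 5 for p in row] for row in original]

# A substantively different image (same pixel values, rearranged).
distractor = [[200, 210,  10],
              [220,  20,  30],
              [230,  25,  15]]

print(average_hash(original) == average_hash(perturbed))   # True
print(average_hash(original) == average_hash(distractor))  # False
```

The point is simply that one hash per image is enough: similarity is baked into the descriptor, so there's no need to hash faces, poses, or scenery separately.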
Okay, but I know you're all reluctant to trust what Apple says. So let me ask you this… Why would Apple do it Rob Braxman's way? It's more complex. It requires storing multiple hashes per image. Heck, it would require that Apple has hoodwinked the NCMEC into storing hash tables many times bigger than claimed (or else they're in on it too!), or, as Rob actually claims (around 16:30), that Apple has full possession of the database of CSAM images!
None of this makes sense, unless… Apple's real game is to build some kind of nefarious Big Brother surveillance system, the likes of which the world has never seen, and it starts right here, folks! Well… they could be… I suppose. They could have been working on it for years. Where's the evidence? Good question. But you see, if you take Rob's word for it that this is how the technology works, that's ample evidence for the conspiracy, and the conspiracy is ample evidence that this is how the technology must work. It makes perfect sense once you start thinking that way.
Apple devs and execs must be banging their heads against that curved glass wall in frustration right now. Well, either that, or out of sheer terror that their evil plans have been foiled again by those dratted kids on MR!
The video is still kind of worth watching, because if nothing else, it makes you stop and think about all the other surveillance tech that is built into our phones to make our lives easier—you know, to show us where we left our phone, our keys, our friends… to give us live traffic reports and all the rest of it. Whether Apple goes ahead with this or not, they (and the same goes for Google) could build the stuff of nightmares if they really wanted to. It's a sobering thought.