Thanks for posting. I've watched it now, and while it was sensational viewing, I think you guys would do well to apply a liberal dose of that good old skepticism of yours.
I didn't really need to apply any skepticism; I only posted the video because of the AI part, where he talks tech. I didn't put much stock in the doom-and-gloom part or the claim that the iPhone is the world's best surveillance device. It may be true, it may not; I didn't really focus on that. And I didn't post the video because I agree with him or think he's right, just for the argument, as I thought it was an interesting analysis.
Rob Braxman states (with assumed authority) that Apple and Google don't care at all about user privacy, only profit...
A company that is willing to have its products manufactured by children in China does not
care about my privacy or my wellbeing in the true sense of the word. They care in the sense that they want to do well in that area because it's their competitive advantage over the other tech giants, who don't even pretend to provide privacy. Basically, I thought privacy was Apple's business model, and in that way they did care. That seems to have changed; we'll see.
He has a commercial incentive to convince you of this message.
Possibly, but I think you're being unfair and a little hypocritical. I don't know this Braxman character; this video is the first time I've seen him (btw he has another one up about this subject, will watch later), so I can't speak to his motivations. But what you say about him can be applied to almost anyone who is qualified to speak on the subject. Isn't this creating a catch-22? For your opinion to matter, you have to be in the know, you have to be in that field for your analysis to carry any weight; but if you're in the field, your opinion will be dismissed because you must have an incentive one way or another. That's not fair as a blanket assessment if you don't know the source.
And the fact that the guy sells this kind of product should be all the more reason to pay attention to what he has to say: he makes his living off of this, so you'd think he knows what he's talking about, right? Also, who has the most interest in this tech not being scrutinized at all? Who makes a lot of money selling devices that will carry this tech, but whose word you trust? This is the hypocritical part, as you can see.
...which can be verified by reading Apple's
CSAM Detection Technical Summary for ourselves (something I really ought to have read before engaging seriously in this discussion).
You gave me a good laugh when I read this. You didn't even read what
Apple has to say about the very technology whose technical side you were arguing, yet you engaged in these threads telling people they should educate themselves on it 😆 Dude, at least I said outright that I don't know the technical side of it. This was good, I'll give you that 😂
But here's where he seems to go off the rails. He asserts (starting around 15:40) that Apple isn't really using a single hash for each image at all, but rather, they are using the AI to identify a whole collection of individual characteristics for each image and hashing them separately. He supposes these characteristics would include things like the faces of abused children (through facial recognition), body positions, nudity, and environmental context. Where is his evidence for this?
Can he have evidence? He doesn't have the system to test, so he's drawing conclusions from the available data. You yourself asked me how you could possibly know how Apple arrived at the magic number 30. This guy, whether right or wrong, applied his expertise in the field to the info he can find on this tech. It doesn't seem like he can do better than that at the moment, and this is another hypocritical aspect of your post: dismissing the assumptions and conclusions of someone who should actually have at least some knowledge of this stuff. Mind you, I'm not arguing he's right, as I have no idea; I'm arguing that he has the credentials, this being his bread and butter.
In other words, machine learning was used to train and refine the algorithm to see 'perceptually identical' images (images which are essentially the same, but which might have different resolutions, minor cropping, colour differences, etc) as very similar, and substantively different images as very different. (While the whole system is very complex, that much is conceptually pretty simple and logical, right?).
Yes, that seems logical. This, then, confirms that multiple images can share a hash if the AI decides they are not substantively different, just somewhat altered. And since the AI is not a person, a photo very similar to a CSAM photo could fool it. I'm not saying there is a better solution than this, since obviously countless people cannot go through countless photos of countless other people.
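To make the "perceptually identical" idea concrete, here's a toy sketch of a generic difference hash ("dHash"), one of the simplest perceptual-hashing techniques. To be clear, this is not Apple's NeuralHash (per their technical summary, that's a neural network); the `dhash` function and the pixel grids below are made up purely to illustrate the general property being discussed: small edits leave the hash unchanged, while a substantively different image hashes differently.

```python
# Illustrative difference hash ("dHash") -- a stand-in, NOT Apple's NeuralHash.
# A perceptual hash maps visually similar images to similar (often identical)
# hashes, unlike a cryptographic hash, where changing one pixel changes everything.

def dhash(pixels):
    """Hash a grayscale image given as a grid (list of rows) of 0-255 ints.
    Each bit records whether a pixel is brighter than its right-hand neighbour."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

# A toy 4x5 "image" (grayscale values)...
original = [
    [10, 60, 20, 80, 30],
    [90, 40, 70, 15, 55],
    [25, 85, 35, 95, 45],
    [50, 10, 65, 20, 75],
]

# ...a slightly brightened copy (every pixel +5: relative structure unchanged)...
brightened = [[p + 5 for p in row] for row in original]

# ...and a substantively different image (a smooth gradient).
different = [[(r * 7 + c * 13) % 256 for c in range(5)] for r in range(4)]

print(dhash(original) == dhash(brightened))  # -> True: the edit doesn't change the hash
print(dhash(original) == dhash(different))   # -> False: different content, different hash
```

Real perceptual hashes first downscale and normalize the image, so resolution changes and minor crops collapse onto the same grid; the point here is only that the hash is derived from the image's visual structure, not its exact bytes.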
Okay, but I know you're all reluctant to trust what Apple says. So let me ask you this… Why would Apple do it Rob Braxman's way? It's more complex. It requires the storing of multiple hashes. Heck, it would require that Apple have hoodwinked the NCMEC into storing hash tables multitudes bigger than claimed (or else they're in on it too!), or, as Rob actually claims (around 16.30), Apple has full possession of the database of CSAM images!
Hell if I know, really. I hope we find out, but I don't expect we will. As for the last sentence, let me turn it around on you: can you prove that Apple does not have full possession of the CSAM database?
None of this makes sense, unless… Apple's real game is to build some kind of nefarious Big Brother surveillance system, the likes of which the world has never seen, and it starts right here folks! Well… they could be… I suppose. They could have been working on it for years.
Like I said, I didn't really bother with the more negative part of the video, so I don't know. Apple may be doing it, and maybe not. What I can say is what I know, and this is obvious to anyone who reads history. Power will always want more power, and those in power will usually use the tools at their disposal to advance their interests. This isn't conspiracy talk, it's just realistic. Whether Apple is doing something nefarious, I have no idea.
Another thing that is clear is that we approach this in fundamentally different ways. You trust Apple (because why not, right?), even though no one in this world (literally) has more interest in misleading you than Apple, if in fact Apple were doing something nefarious. This is the kind of thing that would cry out for a healthy dose of skepticism. You do this without evidence of Apple's benevolence, while at the same time dismissing opposing views for lack of evidence. I, on the other hand, approach this with a liberal dose of skepticism and base my negative views mostly on the fact that new tech like this doesn't get rolled back or scaled back; it only gets improved and expanded. Many people have already said as much, including the source you trust the most. This doesn't make me burst with confidence and enthusiasm for what comes next, and I don't think there's anything tin-foil or unhealthy about that. It's not like I want to be in this situation; I love my Apple products.
As you may have noticed, I accidentally learned how to break up quoted parts.