So you'd rather have all your photos out in the open and being scanned for the same exact content and reported to the same authorities?

At least on your device, Apple doesn't know about any of your photos until there are 30 matches. Then they only have access to those 30 photos versus having access to all your photos.

Plus, if you don't want your device(s) scanned, don't use iCloud Photo Library (which would be the same exact thing if they only scanned on the server).

Again, it's MORE private, not less.

Yes.
If I choose to utilize the Cloud (note I said I choose) and part of that use is that the Cloud “Owner” will have a CSAM scan process (I’m thinking AI), I have a choice to either agree to that or find another Cloud provider. I use Google and Dropbox today and both use a form of situational scanning.

What you continue to miss, or ignore, is that I do not want this on my personal devices. There are other options and technologies available that do not impact my personal “space”.

Apple’s solution more secure? Not.

btw - OS upgrades can and have turned on functions in Settings that you may not even be aware of until some other item alerts you to that fact. Add to this the less than stellar track record we have seen regarding “bugs” in Apple software.
 
Yes.
If I choose to utilize the Cloud (note I said I choose) and part of that use is that the Cloud “Owner” will have a CSAM scan process (I’m thinking AI), I have a choice to either agree to that or find another Cloud provider. I use Google and Dropbox today and both use a form of situational scanning.

What you continue to miss, or ignore, is that I do not want this on my personal devices. There are other options and technologies available that do not impact my personal “space”.

Apple’s solution more secure? Not.

btw - OS upgrades can and have turned on functions in Settings that you may not even be aware of until some other item alerts you to that fact. Add to this the less than stellar track record we have seen regarding “bugs” in Apple software.
Apple's solution is more secure. It's kinda hilarious how blind you are to this.
 
Yeah man, I hate those conspiracy theorists too. They're always harping on about absurd things that could never happen like "What if the US government started doing biological experiments on black people" or "What if the government started profiling us based on everything we do online" or "What if the government started tracking everywhere we went" or "What if the government dropped a nuclear bomb on innocent people and killed 200,000 of them in the process" or "What if the government had a policy to ensure a baseline level of extreme poverty to control inflation" or "What if the government lied to the public in order to invade another country for economic purposes"

The people that like to think about "what if?" are often the people that have the ability to see more than 100 feet beyond them. They're the same people that are currently thinking about what our planet will look like 50 years from now whereas many people don't think beyond where they want to get lunch over the weekend. They're also the people that protect our rights by analyzing historical precedent and extrapolating what an authority could do with a given power. Some words used to describe them might be "investigative journalist" or "academic."

Lumping the minority of people whose version of "what if?" is "what if Gatorade makes my pet dog gay" into the same category as people who ask "what if the government had the capability of spying on everything we do" is extraordinarily ignorant and disrespectful to smart people who actually give a damn about our country and planet at large.

It's not just enthusiasts and ordinary people like myself expressing concern about this; many people smarter than us, with a greater understanding of the technology and, importantly, the history, are also expressing deep concern.
You forgot "What if the government decided to start sterilizing non-whites" (google Eugenics)
 
To me it is odd that you see on device as more private. Especially as this solution bypasses device encryption.

Do me a favor and go back and read the EU doc I linked in post #623.
Your files have to be unencrypted to be scanned either way. So you either do it before the upload or after the upload. The good news is that if you do it before the upload, Apple only has the key to unlock the flagged photos rather than all your photos. Plus, they ONLY have the key to those flagged photos after 30 images are accumulated. The odds of that happening are 1 trillion to 1 (unless you're a sicko). So yeah, I'd say the on-device scan is 100% privacy orientated for my stuff at least.
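
For anyone wondering how "Apple only gets the key after 30 matches" can be enforced cryptographically rather than by policy: Apple's CSAM Detection Technical Summary calls this threshold secret sharing. Here is a toy Shamir-style sketch of the general idea only; it is not Apple's implementation, and the field, share counts, and key are all stand-ins:

```python
# Toy Shamir-style threshold secret sharing. The secret (think: the key
# that unlocks flagged photos) is recoverable only once THRESHOLD shares
# (one per matched photo, in this analogy) are available. Illustration
# only -- not Apple's code.
import random

PRIME = 2**127 - 1   # field modulus (a Mersenne prime)
THRESHOLD = 30       # shares needed to reconstruct

def make_shares(secret, n_shares, threshold=THRESHOLD, prime=PRIME):
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares, prime=PRIME):
    # Lagrange interpolation at x=0 recovers the constant term (the secret).
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

key = random.randrange(PRIME)
shares = make_shares(key, 40)                      # e.g. 40 photos matched
assert reconstruct(shares[:THRESHOLD]) == key      # 30 shares: key recovered
assert reconstruct(shares[:THRESHOLD - 1]) != key  # 29 shares: still locked
print("key recoverable only at or above the threshold")
```

With 29 shares of a degree-29 polynomial, the interpolated constant term is essentially random, so below the threshold the server mathematically cannot recover the key, no matter its intentions.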
 
Your files have to be unencrypted to be scanned either way. So you either do it before the upload or after the upload. The good news is that if you do it before the upload, Apple only has the key to unlock the flagged photos rather than all your photos. Plus, they ONLY have the key to those flagged photos after 30 images are accumulated. The odds of that happening are 1 trillion to 1 (unless you're a sicko). So yeah, I'd say the on-device scan is 100% privacy orientated for my stuff at least.
Untrue. Apple has a "master decryption" key. They have used it before, when the FBI came with a warrant to examine some dude's iCloud photos for evidence of a car bombing (IIRC).
 
Untrue. Apple has a "master decryption" key. They have used it before, when the FBI came with a warrant to examine some dude's iCloud photos for evidence of a car bombing (IIRC).
Apple doesn't scan our photos at all right now. Not even in the cloud, but if they were to move all the scanning to the cloud, they would have to decrypt all of our photos rather than just the 1 in a trillion chance they need to decrypt 30 of them. See what I'm saying?
 
Apple doesn't scan our photos at all right now. Not even in the cloud, but if they were to move all the scanning to the cloud, they would have to decrypt all of our photos rather than just the 1 in a trillion chance they need to decrypt 30 of them. See what I'm saying?
Incorrect. Apple IS Scanning iCloud photos for CSAM. Links all over the place in this conversation regarding that.
 
Your files have to be unencrypted to be scanned either way. So you either do it before the upload or after the upload. The good news is that if you do it before the upload, Apple only has the key to unlock the flagged photos rather than all your photos. Plus, they ONLY have the key to those flagged photos after 30 images are accumulated. The odds of that happening are 1 trillion to 1 (unless you're a sicko). So yeah, I'd say the on-device scan is 100% privacy orientated for my stuff at least.

Personally, however Apple wants to handle scanning is fine, as long as they execute the scan off the device.

You do realize that 1-in-1-trillion number is based on 30 consecutive false positives. The fact they need that many tells me there is some underlying accuracy fault in the hash method chosen. Simple math.
Wonder if they are also accounting for false negatives? Would love to see the testing results on match accuracy.

Either way, I would prefer that this action occur off my device.

Apple doesn't scan our photos at all right now. Not even in the cloud, but if they were to move all the scanning to the cloud, they would have to decrypt all of our photos rather than just the 1 in a trillion chance they need to decrypt 30 of them. See what I'm saying?

Currently Apple only scans by request of authorities. They are scanning based on the writ definition, not specifically for CSAM.
Apple does scan Mail (iCloud) for CSAM currently. Based on numbers reported they are not finding much.
 
You do realize that 1-in-1-trillion number is based on 30 consecutive false positives. The fact they need that many tells me there is some underlying accuracy fault in the hash method chosen. Simple math.
That's exactly why I trust it. The chances of having 30 images that accidentally match CSAM content are basically 0, but even if somehow 30 of them get through the device-scan, they're scanned again after being decrypted on the server to rule out the possibility of a false positive.

If that's the way it works, it's way better than having everything done server side.

Edit: Did you miss the part where Apple ran 100 million photos through the system and only 3 of them were false positives? That's 3 in 100 million. Now imagine 30 false positives all coming from ONE account. Almost impossible odds. That's why they say it's 1 in 1 trillion per year, per account.

Also, if the images are falsely flagged and they're not CSAM, then you won't be reported to authorities.

After all of that, you're still scared?
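
If it helps, the two-stage structure described above (an on-device match plus an independent server-side re-check before anything reaches a human) is easy to model. A minimal sketch: the 3-in-100-million device rate is the figure quoted above, while the server-side re-check rate is purely my assumption for illustration:

```python
# Toy model of the two-stage check: a photo only counts as a match if it
# is flagged on device AND survives an independent server-side re-check.
# The device rate is the 3-in-100-million figure quoted above; the
# server-side rate is a made-up assumption for illustration.
device_fp_rate = 3 / 100_000_000   # on-device false positive rate (quoted)
server_fp_rate = 1 / 1_000         # hypothetical re-check false positive rate

# Assuming the two checks fail independently, a photo must fool both:
combined = device_fp_rate * server_fp_rate
print(f"per-photo odds of surviving both checks: 1 in {1 / combined:,.0f}")
# -> per-photo odds of surviving both checks: 1 in 33,333,333,333
```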
 
That's exactly why I trust it. The chances of having 30 images that accidentally match CSAM content are basically 0, but even if somehow 30 of them get through the device-scan, they're scanned again after being decrypted on the server to rule out the possibility of a false positive.

If that's the way it works, it's way better than having everything done server side.

Edit: Did you miss the part where Apple ran 100 million photos through the system and only 3 of them were false positives? That's 3 in 100 million. Now imagine 30 false positives all coming from ONE account. Almost impossible odds. That's why they say it's 1 in 1 trillion per year, per account.

Also, if the images are falsely flagged and they're not CSAM, then you won't be reported to authorities.

After all of that, you're still scared?

Didn't miss it. That's just a headline. What was the actual test?

At the risk of being mod chastised, your last comment was totally uncalled for.
If you think that is the issue we have nothing further to discuss.
For me this is STRICTLY about the fact that I don't want that type of scanning design on my devices. The material being scanned has nada to do with it.

Later.
 
Didn't miss it. That's just a headline. What was the actual test?

At the risk of being mod chastised, your last comment was totally uncalled for.
If you think that is the issue we have nothing further to discuss.
For me this is STRICTLY about the fact that I don't want that type of scanning design on my devices. The material being scanned has nada to do with it.

Later.
Sorry that I hurt your feelings. You have the right to be scared, just like I have the right to not be.
 
Thanks for posting. I've watched it now, and while it was sensational viewing, I think you guys would do well to apply a liberal dose of that good old skepticism of yours.

I didn't really need to apply any skepticism; I only posted the video because of the AI thing. He talks tech in that part. I didn't put much stock in the doom and gloom part, or in the iPhone being the world's best surveillance device. It may be true, maybe not; I didn't really focus on that. I didn't post the video because I agree with him or think he's right; I just posted it for the argument, as I thought it was an interesting analysis.

Rob Braxman states (with assumed authority) that Apple and Google don't care at all about user privacy, only profit...

A company that is willing to have its products manufactured by children in China does not care about my privacy or my wellbeing in the true sense of the word. They care in the sense that they want to do well in that area because it's their competitive advantage over the other tech giants, who don't even pretend to provide privacy. Basically, I thought privacy was Apple's business model and in that way they did care. That seems to have changed, we'll see.

He has a commercial incentive to convince you of this message.

Possibly, but I think you're being unfair and a little hypocritical. I don't know this Braxman character; the first time I saw him was in this video (btw he has another one up about this subject, will watch later), so I can't speak to his motivations. But what you say about him can be applied to almost anyone who is qualified to speak on the subject. Isn't this creating a catch-22: for your opinion to matter, you have to be in the know, you have to be in that field for your analysis to have any weight, but if you're in the field, your opinion will be dismissed because you must have an incentive one way or another. That's not fair as a blanket assessment if you don't know the source.

And the fact the guy sells this kind of product should be all the more reason to pay attention to what he has to say, because he makes his living off of this so you'd think he knows what he's talking about, right? Also, who has the most interest in this tech not being scrutinized at all? Who makes a lot of money selling devices that will carry this tech, but whose word you trust? This is the hypocritical part, as you can see.

...which can be verified by reading Apple's CSAM Detection Technical Summary for ourselves (something I really ought to have read before engaging seriously in this discussion).

You gave me a good laugh when I read this. You didn't even read what Apple has to say about the technology that you argued the technical side for, but you engaged in these threads telling people that they should educate themselves on it 😆 Dude, at least I said outright I don't know the technical side of it. This was good, I'll give you that 😂

But here's where he seems to go off the rails. He asserts (starting around 15:40) that Apple isn't really using a single hash for each image at all, but rather, they are using the AI to identify a whole collection of individual characteristics for each image and hashing them separately. He supposes these characteristics would include things like the faces of abused children (through facial recognition), body positions, nudity, and environmental context. Where is his evidence for this?

Can he have evidence? He doesn't have the system to test it so he's drawing conclusions from the available data. You yourself asked me how you could possibly know how Apple arrived at the magic number 30. This guy, whether right or wrong, applied his expertise in the field to the info he can find on this tech. Doesn't seem like he can do better than that at the moment, and this is another hypocritical aspect of your post, dismissing the assumptions and conclusions of someone who should actually have at least some knowledge of this stuff. Mind you, I'm not arguing he's right as I have no idea, I'm arguing that he has the references, this being his bread and butter.

In other words, machine learning was used to train and refine the algorithm to see 'perceptually identical' images (images which are essentially the same, but which might have different resolutions, minor cropping, colour differences, etc) as very similar, and substantively different images as very different. (While the whole system is very complex, that much is conceptually pretty simple and logical, right?).

Yes, that seems logical. This, then, confirms that multiple images can share a hash if the AI judges them not substantively different, just somewhat altered. Since the AI is not a person, a photo very similar to a CSAM photo could fool it. I'm not saying there is a better solution than this, since obviously countless people cannot go through countless photos of other countless people.
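
For the curious, the "perceptually identical images get matching hashes" idea quoted above can be illustrated with the simplest classic perceptual hash, the average hash. This is emphatically not NeuralHash, just a toy showing why a brightened copy still matches while a different image doesn't:

```python
# Toy "average hash" to illustrate a perceptual hash: images that look
# alike produce identical or close hashes (small Hamming distance),
# while different images land far apart. NOT NeuralHash.

def average_hash(pixels):
    # pixels: flat list of grayscale values for a tiny, pre-scaled image.
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: brighter than average -> 1, else 0.
    return [1 if p > avg else 0 for p in pixels]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

original   = [10, 200, 30, 180, 20, 190, 40, 170, 15]
brightened = [p + 25 for p in original]        # same image, just brighter
different  = [120, 10, 250, 5, 240, 15, 230, 8, 60]

print(hamming(average_hash(original), average_hash(brightened)))  # 0
print(hamming(average_hash(original), average_hash(different)))   # 8 of 9 bits
```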

Okay, but I know you're all reluctant to trust what Apple says. So let me ask you this… Why would Apple do it Rob Braxman's way? It's more complex. It requires the storing of multiple hashes. Heck, it would require that Apple have hoodwinked the NCMEC into storing hash tables multitudes bigger than claimed (or else they're in on it too!), or, as Rob actually claims (around 16.30), Apple has full possession of the database of CSAM images!

Hell if I know, really. I hope we find out, but I don't expect we will. And about the last sentence, let me turn this around on you: can you prove that Apple does not have full possession of the CSAM database?

None of this makes sense, unless… Apple's real game is to build some kind of nefarious Big Brother surveillance system, the likes of which the world has never seen, and it starts right here folks! Well… they could be… I suppose. They could have been working on it for years.

Like I said, I didn't really bother with the more negative part of the video, so I don't know. Apple may be doing it, and maybe not. What I can say is what I know, and this is obvious to anyone who reads history. Power will always want more power, and those in power will usually use the tools at their disposal to advance their interests. This isn't conspiracy talk, it's just realistic. Whether Apple is doing something nefarious, I have no idea.

Another thing that is clear is that we approach this in fundamentally different ways. You trust Apple (because why not, right?), even though no one in this world (literally) has more interest in misleading you than Apple, if in fact Apple were doing something nefarious. This is the kind of thing that would cry out for a healthy dose of skepticism. You do this without evidence to support Apple's benevolence, while at the same time dismissing opposing views for lack of evidence. I, on the other hand, approach this with a liberal dose of skepticism and base my negative views mostly on the fact that new tech like this doesn't get rolled back or scaled back; it only gets improved and expanded. Many people have already said as much, which includes the source you trust the most. This doesn't make me burst with confidence and enthusiasm for what comes next, and I don't think there's anything tin-foil or unhealthy about that. It's not like I want to be in this situation; I love my Apple products.

As you may have noticed, I accidentally learned how to break up quoted parts.
 
It is a perfectly legit demand that people want their own private phones to be private. Private from software, scanning, and searches, whatever the "good" intention might be. If this red line needs to be crossed, there is a legal system to provide a search warrant. Otherwise everybody has to be regarded as innocent, and that innocence must not have to be proven to Apple over and over.

It's no argument to claim other parties have violated this right before. This is about the new Apple software scanning private devices before they send pictures to Apple.

Apple has for many years now scanned every URL you visited, compared it to a list provided by a third party, and under certain circumstances reported it to that third party.

I can't remember seeing anyone reacting negatively to this.
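
For reference, URL checks of that kind are typically done with hash prefixes so the list provider never sees the full URL. A rough sketch of that general shape; the real Safe Browsing protocol also canonicalizes URLs and has list-update mechanics I'm glossing over, and the example URLs are made up:

```python
# Rough sketch of a hash-prefix URL check (Safe Browsing-style). The
# browser keeps only short hash prefixes of known-bad URLs; a visited
# URL is hashed locally, and only on a prefix hit would the provider be
# asked for the full hashes matching that prefix.
import hashlib

def url_prefix(url, n_bytes=4):
    return hashlib.sha256(url.encode()).digest()[:n_bytes]

# Hypothetical locally cached prefix list (downloaded from the provider):
bad_prefixes = {url_prefix("http://evil.example/phish")}

def check(url):
    if url_prefix(url) in bad_prefixes:
        return "prefix hit: confirm against provider's full hash list"
    return "ok, URL never leaves the device"

print(check("http://evil.example/phish"))
print(check("https://www.apple.com/"))
```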
 
Ok, let us pretend I'm Emperor of the World for Life (which would only be right and proper if the universe made sense)

There ya go. Hope this helps :D
Love it! Thanks for being a good sport and playing along. 🙂

Perhaps someone can turn that imaginary neighbourhood into an AI simulation, so we can test out your system against others and see how the SIMs all get along? Could be a very useful political training tool! (Much safer than electing another demagogue anyway and waiting to see what happens. 😬) Okay, that’s enough from me! Cheers again.
 
Good video.
Some info in here I am unsure of (requires further digging); however, it possibly answers a nagging gap I kept running into about how this hash can recognize changes. This feeds into the Messages parental check for kids.

Interesting.

This person is incorrect about some important stuff.

  • Apple doesn't do facial scanning on their servers; it's done on device and synced through their servers. In the beginning, even this was not allowed.
  • Apple doesn't scan iCloud Photo Library on their server even though they have the ability to do it.
  • He completely misunderstood the NeuralHash algorithm, claiming it has some AI doing recognition based on the faces and nudity in the photo. He believes NeuralHash is good at detecting photos from the same category, like pornography, faces, food, guns, protest activity, etc.
The photo AI he is describing is something which is already in the Photos app and something similar coming to Messages.
 
To me it is odd that you see on device as more private. Especially as this solution bypasses device encryption.

Do me a favor and go back and read the EU doc I linked in post #623.

It doesn't bypass device encryption. When you logged into the device, you unlocked the key used for decryption.

The operating system has the ability to decrypt almost everything on the iPhone when you're logged in, unless you encrypted it yourself using your own tool.
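
A toy model of that key hierarchy, in case it helps: file keys are wrapped by a key that only exists after the user authenticates, which is why "logged in" means the OS can read the data. This uses a deliberately insecure XOR cipher for brevity; real iOS Data Protection uses hardware-entangled AES and a Secure Enclave key that never leaves the chip:

```python
# Toy key hierarchy: per-file keys are "wrapped" by an unlock-time key.
# Deliberately NOT secure -- illustration of the structure only.
import hashlib, itertools, os

def toy_cipher(key, data):
    # XOR with a hash-derived keystream (encrypt and decrypt are the same op).
    stream = itertools.chain.from_iterable(
        hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        for i in itertools.count())
    return bytes(b ^ s for b, s in zip(data, stream))

passcode = b"123456"
salt = os.urandom(16)
# Derived at unlock time; real devices also mix in a per-device hardware key.
unlock_key = hashlib.pbkdf2_hmac("sha256", passcode, salt, 100_000)

file_key = os.urandom(32)                       # random per-file key
wrapped_key = toy_cipher(unlock_key, file_key)  # stored wrapped on disk
ciphertext = toy_cipher(file_key, b"my photo bytes")

# While unlocked, the OS holds unlock_key, so it can unwrap and decrypt:
recovered = toy_cipher(toy_cipher(unlock_key, wrapped_key), ciphertext)
print(recovered)  # b'my photo bytes'
```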
 
You do realize that 1-in-1-trillion number is based on 30 consecutive false positives. The fact they need that many tells me there is some underlying accuracy fault in the hash method chosen. Simple math.
Wonder if they are also accounting for false negatives? Would love to see the testing results on match accuracy.

Even cryptographic hashes have a non-zero probability of being broken. All hashes (except perfect hash functions) have a small probability of collisions: false positives, in this situation.

Apple has used 1 false positive per 1 million photos scanned and a threshold of 30 to arrive at the 1-in-1-trillion-accounts-per-year figure.

A real world test showed 1 false positive for every 33 million photos scanned.
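
The arithmetic behind that is easy to sanity-check. A minimal sketch: the 1-in-1,000,000 rate and the threshold of 30 are the figures above, while the library size per account is a hypothetical assumption of mine, not Apple's published figure:

```python
# Sanity-checking the 1-in-a-trillion shape: modelled per-photo false
# positive rate of 1 in 1,000,000, threshold of 30 matches, and a
# HYPOTHETICAL library of 1,000,000 photos per account per year.
from math import exp, factorial

p = 1 / 1_000_000    # modelled per-photo false positive rate (quoted above)
threshold = 30       # matches required before the account is flagged
n = 1_000_000        # hypothetical photos per account per year (assumption)

lam = n * p          # expected false matches per account per year = 1.0
# Poisson tail: P(at least `threshold` false matches in a year).
p_flagged = sum(exp(-lam) * lam**k / factorial(k)
                for k in range(threshold, threshold + 100))
print(f"P(innocent account crosses the threshold) ~= {p_flagged:.1e}")
# ~1.4e-33 with these assumptions, far below 1 in a trillion (1e-12)
```

Note that the modelled 1-in-a-million rate is already some 30 times worse than the measured 1-in-33-million, and the threshold still pushes the per-account odds far below 1 in a trillion. It says nothing about false negatives (missed matches), though, which is what the testing-results question above is really after; that would need Apple's internal evaluation data.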
 