This person is incorrect about some important stuff.

  • Apple doesn't do facial scanning on their servers; it's done on device and synced through their servers. In the beginning, even this wasn't allowed.
  • Apple doesn't scan iCloud Photo Library on their servers, even though they have the ability to do it.
  • He completely misunderstood the NeuralHash algorithm by claiming it has some AI doing recognition based on faces and nudity in the photo. He believes NeuralHash is good at detecting photos from the same category, like pornography, faces, food, guns, protest activity, etc.
The photo AI he is describing is something that is already in the Photos app, and something similar is coming to Messages.

The first two I knew, but on your third point he kind of lost me. The only way it works is if it's a mix of hash and AI, I think. I'm learning as I go; this stuff is not in my bailiwick.
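For what it's worth, here's a minimal sketch (plain SHA-256 on stand-in bytes, nothing Apple-specific) of why an ordinary hash alone can't do the matching, which is where the "AI" part has to come in:

```python
import hashlib

original = bytes(1000)           # stand-in for an image's raw bytes
tweaked = bytes(999) + b"\x01"   # the "same" image with one byte changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())
# The two digests share nothing (avalanche effect), so a conventional hash
# only matches bit-identical files. Matching re-saved or resized copies
# requires hashing a descriptor of the image's content instead -- the
# "neural" part of NeuralHash.
```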
 
It's called Safari Safe Browsing and has been in Safari since around 2008 I believe.

I thought that was a Google feature Apple leveraged. Then later Apple ran the check through their servers to limit data gathering by Google. But my understanding was that this is a Google service.
 
It doesn't bypass device encryption. You logged into the device, and in doing so you unlocked the key used for decryption.

The operating system has the ability to decrypt almost everything on the iPhone when you're logged in, unless you encrypted it yourself using your own tool.

Splitting hairs. As far as I know, for the hash and check to work, the data being scanned has to be in a "readable" format, which means unencrypted.
For this feature, I fail to see why Apple can't do this between your device and their cloud. Kind of a middle-stop check.
 
Even cryptographic hashes have a non-zero probability of being broken. All hashes (except perfect hash functions) have a small probability of collisions, which are false positives in this situation.

Apple has used 1 false positive for every 1 million photos scanned and a threshold of 30 matches to get to the 1 in 1 trillion accounts per year figure.

A real world test showed 1 false positive for every 33 million photos scanned.
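As a rough sanity check on those numbers, here's a back-of-the-envelope Poisson calculation (my own sketch, assuming independent false positives at the stated 1-in-1-million per-photo rate):

```python
from math import exp, factorial

def p_flagged(n_photos: int, p_fp: float = 1e-6, threshold: int = 30) -> float:
    """Poisson approximation of the chance one account accumulates at
    least `threshold` independent false positives."""
    lam = n_photos * p_fp  # expected number of false positives
    # Tail sum P(X >= threshold); terms past k ~ 130 are negligible here.
    return sum(exp(-lam) * lam**k / factorial(k)
               for k in range(threshold, threshold + 100))

# Even a heavy user with 100,000 photos in the library:
print(p_flagged(100_000))  # roughly 3e-63
```

The independence assumption is doing a lot of work there, of course.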

Okay. Nice.
Still, it would be better to see or read the actual testing protocol and scripts instead of the summary headline.
Better to understand what they did instead of just "We did it."
 
A real world test showed 1 false positive for every 33 million photos scanned.
Exactly, and that's not confined to just 1 account. There would have to be 30 false positives in one single account to get flagged. This alone makes me 100% confident that the photos I upload to iCloud will not be seen by another person unless I share them online.

That's good enough for me.
 
It's called Safari Safe Browsing and has been in Safari since around 2008 I believe.
Safari warns you if the site you’re visiting is a suspected phishing website. Phishing is a fraudulent attempt to steal your personal information, such as user names, passwords, and other account information. A fraudulent website masquerades as a legitimate one, such as a bank, financial institution, or email service provider. Before you visit a website, Safari may send information calculated from the website address to Google Safe Browsing to check if the website is fraudulent. If you have China mainland set as your region in the Language & Region pane of System Preferences, Safari may also use Tencent Safe Browsing to do this check. The actual website address is never shared with the safe browsing provider. These safe browsing providers may also log your IP address when information is sent to them.

Safari Safe Browsing saw significant changes in 2019.
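To make the "information calculated from the website address" part concrete, here's a toy sketch of a hash-prefix lookup in the Safe Browsing style (my own simplification, with a hypothetical bad URL; not Google's actual protocol):

```python
import hashlib

def url_hash(url: str) -> bytes:
    return hashlib.sha256(url.encode()).digest()

# The client keeps a local list of short hash prefixes of known-bad URLs.
local_prefixes = {url_hash("http://evil.example/phish")[:4]}  # hypothetical entry

def check(url: str) -> str:
    prefix = url_hash(url)[:4]
    if prefix not in local_prefixes:
        return "no prefix match; nothing is sent anywhere"
    # On a match, the client asks the provider for all full hashes sharing
    # this prefix and compares locally -- the URL itself never leaves.
    return f"prefix {prefix.hex()} matches; fetch full hashes from provider"

print(check("http://evil.example/phish"))
print(check("https://www.apple.com"))
```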

 
Can he have evidence? He doesn't have the system to test, so he's drawing conclusions from the available data. You yourself asked me how you could possibly know how Apple arrived at the magic number 30. This guy, whether right or wrong, applied his expertise in the field to the info he can find on this tech. It doesn't seem like he can do better than that at the moment, and this is another hypocritical aspect of your post: dismissing the assumptions and conclusions of someone who should actually have at least some knowledge of this stuff. Mind you, I'm not arguing he's right, as I have no idea; I'm arguing that he has the references, this being his bread and butter.

He is a perfect example of "a little knowledge is a dangerous thing." He is wrong and he doesn't understand Apple's technical description of NeuralHash.

Where he goes wrong is that he tried to infer how NeuralHash works from what he believes to be its end result plus his knowledge of more conventional hashing functions.

This leads him to believe

NeuralHash = photo AI + hash

He describes photo AI to be similar to the scanning the Photo app is already doing.

If he were right, the CSAM detection system could be used for everything people are worried about: finding people smoking pot, participating in protests, owning (illegal) guns, posing in MAGA caps, having non-heterosexual preferences, etc.

Fortunately,

NeuralHash = 1) an algorithm for generating multidimensional floating-point descriptors + 2) a convolutional neural network + 3) hyperplane locality-sensitive hashing

The goal of 1) isn't to find similar images, but to

A. find images which are the same as (or derivatives of) images in the NCMEC database,
B. while at the same time being extremely bad at finding similar images,
C. and being extremely bad at finding dissimilar images.

These are competing goals. That's why this algorithm is "sent through" a neural network (2) to optimise for these three tasks by testing many variations of the algorithm (1) on millions of non-CSAM images.

It is because of B that this system is so inefficient for police and governments to misuse in many circumstances.

(It's important to my argument that you understand the difference between "derivative" and "similar" images.)
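To make step 3 concrete, here's a toy sketch of hyperplane LSH (made-up dimensions, not Apple's parameters): each random hyperplane contributes one bit, so near-identical descriptors (derivatives) land on the same bits while unrelated ones scatter:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_bits(descriptor: np.ndarray, planes: np.ndarray) -> str:
    """One bit per hyperplane: which side of the plane the descriptor is on."""
    return ''.join('1' if s else '0' for s in (planes @ descriptor) >= 0)

dim, n_bits = 128, 96                     # toy sizes
planes = rng.standard_normal((n_bits, dim))

a = rng.standard_normal(dim)              # descriptor of a database image
b = a + 0.001 * rng.standard_normal(dim)  # descriptor of a derivative of it
c = rng.standard_normal(dim)              # descriptor of an unrelated image

print(lsh_bits(a, planes) == lsh_bits(b, planes))  # almost always True
print(lsh_bits(a, planes) == lsh_bits(c, planes))  # essentially always False
```

The neural network's job in step 2 is to make derivatives of the same photo produce nearly identical descriptors while merely similar photos don't, which is exactly what properties A and B demand.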
 
He is a perfect example of "a little knowledge is a dangerous thing." He is wrong and he doesn't understand Apple's technical description of NeuralHash.
…
Apple even said themselves that there's no machine learning or training in the system.
 
A company that is willing to have its products manufactured by children in China does not care about my privacy or my wellbeing in the true sense of the word.
You know, when I said the same thing, I got a nastygram from a moderator saying that it was a "political" post...
 
Splitting hairs. As far as I know, for the hash and check to work, the data being scanned has to be in a "readable" format, which means unencrypted.
For this feature, I fail to see why Apple can't do this between your device and their cloud. Kind of a middle-stop check.

It's not splitting hairs. When you log into a device (iPhone, Mac, Windows PC), you unlock decryption keys which are necessary to decrypt encrypted content.

If you didn't, the operating system wouldn't be able to read the data in any meaningful way. Encryption isn't broken; this is working as designed. When you log in, you are implicitly giving the operating system access to your decryption keys.

Apple can implement this scanning on device, on third-party servers, or on their own servers. They chose the first.
If you implement it on device, there is no encryption to worry about, since the user has the authority to decrypt every file they own. The CSAM detection system runs with the authority of the user when it comes to accessing the data.

If implemented anywhere else, the data would either have to be unencrypted or not end-to-end encrypted.
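A toy model of the "logging in unlocks the keys" point (pure illustration; the XOR "wrap" is not secure and this is nothing like iOS's real Data Protection hierarchy):

```python
import hashlib, os

# A random per-file key encrypts the file; the file key is itself "wrapped"
# by a key derived from your passcode. Logging in derives the wrapping key,
# so the OS -- acting with your authority -- can read everything.
def derive_key(passcode: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = os.urandom(16)
file_key = os.urandom(32)

wrapping_key = derive_key("123456", salt)  # happens at login
wrapped_file_key = bytes(a ^ b for a, b in zip(file_key, wrapping_key))

# Any on-device scan runs after login, so it can unwrap keys the same way
# the Photos app does:
assert bytes(a ^ b for a, b in zip(wrapped_file_key, derive_key("123456", salt))) == file_key
```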
 
It's not splitting hairs. When you log into a device (iPhone, Mac, Windows PC), you unlock decryption keys which are necessary to decrypt encrypted content.
…

and Apple has the keys for the rest of the process. I still fail to see WHY Apple is doing it on device.
… and Apple isn’t doing E2EE.
 
Exactly, and that's not confined to just 1 account. There would have to be 30 false positives in one single account to get flagged. To me, this alone makes me 100% confident that the photos I upload to iCloud will not be seen by another person unless I share it online.

That's good enough for me.
Apple does, and will, hand over their encryption keys to the FBI.
Source: https://www.androidauthority.com/fbi-document-messaging-apps-3069511/
 
Wouldn't they need a reason to get into your iCloud account?
The image in the post you quoted had words about “with a search warrant”. Those need to be approved by a judge to be valid, though I’m sure the FBI has judges in their back pocket. Still, you’d have to become interesting to the FBI first.
 
Apple does, and will, hand over their encryption keys to the FBI.
Source: https://www.androidauthority.com/fbi-document-messaging-apps-3069511/
Apple has always said that if it's on iCloud, they will comply with any warrant for the information... They won't assist in extracting data from the phone, for now that is. Who knows what laws might be passed in the future. I have a feeling one day they won't have any choice but to give complete access.
 
The image in the post you quoted had words about “with a search warrant”. Those need to be approved by a judge to be valid, though I’m sure the FBI has judges in their back pocket. Still, you’d have to become interesting to the FBI first.
It's worse: the FISA courts are required to rubber-stamp warrants if the agency asking for one simply says it's "related" to terrorism. The judge can't even ask clarifying questions.
 
Has Apple officially updated its position on the whole issue? My last count is that they just aren't activating the on-device scanning feature for now. So will they just do it later? Will they reevaluate the entire concept? Is there a schedule, a hearing, or maybe expert advice coming?
 
Has Apple officially updated its position on the whole issue? My last count is that they just aren't activating the on-device scanning feature for now. So will they just do it later? Will they reevaluate the entire concept? Is there a schedule, a hearing, or maybe expert advice coming?

The last update from Apple was in September 2021:
Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.

Source: https://techcrunch.com/2021/09/03/apple-csam-detection-delayed/
 