So many elitists on here insist that if you don't like this you must not understand it, must be ignorant, must not have read up on it. They are the ones who don't understand that people can know exactly how this works, maybe more than they do, and still disagree with it.
I'm sure a lot of people here do understand what is going on, but you can't deny that there are also a lot of people here who don't. In this small thread alone we have people asking if album covers will get them in trouble, denying that they authorized Apple's scanning, and people who don't believe that the scanning is disabled if you don't use iCloud.
 
So there's about 1.4 trillion new photographs taken per year now on smartphones. That means that one poor sot per year could be accused incorrectly of harboring it?
Again, no one is going to be reported without a human confirming the results. If you don't post CSAM, you won't be reported to NCMEC. Apple isn't reporting anyone to the police or FBI. That isn't their job.
 
Can someone clarify how hashing an image is an invasion of privacy? I mean, password hashes are stored, but websites don't have our actual passwords. Isn't this basically just comparing encrypted versions of images, the way they do for passwords? I get that they are doing this without user consent, but from a user's perspective what's the downside, if we disregard the "collisions" the article talks about?

I genuinely want to understand this better.
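For context on why the password analogy only goes so far: a password hash is cryptographic, so even a tiny change to the input produces a completely different hash, while image matching relies on a perceptual hash that is designed to survive small edits. Here is a minimal sketch of the difference using a toy "average hash", not Apple's NeuralHash; the function names are illustrative only:

```python
import hashlib
import numpy as np

def average_hash(img: np.ndarray, size: int = 8) -> str:
    """Toy perceptual hash: block-average down to size x size, then emit one
    bit per block indicating whether it is brighter than the overall mean."""
    h, w = img.shape
    blocks = img[: h - h % size, : w - w % size]
    blocks = blocks.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).astype(int).flatten()
    return "".join(map(str, bits))

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, (64, 64)).astype(float)  # stand-in "photo"
edited = photo + 5.0                                   # slightly brightened copy

# Cryptographic hash (what password storage uses): any edit changes everything.
print(hashlib.sha256(photo.tobytes()).hexdigest()[:16])
print(hashlib.sha256(edited.tobytes()).hexdigest()[:16])

# Perceptual hash: the edited copy still produces the same bit pattern.
print(average_hash(photo) == average_hash(edited))  # True
```

So the system isn't comparing "encrypted versions" of your photos the way a site compares password hashes; it's comparing fuzzy fingerprints that deliberately tolerate edits, which is also why collisions between unrelated images are possible at all.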
 
Can someone clarify how hashing an image is an invasion of privacy? I mean, password hashes are stored, but websites don't have our actual passwords. Isn't this basically just comparing encrypted versions of images, the way they do for passwords? I get that they are doing this without user consent, but from a user's perspective what's the downside, if we disregard the "collisions" the article talks about?

I genuinely want to understand this better.
I don't think the actual scanning of hashes is what has most people upset. It's that if a machine learning algorithm detects what it reads as CSAM, the device will automatically inform Apple of the detection. This is a pretty far cry from "what happens on your iPhone stays on your iPhone." On the other hand, I doubt very much that Apple feels they have any good choices here. They can either preserve some privacy for the vast majority of people on iCloud by scanning on device, or they can scan every photo uploaded on their own servers, which eliminates any privacy and rules out a future version of iCloud with end-to-end encrypted storage.
 
Can someone clarify how hashing an image is an invasion of privacy? I mean, password hashes are stored, but websites don't have our actual passwords. Isn't this basically just comparing encrypted versions of images, the way they do for passwords? I get that they are doing this without user consent, but from a user's perspective what's the downside, if we disregard the "collisions" the article talks about?

I genuinely want to understand this better.
Most agree that finding child abuse material is not the problem. The problem is what comes next: this tech can lead to all kinds of searching inside your phone, like for political content in countries where you can get into deep trouble for venting your opinion.
 
I'm sure a lot of people here do understand what is going on, but you can't deny that there are also a lot of people here who don't. In this small thread alone we have people asking if album covers will get them in trouble, denying that they authorized Apple's scanning, and people who don't believe that the scanning is disabled if you don't use iCloud.
Then again, the code is installed and has access to your photo library. Whether or not Apple turns off the function when you turn off iCloud, it's a real possibility that someone who hacks your phone could turn it back on and use it for something Apple didn't intend, as a zero-day exploit. It's hard enough to keep things secure without them adding crap like this.
 
Wait - I’m confused by this. Does this mean that everyone’s iCloud is going to be scanned without users’ authorization in the name of child welfare??

While I am sure people may agree with this, it seems like one step away from doctors/dentists submitting DNA samples of *every* patient because it is in the public interest.

This program seems just a small moral slip away from being an invasion of privacy on a monumental scale. Given what Snowden revealed, the US government has a huge thirst for data collection like this. It’s a short hop to scan for compromising photos of your political rivals, yes?

But, but, we *must* think of the children! /S
 
Then again, the code is installed and has access to your photo library. Whether or not Apple turns off the function when you turn off iCloud, it's a real possibility that someone who hacks your phone could turn it back on and use it for something Apple didn't intend, as a zero-day exploit. It's hard enough to keep things secure without them adding crap like this.
But there are already algorithms on your iPhone and Mac that have access to your photo library. Why is this one any more susceptible to attack than the existing ones that do things like tag your photos and match faces to names?

Edit: Which BTW don't seem to be able to be turned off.
 
With hashing you will always have a chance of collisions because it is inherently lossy by design.
And that's why Apple says there's a one in a trillion chance an account will be accidentally flagged and not that it's impossible to be accidentally flagged.
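A quick way to see why collisions are baked in: any hash maps a huge input space onto a small, fixed set of outputs, so by the pigeonhole principle distinct inputs must eventually share a value, and by the birthday bound you expect the first accidental collision after roughly the square root of the number of possible hashes. A sketch using SHA-256 truncated to 16 bits purely so the effect shows up quickly (the truncation is for illustration, not anything Apple does):

```python
import hashlib

# 16-bit hash space: 2**16 = 65,536 values, so the birthday bound predicts a
# collision after roughly sqrt(2**16) = 256 random inputs.
seen = {}
for i in range(100_000):
    h = hashlib.sha256(f"input-{i}".encode()).hexdigest()[:4]  # 4 hex chars = 16 bits
    if h in seen:
        print(f"'input-{seen[h]}' and 'input-{i}' collide on {h} after {i + 1} tries")
        break
    seen[h] = i
```

A real perceptual hash has a far larger output space, but because it also intentionally maps similar-looking images to the same value, the chance of an accidental match is small rather than zero, which is exactly why Apple quotes a probability instead of a guarantee.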

Yes. Every photo on your iPhone will be converted to a "hash" by your phone before uploading to iCloud. The hash will be sent along with the photo. The hashes (not the actual images) will be scanned to see if they match a known child abuse photo hash, and if you have more than approximately 30 such images in your iCloud collection, a human being at Apple will be able to view just those images to verify they are indeed images that match known child abuse imagery. If so, law enforcement will be notified.

Like you said, the fear is Apple could someday use the same technology to search for other forms of imagery and could be coerced to do so by foreign governments. They promise they won't, but the backdoor is there now.
No.
The hashes to scan for are built into iOS.
If a photo is fingerprinted and does not match a known hash then it's ignored.
ONLY if a hash fingerprint match is made is a voucher with the hash and matched photo created, and that voucher doesn't leave your iPhone, and no one can see its contents (or, I think, even know that it exists) until you get to 30 vouchers. Then the entire lot is sent to Apple for validation by automated systems, and if that system concurs it is sent to an Apple validation team. ONLY if that team sees that the images actually are CSAM is the content reported to any form of government.
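A simplified sketch of that flow as described above, with the real cryptography (blinded hash comparison and threshold secret sharing, which keep both the device and Apple from learning anything about matches below the threshold) left out. Every name here is illustrative, not Apple's actual API:

```python
import hashlib
from dataclasses import dataclass

THRESHOLD = 30  # approximate match count before anything becomes reviewable

@dataclass
class SafetyVoucher:
    photo_id: str
    hash_value: str  # in the real system this payload is encrypted

def neural_hash(photo_bytes: bytes) -> str:
    # Stand-in for the on-device perceptual hash. A real perceptual hash
    # tolerates small edits; this cryptographic stand-in does not.
    return hashlib.sha256(photo_bytes).hexdigest()[:24]

def scan_before_upload(photos: dict[str, bytes], known_hashes: set[str]) -> list[SafetyVoucher]:
    vouchers = []
    for photo_id, data in photos.items():
        fingerprint = neural_hash(data)
        if fingerprint not in known_hashes:
            continue  # non-matching photos are ignored entirely
        vouchers.append(SafetyVoucher(photo_id, fingerprint))
    # Nothing is readable until the threshold is crossed; only then does the
    # batch go to Apple's automated check and then to human reviewers.
    if len(vouchers) >= THRESHOLD:
        submit_for_review(vouchers)
    return vouchers

def submit_for_review(vouchers: list[SafetyVoucher]) -> None:
    print(f"{len(vouchers)} matched vouchers released for review")
```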
 
But there are already algorithms on your iPhone and Mac that have access to your photo library. Why is this one any more susceptible to attack than the existing ones that do things like tag your photos and match faces to names?

I get your argument, I mean, there are other undiscovered zero day exploits on the phone so why fix the ones that have been found, right?
 
I get your argument, I mean, there are other undiscovered zero day exploits on the phone so why fix the ones that have been found, right?
I'm not sure I understand what you are saying. Are you saying someone found a zero-day in the CSAM detection? That isn't true from anything that I've read. The iPhoto scanning for names and tagging has been there for years and I haven't heard of it being targeted by hackers. Maybe I missed that. But what zero-days are you talking about?
 
I'm not sure I understand what you are saying. Are you saying someone found a zero-day in the CSAM detection? That isn't true from anything that I've read. The iPhoto scanning for names and tagging has been there for years and I haven't heard of it being targeted by hackers. Maybe I missed that. But what zero-days are you talking about?
I didn't say that, but it's clear from your history that you aren't worth arguing with.
 
Apple: "Uhhh guys, this is totally not the finished algorithm! Believe us!"
I do believe them. Because their hash isn't generated from pixel values, but rather from image features. Expect a slightly different way of generating those features with every iOS update. That doesn't make it any better. If you want to see how vulnerable that is, do a Google Scholar search for adversarial attacks. It's the whole reason we have a research field for safe AI.
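To make the adversarial-attack point concrete: if the hash is the sign pattern of some differentiable feature extractor, you can run gradient descent on an unrelated image until its bits match a target. The sketch below uses a random linear projection as a toy stand-in for the feature extractor; it is not NeuralHash, just an illustration of the kind of attack those papers describe:

```python
import numpy as np

rng = np.random.default_rng(1)
D, B = 64 * 64, 96                 # flattened image size, number of hash bits
W = rng.normal(size=(B, D))        # toy "feature extractor": random projection

def feature_hash(x: np.ndarray) -> np.ndarray:
    """Toy feature-based hash: the sign pattern of a linear projection."""
    return (W @ x > 0).astype(int)

target = rng.uniform(0, 1, D)              # image whose hash we want to hit
start = rng.uniform(0, 1, D)               # unrelated image we will perturb
target_bits = feature_hash(target)
signs = np.where(target_bits == 1, 1.0, -1.0)

attack = start.copy()
for _ in range(2000):
    margins = signs * (W @ attack)         # positive margin = bit already matches
    violated = margins < 1.0               # bits that don't (safely) match yet
    if not violated.any():
        break
    grad = -(W.T @ (signs * violated))     # hinge-loss gradient w.r.t. the image
    attack = np.clip(attack - 0.001 * grad, 0.0, 1.0)

print("hash collision:", np.array_equal(feature_hash(attack), target_bits))
print("mean pixel change:", round(float(np.abs(attack - start).mean()), 3))
```

The takeaway is just that once the matching model is known or well approximated, producing colliding images becomes an optimization problem rather than a stroke of luck.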
 
Well Apple doesn't want your illegal child porn on their servers so if you choose to use their iCloud photo library they will make sure what you are sending them doesn't contain that kind of material.

Don't like it? Don't use iCloud.
“Don't use iCloud” is such a flip answer that sounds good only if you ignore that this is Apple. “Don't like other people's AirTags using your phone, or don't trust that AirTag data? Turn off Find My.” Of course that also turns off all sorts of other important features.

So a person is phone-only, or perhaps phone-and-iPad only. And since Apple protects us, it is difficult to find alternative integrated photo systems.
 


Developer Asuhariet Yvgar this morning said that he had reverse-engineered the NeuralHash algorithm that Apple is using to detect Child Sexual Abuse Materials (CSAM) in iCloud Photos, posting evidence on GitHub and details on Reddit.


Yvgar said that he reverse-engineered the NeuralHash algorithm from iOS 14.3, where the code was hidden, and he rebuilt it in Python. After he uploaded his findings, another user was able to create a collision, an issue where two non-matching images share the same hash. Security researchers have warned about this possibility because the potential for collisions could allow the CSAM system to be exploited.

In a statement to Motherboard, Apple said that the version of the NeuralHash that Yvgar reverse-engineered is not the same as the final implementation that will be used with the CSAM system. Apple also said that it made the algorithm publicly available for security researchers to verify, but there is a second private server-side algorithm that verifies a CSAM match after the threshold is exceeded, along with human verification.

Matthew Green, who teaches cryptography at Johns Hopkins University and who has been a vocal critic of Apple's CSAM system, told Motherboard that if collisions "exist for this function," then he expects "they'll exist in the system Apple eventually activates."

"Of course, it's possible that they will re-spin the hash function before they deploy," he said. "But as a proof of concept, this is definitely valid," he said of the information shared on GitHub.

Because of the human element, though, another researcher, Nicholas Weaver, told Motherboard that all people can do with manipulating non-CSAM hashes into CSAM is "annoy Apple's response team with garbage images until they implement a filter" to get rid of false positives. Actually fooling Apple's system would also require access to the hashes provided by NCMEC and it would require the production of over 30 colliding images, with the end result not fooling the human oversight.

Apple is using its NeuralHash system to match a database of image hashes provided by agencies like the National Center for Missing & Exploited Children (NCMEC) to images on user devices to search for CSAM. The system is designed to produce exact matches and Apple says there's a one in a trillion chance that an iCloud account can be accidentally flagged.

Apple is planning to implement the NeuralHash CSAM system in iOS and iPadOS 15 as part of a suite of child safety features, and it has been a hugely controversial decision with Apple receiving criticism from customers and privacy advocates. Apple has been attempting to reassure customers and security researchers about the implementation of the system with additional documentation and executive interviews.

Article Link: Apple Says NeuralHash Tech Impacted by 'Hash Collisions' Is Not the Version Used for CSAM Detection
The pro-user approach would be using this hash simply to tell a user that the photo is ineligible for iCloud upload. The anti-user approach is secretly reviewing the pictures and sending them to the cops.
 
Apple also said that it made the algorithm publicly available for security researchers to verify,

Uh. It did? Where?

but there is a second private server-side algorithm that verifies a CSAM match after the threshold is exceeded, along with human verification.

I still have only skimmed their PDF, but this seems like the first time I've heard there's additional matching, and the "Inner-Layer Unwrapping of Vouchers in iCloud" section doesn't seem to mention it?

Matthew Green, who teaches cryptography at Johns Hopkins University and who has been a vocal critic of Apple's CSAM system, told Motherboard that if collisions "exist for this function," then he expects "they'll exist in the system Apple eventually activates."

I imagine this is a simplification for journalism's sake, but surely that depends on variables we don't know. Is the final version of their hash algorithm the same with more bits? Or is it an altogether different algorithm?

 
I'm not sure I understand what you are saying. Are you saying someone found a zero-day in the CSAM detection? That isn't true from anything that I've read. The iPhoto scanning for names and tagging has been there for years and I haven't heard of it being targeted by hackers. Maybe I missed that. But what zero-days are you talking about?

They're saying that just because other pathways exist doesn't mean this one particular pathway can't be criticized.
 
They're saying that just because other pathways exist doesn't mean this one particular pathway can't be criticized.
I will always criticize Apple for not catching more zero-days. But I just don't understand why this algorithm running on your devices is seen as more susceptible to attack than any other. If anything, it is probably less likely to be hacked given the amount of security and privacy scrutiny it is receiving.
 
Wait - I’m confused by this. Does this mean that everyone’s iCloud is going to be scanned without users’ authorization in the name of child welfare??

While I am sure people may agree with this, it seems like one step away from doctors/dentists submitting DNA samples of *every* patient because it is in the public interest.

This program seems just a small moral slip away from being an invasion of privacy on a monumental scale. Given what Snowden revealed, the US government has a huge thirst for data collection like this. It’s a short hop to scan for compromising photos of your political rivals, yes?
It's both better and worse than that. Apple's competitors scan the hashes of documents posted in the cloud, so there would be nothing particularly new if that were what was happening here. Apple is taking this a step further by using the device itself to perform the scan. In that sense, it's worse than what you describe.

However, it isn't the type of scanning your DNA analogy suggests, because everything remains encrypted (and therefore private, and better than what you describe) except in the case of a hash-match above the threshold point.
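Rough numbers help show what the threshold buys. Treating per-image false matches as independent with some rate p (Apple has not published p, so the values below are hypothetical), the number of false matches in a library of N photos is approximately Poisson with mean pN, and the chance of reaching the roughly 30-match threshold collapses very quickly as p drops:

```python
from math import exp, lgamma, log

def poisson_tail(lam: float, threshold: int, terms: int = 500) -> float:
    """P(X >= threshold) for X ~ Poisson(lam), summed directly in the upper
    tail to avoid the 1 - CDF cancellation for very small probabilities."""
    log_pmf = -lam + threshold * log(lam) - lgamma(threshold + 1)
    pmf = exp(log_pmf)
    total = 0.0
    for k in range(threshold, threshold + terms):
        total += pmf
        pmf *= lam / (k + 1)   # Poisson pmf recurrence
    return total

N = 100_000  # hypothetical photo library size
for p in (1e-4, 1e-5, 1e-6):  # hypothetical per-image false-match rates
    lam = p * N
    print(f"p={p:.0e}  expected false matches={lam:g}  "
          f"P(account reaches 30) ~ {poisson_tail(lam, 30):.1e}")
```

Whether the per-image rate really is that low is exactly what the collision debate is about, but the threshold is the main lever behind Apple's one-in-a-trillion per-account figure, with the server-side check and human review layered on top.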
 
iCloud Photos is automatically turned ON when a new device is set up and an Apple ID is entered for the first time.
A user has to dig through the settings to manually turn it off.
 
I will always criticize Apple for not catching more zero-days. But I just don't understand why this algorithm running on your devices is seen as more susceptible to attack than any other.

For starters, because this is a process that has access to at least: 1) your images, 2) the Neural Engine, 3) the local copy of the NCMEC hash database, and 4) credentials to upload your photos to Apple's servers. Probably more than that.

(It's possible they've sandboxed it further into separate processes, but I don't believe so.)

In terms of vectors, you could do worse.

 