So there are about 1.4 trillion new photographs taken per year on smartphones now. Does that mean one poor sod per year could be incorrectly accused of harboring CSAM?
I'm sure a lot of people here do understand what is going on, but you can't deny that there are also a lot of people here who don't. In this small thread alone we have people asking if album covers will get them in trouble, people denying that they authorized Apple's scanning, and people who don't believe that the scanning is disabled if you don't use iCloud.

So many elitists here insist that if you don't like this you must not understand it, must be ignorant, must not have read up on it. They are the ones who don't understand that people can know exactly how this works, maybe better than they do, and still disagree with it.
Again, no one is going to be reported without a human confirming the results. If you don't post CSAM, you won't be reported to NCMEC. Apple isn't reporting anyone to the police or FBI. That isn't their job.
"Can someone clarify how hashing an image is an invasion of privacy? I mean, password hashes are stored, but websites don't have our actual passwords. Isn't this basically just comparing encrypted versions of images, the way they do for passwords? I get that they are doing this without user consent, but from a user's perspective, what's the downside, if we disregard the 'collisions' the article talks about? I genuinely want to understand this better."

I don't think the actual scanning of hashes is what has most people upset. It's that if a machine learning algorithm detects what it reads as CSAM, the device will automatically inform Apple of the detection. That is a pretty far cry from "what is on your iPhone stays on your iPhone." On the other hand, I doubt very much that Apple feels it has any good choices here. It can either preserve some privacy for the vast majority of people on iCloud by scanning on device, or it can scan every photo uploaded on its own servers, which eliminates that privacy and rules out a future version offering end-to-end encrypted storage on iCloud.
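On the password analogy specifically: password systems use cryptographic hashes, where changing a single byte of the input produces a completely different digest. NeuralHash is a perceptual hash, built from image features so that visually similar images map to the same or nearby hashes, and that is exactly why "collisions" are worth discussing at all. Here is a toy contrast using SHA-256 and a simple block-average perceptual hash in NumPy; it illustrates the two kinds of hashing and is not Apple's algorithm.

```python
import hashlib
import numpy as np

def average_hash(img: np.ndarray, size: int = 8) -> str:
    """Toy perceptual hash: block-average the image down to size x size,
    then threshold each block at the overall mean. Visually similar images
    tend to produce the same (or nearly the same) bit string."""
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]          # crop so blocks divide evenly
    bh, bw = img.shape[0] // size, img.shape[1] // size
    blocks = img.reshape(size, bh, size, bw).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).astype(int)
    return "".join(map(str, bits.ravel()))

rng = np.random.default_rng(1)
photo = rng.random((64, 64))     # stand-in for a grayscale photo
edited = photo * 0.98            # e.g. a slight exposure tweak

# Cryptographic hashes (the kind used for passwords) change completely...
print(hashlib.sha256(photo.tobytes()).hexdigest()[:16])
print(hashlib.sha256(edited.tobytes()).hexdigest()[:16])

# ...while the perceptual hashes barely move.
h1, h2 = average_hash(photo), average_hash(edited)
print(sum(a != b for a, b in zip(h1, h2)), "of", len(h1), "bits differ")
```

The robustness that makes a perceptual hash useful for matching re-encoded or lightly edited images is the same property that opens the door to engineered collisions.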
Most agree it would not be a problem to find child abuse material. The problem is what comes next: this tech can lead to all kinds of searching for other things on your phone, like political material in countries where you can get into deep trouble for venting your opinion.
Then again, the code is installed and has access to your photo library. Whether or not Apple turns off the function when you turn off iCloud, the possibility that someone who hacks your phone could turn it on and use it for something Apple didn't intend, via a zero-day exploit, is real. It's hard enough to keep things secure without them adding crap like this.
Wait - I’m confused by this. Does this mean that everyone’s iCloud is going to be scanned without user’s authorization in the name of child welfare??
While I am sure people may agree with this, it seems like one step away from doctors/dentists submitting DNA samples of *every* patient because it is in the public interest.
This program seems just a small moral slip away from being an invasion of privacy on a monumental scale. Given what Snowden revealed, the US government has a huge thirst for data collection like this. It's a short hop to scanning for compromising photos of your political rivals, yes?
But there are already algorithms on your iPhone and Mac that have access to your photo library. Why is this one any more susceptible to attack than the existing ones that do things like tag your photos and match faces to names?
"With hashing you will always have a chance of collisions because it is inherently lossy by design."

And that's why Apple says there's a one-in-a-trillion chance an account will be accidentally flagged, not that it's impossible to be accidentally flagged.
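For a sense of why the per-account number can be so much smaller than any per-image collision rate: an account is only flagged after roughly 30 matches, so an accidental flag needs about 30 independent false matches. Here is a back-of-the-envelope binomial tail with entirely made-up inputs (Apple has not published a per-image false-match rate, and independence between photos is an assumption):

```python
from math import comb

def p_account_flagged(n_photos: int, p_false_match: float,
                      threshold: int = 30, terms: int = 20) -> float:
    """Probability that an account crosses the match threshold purely through
    accidental matches, modelling each photo as an independent coin flip.
    Only the first few tail terms matter because the expected count is tiny."""
    return sum(comb(n_photos, k)
               * p_false_match ** k
               * (1 - p_false_match) ** (n_photos - k)
               for k in range(threshold, threshold + terms))

# Entirely hypothetical inputs: a one-in-a-million per-image false-match rate
# and a 10,000-photo library.
print(p_account_flagged(10_000, 1e-6))   # astronomically small with these inputs
```

The exact output doesn't matter; the point is that requiring roughly 30 independent false matches raises a small per-image rate to a high power, which is what drives the account-level figure down so far.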
No.

Yes. Every photo on your iPhone will be converted to a "hash" by your phone before uploading to iCloud, and the hash will be sent along with the photo. The hashes (not the actual images) will be scanned to see if they match a known child abuse image hash, and if you have more than approximately 30 such images in your iCloud collection, a human being at Apple will be able to view just those images to verify that they are indeed images matching known child abuse imagery. If so, law enforcement will be notified.

Like you said, the fear is that Apple could someday use the same technology to search for other forms of imagery, and could be coerced to do so by foreign governments. They promise they won't, but the backdoor is there now.
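A deliberately oversimplified sketch of that count-and-threshold logic is below. The names are invented, and the real system compares perceptual hashes inside a private set intersection protocol with threshold secret sharing, so the device never learns individual match results; this only shows the control flow described above.

```python
from typing import Callable, Iterable

MATCH_THRESHOLD = 30  # approximate threshold described above

def count_matches(photos: Iterable[bytes],
                  hash_fn: Callable[[bytes], str],
                  known_hashes: set[str]) -> int:
    """Count photos whose hash appears in the known-image hash database."""
    return sum(1 for photo in photos if hash_fn(photo) in known_hashes)

def exceeds_threshold(photos: Iterable[bytes],
                      hash_fn: Callable[[bytes], str],
                      known_hashes: set[str]) -> bool:
    """Only accounts above the threshold would ever be surfaced for human review."""
    return count_matches(photos, hash_fn, known_hashes) >= MATCH_THRESHOLD

# Toy run with a placeholder hash function and database.
demo_db = {"hash_a"}
flag = exceeds_threshold([b"img1", b"img2"],
                         hash_fn=lambda b: "hash_a" if b == b"img1" else "other",
                         known_hashes=demo_db)
print(flag)  # False: a single match is far below the threshold
```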
"I get your argument, I mean, there are other undiscovered zero day exploits on the phone so why fix the ones that have been found, right?"

I'm not sure I understand what you are saying. Are you saying someone found a zero-day in the CSAM detection? That isn't true from anything that I've read. The iPhoto scanning for names and tagging has been there for years and I haven't heard of it being targeted by hackers. Maybe I missed that. But what zero-days are you talking about?
I didn't say that, but it's clear from your history that you aren't worth arguing with.
Apple: "Uhhh guys, this is totally not the finished algorithm! Believe us!"

I do believe them, because their hash isn't generated from pixel values but from image features. Expect a slightly different way of generating those features with every iOS update. That doesn't make it any better. If you want to see how vulnerable that is, do a Google Scholar search for adversarial attacks. It's the whole reason we have a research field for safe AI.
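To see why a feature-based hash is a softer target than a cryptographic one, here is a tiny self-contained illustration: a fake "feature hash" built from a fixed random projection, plus a few lines of gradient descent that nudge one image until its hash collides with another's. This is neither NeuralHash nor the published attack, just the general adversarial-attack idea the post points to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature hash": 32 bits from a fixed random linear projection of a
# 16x16 image. NeuralHash uses a neural network, not a linear map; the point
# is only that a differentiable feature extractor invites this kind of attack.
W = rng.standard_normal((32, 16 * 16))

def feature_hash(img: np.ndarray) -> np.ndarray:
    return (W @ img.ravel() > 0).astype(np.int8)

source = rng.random((16, 16))     # image we will perturb
target = rng.random((16, 16))     # image whose hash we want to collide with
wanted = 2 * feature_hash(target).astype(np.float64) - 1   # target bits as ±1

x = source.ravel().copy()
for _ in range(500):
    scores = W @ x
    unsatisfied = wanted * scores < 1.0          # bits not yet confidently matched
    if not unsatisfied.any():
        break
    # Gradient of a hinge loss pushing each unsatisfied bit toward its target sign.
    grad = -(wanted[unsatisfied, None] * W[unsatisfied]).sum(axis=0)
    x -= 0.001 * grad

collision = x.reshape(16, 16)
print("hashes match:", np.array_equal(feature_hash(collision), feature_hash(target)))
print("max pixel change:", float(np.abs(collision - source).max()))
```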
"Well Apple doesn't want your illegal child porn on their servers, so if you choose to use their iCloud Photo Library they will make sure what you are sending them doesn't contain that kind of material. Don't like it? Don't use iCloud."

"Don't use iCloud" is such a flip answer that sounds good only if you ignore that this is Apple. "Don't like other people's AirTags using your phone, or don't trust that AirTag data? Turn off Find My." Of course, that also turns off all sorts of other important features.
The pro-user way to use this hash would be simply to tell the user that a photo is ineligible for iCloud upload. The anti-user way is secretly reviewing the pictures and sending them to the cops.
Developer Asuhariet Yvgar this morning said that he had reverse-engineered the NeuralHash algorithm that Apple is using to detect Child Sexual Abuse Materials (CSAM) in iCloud Photos, posting evidence on GitHub and details on Reddit.
Yvgar said that he reverse-engineered the NeuralHash algorithm from iOS 14.3, where the code was hidden, and he rebuilt it in Python. After he uploaded his findings, another user was able to create a collision, an issue where two non-matching images share the same hash. Security researchers have warned about this possibility because the potential for collisions could allow the CSAM system to be exploited.
In a statement to Motherboard, Apple said that the version of NeuralHash that Yvgar reverse-engineered is not the same as the final implementation that will be used with the CSAM system. Apple also said that it made the algorithm publicly available for security researchers to verify, but that there is a second, private server-side algorithm that verifies a CSAM match after the threshold is exceeded, along with human verification.
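A rough sketch of what that layered design implies, with invented stand-in functions (the server-side hash is private, so none of this is Apple's actual code): an image would have to match two independent hash databases before it ever reached a reviewer.

```python
import hashlib

# Hypothetical stand-ins for the two independent checks the article describes:
# the public on-device NeuralHash and a second, private server-side perceptual
# hash. Real perceptual hashes are not cryptographic digests; these placeholders
# exist only so the control flow can run.
def device_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(b"device" + image_bytes).hexdigest()

def server_side_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(b"server" + image_bytes).hexdigest()

def needs_human_review(image_bytes: bytes,
                       device_match_db: set[str],
                       server_match_db: set[str]) -> bool:
    """An image reaches a reviewer only if it matches BOTH databases, so a
    collision against the public on-device hash alone is not enough."""
    return (device_hash(image_bytes) in device_match_db
            and server_side_hash(image_bytes) in server_match_db)
```

Under that description, an engineered collision against the public on-device hash alone would not, by itself, put anything in front of a reviewer.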
Matthew Green, who teaches cryptography at Johns Hopkins University and who has been a vocal critic of Apple's CSAM system, told Motherboard that if collisions "exist for this function," then he expects "they'll exist in the system Apple eventually activates."
"Of course, it's possible that they will re-spin the hash function before they deploy," he said. "But as a proof of concept, this is definitely valid," he said of the information shared on GitHub.
Because of the human element, though, another researcher, Nicholas Weaver, told Motherboard that all people can do by manipulating non-CSAM images into CSAM hash matches is "annoy Apple's response team with garbage images until they implement a filter" to get rid of false positives. Actually fooling Apple's system would also require access to the hashes provided by NCMEC, it would require producing more than 30 colliding images, and even then the result would not get past the human oversight.
Apple is using its NeuralHash system to match a database of image hashes provided by agencies like the National Center for Missing &amp; Exploited Children (NCMEC) against images on user devices to search for CSAM. The system is designed to produce exact matches, and Apple says there is a one-in-a-trillion chance that an iCloud account will be accidentally flagged.
Apple is planning to implement the NeuralHash CSAM system in iOS and iPadOS 15 as part of a suite of child safety features, and it has been a hugely controversial decision with Apple receiving criticism from customers and privacy advocates. Apple has been attempting to reassure customers and security researchers about the implementation of the system with additional documentation and executive interviews.
Article Link: Apple Says NeuralHash Tech Impacted by 'Hash Collisions' Is Not the Version Used for CSAM Detection
"They're saying that just because other pathways exist doesn't mean this one particular pathway can't be criticized."

I will always criticize Apple for not catching more zero-days. But I just don't understand why this algorithm running on your devices is seen as more susceptible to attack than any other. I think it is likely that it is actually less likely to be hacked, given the amount of security and privacy scrutiny it is receiving.
It's both better and worse than that. Apple's competitors scan the hashes of encrypted documents posted in the cloud, so there would be nothing particularly new if that were what was happening here. Apple is taking this a step further by using the device itself to perform the scan. In that sense, it's worse than what you describe.