Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.
Right. However, I would not entirely rule out that this is part of a slow cancellation strategy. Apple painted themselves into a corner with their confident original announcement and their emphasis on the importance of CSAM detection. Cancelling it outright could lead to an outcry from the supportive side: "so Apple lied when they said they were serious about protecting children!" Instead, Apple postpones the introduction indefinitely, without a roadmap, and slowly removes all traces, until it all fades into oblivion. If someone asks about it in three years, Apple can always claim they are still working on it.
 
  • Like
Reactions: KoolAid-Drink
Apple’s proposal would have set a precedent for mobile devices where it’s normalized that ALL images are encrypted in the cloud with a key the provider doesn’t have, and thus not available when authorities request them from the provider.
Well, then Apple's system is doubly stupid, since it can easily be defeated by selective modification of images (see my post above).
 
It's easier than you think. In research, we find these all the time.
Let’s see one. Just one pair of images where both visibly look normal, and not like someone was trying quite hard to make the hashes match. :)

All it takes is your co-worker who doesn't like you sending you a bunch of images, work related or not, and you syncing all your images to iCloud. This won't stick in the end, but it is enough to get you in trouble and a manual review process.
THIS WON’T STICK IN THE END… correct. Your image gets flagged, someone manually reviews the image, sees it’s a false positive, done. No one’s getting in trouble for NOT having CP.
 
Last edited:
  • Like
Reactions: axcess99
It's easier than you think. In research, we find these all the time. Mostly non-intentional, by chance, across many different applications. But that is not the point: while there is a chance a random image matches, there's also the danger of people intentionally creating images with neural hashes that match CSAM images and spreading those via the internet, messaging services, etc. All it takes is your co-worker who doesn't like you sending you a bunch of images, work related or not, and you syncing all your images to iCloud. This won't stick in the end, but it is enough to get you in trouble and a manual review process. There are more issues with the neural hash approach, which I have described in other threads before, so I won't do it again here. This is a poor solution; there are better approaches for this. But in the end, it's Apple's choice: they can do whatever they want, and people can react to it however they want and handle it accordingly.
CSAM scanning has been used by Google and Android for many years. I am sure there are other tech companies that use it as well, and yet there has not been an outpouring of grief or despair resulting from misuse of it. If people were going to manipulate images so others would get caught, we would have seen it by now, but the only security researchers who spoke publicly about being able to manipulate images did so when Apple announced it was going to use CSAM detection. Now surely, if disgruntled employees or other disgruntled people had used CSAM scanning to get revenge on people, the public would have heard about it by now, security researchers would have heard about it by now, but we haven't, so that tells me people are not manipulating the system in the manner you describe.
 
It's easier than you think. In research, we find these all the time. Mostly non-intentional, by chance, across many different applications. But that is not the point: while there is a chance a random image matches, there's also the danger of people intentionally creating images with neural hashes that match CSAM images and spreading those via the internet, messaging services, etc. All it takes is your co-worker who doesn't like you sending you a bunch of images, work related or not, and you syncing all your images to iCloud. This won't stick in the end, but it is enough to get you in trouble and a manual review process. There are more issues with the neural hash approach, which I have described in other threads before, so I won't do it again here. This is a poor solution; there are better approaches for this. But in the end, it's Apple's choice: they can do whatever they want, and people can react to it however they want and handle it accordingly.
I wonder if you noticed the part in the original doc that said it includes a second verification hash performed on the iCloud servers. Generating a dual collision against both the local voucher hash and the server-side one, which the "co-worker" cannot predict, test, or analyze, is beyond impractical (and that is before the final step, where an employee reviews the previously encrypted thumbnail contained in the flagged voucher).

Those last steps of server-side CSAM matching and review are what happen in most cloud services today (OneDrive, Google Drive, Twitter, Reddit, etc.). The client voucher system just gives you more privacy and reduces the provider's burden.
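
For the curious, here is a minimal sketch of that dual-check idea, assuming two hypothetical, independent perceptual hash functions (nothing below is Apple's actual NeuralHash or its server-side hash; the names, sets, and flow are illustrative placeholders): an image only reaches human review if it collides under both hashes, which is why a crafted collision against the on-device hash alone isn't enough.

```python
# Illustrative sketch only -- the hash functions and hash sets below are
# hypothetical placeholders, not Apple's NeuralHash or its server-side hash.
from typing import Callable

def is_candidate_match(
    image: bytes,
    client_known_hashes: set[bytes],
    server_known_hashes: set[bytes],
    client_hash: Callable[[bytes], bytes],
    server_hash: Callable[[bytes], bytes],
) -> bool:
    """Escalate an image to human review only if it matches under BOTH
    independent perceptual hashes."""
    if client_hash(image) not in client_known_hashes:
        return False   # no on-device match: nothing is flagged at all
    if server_hash(image) not in server_known_hashes:
        return False   # collided with only one hash: discarded server-side
    return True        # collided with both: a human reviews the thumbnail next
```

Crafting a single image that collides under two unrelated hashes, one of which the attacker can't even run, is a far harder problem than the single-hash collisions researchers have demonstrated.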
 
  • Like
Reactions: Unregistered 4U
Well, then Apple's system is doubly stupid, since it can easily be defeated by selective modification of images (see my post above).
If you artificially create an image that matches, YAY, you’ve created an image on your device that matches a CSAM hash! The hash hasn’t been defeated, you’ve just created an image that matches it.
 
Just the tiniest semantic correction, I believe it’s if there are a statistically significant number of matching hashes, not just one. Because it’s a hash, while it’s extremely unlikely that a single random non-CP photo would match, it’s still within the realm of possibility. Action would only be taken after a good series of matches have been detected, a number such that it’s astronomically unlikely that anyone could have that many false positives.
I specifically referenced an illegal photo. So if 30 hashes matched, and they looked at all 30 of those images and only one was an illegal photo, they would still report it and lock the account. I’m aware they wouldn’t look at an account if only one hash flag popped up in all of their photos.

Another way of saying it: you’re allowed to have up to 29 matching hashes on your account before human review, but you are still going to get reported if even one illegal photo is verified among the 30 matches.
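
As a rough, back-of-the-envelope illustration of why the 30-match threshold matters (the per-photo false-match rate below is an assumed placeholder, not a figure published by Apple, and matches are treated as independent), crossing the threshold by accident is astronomically less likely than getting a single stray match:

```python
# Back-of-the-envelope only: p_false is an assumed placeholder rate,
# not a figure published by Apple, and matches are assumed independent.
from math import exp, lgamma, log

def binom_tail(k: int, n: int, p: float) -> float:
    """P(at least k successes in n independent Bernoulli(p) trials),
    summed in log space so the huge binomial coefficients don't overflow."""
    def log_pmf(i: int) -> float:
        return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                + i * log(p) + (n - i) * log(1 - p))
    # the terms shrink extremely fast, so a few hundred of them suffice
    return sum(exp(log_pmf(i)) for i in range(k, min(n, k + 300) + 1))

photos = 100_000   # a large personal photo library
p_false = 1e-6     # assumed per-photo false-match rate (placeholder)

print(binom_tail(1, photos, p_false))   # ~0.095  -> one stray match is plausible
print(binom_tail(30, photos, p_false))  # ~3e-63  -> thirty of them are not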
 
  • Like
Reactions: Unregistered 4U
Wrong. Doesn't have to be illegal to be a hash match. This was proven by several independent researchers.

How do you know it's only "illegal" pictures that are in the hash database? Apple doesn't even control / maintain / verify the hash database. The hash database is provided by "others."
They addressed this in their material. Read pages 5 and 6 of their support article here; I’ve copied the relevant paragraphs below. Also covered below is that all reports require human review. So if somehow hashes that are not of CP were injected into the database, the flagged images would be viewed by a human, who would verify they are not child pornography and therefore not report them to the authorities. This would also raise red flags that someone is trying to manipulate the database; Apple would report that and tighten up its protocols. Lastly, as also covered below, there is no automated reporting to authorities.

“This was proven by several independent researchers.” This was hypothesized (correction) by those same researchers, using other hash systems; Apple has stated it is not using that exact version of the hash, and those articles do state that the researchers weren’t testing against the same system either. Keep in mind that those researchers do gain from discrediting Apple, since it adds validity to them or to the companies they represent.

So while I do agree their findings should be considered, they are not using the same version to make their arguments. That’s like saying a cup with holes at the bottom will leak water (hashes), so Apple’s cup with holes only at the top and not at the bottom (reaching 30 hashes) will also leak water at the bottom. I do think it would be fair to allow security experts access to Apple’s protocols and hash system for verification.


Quoted from Apple’s public materials:

Security for CSAM detection for iCloud Photos

Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?
Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by at least two child safety organizations. There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos. In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.

Could governments force Apple to add non-CSAM images to the hash list?

No. Apple would refuse such demands and our system has been designed to prevent that from happening. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. The set of image hashes used for matching are from known, existing images of CSAM and only contains entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions. Apple does not add to the set of known CSAM image hashes, and the system is designed to be auditable. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system identifies photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.

Can non-CSAM images be “injected” into the system to identify accounts for things other than CSAM?

Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by at least two child safety organizations. Apple does not add to the set of known CSAM image hashes. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design. Finally, there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. In the unlikely event of the system identifying images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
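
Purely as an illustration of the "two or more child safety organizations in separate sovereign jurisdictions" rule quoted above, here is a minimal sketch with made-up placeholder data: only hashes that at least two independent sources agree on make it into the match list, so no single organization (or a government leaning on one) can unilaterally insert an entry.

```python
# Illustrative sketch only -- the organization names and hash strings are
# made-up placeholders, not real databases.
from collections import Counter

def build_match_list(databases: dict[str, set[str]], min_sources: int = 2) -> set[str]:
    """Keep only hashes submitted independently by at least `min_sources`
    organizations, so no single database controls what gets matched."""
    counts = Counter(h for hashes in databases.values() for h in hashes)
    return {h for h, n in counts.items() if n >= min_sources}

databases = {
    "child_safety_org_A": {"aaa111", "bbb222", "ccc333"},
    "child_safety_org_B": {"bbb222", "ccc333", "ddd444"},
}

print(build_match_list(databases))  # {'bbb222', 'ccc333'} -- the overlap only
```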
 
Also covered below is that all reports require human review. So if somehow hashes that are not of CP were injected into the database, the flagged images would be viewed by a human, who would verify they are not child pornography and therefore not report them to the authorities. This would also raise red flags that someone is trying to manipulate the database; Apple would report that and tighten up its protocols. Lastly, as also covered below, there is no automated reporting to authorities.
Are these the same crack team of reviewers that let scam apps slip by to get into the App Store? Yeah I'm sure Apple will put their best & brightest on this. The same company that was secretly throttling phones with bad batteries can totes be trusted to not turn someone over to the Feds wrongfully. Riiiiiiiiiight.
 
  • Haha
Reactions: crymimefireworks
All I hear are people who have photos they shouldn’t have, complaining that they shouldn’t be caught with illegal images. This thread would make Josh Duggar happy!

Hope Apple gets this enabled soon!

I'm sorry, but you don't understand the issue at all. Maybe you should educate yourself, starting from the links above (the first message, third paragraph).
 
  • Like
Reactions: FindingAvalon
Just a note:
Just because this "feature" has disappeared from the website does not mean that it has been discontinued. That would require a clear statement from Apple.

Edit:
"Update: Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.

"Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features," Apple said in September."
A bit confused here.
They said plans hadn’t changed since September.
However, the update we got in September was that they were working on improvements, based on feedback, before the features launched.
Improvements such as, just maybe, server-side matching instead of client-side matching? Just possibly?
I don’t think this is Apple confirming that it’s still 100% happening; I just think this is Apple’s way of saying “we have nothing new to announce on the subject today.”
I find it very interesting that they said their plans haven’t changed since “September,” when they announced that they were making changes, not “August,” when the original feature was announced.
 
  • Like
Reactions: BurgDog
A bit confused here.
They said plans hadn’t changed since September.
However, the update we got in September was that they were working on improvements, based on feedback, before the features launched.
Improvements such as, just maybe, server-side matching instead of client-side matching? Just possibly?
I don’t think this is Apple confirming that it’s still 100% happening; I just think this is Apple’s way of saying “we have nothing new to announce on the subject today.”
I find it very interesting that they said their plans haven’t changed since “September,” when they announced that they were making changes, not “August,” when the original feature was announced.

People haven't bought enough Apple Xmas toys… something had to be said to improve low sales?
 
State hackers can just build their own spyware. They are doing that right now, btw. This fear you guys have about CSAM is laughable.
I'd almost agree, but you never know whether a state-sponsored hacker could quietly override Apple's own limits and start looking for images of political things.
 
If you artificially create an image that matches, YAY, you’ve created an image on your device that matches a CSAM hash! The hash hasn’t been defeated, you’ve just created an image that matches it.
I think you misunderstand the point I was trying to make. The point is that computer scientists have already defeated the system by creating a way of modifying images so that (1) they still look the same to the human eye and (2) they are not detected by the kind of system Apple is proposing, which would necessitate Apple loosening its matching criterion, thereby seriously elevating the number of false positives. See https://forums.macrumors.com/thread...esearchers-in-new-study.2317024/post-30610726 for the details, if you missed that I posted this once already in this thread.
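
To make that trade-off concrete, here is a rough model assuming a hypothetical 96-bit perceptual hash and idealized uniformly random non-matching hashes (real perceptual hashes are not uniform, and the database size below is a placeholder): the looser the allowed Hamming distance between hashes, the larger the fraction of ordinary photos that falls within range of some database entry.

```python
# Rough model only: assumes a hypothetical 96-bit hash and uniformly random
# non-matching hashes, which real perceptual hashes are not. The database
# size is a placeholder. Uses a small-probability (union bound) approximation.
from math import comb

HASH_BITS = 96

def p_within_distance(d: int) -> float:
    """Probability that a uniformly random hash lands within Hamming
    distance d of one fixed target hash."""
    return sum(comb(HASH_BITS, i) for i in range(d + 1)) / 2 ** HASH_BITS

database_size = 200_000   # placeholder size for the known-image hash list

for d in (0, 4, 8, 16):
    per_image = database_size * p_within_distance(d)   # chance a random photo matches anything
    print(f"allowed distance {d:2d}: per-photo false-match chance ~ {per_image:.2e}")
```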
 
They kinda are, actually. NCMEC is funded by the US government, but they are a nonprofit, private organization. They have agency-like powers - by law, companies must report CSAM findings to NCMEC (and only to NCMEC), and NCMEC has the authority to order providers to take down websites (and the providers must obey without mentioning NCMEC). Yet as a private organization NCMEC is not subject to oversight and transparency requirements like a real agency.

There is no independent auditing of their database or their decisions. Given the legal situation and the nature of the material they are dealing with, it is virtually impossible to check on them without committing a serious crime. However, NCMEC provides notifications of detected CSAM even to international law enforcement, and according to the Swiss federal police, 90% of the notifications they get from NCMEC are false positives. Hence the quality of NCMEC's data and work seems rather dubious. Combined with their shielding from oversight, that makes them an extremely shady organization in my book.
Funny, NCMEC disputes that stat and says 63% of its referrals were explicitly graphic, the rest being suspicious, and the Swiss federal police refuses to clarify. Sounds like the Swiss police are shady and trying to say CSAM isn't a real problem.
 
I'd almost agree, but you never know whether a state-sponsored hacker could quietly override Apple's own limits and start looking for images of political things.
The political side (though not in the way you are using it) is why I'm so against it. If this CSAM scanner only reported to me and simply disallowed certain photos from being uploaded to iCloud, I'd be fine with it. The possibility of Apple's original purpose being subverted for even worse search and seizure is a big reason I won't accept it either.

It's the reporting to the government part that pushes it way over the top for me. Yes, I know it takes a certain number of positives, and review by an Apple employee, but eventually it could get reported to the government for prosecution. And that's where I think it fails the illegal search and seizure clauses of the U.S. Constitution, and I'm an extreme stickler for laws, unlike the current crop of politicians who like to interpret laws for their own benefit.
 
Funny, NCMEC disputes that stat and says 63% of its referrals were explicitly graphic, the rest being suspicious, and the Swiss federal police refuses to clarify. Sounds like the Swiss police are shady and trying to say CSAM isn't a real problem.
Do you really think NCMEC saying (only) 63% were definitely positive is a good thing??? That's an F in school and I'd fire their posteriors if I were their boss.
 
Let’s see one. Just one pair of images where both visibly look normal, and not like someone was trying quite hard to make the hashes match. :)
[Attached image: image-10.png]

Nail and ski (these are two different images) with neural hash match.

[Attached image: image-12.png]

Hatchet and nematode neural hash match.

Good enough? Look at the ImageNet dataset; you'll find more. Also look at desert images: the old Google AI back in 2016/2017 (not neural hash based) incorrectly classified some desert images as porn. It turns out some of these desert images have the same neural hash as actual porn images. You can easily find the desert images; the matching porn images you have to come up with yourself. ;)
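
For anyone unsure what a "neural hash match" means in these examples, here is a minimal generic sketch with made-up placeholder hash values (not the real hashes of the images above, and not Apple's actual matching protocol, which runs blinded): two visually unrelated images collide when their hash bits land close enough together.

```python
# Illustrative only: the hash values below are made-up placeholders, not the
# real neural hashes of the images discussed above.
def hamming_distance(h1: bytes, h2: bytes) -> int:
    """Number of differing bits between two equal-length hashes."""
    return sum(bin(a ^ b).count("1") for a, b in zip(h1, h2, strict=True))

hash_a = bytes.fromhex("0f1e2d3c4b5a69788796a5b4")  # placeholder 96-bit hash of image A
hash_b = bytes.fromhex("0f1e2d3c4b5a69788796a5b4")  # identical hash of a very different image B
hash_c = bytes.fromhex("0f1e2d3c4b5a69788796a5b5")  # differs from A in a single bit

print(hamming_distance(hash_a, hash_b))  # 0 -> a full collision; the system can't tell them apart
print(hamming_distance(hash_a, hash_c))  # 1 -> a near-miss that a loose threshold would also match
```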
THIS WON’T STICK IN THE END… correct. Your image gets flagged, someone manually reviews the image, sees it’s a false positive, done. No one’s getting in trouble for NOT having CP.
Yeah, only some people sitting in dark offices looking at private data. What's on people's private devices is no one's business.
The same could be said for the FBI knocking at your door with a full squad and taking you in for questioning while your neighbors are watching. The damage is done, even if you walk free.
CSAM scanning has been used by Google and Android for many years. I am sure there are other tech companies that use it as well, and yet there has not been an outpouring of grief or despair resulting from misuse of it.
Scanning is done everywhere, just not on private devices. And when using E2EE, cloud services see nothing.
 
  • Like
Reactions: VulchR
Do you really think NCMEC saying (only) 63% were definitely positive is a good thing??? That's an F in school and I'd fire their posteriors if I were their boss.
Why shouldn't police look into people grooming children?
 
Are these the same crack team of reviewers that let scam apps slip by to get into the App Store? Yeah I'm sure Apple will put their best & brightest on this. The same company that was secretly throttling phones with bad batteries can totes be trusted to not turn someone over to the Feds wrongfully. Riiiiiiiiiight.
Yes, they can be trusted.

However, you can choose to take a court verdict that Apple intentionally throttled phones for planned obsolescence (which Apple maintains it didn't do), plus a system that has to test literally thousands of apps daily, apps that need to be tested in hundreds of different ways and may be submitted with misleading information to get past Apple's checks, and use those as a means to discredit the employees who have to review images as being child pornography.

Because it's completely the same thing, testing apps in hundreds of different ways and looking at two images and verifying they are/aren't identical (sarcasm). I don't mind your opinion/view, but please support it with a relevant comparison.

You, sir/madam, appear not to trust Apple and can't separate the different sets of information from one another to form a good opinion and express it as a good argument or debate.

This feature totally needs to be scrutinized, but it needs to be reviewed on the merits of its features and not on extraneous factors that are only related by the name of the company.
 
They are not even the same thing. One is a camera looking at everything you do in your home, and you can literally identify everything that is in the camera's view (counters, pets, plants, people, clothing, cupboards, etc.). The other is hashes being created from the photos (which means you don't even know what they are, because they can't be reversed back into the image), which are then compared to a list of illegal hashes (which you also can't see), and then, if there are 30 matches, you get a human review of the matched images. Not every photo on your iCloud account.

So the Apple employee who has to review it doesn't just get access to all your photos. If you have an illegal photo, they lock the account and contact the authorities (who will likely have to get a warrant), and the authorities do their job. Apple has no further involvement once the illegal photo(s) have been verified.

The analogy of a government camera in my home means they have access to everything, both what they are looking for and what they are not. So even if I never did anything wrong, they can see it.

The hash system only reports to them when something illegal is detected. So no, this is not the same as having a camera in your home. Not at all.
Seriously dude, that was a long winded bunch of laughable nothingness. The point that went waaaay over your head is that the search is being done on your property.
 
  • Like
Reactions: IG88
I'm sorry, but you don't understand the issue at all. Maybe you should educate yourself, starting from the links above (the first message, third paragraph).
I've read up on the "issue". And it's nonsense. People on here keep parroting the same tired "OMG! What if China used this!" line over and over again. If China wanted to use it, they would be using it already.

Or, my other favorite line: it's like they put a camera in your house to look for domestic abuse. No! This is like you wanting to beat your wife at the mall -- and complaining that the mall has you on tape and the cops are called when you do.

If you don't want to have your photos scanned, don't UPLOAD THEM to Apple's iCloud server -- Apple doesn't want your child porn on its servers! This isn't complicated.

Or, better yet, don't have child porn on your phone!

Every time I see someone go crazy over this, I just assume you're trying to model your life after Josh Duggar.

Apple wants to do end-to-end encryption of photos and this will allow them to do so. I *welcome* this new CSAM scanning as it will protect my data better than the current situation.
 
Last edited by a moderator:
  • Disagree
  • Haha
Reactions: VulchR and bobcomer
Seriously dude, that was a long winded bunch of laughable nothingness. The point that went waaaay over your head is that the search is being done on your property.
So they can encrypt it on their servers end-to-end.

Your property... Sure... But there's a license agreement, and you agreed to let them do it if you use the device. Just like you agreed to let them do all sorts of other things on your "property".

You own the physical device, but you sure as heck don't own the software. You own the rights to use it.
 
They addressed this in their material. Read pages 5 and 6 of their support article here; I’ve copied the relevant paragraphs below. Also covered below is that all reports require human review. So if somehow hashes of MAGA hats or firearms were injected into the database, the flagged images would be viewed by a human, who would verify they are not child pornography and therefore not report them to the authorities. This would also raise red flags that someone is trying to manipulate the database; Apple would report that and tighten up its protocols. Lastly, as also covered below, there is no automated reporting to authorities.

“This was proven by several independent researchers.” This was hypothesized (correction) by those same researchers, using other hash systems; Apple has stated it is not using that exact version of the hash, and those articles do state that the researchers weren’t testing against the same system either. Keep in mind that those researchers do gain from discrediting Apple, since it adds validity to them or to the companies they represent.

So while I do agree their findings should be considered, they are not using the same version to make their arguments. That’s like saying a cup with holes at the bottom will leak water (hashes), so Apple’s cup with holes only at the top and not at the bottom (reaching 30 hashes) will also leak water at the bottom. I do think it would be fair to allow security experts access to Apple’s protocols and hash system for verification.


Quoted from Apple’s public materials:

Security for CSAM detection for iCloud Photos

Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?
Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by at least two child safety organizations. There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos. In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.

Could governments force Apple to add non-CSAM images to the hash list?

No. Apple would refuse such demands and our system has been designed to prevent that from happening. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. The set of image hashes used for matching are from known, existing images of CSAM and only contains entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions. Apple does not add to the set of known CSAM image hashes, and the system is designed to be auditable. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system identifies photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.

Can non-CSAM images be “injected” into the system to identify accounts for things other than CSAM?

Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by at least two child safety organizations. Apple does not add to the set of known CSAM image hashes. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design. Finally, there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. In the unlikely event of the system identifying images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
This sounds 100% reasonable. The only people who are complaining are either 1) pedophiles, or 2) people who can't understand what the system does/how it works.
 
Last edited by a moderator: