I hope Apple brings it back. We need to stop pedophiles and child sex abuse. Children need to be protected.
Absolutely…
They should also make VPNs, P2P networks, anonymous browsers, and encrypted images illegal too. These are the most widely used tools of a pedophile. And every iPhone must have a gov’t ‘back door’. If Apple can do it for China, they can for the US as well. Our children deserve all the protection we can give them.
 
I'm still not going to activate iMessage again or iCloud backup for pictures. Thank you, Tim.
TBH, I had already ditched them for their glitchy unreliability. iMessage was losing messages that members of my family were sending. And iCloud is just a mess of unreliable crud on top of being massively overpriced; I don't understand why anyone uses it. And I tell you what, once you try to remove iCloud, it's like a virus, constantly hounding you to start it back up again, and you simply can't get rid of it 100%. Gah.
 
TBH, I had already ditched them for their glitchy unreliability. iMessage was losing messages that members of my family were sending. And iCloud is just a mess of unreliable crud on top of being massively overpriced; I don't understand why anyone uses it. And I tell you what, once you try to remove iCloud, it's like a virus, constantly hounding you to start it back up again, and you simply can't get rid of it 100%. Gah.
Our family has no issues with iMessage or iCloud. iCloud backup just works. As with everything, YMMV.
 
I hope Apple will drop the on-device scanning part and declare they have met their goals with upload scanning on the server instead. Saving face is their main goal now. They are taking their time to work on a message in which they don't admit they did anything wrong, but just say they figured out a better way to do it, one that coincidentally meets the concerns of the people who don't want on-device spyware.
More likely, instead of turning it on for everyone using iCloud, they're working out the more difficult problem of making it configurable per account. That way, those who understand what the tech does can have end-to-end encrypted image backups.
 
It isn't as far off an analogy as you imagine it is. Client-side scanning is the equivalent of them looking at something on your property instead of waiting until it is on their property, as with server-side scanning.
They are not even the same thing. One is a camera looking at everything you do in your home, where you can literally identify everything in the camera's view (counters, pets, plants, people, clothing cupboards, etc.). The other is hashes being created from the photos (which means you don't even know what they are, because they can't be reversed back into the image), then comparing them to a list of illegal hashes (which you also can't see), and then, only if there are 30 matches, a human review of the matching images. Not every photo in your iCloud account.

So the Apple employee who has to review it doesn't just get access to all your photos. If you have an illegal photo, they lock the account and contact the authorities (who will likely have to get a warrant), and the authorities do their job. Apple has no further involvement once the illegal photo(s) have been verified.

The analogy of a government camera in my home means they have access both to what they are looking for and to what they are not looking for. So even if I never did anything wrong, they can see it.

The hash system only reports to them when something illegal is detected. So no, this is not the same as having a camera in your home. Not at all.
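To make the distinction concrete, here is a rough sketch of the thresholded-matching idea in Python. It is only an illustration: the hash function, the threshold constant, and the database are placeholders, not Apple's actual NeuralHash or private set intersection code, and in the real protocol even the device can't see whether a given photo matched.

```python
# Illustrative sketch of thresholded hash matching. NOT Apple's NeuralHash /
# private-set-intersection protocol; the hash function, threshold and database
# are placeholders, and the real system hides match results from the device.
import hashlib

MATCH_THRESHOLD = 30  # human review is possible only past this many matches

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; a real system uses a perceptual/neural
    hash so visually identical images still match after re-encoding."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(photo_library: list[bytes], known_hashes: set[str]) -> int:
    """Count how many photos hash into the opaque known-bad list."""
    return sum(1 for img in photo_library if image_hash(img) in known_hashes)

def needs_human_review(photo_library: list[bytes], known_hashes: set[str]) -> bool:
    # Only the fact of crossing the threshold matters; the contents of
    # non-matching photos are never exposed in this scheme.
    return count_matches(photo_library, known_hashes) >= MATCH_THRESHOLD
```

The camera analogy breaks down exactly here: the only signal produced is whether the match count crossed the threshold, not a view of everything in the library.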
 
If you have an illegal photo, they lock the account and contact the authorities (who will likely have to get a warrant), and the authorities do their job. Apple has no further involvement once the illegal photo(s) have been verified.
Just the tiniest semantic correction: I believe it's if there is a statistically significant number of matching hashes, not just one. Because it's a hash, while it's extremely unlikely that a single random non-CP photo would match, it's still within the realm of possibility. Action would only be taken after a series of matches has been detected, a number such that it's astronomically unlikely that anyone could have that many false positives.
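A quick back-of-the-envelope shows why the threshold matters (Apple's stated design target was roughly a one-in-a-trillion chance per year of incorrectly flagging a given account). The per-photo false-match rate and library size below are arbitrary placeholders, not published figures:

```python
# Probability of k or more false matches among n photos, assuming independent
# false matches at rate p per photo. The numbers are illustrative placeholders.
from math import comb

def prob_at_least_k(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p), via the complement of P(X < k)."""
    return max(0.0, 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i)
                              for i in range(k)))

n_photos, p_false = 100_000, 1e-6   # assumed library size and per-photo rate
print(prob_at_least_k(n_photos, p_false, 1))    # ~0.095: one stray match is plausible
print(prob_at_least_k(n_photos, p_false, 30))   # prints 0.0: 30 is astronomically unlikely
```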
 
The hash system only reports to them when something illegal is detected
Wrong. Doesn't have to be illegal to be a hash match. This was proven by several independent researchers.

How do you know it's only "illegal" pictures that are in the hash database? Apple doesn't even control / maintain / verify the hash database. The hash database is provided by "others."
 
Wrong. Doesn't have to be illegal to be a hash match. This was proven by several independent researchers.

How do you know it's only "illegal" pictures that are in the hash database? Apple doesn't even control / maintain / verify the hash database. The hash database is provided by "others."
The database is provided by the National Center for Missing and Exploited Children. They’re not some mysterious no name.

And it's looking for specific photographs, so unless you're worried it has your pictures in that database, you have nothing to be concerned about.
 
Just a note:
Just because this "feature" has disappeared from the website does not mean that it has been discontinued. It would need a clear statement from Apple.

Edit:
"Update: Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.

"Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features," Apple said in September."
 
I'm glad this client-side scanning is against the EU ePrivacy Directive and therefore Apple can't implement it in Europe. IMHO, in the case of CSAM, Apple's proposed solution is like killing flies with a nuke. There must be a better way of protecting children than invading the privacy of billions of Apple product users. But hey, "it's for the children"… in this context it sounds like a bull **** argument.

To me this feels like a way to introduce and test client-side-processed mass surveillance of epic proportions. Just think about it: potentially billions of people have their data scanned and processed, with minimal server-side processing required. One of the biggest problems of mass surveillance is the amount of data collected and the cost of processing it. Now, with this wonderful new invention, Apple has decided the processing will be done client-side, on the devices we have purchased with our own money. I'll bet every single intelligence/security agency in the world is beyond excited about the possibilities of Apple's CSAM client-side scanning.

Apple is breaking new ground with this surveillance technology. It seems as if Apple is becoming Big Brother, not the woman with the sledgehammer kicking ass. Honestly, when I purchased Apple products I didn't sign up for this surveillance ****.
 
So Apple CSAM says these are the same images...

[doge.png]

[iteration_28000.png]

At least these two images generate the same neural hash (https://github.com/greentfrapp/apple-neuralhash-attack) and are therefore a match. Many more such examples are available... With all the research out there on adversarial attacks, people still think it's a good idea to use neural hashes for CSAM...
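For anyone curious what the attack in that repo is doing conceptually, here is a toy, self-contained sketch: gradient descent on an input image until a stand-in "hash network" reproduces the target's bit pattern. The network is a tiny random CNN, not Apple's NeuralHash, and the code is illustrative only, not the repo's actual implementation.

```python
# Toy gradient-based hash-collision attack. The "hash model" is a small random
# CNN standing in for a neural perceptual hash; this is NOT Apple's NeuralHash
# and NOT the linked repo's code, just the general idea behind such attacks.
import torch
import torch.nn as nn

torch.manual_seed(0)

hash_net = nn.Sequential(                      # stand-in hash model: 96 logits,
    nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),   # binarised by sign into 96 bits
    nn.Conv2d(8, 16, 5, stride=4), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(96),
)

def hash_bits(img: torch.Tensor) -> torch.Tensor:
    return (hash_net(img) > 0).float()

target = torch.rand(1, 3, 64, 64)       # image whose hash we want to collide with
target_bits = hash_bits(target)         # first forward also initialises LazyLinear
for p in hash_net.parameters():
    p.requires_grad_(False)             # the hash model stays fixed

attack = torch.rand(1, 3, 64, 64, requires_grad=True)  # visually unrelated image
opt = torch.optim.Adam([attack], lr=0.01)

for step in range(5000):
    logits = hash_net(attack)
    # Hinge loss: push every logit to the target bit's side of zero.
    loss = torch.relu(0.1 - logits * (target_bits * 2 - 1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    attack.data.clamp_(0, 1)            # keep pixel values in a valid range
    with torch.no_grad():
        if torch.equal(hash_bits(attack), target_bits):
            print(f"hash collision after {step} steps")
            break
else:
    agree = (hash_bits(attack) == target_bits).float().mean().item()
    print(f"no full collision; {agree:.0%} of bits match")
```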
 
So Apple CSAM says these are the same images...

[doge.png]

[iteration_28000.png]

At least these two images generate the same neural hash (https://github.com/greentfrapp/apple-neuralhash-attack) and are therefore a match. Many more such examples are available... With all the research out there on adversarial attacks, people still think it's a good idea to use neural hashes for CSAM...
You're right that, using any hashing method, you can artificially create a match between two images that look visibly different (though Apple's CSAM system doesn't say they're the same image). In practice, though, if, instead of carefully crafting an image to MATCH a hash in the database, one instead tries to find a content match in the wild, they'd have to go through hundreds of millions of photos (taken by their camera, downloaded from the web, sent via iMessage) to try to find a match. And it's exceedingly likely that they still wouldn't find one.
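Rough numbers make the point (all assumed, not Apple's published parameters):

```python
# Back-of-the-envelope for "finding a match in the wild". Every number here is
# an assumption for illustration, not a published Apple parameter, and it
# idealises the hash as uniform; real perceptual hashes collide more often,
# as the replies below point out.
N_WILD_PHOTOS = 500_000_000     # photos an attacker sifts through
DB_SIZE       = 1_000_000       # assumed number of hashes in the blocklist
HASH_BITS     = 96              # assumed hash length

p_single = DB_SIZE / 2**HASH_BITS           # chance one random photo hits the list
expected_matches = N_WILD_PHOTOS * p_single
print(f"{p_single=:.3e}  {expected_matches=:.3e}")   # ~1.3e-23 and ~6.3e-15
```

Under that idealisation, brute-force searching in the wild is hopeless; the realistic concern is the crafted adversarial collisions discussed above.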
 
One of the biggest problems of mass surveillance is the amount of data collected and the cost of processing it. Now, with this wonderful new invention, Apple has decided the processing will be done client-side, on the devices we have purchased with our own money. I'll bet every single intelligence/security agency in the world is beyond excited about the possibilities of Apple's CSAM client-side scanning.
With the advent of cloud solutions and pre-built AI frameworks, processing massive amounts of data in the cloud is trivially simple for anyone who wants to. And if there's anyone who's NOT in favor of Apple's client-side content matching, it's the intelligence/security agencies you mention. With Apple's solution in place, a person with no CP images would have their entire iCloud image store encrypted with a key that Apple WOULDN'T have. Those agencies much prefer the current state, where they can, via due process, ask Apple for a user's images and have Apple decrypt and hand them over.

I wouldn’t be surprised if the voluminous misinformation was derived from those same agencies. These WERE the same type of folks publicly indicating they NEED Apple to unlock a phone when, in actuality, they’d already unlocked it.
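On the encryption point above: Apple's published design wraps the match information in a threshold secret-sharing layer, so the server-side secret needed to decrypt the flagged material only becomes reconstructable once the number of matches crosses the threshold. Here is a generic textbook Shamir secret-sharing sketch of just that threshold idea; it is not Apple's actual safety-voucher protocol, and the key, threshold, and share counts are illustrative.

```python
# Minimal Shamir (t, n) secret sharing over a prime field, to illustrate the
# threshold idea only. This is a textbook construction, NOT Apple's actual
# safety-voucher scheme; key, threshold and share counts are placeholders.
import random

PRIME = 2**127 - 1  # field modulus (a Mersenne prime)

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)             # stands in for the decryption key
shares = make_shares(key, t=30, n=1000)   # e.g. one share per matched voucher
assert reconstruct(shares[:30]) == key    # 30 shares: key recoverable
assert reconstruct(shares[:29]) != key    # 29 shares: wrong value (with overwhelming probability)
```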
 
In practice, though, if, instead of carefully crafting an image to MATCH a hash in the database, one instead tries to find a content match in the wild, they'd have to go through hundreds of millions of photos (taken by their camera, downloaded from the web, sent via iMessage) to try to find a match. And it's exceedingly likely that they still wouldn't find one.
It's easier than you think. In research, we find these all the time, mostly unintentional, by chance, across many different applications. But that is not the point: while there is a chance that a random image matches, there's also the danger of people intentionally creating images whose neural hashes match CSAM images and spreading those via the internet, messaging services, etc. All it takes is a co-worker who doesn't like you sending you a bunch of images, work-related or not, and you syncing all your images to iCloud. This won't stick in the end, but it is enough to get you in trouble and trigger a manual review process. There are more issues with the neural-hash approach, which I have described in other threads before, so I won't repeat them here. This is a poor solution; there are better approaches. But in the end, it's Apple's choice: they can do whatever they want, and people can react to it however they want and act accordingly.
 
If I ever find out this has been surreptitiously added without my knowledge then I will sell every Apple device I own and never buy another. Anyone who doesn’t have an issue with this has no clue of the concept of mission creep. If these systems are allowed to exist then it’s only a matter of time before the Feds batter your door in for having a [insert future dictator here] meme in your iCloud library. The road to hell is paved with good intentions.

You will never know, since it is closed-source software.
 
The database is provided by the National Center for Missing and Exploited Children. They’re not some mysterious no name.
They kinda are, actually. NCMEC is funded by the US government, but they are a nonprofit, private organization. They have agency-like powers - by law, companies must report CSAM findings to NCMEC (and only to NCMEC), and NCMEC has the authority to order providers to take down websites (and the providers must obey without mentioning NCMEC). Yet as a private organization NCMEC is not subject to oversight and transparency requirements like a real agency.

There is no independent auditing of their database or their decisions. Given the legal situation and the nature of the material they are dealing with, it is virtually impossible to check on them without committing a serious crime. However, NCMEC provides notifications of detected CSAM even to international law enforcement, and according to the Swiss federal police, 90% of the notifications they get from NCMEC are false positives. Hence the quality of NCMEC's data and work seems rather dubious. Combined with their shielding from oversight, that makes them an extremely shady organization in my book.
 


Apple has quietly nixed all mentions of CSAM from its Child Safety webpage, suggesting its controversial plan to detect child sexual abuse images on iPhones and iPads may hang in the balance following significant criticism of its methods.


Apple in August announced a planned suite of new child safety features, including scanning users' iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.

Following their announcement, the features were criticized by a wide range of individuals and organizations, including security researchers, the privacy whistleblower Edward Snowden, the Electronic Frontier Foundation (EFF), Facebook's former security chief, politicians, policy groups, university researchers, and even some Apple employees.

The majority of criticism was leveled at Apple's planned on-device CSAM detection, which was lambasted by researchers for relying on dangerous technology that bordered on surveillance, and derided for being ineffective at identifying images of child sexual abuse.

Apple initially attempted to dispel some misunderstandings and reassure users by releasing detailed information, sharing FAQs, various new documents, interviews with company executives, and more, in order to allay concerns.

However, despite Apple's efforts, the controversy didn't go away. Apple eventually went ahead with the Communication Safety features rollout for Messages, which went live earlier this week with the release of iOS 15.2, but Apple decided to delay the rollout of CSAM detection following the torrent of criticism that it clearly hadn't anticipated.

Apple said its decision to delay was "based on feedback from customers, advocacy groups, researchers and others... we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features."

The above statement was added to Apple's Child Safety page, but it has now gone, along with all mentions of CSAM, which raises the possibility that Apple could have kicked it into the long grass and abandoned the plan altogether. We've reached out to Apple for comment and will update this article if we hear back.

Update: Apple spokesperson Shane Bauer told The Verge that though the CSAM detection feature is no longer mentioned on its website, plans for CSAM detection have not changed since September, which means CSAM detection is still coming in the future.

"Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features," Apple said in September.

Article Link: Apple Removes All References to Controversial CSAM Scanning Feature From Its Child Safety Webpage [Updated]
It has not been pulled, just sidelined for the time being. When they do add it, you won't be warned.
 