This I agree with. I want to see the creators of these sick images in jail, or worse. Some sick individual who just comes across these images on a website somewhere and saves them (sick, I keep saying that) does nothing about the actual abuse or the creation of the image. Joe Someone, who is just a sick individual who found something on some random website (or 30 of them) and saved it, goes to jail, while Jim Someone, who actually abused the kid, is free to do so again with another kid.
Worse. Jane Someone sees what happens to Joe and decides to be more careful. Now Jim abuses kids more frequently to meet Jane's demand for fresh content, which she perceives as safer.
 
That is the thing though. How will Apple's manual review determine whether something is legal or not, beyond the extreme "kid" variety? How will this protect 16-year-olds (illegal, and still a child where I live) who look like they are 25? What will prevent 25-year-olds who look 16 from being falsely accused?

You took this out of context. Follow the conversation and you'll realize that I'm saying CSAM criminals will not save CSAM pictures from the internet in their camera roll; they'll hide them instead (criminals usually hide).
This implementation will only catch the stupidest of the stupid criminals, and those would have been caught far earlier by publicly harassing children at Walmart, etc.

This will not save the children, and it comes at the cost of decreased privacy for the average user.
Lose-lose.
 
No one seriously believes this won't be abused. It will be abused by tech corporations and governments alike, although they are nearly one and the same at this point. The “save the children” angle is just to get a foot in the door. This kind of thing will lead to the total eradication of privacy and, eventually, free thought.
 
Is there some big legal loophole going on here, then? Can the government just get private companies to break the 4th Amendment?
No, the government can't engage private entities to do unconstitutional work.

See post #626
 
Apple kisses the ring of every government official it can. Tim Cook's not going to risk his connections in China or other authoritarian countries that demand Apple search for more. Apple, as large as it is, cannot overcome an authoritarian nation-state, and more importantly, it has never bothered to try. It has given authoritarian governments whatever they asked for without question, you know, "complying with local laws."

Apple is only concerned about privacy within liberal democracies’ borders, and even then, only until a law gets passed that gives them cover.

Apple can't have those great earnings calls every quarter without the Chinese market. The moment China says "do what we say if you want to sell in our market," I guarantee there is not a single Apple executive with the integrity to walk away, especially the bean counter at the helm.
 
This was one of Apple's stupidest ideas ever – I am waiting for WOZ to say something about this.
And hopefully students equipped with iPads from their high schools will stand up and boycott iOS 15.
 
Then why move it to on-device? Leave it on iCloud. This just screams government pressure. And I am not the conspiracy type, but I would NOT be surprised if they are scanning for something else while publicly stating it's for CSAM. We have no way of verifying, after all. National security might mean Apple cannot say ONE THING about the true intentions.
If Craig is to be believed, moving this to the device increases privacy: prior to this method, all of your actual photos were being accessed and scanned directly by Apple (and any government agency that got access to them). With on-device matching against CSAM hashes cross-verified across multiple jurisdictions, only those photos matching verified CSAM will be reviewed by anyone. This preserves user privacy. Also, US national security letters cannot get content data; for reference: https://en.wikipedia.org/wiki/National_security_letter
By law, NSLs can request only non-content information, for example, transactional records and phone numbers dialed, but never the content of telephone calls or e-mails.
In other words, they can check whether you called a child pornographer, but they cannot know what you spoke about or anything about the thousands of pedo-porn images you have, unless you upload them to iCloud.
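To make that concrete, here is a rough sketch of the on-device step as described. The hash values and the plain-digest "hash" below are invented for illustration; the real system uses NeuralHash and a private-set-intersection protocol, not anything this simple.

Code:
import hashlib

# Made-up placeholder values standing in for verified CSAM hashes.
VERIFIED_HASHES = {0x1A2B3C4D, 0x5E6F7A8B}

def photo_hash(photo_bytes: bytes) -> int:
    # Stand-in for a perceptual hash; a real perceptual hash tolerates
    # derivatives (resizes, grayscale copies), unlike this plain digest.
    return int.from_bytes(hashlib.sha256(photo_bytes).digest()[:4], "big")

def vouchers_for_upload(photos: list[bytes]) -> list[int]:
    # Only photos whose hash is in the verified set produce a voucher;
    # nothing about the rest of the library leaves the device in this sketch.
    return [h for h in map(photo_hash, photos) if h in VERIFIED_HASHES]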
 
It seems many here think that routinely scanning all of your iCloud photos is somehow less invasive than the statistically improbable event in which Apple reviews a subset of derivatives of your photos that seem to match many copies of unique instances of child porn, verified by multiple jurisdictions to be true hashes of real child porn (and not some nation-specific-hunt photo).

It seems this stems from the thinking that "on-device scanning" implies it is Apple itself scanning rather than your device, which has already been self-scanning for over a decade. Alternatively, many seem to think this opens some Pandora's box or starts a slippery slope in which anything can be searched for if local jurisdictions force it, even though the technology, as described, would be incapable of that even with significant engineering effort.

And all of this is coming from an echo chamber of many who used to think Apple had their backs in terms of privacy. The ease with which the collective mind can be swayed is certainly interesting.
 
How would the police get the photos the murderer took so they could create the hashes and send them to Apple?
How would you use this system if the murderer took no photos during the crime?

Neural hashing is based on similarities, not exact matches (it's even in Apple's own white papers). For example, police might be looking for anyone who has a picture of a victim, a suspect, or any person of interest. Add location data and access to all documents (not just pictures) and you have a very nice mass surveillance network. That would be PRISM 2.0 right there. One could build a whole case with that data.
 
Apple has already stated that the system can be tailored on a per-country basis. Let's say a country has a certain law, or just wants to censor certain things. They could simply go to Apple: "Hey Apple, here are our own hashes for child safety." Would Apple deny it? That country could go to the press and claim Apple doesn't love children.

But the system is there to detect exact matches (and derivatives).

What would an image of child safety look like?

So if they provided hashes of those to Apple, they would only catch people who had the exact images the government had of a particular child wearing seatbelts or similar.

The system can't find images of a similar nature.
 
That is the other thing. What about this scenario?

A newly married couple in their early 20s. They both look young, like 16 or so. They spend most of their time long distance and like to share these adult images with each other via iMessage. How will the person at Apple determine that the wife/husband is of legal age? Do they have access to everyone's driver's license to make that determination? Likewise, how will they protect 16-year-olds who look like they are 25?
Unless the photos are in the database there will be no match. In other words, if you make a child-porn video or a series of photos and store those photos on your phone, and they are uploaded to iCloud, they will pass through just fine.

The system does nothing about NEW child-porn/CSAM images, only matching known images stored in the CSAM database. In fact, the images must be in at least two databases to qualify for inclusion.

The married couple presumably would be in no databases at all.
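If the "at least two databases" requirement is taken at face value, the hash list that ships on devices would be the intersection of independently supplied lists. A made-up sketch (the hash values are invented):

Code:
# Hypothetical illustration of the two-database rule: only hashes listed by
# both child-safety organizations end up in the on-device database.
ncmec_hashes      = {0xAAA1, 0xBBB2, 0xCCC3}  # made-up values
second_org_hashes = {0xBBB2, 0xCCC3, 0xDDD4}  # a second, separate jurisdiction

on_device_database = ncmec_hashes & second_org_hashes
# Only the two shared hashes remain; a hash pushed by a single government never ships.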
 
But how will you be better off?

Are you going for worse privacy out of spite?

With Google Workspace you have better control over privacy than with Apple products (especially after iOS 15). Sure, you end up paying over $100 a year to get that privacy with Google, but no one said it would be free or even cheap.
 
Neural hashing is based on similarities, not exact matches (it's even in Apple's own white papers). For example, police might be looking for anyone who has a picture of a victim, a suspect, or any person of interest. Add location data and access to all documents (not just pictures) and you have a very nice mass surveillance network. That would be PRISM 2.0 right there. One could build a whole case with that data.

No, you haven't understood how NeuralHash works.

It's trying to find exact photos and derivatives of that exact photo, not similar photos. It is optimised to not find similar photos.

Let's say you are killed by a friend. This (former) friend has a picture of you drinking a beer in his iCloud Photo Library.
The police get pictures of you from family members and others who knew you. Maybe they get 100 images of you. The police force Apple to NeuralHash these 100 pictures and apply them to iOS.

NeuralHash will, with high probability, not catch your killer, because those 100 pictures are not exact matches (or derivatives with small changes) of the picture your killer has on his phone.

Part of NeuralHash is using a neural network (machine learning) to teach the system not to recognise images which are not exact matches or derivatives.

That's why this system is so useless for most kinds of misuse.
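One way to picture "exact or derivative, but not merely similar" is comparing hashes by Hamming distance with a tight threshold. The bit patterns and threshold below are invented; Apple hasn't published NeuralHash in this form.

Code:
def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

original   = 0b1011_0110_1100_1010  # hash of the known photo (invented)
derivative = 0b1011_0110_1100_1000  # grayscale/resized copy: 1 bit away
similar    = 0b0110_1001_0011_0101  # different photo of the same person: far away

THRESHOLD = 3  # invented; tight enough that "similar" never matches
print(hamming(original, derivative) <= THRESHOLD)  # True  -> counts as a match
print(hamming(original, similar) <= THRESHOLD)     # False -> ignored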
 
In that case the 10th Circuit decided that the information, in this instance scanned, seized, and sent by AOL to NCMEC and then reviewed by NCMEC, constituted an unlawful search by a government agency.

Yes, but in a newer decision they said the search was legal because of the good-faith exception.

Also, that case involved AOL sending over additional material which they didn't know was child pornography. Apple will only send over images they know, with a very high probability, to be illegal.
 
Or the US, or Europe, etc., etc.
Freedom is under attack everywhere.

The only thing this would be good at, and that would be allowed in the US and EU, is catching copyright violations of images.

But how many people pirate images and keep them in their personal photo library?
 
Again, why put this on the device if iCloud is already doing this? And this ONLY applies to iCloud photos. It makes no sense to suddenly put it on the device.
Because:

  1. If done on device, only the device (not Apple) knows what it is scanning, and Apple is not sent any information on the user unless the user is a pedophile.
  2. In the cloud, each scan is done with Apple knowing who owns the scanned image hash (see the toy sketch below).
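A toy contrast of the two models, showing only the information-flow point; this is not Apple's actual PSI/threshold-encryption protocol, and the hash values are invented:

Code:
KNOWN_HASHES = {0x01, 0x02}  # made-up CSAM hashes; photos below are represented by their hashes

def cloud_side_scan(user: str, photo_hashes: list[int]) -> dict:
    # Server-side: Apple receives and scans every photo, so it sees the whole
    # library and knows which user each scanned photo belongs to.
    return {"user": user,
            "photos_visible_to_server": len(photo_hashes),
            "matches": [h for h in photo_hashes if h in KNOWN_HASHES]}

def on_device_scan(user: str, photo_hashes: list[int]) -> dict:
    # On-device: the comparison happens locally; only match vouchers go up,
    # so in this sketch the server learns nothing about non-matching photos.
    matches = [h for h in photo_hashes if h in KNOWN_HASHES]
    return {"user": user,
            "photos_visible_to_server": len(matches),
            "matches": matches}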
 
This is different. Why move it from iCloud to all of our devices if it is already doing it?
Because when you do it on the client side you have infinite scalability. The client literally pays for the device that does the scanning and heavy lifting. Cloudflare has stated that one of the biggest hurdles in using sophisticated algorithms and AI in general to scan for child porn is the processor requirement. Also, once you scan photos on device, you can easily develop the system to scan other types of data as well.
 
You mean on the iCloud servers? If so, then they're already preventing storage and distribution, no? Why did they report only 265 pictures to the NCMEC last year? Did they detect CSAM without reporting it to the NCMEC?

My suspicion is they only scan emails.
 
Because when you do it on the client side you have infinite scalability. The client literally pays for the device that does the scanning and heavy lifting. Cloudflare has stated that one of the biggest hurdles in using sophisticated algorithms and AI in general to scan for child porn is the processor requirement.
This is one reason. Another might be that Apple doesn't want to host CSAM at all to begin with.
Also, once you scan photos on device, you can easily develop the system to scan other types of data as well.
False --- this is in direct contrast to how the system was actually designed to work. It would take a complete removal of NeuralHash and its replacement with a more generalized neural detector, which wouldn't be necessary: they could just use the existing scanning they have been doing for over a decade for face detection, in which case why waste engineering hours developing NeuralHash at all?
 
But the FBI does.
What will Apple do when they receive a national security letter with a gag order ordering them to report certain pictures?

Are you just using your imagination to find unlikely scenarios?

A National Security Letter (NSL) may be challenged in federal court, although it's not easy. Also, NSLs can only be used for transactional records and other non-content data, so Apple is not required to change their system to give more data to the FBI.

The people the FBI would be after would need to have on the order of 30 of these exact images (or derivatives of these exact images). Can you describe what sort of images and cases this would be useful for?

Also, the system won't scale. Every time the FBI served a new NSL and provided new hashes to Apple, Apple would have to create a new version of iOS and deploy it. If they did that too often, people would become suspicious.
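On the "order of 30" point, the gate looks roughly like this counting sketch. The real design reportedly enforces the threshold cryptographically on the vouchers; this is just the arithmetic, and the class below is invented for illustration.

Code:
MATCH_THRESHOLD = 30  # approximate figure; nothing is reviewable below it

class Account:
    def __init__(self) -> None:
        self.voucher_count = 0  # vouchers stay opaque below the threshold

    def upload_match_voucher(self) -> None:
        self.voucher_count += 1

    def reviewable(self) -> bool:
        # In the real system this gate is cryptographic, not a simple counter.
        return self.voucher_count >= MATCH_THRESHOLD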
 
How does any layman know what's in the database? Who decides? Are Sally Mann's photos in there? If so, how many of them?
No layman knows. If Apple is to be believed, the database contains hashes of CSAM that has been verified as actual CSAM (i.e., not propaganda, etc.) by multiple separate, non-cooperating governments. It's supposed to hold only actual CSAM, and the multiple-jurisdiction requirement is meant to thwart a nefarious actor injecting material to track local citizens in some way.
 
But the FBI does.
What will Apple do when they receive a national security letter with a gag order ordering them to report certain pictures?
Have they promised not to scan your phone or MacBook for anything but CSAM in the future (or even for the next x years)?
No!
And they're not going to tell us, because they know we won't like the answer.
This wouldn't be possible unless they convinced other governments to also include those photos in their local CSAM databases.
 
No, you haven't understood how NeuralHash works.

It's trying to find exact photos and derivatives of that exact photo, not similar photos. It is optimised to not find similar photos.

Let's say you are killed by a friend. This (former) friend has a picture of you drinking a beer in his iCloud Photo Library.
The police get pictures of you from family members and others who knew you. Maybe they get 100 images of you. The police force Apple to NeuralHash these 100 pictures and apply them to iOS.

NeuralHash will, with high probability, not catch your killer, because those 100 pictures are not exact matches (or derivatives with small changes) of the picture your killer has on his phone.

Part of NeuralHash is using a neural network (machine learning) to teach the system not to recognise images which are not exact matches or derivatives.

That's why this system is so useless for most kinds of misuse.

You are wrong. Derivative and similar are, in the grand scheme of things, the same when the number of source images is high enough. In Apple's white paper they literally show how they match a single colour image to a grayscale image. This information is used in the AI training process, which consists of approximately 200,000 images. You do understand that the whole CSAM database Interpol uses has over 2.7 million images. Do you honestly believe that a) Apple is only looking for 200,000 images because those are the images that were used to train the AI? Tell me why they would only look for such a small number of images and leave out 2.5 million. b) Or that, after 200,000 child-porn images, the AI has enough information to also recognise the other 2.5 million images? c) That would mean the AI can recognise and correctly identify material outside the training material. d) Why wouldn't they search for child porn outside the base set? That might actually save children. With adjustments to the NeuralHash threshold this could easily be achieved.
 