Isn’t this detection feature already running on their servers for iCloud? When this controversy came up, my understanding was that they wanted to move the detection process to customers’ devices, so pictures on iCloud are already being monitored.
I don't think they can scan your images when you enable Advanced Data Protection. That would only work if they scanned the images on device before they are uploaded, which is precisely what they said they won't do.

I think the article you quoted is outdated.
 
Even ignoring the risks of this feature, it would not have been all that useful in accomplishing its desired goal. It would only be triggered when someone would download an image that was listed in the database of bad images, and then upload that image to iCloud (likely using Photos). Anything stored on-device would not be scanned at all.
Another person who didn’t read the white paper properly.
 
Honestly, just reading the comments here with the words toddlers and babies in the present context makes me nauseous. I don't even want to imagine, even in the most abstract form, what these sick people do.

Seriously.
 
Honestly, just reading the comments here with the words toddlers and babies in the present context makes me nauseous. I don't even want to imagine, even in the most abstract form, what these sick people do.

Seriously.
One person incorrectly assumed that this system would have reported photos of one’s own kids in the bathtub or otherwise in a non-sexual state of undress — it would not have — and nobody mentioned babies.
 
Glad Apple is taking a stand to protect privacy and to push back against governments who are trying to wield their control over technology and tech privacy in this modern day. I think Apple should have put its foot down earlier and with something else: the EU's compulsory USB-C plug. Apple should not have complied and instead should have just stopped selling iPhones in that region when the law goes into effect. Watch how soon the citizens of those areas would demand that their politicians backpedal on that law. Government should not get to dictate technological design to the creators, manufacturers, marketers, and patent holders of technology. It will shortly lead to anti-progress. We are all going to be hearing a lot over the next few years about people breaking the USB-C pin strip in their iPhones and only being able to charge via Qi/MagSafe. Lightning is a more durable design than USB-C.
 
What's strange is that all this was already known about Apple when they defied the FBI request to unlock the San Bernardino iPhone years before. This isn't a new stance at all; it was more that the plan they had for inspecting photos went against their apparent stance.

Here’s the thing.

Google and Microsoft can run all the checks they want and swallow the cost because they own the servers that their cloud infrastructure runs on.

iCloud runs on Google servers for which Apple already pays at least thirty million dollars a month just for storage.

Now if they also needed to pay to run apps to check every single file uploaded by every single user – the cost would be beyond astronomical.

So Apple hit on a really clever solution: transfer the cost of running the check to their customers’ devices. So now Apple customers would shoulder the burden in terms of processor time and reduced battery life.

This was never about protecting the children. It was really about getting their customers to pay for CSAM checking.
 
Too bad. In a perfect world, this kind of material could be privately scanned for without any of these ancillary concerns.
 
Another person who didn’t read the white paper properly.
What part of what I said is wrong? CSAM scanning only recognizes known images. The actual CSAM matching happens on device (which I didn't mention), but the release explained that Apple was only scanning images as they were uploaded to iCloud.

There was also a feature that warned when receiving potentially inappropriate images. That one could detect new images, but it's separate from the CSAM feature discussed here.
 
No.

The slippery slope commences. Child abuse is such an easy pretext for putting the infrastructure for censorship in place.
Then came the clamor for "hate speech". Then Thought Crime.

No. Child abuse is illegal and only a very few people are involved. The wider privacy issue is far more important.

Just read 1984 (the Stephen Fry audiobook is best) to see the end result which we can already see happening.
It's worth noting that "Slippery Slope" is a literal logical fallacy. Just sayin'.
 
This could have been a good thing, but a bunch of complainers who didn't even realize that Apple already scans their iCloud email for child porn suddenly went "but my privacy!"

Even ignoring the risks of this feature, it would not have been all that useful in accomplishing its desired goal. It would only be triggered when someone would download an image that was listed in the database of bad images

As I understood it (and I did read Apple's overview document at the time) the scanning happened on the iDevice before upload - the whole idea being that you could still have end-to-end encryption and Apple still couldn't see your original pictures. All Apple got was a hash that they could compare against a list of hashes from known CSAM material.

The first problem was simply crossing the line and setting foot on a slippery slope by scanning before the images were uploaded and sharing the results with Apple. Makes it very easy to take the next step and expand that to all images on the device.

The second problem was that Apple were reliant on "the authorities" for supplying the database of "CSAM hashes" and actually had no way of knowing what was being declared as "bad".

Then there was a lot of smoke and mirrors about what a "hash" meant. "Hash" is a very general term in computing. One type of hash uses a particular, well-defined algorithm to generate an as-good-as-unique ID for a particular set of data, one that changes in response to the slightest alteration to the data: that's the sort of hash you use for cryptography and code signing. That would be useless for CSAM detection: change a couple of pixels in the image, let alone crop, resize or adjust the colors, and the hash would no longer match the "bad" fingerprint.

The sort of hash we're talking about here is designed to produce the same hash for similar images, so it won't be fooled by cropping, resampling, recoloring, etc., and is usually generated using machine-learning-type techniques which make it difficult to explain which features of the image are leading to the result (not impossible, there are analysis techniques, but not something you'd want to explain to a jury or a CEO). With that comes the inevitability of false positives. That's the sort of hash used for CSAM detection, and Apple's report was full of praise for how it could defeat the wily paedophiles who tried cropping and posterising their wares.
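To make that distinction concrete, here's a minimal sketch comparing the two kinds of hash. The SHA-256 call is real; the "average hash" is a generic perceptual-hash toy standing in for the machine-learning hashes described above (it is not Apple's NeuralHash), and the 8x8 "image" is made-up data:

```python
# Minimal sketch: cryptographic hash vs. a toy perceptual hash.
# The 8x8 grayscale "image" is a flat list of 0-255 values, invented for illustration.
import hashlib

def crypto_hash(pixels):
    """Cryptographic hash: any change to the input gives a completely different digest."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    bits = ''.join('1' if p > mean else '0' for p in pixels)
    return hex(int(bits, 2))

original = [10, 12, 11, 240, 238, 242, 9, 13] * 8   # 64 made-up "pixels"
tweaked = original.copy()
tweaked[0] += 2                                      # nudge one pixel slightly

print(crypto_hash(original) == crypto_hash(tweaked))    # False: no longer matches
print(average_hash(original) == average_hash(tweaked))  # True: still matches
```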

Trouble is, their solution to the false positive problem seemed to be pure "Prosecutor's fallacy" (one match = false positive, ten matches = porno filth!), i.e. assuming that false positives were random and uncorrelated, whereas in reality one individual's photo library will have dozens of photos of the same subjects or places, possibly including whatever triggered the false match.
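A rough back-of-the-envelope sketch of why that correlation matters. Every number below (false-positive rate, library size, threshold, burst size) is an assumption invented for illustration, not a figure from Apple's papers; the point is only how far the odds move once false matches arrive in bursts of near-duplicate shots rather than independently:

```python
# Assumed, illustrative numbers only: not Apple's published figures.
from math import comb

p = 1e-4        # assumed chance that any single image falsely matches the list
n = 20_000      # assumed number of photos in one user's library
t = 10          # assumed number of matches needed before an account is flagged

def tail(trials, prob, threshold):
    """P(at least `threshold` successes) for independent Bernoulli trials."""
    return 1 - sum(comb(trials, k) * prob**k * (1 - prob)**(trials - k)
                   for k in range(threshold))

# Independent model: false matches land on unrelated photos,
# so reaching the threshold is vanishingly unlikely.
print(f"independent false matches:   {tail(n, p, t):.1e}")

# Correlated model: a falsely-matching subject shows up in a burst of ~12
# near-duplicate shots, so one bad subject alone crosses the threshold of 10.
burst = 12
subjects = n // burst
print(f"bursts of {burst} near-duplicates: {tail(subjects, p, 1):.1e}")
```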

If you dug down into the really technical papers, would it turn out that they had thought of that possibility and either investigated and refuted it, or found a clever solution? Maybe, but it's a pretty crucial point, and a solution or refutation would be something to sing about in the executive summary.

Almost the only way of testing a system like that and finding out the true false positive rate would be a massive trial on real-life data with human confirmation of each match (and a comprehensive after-care program for the poor so-and-sos doing the comparisons). Nothing else would be representative.

There was also "reassurance" that matches would be checked by Apple staff before taking action - but according to their own description of the system, the only thing that Apple could possibly check was that, yes, the hash sent by your phone (based on an image we can't see) matches the blacklist (generated by the authorities from images that it would be illegal for us to see).
 
That's nice for iCloud, but it doesn't address the CSAM hash-checking function running locally on machines with macOS 10.15 and newer. If your computer comes with a built-in quiet little snoop that doesn't alert you to the presence of illegal material, but instead just alerts the federal police and wrecks your life, that's probably something everyone with teenagers should consider very deeply.
Even if macOS had that (which it doesn't?)... why would everyone with teenagers need to worry? Do you think trading child abuse material is something that teens are doing all the time?
 
Sounded good on the surface, would've been a threat to privacy in practice.

When the infrastructure is put in place to look at users' cloud data, the government might want some of that...

A better way would be to try to shut down the sites on the dark web that have this sick content. And to give the chair to those involved in spreading this content.
 
Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.
Is there a deep hidden meaning to this? Basically, is Apple saying in not so many words that if they were to create software to scan for CSAM in iCloud, it could fall into the hands of data thieves who would exploit it, and so, to prevent that from happening, they have no intention of creating the software in the first place? Much like the argument Apple use for not building security backdoors into iPhones: they are worried such a thing would fall into the hands of criminals who would exploit it, and therefore it is better that no such thing exists.

Not criminals in particular, but governments, for instance, who could demand it or exploit flaws in the software to scan for whatever they are after, perhaps even adopting legislation that requires this type of scan…
 
Good on Apple!

First they came for the socialists, and I did not speak out—
Because I was not a socialist.

Then they came for the trade unionists, and I did not speak out—
Because I was not a trade unionist.

Then they came for the Jews, and I did not speak out—
Because I was not a Jew.

Then they came for me—and there was no one left to speak for me.



With the rise of populist-driven one-trick-pony political movements, it is truly great to see Apple's stance. Privacy is vital, as is the right to free speech.
Yes, you have the right to store child pornography on your iPhone and on iCloud. You also have the right to exploit children with it unless you get caught. Good for you! Have at it, I guess.
 
Well, the way all of this played out leads me to believe Apple did have the best of intentions going into this.

First, I'm going to say that in 25+ years as a LEO I've seen multiple instances of CSAM, in both physical and digital form, in the course of my official duties. I remember every single time I saw it: the person we seized it from, where we seized it from, what the weather was like that day, etc. To say this garbage sears the brain of those involuntarily subjected to it is an understatement. The most horrific, horrible stuff.

So I understand the desire to keep it at bay as much as possible. On the other side of every piece of CSAM is a victim, or multiple victims, and the thought of it breaks my heart. Trying to reduce the number of victims of this horror is a laudable effort, even if the methodology is flawed.


On the other hand, Apple should know better by now. This is a company with a reputation for outside-the-box thinking and for projecting the consequences of bad decision-making. Apple should have known going in the potential for this to be used by governments to track dissidents and suppress free speech. Any technology that can track can be used for good or for bad. The unintended consequences should have been patently obvious.

Right now, we live in a world where for the most part, tech companies operate hand in glove with the government. We've seen that with social media platforms during the pandemic and last election cycle. Regardless of your political affiliation, that happened.

It used to be that one could make the argument that what a private company does (in the US) in terms of searching user content and filtering freedom of expression was not a Constitutional violation, because private companies have that right.

But there's no denying that some of the companies that provide these services are often operating as shadow subdivisions of the government.

And if you're OK with that, just remember that one of two things will happen. The government you support in that effort today will either eventually fall out of power, opening the door to their opponents having the opportunity to wield said power in a way you don't agree with. Or, said government will take total power, and wield that power against everyone it sees fit in order to keep that power. Friend or foe alike.
 
“Ultimately, saying that you don’t care about privacy because you have nothing to hide is no different from saying you don’t care about freedom of speech because you have nothing to say. Or that you don’t care about freedom of the press because you don’t like to read. Or that you don’t care about freedom of religion because you don’t believe in God. Or that you don’t care about the freedom to peaceably assemble because you’re a lazy, antisocial agoraphobe. Just because this or that freedom might not have meaning to you today doesn’t mean that it doesn’t or won’t have meaning tomorrow, to you, or to your neighbor – or to the crowds of principled dissidents I was following on my phone who were protesting halfway across the planet, hoping to gain just a fraction of the freedom that my country was busily dismantling.”
Ed Snowden, Permanent Record, pp. 208–209
 
That's nice for iCloud, but it doesn't address the CSAM hash-checking function running locally on machines with macOS 10.15 and newer. If your computer comes with a built-in quiet little snoop that doesn't alert you to the presence of illegal material, but instead just alerts the federal police and wrecks your life, that's probably something everyone with teenagers should consider very deeply.
It is clear you do not understand how this feature works or even what hash-checking means.

These systems work like antivirus software. They scan files for matching hashes against a database of known child abuse material compiled by law enforcement agencies.

A child having explicit photos of girl/boy-friends is not going to be flagged because it is not CSAM being circulated within known pedophile rings online.

Let’s at least get our facts straight before arguing pros and cons of systems such as these.
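For what it's worth, the antivirus-style matching described above boils down to something like the sketch below. Plain SHA-256 digests stand in for the perceptual hashes the real systems use, and the "known" list is seeded locally just so the demo can match; in reality the list comes from authorities such as NCMEC, and the comparison uses blinded protocols this doesn't model:

```python
# Minimal, assumption-laden sketch of hash-list matching (not any vendor's real design).
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in for the authority-supplied database of known-material hashes.
known_hashes = {digest(b"example-known-image-bytes")}

def is_known(image_bytes: bytes) -> bool:
    """Does this file's digest appear in the supplied list of known material?"""
    return digest(image_bytes) in known_hashes

print(is_known(b"example-known-image-bytes"))      # True  -> counts toward a threshold
print(is_known(b"a completely different photo"))   # False -> ignored, never reported
```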
 
Apple just need to pray that criminals involved with child abuse and exploitation do not use iCloud for their ill-gotten gains. If the police catch the criminals and find images on Apple's iCloud, the crap will hit the fan, because Apple are saying their current systems for finding child abuse media on iCloud are already robust and thus they do not need to build CSAM detection into iCloud. But if the police were to find such images, it would prove Apple's stance on the issue is baseless and cause a huge backlash against Apple, because many would then say that if Apple had implemented CSAM detection like they were asked to, the images the police found would not have been there.
Total nonsense
 
What worried me about all this was the scenario where you shoot a photo of your toddlers in the bath (or something else similarly innocuous) and ten minutes later your pad is being raided by cops. AI isn't smart enough yet to know the difference.
The system was never designed to be able to do that at all. It would have compared the pictures you have against a CSAM database, and even if there were a false positive, you would have needed to reach a specific threshold of CSAM matches, followed by a human review of those matches, before anything was reported to law enforcement. So the picture of your toddler in the bath was never even remotely close to being reported under this system.
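As a minimal sketch of that flow (the function and variable names below are mine, invented for illustration, not Apple's published design):

```python
# Illustrative only: a lone match does nothing, and even past the threshold
# a human reviewer still has to confirm before anything is reported.
REPORT_THRESHOLD = 30   # illustrative; Apple publicly mentioned a threshold of around 30

def outcome(match_count: int, human_review_confirms: bool) -> str:
    if match_count < REPORT_THRESHOLD:
        return "no action"                       # a stray false positive goes nowhere
    if not human_review_confirms:
        return "dismissed after human review"    # reviewers reject false matches
    return "reported to law enforcement"

print(outcome(1, human_review_confirms=False))    # no action
print(outcome(45, human_review_confirms=False))   # dismissed after human review
print(outcome(45, human_review_confirms=True))    # reported to law enforcement
```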

I’m glad Apple got rid of it because of the backdoor it creates for other content-matching scanning. But the amount of disinformation I saw on this forum about how this scanning would work was astonishing.
 
What's strange is that all this was already known about Apple when they defied the FBI request to unlock the San Bernardino iPhone years before. This isn't a new stance at all; it was more that the plan they had for inspecting photos went against their apparent stance.
Apple has no more ability to unlock your phone than anyone else.
 
Yes, you have the right to store child pornography on your iPhone and on iCloud. You also have the right to exploit children with it unless you get caught. Good for you! Have at it, I guess.
"Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say"
 