I obviously don’t know, but I suspect this would have played out differently if Steve Jobs was still the CEO. He probably would have opposed the idea himself and therefore we’d have never had to worry about it. But if not he would have at least foreseen the backlash and not floated the idea.
 
  • Like
Reactions: Apple Fan 2008
They stopped implementation because the ACLU and various other civil liberties organizations, including the EFF, are dead ****ing silent while the NSA/CIA/GCHQ and various other Fourth Amendment/government privacy violators in the SIGINT space hoover up all your personal data, and they think Facebook's and Google's abuse of their monopolies to hoard and exploit your personal information and then sell it to the NSA/CIA/GCHQ is just ****ing dandy.

But Apple wants to stop pedophiles and it's OVER THE LINE!
 
Again, yeah. But how similar? Similar in what ways? Is there an example of an especially egregious false positive found in testing? Was the hashing method updated as a result?
Exactly. We don't know, but the white paper basically states that Photoshop manipulation will not fool it. Do you know how much you can do in Photoshop?
 
"It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types."

This is the right answer. CSAM needs to be dealt with, nobody disagrees with that. What they were proposing was not how to do it.

Just glad so many were willing to oppose it instead of rolling over like many members on here with the "I have nothing to hide" response.
 
  • Like
Reactions: gusmula
As I understood it (and I did read Apple's overview document at the time) the scanning happened on the iDevice before upload - the whole idea being that you could still have end-to-end encryption and Apple still couldn't see your original pictures. All Apple got was a hash that they could compare against a list of hashes from known CSAM material.

The first problem was simply crossing the line and setting foot on a slippery slope by scanning before the images were uploaded and sharing the results with Apple. Makes it very easy to take the next step and expand that to all images on the device.

The second problem was that Apple were reliant on "the authorities" for supplying the database of "CSAM hashes" and actually had no way of knowing what was being declared as "bad".

Then there was a lot of smoke and mirrors about what a "hash" meant. "Hash" is a very general term in computing. One type of hash uses a particular, well-defined algorithm to generate an as-good-as-unique ID for a particular set of data, and that ID changes in response to the slightest alteration to the data: that's the sort of hash you use for cryptography and code signing. It would be useless for CSAM detection - change a couple of pixels in the image, let alone crop, re-size or adjust the colors, and the hash would no longer match the "bad" fingerprint.

The sort of hash we're talking about here is designed to produce the same hash for similar images, so it won't be fooled by cropping, resampling, recoloring etc. It's usually generated using machine-learning-type techniques, which make it difficult to explain which features of the image are leading to the result (not impossible - there are analysis techniques - but not something you'd want to explain to a jury or a CEO). With that comes the inevitability of false positives. That's the sort of hash used for CSAM detection, and Apple's report was full of praise for how it could defeat the wily paedophiles who tried cropping and posterising their wares.
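To make the distinction concrete, here's a minimal Python sketch (a toy average-hash over a fake 8x8 greyscale "image" - nothing to do with Apple's NeuralHash or any real CSAM tool): a cryptographic hash changes completely when a single pixel changes, while even a crude perceptual hash stays the same for a near-identical image.

```python
import hashlib

# Toy 8x8 greyscale "image": 64 brightness values (0-255). Purely illustrative.
image = [200] * 32 + [30] * 32           # bright top half, dark bottom half
tweaked = image.copy()
tweaked[0] -= 1                          # alter one pixel by one brightness level

def crypto_hash(pixels):
    """Cryptographic hash: any change at all gives a completely different digest."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels):
    """Crude perceptual hash: one bit per pixel, 'brighter than the mean or not'.
    Small edits (recompression, slight recolouring) usually leave it unchanged."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

print(crypto_hash(image) == crypto_hash(tweaked))    # False: digests differ completely
print(average_hash(image) == average_hash(tweaked))  # True: perceptual hash still matches
```

A real perceptual hash is far more sophisticated (and far more opaque) than this, which is exactly why its false positives are hard to reason about.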

Trouble is, their solution to the false positive problem seemed to be pure "prosecutor's fallacy" - one match = false alarm, ten matches = porno filth! - i.e. assuming that false positives were random and uncorrelated, whereas in reality one individual's camera roll will contain dozens of photos of the same subjects or places - possibly including whatever triggered the false match.
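To put rough numbers on that (entirely hypothetical figures - not Apple's published rates or threshold): under the independence assumption, needing dozens of matches before an account is flagged does make an innocent flag astronomically unlikely; but one falsely-matching subject photographed forty times blows through the threshold in a single event, and the independence math says nothing about how likely that is.

```python
from math import comb

# Hypothetical numbers for illustration only -- not Apple's published figures.
p = 1e-6          # assumed per-photo false-positive rate
n = 10_000        # photos in one user's library
threshold = 30    # matches required before the account is flagged

# Under the INDEPENDENCE assumption, the chance of hitting the threshold by
# pure bad luck is dominated by the k = threshold term of the binomial:
p_unlucky = comb(n, threshold) * p**threshold * (1 - p)**(n - threshold)
print(f"P({threshold} independent false matches) ~ {p_unlucky:.1e}")   # astronomically small

# But photo libraries are full of near-duplicates: bursts, the same child,
# the same holiday spot. If ONE subject falsely matches and you have 40
# near-identical shots of it, you exceed the threshold in a single correlated event.
```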

If you dug down into the really technical papers, would it turn out that they had thought of that possibility and either investigated and refuted it, or found a clever solution? Maybe, but it's a pretty crucial point, and a solution or refutation would be something to sing about in the executive summary.

Almost the only way of testing a system like that and finding out the true false positive rate would be a massive trial on real-life data with human confirmation of each match (and a comprehensive after-care program for the poor so-and-sos doing the comparisons). Nothing else would be representative.

There was also "reassurance" that matches would be checked by Apple staff before taking action - but according to their own description of the system, the only thing that Apple could possibly check was that, yes, the hash sent by your phone (based on an image we can't see) matches the blacklist (generated by the authorities from images that it would be illegal for us to see).
not buying "slippery slope". since it's on device, white hat can check if apple adjusted the detection for more than CSAM
 
So far as I am aware Apple and other companies are required only to alert the authorities if they discover illegal images on the servers. They are not obliged to search for them. However, the iCloud user license precludes putting illegal content on their servers, so they do screen for that. That's fine by me - their servers, their rules.

The issue was about installing on-device surveillance without the permission of the owner of the device. And such a system could be used not only to detect illegal CSAM images, but authoritarian regimes could use it to scan for faces of dissidents, forbidden flags and symbols, political slogans, religious text and pictures, and even faces of certain ethnic identities. To make matters worse, when this was pointed out, Apple published a technical paper outlining the system, giving authoritarian regimes an outline of how to do this even if it isn't a part of iOS. We'll see whether a scheme like this will get embedded into authoritarian regimes' surveillance of their citizens. It seems many failed to understand the ramifications of this.

Getting back to the news article:
First, those of us who objected were told we didn't understand the technical aspects of the system. We did.
Then we were told the system performance was excellent. Then, it became clear there would be false positives (hence the need for human review on Apple's end). Then it was established the system could be circumvented by minor modifications to images.
And now we are told 'sorry, bad idea because the system might be misused', which completely misses the point that the intended use of the system was to search without a warrant, without probable cause, without judicial review, and without explicit permission by the owner of the device to do so.

This is what happens when you let engineers run amok without ethical, legal and social review. Not Apple's finest hour.
I don't buy arguments about what "could happen".

White hats can check if Apple suddenly starts checking for more photos, and the biggest news storm of Apple controversies would break into mainstream news about how Apple betrayed its customers.
 
  • Disagree
Reactions: VulchR
I understand why people were complaining about this CSAM detection; but I never did understand why they complained about that and not object detection via AI/ML too...

Well, yes, we probably should have been more worried when (e.g.) iPhoto started tagging people's faces in images - what could possibly go wrong? It's always a lottery as to what new developments hit the news and what slips under the radar.

However, there are some subtleties here.

When you upload a photo to FaceBook/Instagram/YouTube/Twitter (sorry, X!) etc. you're asking them to publish it for you (something people should be more aware of!), and they're actively helping you reach an audience. They're not the government, they've no obligation to uphold free speech, and it's not unreasonable for them to impose content restrictions (any publisher would). Don't like their T&Cs? You can run your own website for 10 bucks a month. If they want to reject images that trigger their gun/nudity/profanity/whatever, that's their prerogative. Whether the social media giants are so dominant now that they should have an obligation to uphold freedom of speech (as opposed to freedom to agree with their CEO - ahem!) is a whole other debate (and diametrically opposite to imposing an obligation to censor).

Services like iCloud, DropBox, OneDrive etc. - not to mention email services and ISPs - put a slightly different complexion on it. They're keeping your files for you and mostly offer end-to-end encryption so that they can't "publish" them or monetise the contents. They're acting as the Post Office, a self-storage warehouse or - increasingly - the SD card or USB stick that you used to carry in your camera bag, or the backup disk you had locked in a cupboard. Things that wouldn't previously have been subject to arbitrary search and seizure by the authorities without some sort of probable cause (frequently abused, of course - which just adds grease to the slippery slope of technology that lets them search your stuff easily and invisibly).

The Apple CSAM scheme - which probably had the good intention of trying to head off attempts to ban end-to-end encryption - still amounted to routinely searching your files for anything resembling an item on a (secret) list of proscribed images every time you stored them to your private (even if online) storage. (Lots of apologists threw up technical FUD about "it's just hashes" etc., but that doesn't actually change that fact.) That crossed a lot of lines that FaceOGram refusing to post naughty pictures to your timeline didn't.

...and, of course, mass scanning of that kind (there are billions of phones in use - how many photos a year is that?) means that even a one-in-a-billion chance of a false positive is a serious issue.
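Quick back-of-the-envelope, with invented magnitudes just to show the scale problem (these are not Apple's numbers): even an optimistic "one in a billion" per-photo false-positive rate turns into thousands of innocent matches a year once you multiply by every photo taken on every phone.

```python
# Back-of-the-envelope only -- invented magnitudes, not Apple's figures.
phones = 1_000_000_000          # order of a billion iPhones in active use
photos_per_phone_per_year = 2_000
false_positive_rate = 1e-9      # the optimistic "one in a billion" per photo

photos_per_year = phones * photos_per_phone_per_year
expected_false_flags = photos_per_year * false_positive_rate

print(f"{photos_per_year:,} photos scanned per year")           # 2,000,000,000,000
print(f"~{expected_false_flags:,.0f} false matches per year")   # ~2,000
```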
 
  • Like
Reactions: VulchR and gusmula
The feature is only as good as its (privacy-focused) implementation. There is zero reason to assume authoritarian regimes would be able to gain unlimited access to the tool to scan whatever they want.

But if more and more regimes come up with baffling demands (like the U.K.) then it’s the right call to never release the tool.
No, you have to have a principle in the first place and stand on it. Apple talked a big privacy game as if it were a core principle, then created this.
 
not buying "slippery slope". since it's on device, white hat can check if apple adjusted the detection for more than CSAM
No they can't.

It generates hashes of your photos and compares them against a list of hashes of "CSAM" photos provided by government (or, at least, government-sanctioned) agencies. Only those agencies - not even Apple - have access to the actual CSAM photos (because they're CSAM images and illegal to possess - D'uh).

So it all comes down to "do you trust the government not to expand the scope of these blacklists" - and you hardly need a tinfoil hat to suspect that because (certainly in the UK) politicians are quite up front about wanting a backdoor to encryption (which is what this amounts to) so that they can gather intelligence on crime, terrorism, drugs, littering...
 
  • Like
Reactions: VulchR
The big wigs at Apple and the owner of this website should be extremely ashamed and embarrassed that there are Apple customers and MR members who would rather see the continuation of child abuse and child exploitation than have an effective system put into place to catch such behavior, because that is what happens every time a vile image is uploaded to iCloud: a child continues to be abused.
The cognitive dissonance of some posts is off the charts.
CSAM detection would be able to flag the image, and Apple would be able to report to law enforcement where the image was uploaded from, and the time and date. This would allow law enforcement to then carry out investigations, with the eventual result of the person who uploaded the image being arrested. That person could then disclose where they got the image from, leading to a domino effect of arresting one after the other in the chain of abuse, eventually leading up to the ringleader who abused children and created the image in the first place.
Well, yes, that's the way the system was supposed to work.
But none of that will happen because many Apple customers and many MR members value their privacy far too much.
How many illogical hoops does one have to go through to come up with this false conclusion?
What is the well-known saying? 'If it saves one child's life then it is worth it.' Not for many MR members it does not. Hell would have to freeze over before they agree to any encroachment into their privacy.
So only you have the correct balance in life between broad rules and their encroachment onto diminishing personal freedoms? I think not.
 
"Scanning every user's privately stored iCloud data would create new threat vectors for data thieves to find and exploit," Neuenschwander wrote. "It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types."

As I recall, all these points were brought up at the time they announced their CSAM detection system and Apple went to great lengths to explain why these things wouldn't happen. So why are they agreeing with critics now?

-kp
 
  • Like
Reactions: VulchR and gusmula
No they can't.

It generates hashes of your photos and compares them against a list of hashes of "CSAM" photos provided by government (or, at least, government-sanctioned) agencies. Only those agencies - not even Apple - have access to the actual CSAM photos (because they're CSAM images and illegal to possess - D'uh).

So it all comes down to "do you trust the government not to expand the scope of these blacklists" - and you hardly need a tinfoil hat to suspect that because (certainly in the UK) politicians are quite up front about wanting a backdoor to encryption (which is what this amounts to) so that they can gather intelligence on crime, terrorism, drugs, littering...
You're misunderstanding.

The CSAM hashes are of known photos.

If you're worried about the government suddenly searching for photos of your own, say, weapons or guns, the algorithm would have to change, and white-hat hackers would know.

Apple isn't going to know the hash of pics of your own weapons. The algorithm would need to change if they wanted to search for something more than known photos.

What, are you worried they might start searching for everyone's libraries of swastikas? That would be pointless, as someone could be working on a college paper about the history of Germany.
 
  • Disagree
Reactions: VulchR
What worried me about all this is the case where you shoot a photo of your toddlers in the bath (or something similarly innocuous) and ten minutes later your pad is being raided by cops. AI isn't smart enough yet to know the difference.

Yes, it was.

The CSAM detection tool was not trained to detect nudity or explicit pictures at all.
In fact, it only matched known images, so ordinary nudity would never be flagged.
 
Glad Apple is taking a stand to protect privacy and to push back against governments who are trying to wield their control over technology and tech privacy in this modern day. I think Apple should have put its foot down earlier and with something else - the EU's compulsory USB-C plug. Apple should not have complied, and should instead have just stopped selling iPhones in that region when the law goes into effect. Watch how soon the citizens of those areas would demand that their politicians backpedal on that law. Governments should not get to dictate technology standards to the creators, manufacturers, marketers, and patent holders of technology. It will shortly lead to anti-progress. We are all going to be hearing a lot over the next few years about people breaking the USB-C pin strip in their iPhones and only being able to charge via Qi/MagSafe. Lightning is a more durable design than USB-C.
Honestly, I also think that Lightning is superior to USB-C because of that pin design, but I didn't see any widespread issue with it on the internet. Also there is no room to bend it, so as long as you don't really mess with it, you will have no issues.

But as for governments dictating to companies, it's a necessity. You may see it as something bad now, but governments are also the ones that pushed warranties, availability of parts for repair, health measures, quality standards…
E.g. Apple now has to offer 3 years of warranty in the EU, something that wouldn't happen if it weren't a government measure, and AFAIK that's a pro-consumer move, instead of the 1 year that US citizens get.
 
The CSAM hashes are of known photos.
Which could just as easily be known photos of terrorist leaders, posters with terrorist slogans, known photos of drug paraphernalia, scans of subversive pamphlets... No, it shouldn't be able to detect your own photos (unless it's a false positive), but there's no reason that the on-device algorithm needs to be modified in order to generate a hash from any photo that would match the hash of whatever images, on whatever subject, were on the list.
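A hypothetical sketch of that point (invented names and a toy hash, nothing to do with Apple's actual implementation): the on-device matcher never knows, or needs to know, what the hashes on the list depict, so widening the scope is a data update, not a code update.

```python
# Hypothetical illustration -- invented names and a toy hash, not Apple's code.

def toy_perceptual_hash(pixels: list[int]) -> str:
    """Stand-in for a real perceptual hash: one bit per pixel vs. the mean."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def should_report(pixels: list[int], blacklist: set[str]) -> bool:
    """The matcher is completely agnostic about WHAT the blacklist represents."""
    return toy_perceptual_hash(pixels) in blacklist

photo = [240, 240, 16, 16] * 4                         # toy 16-pixel "image"
supplied_list = {"0011001100110011"}                   # hashes supplied by "the authorities"
widened_list = supplied_list | {"1100110011001100"}    # same code, extra hashes added

print(should_report(photo, supplied_list))   # False
print(should_report(photo, widened_list))    # True -- nothing on the device changed except the data
```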

In any case, since when was this algorithm going to be open source and available for inspection?
 
  • Like
Reactions: VulchR and gusmula
There should be no expectation of privacy for illegal activities.
We agree that when something illegal happens, proportionate loss of privacy is acceptable. But here the sequence is the opposite: disproportionate loss of privacy for all in order to potentially identify some illegal activities by a few.

The initial idea was that Apple's approach would avoid the loss of privacy. Unfortunately that is not the case, which is why Apple abandoned the concept.
 
Honestly, I also think that Lightning is superior to USB-C because of that pin design, but I didn't see any widespread issue with it on the internet. Also there is no room to bend it, so as long as you don't really mess with it, you will have no issues.

But as for governments dictating to companies, it's a necessity. You may see it as something bad now, but governments are also the ones that pushed warranties, availability of parts for repair, health measures, quality standards…
E.g. Apple now has to offer 3 years of warranty in the EU, something that wouldn't happen if it weren't a government measure, and AFAIK that's a pro-consumer move, instead of the 1 year that US citizens get.
The things you mentioned are apples and oranges. What I'm talking about specifically is hardware. Compelling tech hardware standards like USB-C will inevitably slow down progress. If you disagree, you are myopic. Imagine if the EU had done the same thing 20 years ago with USB-B connectors. We'd be stuck on inferior tech due to government mandate. Some day (probably not too far from now) a tech company or consortium will come up with a connector that is vastly superior to USB-C. But that will get stifled when governments compel things to be static.
 
  • Like
Reactions: gusmula
I obviously don’t know, but I suspect this would have played out differently if Steve Jobs was still the CEO. He probably would have opposed the idea himself and therefore we’d have never had to worry about it. But if not he would have at least foreseen the backlash and not floated the idea.

If Jobs was still the CEO, nothing that requires his sign off would happen. He's dead.

When alive, I'm not sure he'd have opposed it. It was a clever solution to a hard problem and Jobs loved clever solutions to hard problems.
 
It's a feature with neutral intent at best.

The propaganda-munching rubes are precisely the useful idiots who think a company has good intent and isn't a soulless profit machine merely avoiding liability and shoveling server cycles off to our devices instead.
They didn’t have to implement this feature.
The things you mentioned are apples and oranges. What I'm talking about specifically is hardware. Compelling tech hardware standards like USB-C will inevitably slow down progress. If you disagree, you are myopic. Imagine if the EU had done the same thing 20 years ago with USB-B connectors. We'd be stuck on inferior tech due to government mandate. Some day (probably not too far from now) a tech company or consortium will come up with a connector that is vastly superior to USB-C. But that will get stifled when governments compel things to be static.
That’s fundamentally untrue. The EU tried for years to get tech companies to work together and come up with a solution on a voluntary basis. That failed. Multiple times.

So they had their chance to form a consortium and propose standard(s) on their own terms. They refused. Only then did the EU start the process of making mandatory legislation.
 
We all read it. It detected some level of photo manipulation and Photoshop work, so there is some leeway involved. It's not a perfect pixel-by-pixel comparison.
Try reading what they wrote, what I replied, and the white paper again. Then come back with a sensible response.
 
[Attached screenshot: IMG_8066.jpeg]

What part of what I said is wrong? CSAM scanning only recognizes known images. The actual CSAM scanning happens on-device (which I didn't mention), but the release explained that Apple was only scanning images as they were uploaded to iCloud.

There was also a feature that warned when receiving potentially inappropriate images. That one would detect new images, but it's separate from CSAM detection.
“Anything stored on-device would not be scanned at all.”

Literally from the white paper (quoted above).
 
Knives over a certain size (or using a specific method of operation like switchblades) are totally illegal. Because they’re primarily designed to hurt people.

Where I am, knives of any size or method of operation (including switchblades) are totally legal.
 
  • Like
Reactions: uller6
wild to think back to when apple announced they were gonna open this pandora’s box to appease foaming at the mouth reactionaries who just started talking about protecting kids for the first time ever like 3 years ago, but only support means of “protecting” kids that either directly attack the lgbtq community, take autonomy away from kids, or further surveillance like this.

so glad they stopped this, the end goal of this is so transparently not to help kids.
 
  • Like
Reactions: VulchR
The big wigs at Apple and the owner of this website should be extremely ashamed and embarrassed that there are Apple customers and MR members who would rather see the continuation of child abuse and child exploitation than have an effective system put into place to catch such behavior, because that is what happens every time a vile image is uploaded to iCloud: a child continues to be abused. CSAM detection would be able to flag the image, and Apple would be able to report to law enforcement where the image was uploaded from, and the time and date. This would allow law enforcement to then carry out investigations, with the eventual result of the person who uploaded the image being arrested. That person could then disclose where they got the image from, leading to a domino effect of arresting one after the other in the chain of abuse, eventually leading up to the ringleader who abused children and created the image in the first place. But none of that will happen because many Apple customers and many MR members value their privacy far too much. What is the well-known saying? 'If it saves one child's life then it is worth it.' Not for many MR members it does not. Hell would have to freeze over before they agree to any encroachment into their privacy.
oh yeah, anyone who disagrees with you is totally cool with child abuse. what a reasonable position. please. 🙄

also, what’s stopping law enforcement from catching all the abusers now? following your line of thinking, they would just need to catch a single abuser and the same domino effect would happen. is it because they need total access to everyone’s data to catch even a single abuser? sounds like they’re the ones failing here, not the people who don’t want to forfeit their privacy.
 