For my computer I run Linux and have used it as my main operating system for 13 years. I currently own an original iPhone SE that I had been planning to upgrade to a larger iPhone 13. I also own an iPad (2017).

However, after what Apple has announced recently, I may hang onto my cell phone for a while longer to see what happens with Apple. I might move to an Android phone and de-Google it. When I replace my iPad, I may well go to a Microsoft Surface and install Pop!_OS Linux on it.

Linux still requires a certain degree of geekiness that is beyond many people. So does de-Googling an Android phone. This might work for you, but less tech-savvy people will still update to iOS/iPadOS 15 and Monterey. A better strategy, IMO, would be to bring this issue up with your friends so that, if they see a problem with it, they don't update. If the update numbers are significantly lower than usual, Apple will hear your message. Otherwise, they are too powerful and comfortable by now to bother, and there hasn't been enough media buzz yet for most people to notice.

The whole thing puzzles me: if Apple felt so strongly about it all along, why not announce it during WWDC? Why push the CSAM database onto users' devices instead of scanning in iCloud? It is still an image and not a self-spreading virus, after all. I would like these questions to be addressed more in the mass media, as was the case with Apple Maps' launch a while back. Back then, however, it was Apple's competitors (Google et al.) poking fun at Apple. This time it is Apple downgrading its own privacy score, with the Android crowd scoring much worse anyway. It is a tricky situation.
 
  • Like
Reactions: BurgDog
It might lead to a mismatch though. And if you sent these photos to someone (for instance via email), they might well end up in the CSAM database.

In their testing they got 3 false positives out of 100 million images.
 
My main point is Apple is starting out with child porn, but you're mistaken if you think it ends there, or if you think you will even be aware of what they are looking for once the system is in place… People who think this program will not expand are ignorant of history and of the fallacy of the human condition in general. This is a red line… But even if millions leave Apple over it, it's just a rounding error to them at this point; they won't reverse course imho.
 
In their testing they got 3 false positives out of 100 million images.
Can you even imagine how devastating it would be if you were one of those 3 false positives, not to even mention the new viruses that could be developed to plant images onto unsuspecting innocent people's devices?
 
I'm sure p0rnhub would love it if people were that addicted to 'normal' porn; they would make billions. Your facts seem highly distorted. Got any links to back up this rabid addiction to any porn at all?

Isn't there tons of free porn on p0rnhub? I'm sure they rake in plenty of ad revenue. You can google "porn addiction" and find tons of info. Seems like a lot of you are making an issue of the word "addiction." I thought this was common knowledge, but apparently not (and of course many addicts can't admit they're addicts). Let me put it this way: if someone is into porn enough to download videos or images of it to their computer, do you think they have fewer than 30 videos/images, or 30 or more? That's all I'm saying - that Apple's 30-image threshold is WAY conservative and is simply a safeguard to dramatically decrease the chances of accounts being falsely flagged.
 
  • Like
Reactions: hans1972
Can you even imagine how devastating it would be if you were one of those 3 false positives, not to even mention the new viruses that could be developed to plant images onto unsuspecting innocent people's devices?

Why would it be devastating? All matches are manually reviewed for accuracy, and if they are false positives, they'd be immediately dismissed and nothing would happen. Also, you would need 30 false positives before Apple would even investigate anything, and they say the chances of that happening are less than 1 in 1 trillion per year.
 
Isn't there tons of free porn on p0rnhub? I'm sure they rake in plenty of ad revenue. You can google "porn addiction" and find tons of info. Seems like a lot of you are making an issue of the word "addiction." I thought this was common knowledge, but apparently not. Let me put it this way: if someone is into porn enough to download videos or images of it to their computer, do you think they have fewer than 30 videos/images, or 30 or more? That's all I'm saying - that Apple's 30-image threshold is WAY conservative and is simply a safeguard to dramatically decrease the chances of accounts being falsely flagged.
1 false positive is too many for such a severe accusation… If it flagged you, then nothing else matters; you're toast. Some crimes are just as bad to be accused of as to be convicted of, and the media would eat you for lunch.
 
This CSAM scan can't rely on pixel-perfect analysis, otherwise it's going to be weak. As for the "category" of pictures, with enough slices it is still possible to recognise object categories in said photo. It does require more local storage to hold the category learning data, but that's mostly it imo.

The CSAM detection system is built to avoid this. When Apple tested 500,000 porn images they got zero matches.

The algorithms in Photos and the new nudity detection in Messages are much better at doing this.
 
Why would it be devastating? All matches are manually reviewed for accuracy, and if they are false positives, they'd be immediately dismissed and nothing would happen. Also, you would need 30 false positives before Apple would even investigate anything, and they say the chances of that happening are less than 1 in 1 trillion per year.
And you're not even addressing nefarious hackers who could add images to your account. Hackers can already take photos from careless users; no doubt they can add them too… There might even be a service to do this to your enemies for a price.
 
That is the other thing. What about this scenario?

Newly married couple in their early 20s. They both look young, 16 or so. They spend most of their time long distance and like to share these adult images with each other via iMessage. How will the person at Apple determine that the wife/husband is of legal age? Do they have access to everyone's driver's license to make that determination? Likewise, how will they protect 16-year-olds who look like they are 25?

Apple tested their system against 500,000 porn images with zero false positives. The probability they will have on the order of 30 matches seems pretty low.

Also, Apple knows things about them. If the credit card used for their account is in their own name, they're probably over 18. You provide a birthday to Apple with your Apple ID. No young person lies about their age downwards.
 
The hacking community could probably show everyone just how easily they can photobomb accounts… If that happened right now, it would be the best thing to put a stop to this; the public would turn on Apple overnight if someone demonstrated it and the media picked it up.
 
  • Like
Reactions: So@So@So
With Google Workspace you have better control over privacy than with Apple products (especially after iOS 15). Sure, you end up paying over $100 a year to get that privacy with Google but no one said it would be free or even cheap.

Are you saying with Google Workspace they don't scan your Gmail, Google Drive and Google Photos for CSAM?
 
And you're not even addressing nefarious hackers who could add images to your account. Hackers can already take photos from careless users; no doubt they can add them too… There might even be a service to do this to your enemies for a price.

You're REALLY reaching to try to have a problem with this. Besides, they're already scanning the photos in iCloud, so why has this not been a rampant problem if your fear is so credible?
 
I just think some of this stuff is creepy. Imagine a 12-year-old gay boy seeing a nude on the internet and his parents getting notified about it. As for the CSAM scanning, even if 1 person got wrongfully flagged in 100 years, it would still be too much. I would sue Apple if I knew I had been flagged.

It only works if the parents have turned the feature on and the child views such images in Messages, which means someone sent them. It won't catch this boy surfing the Internet for nudity or pornography.

If you have bad parents, you will suffer for it.
 
That is a policy restriction, not a technical limitation. While I fully expect that policy to be maintained for the foreseeable future in the United States (incurable optimist that I am), I expect Apple to fold like a cheap suit when they're pressured by the CCP, Saudi Arabia, or repressive-regime-of-your-choice. This policy will be different there. We know this is true based on Apple's past capitulations to such regimes.

A technical limitation isn't set in stone either. Apple controls the code and can in many circumstances easily change technical limitations too.

You really need to have a large degree of trust in Apple when they design and control the whole system, from the CPU, Secure Enclave and other hardware to the firmware and the operating system.
 
  • Like
Reactions: BurgDog
The hacking community could probably show everyone just how easily they can photobomb accounts… If that happened right now, it would be the best thing to put a stop to this; the public would turn on Apple overnight if someone demonstrated it and the media picked it up.
Then there will be a million people blaming the victim for failing to physically secure the device, or for downloading pirated content somewhere, or whatever, rather than blaming the system. Ironically, they kind of have some grounds for that.
 
Can you even imagine how devastating it would be if you were one of those 3 false positives, not to even mention the new viruses that could be developed to plant images onto unsuspecting innocent people's devices?
You would have to have 30 false positives to even get flagged.
 
Apple tested their system against 500,000 porn images with zero false positives. The probability they will have on the order of 30 matches seems pretty low.

Also, Apple knows things about them. If the credit card used for their account is in their own name, they're probably over 18. You provide a birthday to Apple with your Apple ID. No young person lies about their age downwards.
Admittedly I'm not a cyber security expert, but why does Apple decide to allow people in possession of CSAM 30 images of child pornography before acting? I would assume that if the system is so perfect, then just one example of CSAM should be reported, no?
 
  • Like
Reactions: BurgDog
It only works if the parents have turned the feature on and the child views such images in Messages, which means someone sent them. It won't catch this boy surfing the Internet for nudity or pornography.

If you have bad parents, you will suffer for it.

Wouldn't this mean that Apple, or its software, is ultimately able to look into the content and attachments of emails? EVEN if Apple is not notified, this would seem to me to mean that effectively "end to end" is dead... whether today, tomorrow or next year. The ability to "sneak and peek" is now on the phone.
 
  • Like
Reactions: clunkmess
Hamming distance is one of the more common implementations for edge perceptual hashing. It's a dumb way to do it, but it's simple and efficient. The real problem is, Apple isn't saying, and you're just guessing based on words they've altered and tweaked over the last week. We just have to trust them. I'd love to see the real implementation laid out and verified, merely for edification.

In any case, needing 30 matches, plus a secondary on-server algorithm, to hit their 1-in-a-trillion false positive target is not confidence inspiring.

Apple says they are using vectors in a two-dimensional plane, and if the cosine of the angle between two vectors is close to 1, they are considered the same image.

Apple tested this with 100 million regular images and got 3 false positives.

To me this lends credence to the '1 in 1 trillion accounts per year' number.
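
For what it's worth, here is a rough sketch of the two matching ideas being tossed around in this thread: comparing bit-string perceptual hashes by Hamming distance, and comparing descriptor vectors by cosine similarity. The hash values and vectors are entirely made up for illustration; this is not Apple's NeuralHash.

```python
import numpy as np

def hamming_distance(hash_a: int, hash_b: int) -> int:
    # Number of differing bits between two perceptual hashes;
    # a small distance is treated as "visually similar".
    return bin(hash_a ^ hash_b).count("1")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two descriptor vectors;
    # values close to 1.0 are treated as a match.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 16-bit perceptual hashes of two versions of the same image:
h1 = 0b1011001011110000
h2 = h1 ^ 0b0000000000000100   # one bit flipped, e.g. after recompression
print(hamming_distance(h1, h2))  # 1 -> a match under any small threshold

# Hypothetical descriptor vectors produced by some neural network:
v1 = np.array([0.12, 0.88, 0.45, 0.31])
v2 = np.array([0.11, 0.90, 0.44, 0.30])
print(round(cosine_similarity(v1, v2), 4))  # ~0.9997 -> "same image" if above a cutoff
```

Neither snippet is Apple's actual implementation; it just shows why schemes like this tolerate cropping and rescaling in a way a cryptographic hash never would.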
 
A contravention of your Fourth Amendment rights? Reasonable search and probable cause, or are we all treated as guilty from the outset? I don't see how Tim Cook will ever be able to face an audience and talk privacy again; handing iCloud data centers over to the Chinese and now thrashing through innocent users' phones to find the one-in-a-billion offenders. The hypocrisy is outstanding...
Apple is a private entity; they can voluntarily give any or all of the records they have access to to law enforcement. Even if you had a "contract" where they promised not to, it would be void. The Fourth Amendment only prevents the government from forcing an entity that resists to hand over records without a court order. An organization can always volunteer the information.
 
  • Like
Reactions: hans1972
Weird, because I haven't seen a single coherent explanation of why it's a problem if your own device scans your photos for child porn, and only does so if you are trying to upload them onto Apple's servers, and only produces information to Apple if you have at least thirty child porn photos that you are trying to upload.
1. It's doing scanning without the user's express permission to do so.
2. It does not matter if it's 1, 30 or any other number of photos. Police warrants exist for a reason. Apple is clearly trying to bypass the accountability checks that warrants provide.
3. It does not matter that Apple is only searching hashes and not actual photo data (well, not initially at least). The fact that Apple is searching at all is the issue, and that violates point 2.
4. This thing Apple has put into our phones is spyware.
5. Even if we do not use iCloud Photos and turn all that off, the spyware is still there on the device. It's just not being triggered into action.
6. This whole mess opens up two very real issues. The first is that Apple might not stop here: they could add more things their spyware searches for. The second is that its existence sets a precedent that others might use (or expand upon) in the future.

We are all against crimes against children. However, using our hatred of crime as an excuse to sneak spyware into our iOS devices is morally (and possibly legally) wrong. I hope someone sues Apple in court for this spyware mess.
 
Apple tested their system against 500,000 porn images with zero false positives. The probability they will have on the order of 30 matches seems pretty low.
There are about 1 billion iPhone users.
Apple could easily be scanning 500,000 images a day once this rolls out.

It's also very easy to make the false assumption that, if the chance of one of your photos being a false match is (say) 1 in 1 million, then the chance of two of your photos matching is one in a trillion, as if they were independent events like throws of a fair die.

False matches are not going to be independent: Apple's own documents explain that their "NeuralHash" system is designed to produce the same hash for visually similar images so that it won't be fooled by cropping, scaling, changes in image quality etc. So, let's say, by sheer bad fortune one of your photos triggers a false match because some element of it matches a known CSAM image. Given that one of your photos contains something that triggers a match, it is highly likely that you have other photos in your collection containing the same/similar visual element: even before digital made it free, a keen amateur photographer might use most of a 36 exposure film trying to get a single shot. Or maybe the false trigger is a poster on your living room wall... So that "one in a million" chance of a single random match in a random sample can easily turn into 'more likely than not' for subsequent matches.
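
To make that concrete, here is a minimal sketch of the naive calculation, with made-up numbers (the per-photo rate and library size are assumptions, not Apple's figures). It only holds if every photo is an independent trial:

```python
from math import comb, log10

p = 1e-6      # assumed per-photo false-match rate (illustrative only)
n = 10_000    # assumed number of photos in one person's library
k = 30        # Apple's stated reporting threshold

# Under the independence assumption, P(at least k false matches) is dominated
# by the "exactly k" binomial term when n*p << 1:
p_threshold = comb(n, k) * p**k * (1 - p)**(n - k)
print(f"naive estimate: about 1e{round(log10(p_threshold))}")  # roughly 1e-92

# But if matches are correlated (burst shots of the same scene, the same poster
# in the background of dozens of photos), the binomial model simply doesn't
# apply, and the real probability can be vastly higher than this suggests.
```

Any headline figure of the "1 in a trillion" kind depends entirely on whether that independence assumption holds, which is exactly the issue here.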

Now, Apple ought to know that. Unfortunately, there are too many examples of people who "ought to have known that" making this common stats error, with sometimes disastrous consequences. Possibly the worst - different context, same mathematical error - being this:


Or, a slightly different, but more general fallacy when it comes to confusing "1 in X chance of a random match" with "1 in X chance of a false accusation" is:


Does this prove that Apple are evil? No, but it means that people should be very critical and ask questions, and that Apple need to be very transparent about questions like "how was the 30 matches = 1-in-a-trillion calculated?" when they wouldn't be the first people to get such a calculation wrong. Also - exactly how is the human checking going to work, and will it go beyond rubber stamping the fact that the computer has found 30 matches (are the checkers even going to have the original CSAM images that were supposedly matched?)

Also, PSA, I really wish people would stop talking about "hashes" as if they were something fundamentally safe, secure and anonymous and somehow completely different from (e.g.) tagging faces in photos. "Hash" is a general computing term that covers a multitude of techniques and applications. Cryptographic hashes - the sort people are most likely to have encountered in password checking, verifying downloaded documents, or in connection with cryptocurrency - are designed so that the slightest change in the source, even an imperceptible one, will give a different hash. They would be virtually useless here, because changing a single pixel in an image would change the hash and prevent detection. The "NeuralHash" system Apple is using for CSAM is a "perceptual hash" that is designed to produce the same hash for images that are "visually similar" but might have been e.g. cropped, scaled or re-compressed (and Apple's document says 'e.g.', so that's not an exhaustive list). Does that prove it is unreliable? No, but it is night-and-day different from other types of "hash" and could just as validly be described as "image recognition", having more in common with face tagging than with (say) using a crypto hash to validate a file.
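
To see that night-and-day difference in behaviour, here is a toy comparison. The "perceptual" side is a crude average-hash I made up for illustration, not NeuralHash: a cryptographic hash changes completely when a single byte changes, while a perceptual-style hash is deliberately stable under small edits.

```python
import hashlib
import numpy as np

# Cryptographic hash: one changed byte gives an unrecognisably different digest.
data = b"...raw image bytes..."
print(hashlib.sha256(data).hexdigest()[:16])
print(hashlib.sha256(data + b"\x00").hexdigest()[:16])

# Toy "perceptual" hash: threshold each pixel against the image mean.
# Small global changes (brightness, mild noise) leave the bits unchanged.
def average_hash(pixels: np.ndarray) -> str:
    bits = (pixels > pixels.mean()).astype(int).flatten()
    return "".join(map(str, bits))

img = np.array([[10, 200,  30, 220],
                [15, 210,  25, 215],
                [12, 205,  28, 225],
                [11, 198,  31, 230]], dtype=float)

print(average_hash(img))               # 0101010101010101
print(average_hash(img * 1.02 + 1.0))  # identical hash despite the brightness tweak
```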
 
For the umpteenth time, some posters here are trying to use the red herring of the technicalities of Apple's intended software. IT IS NOT ABOUT THAT.

Child abuse is a very convenient excuse, and this will not aid the fight against child abuse or child pornography.

This is about SURVEILLANCE.

Some posting here give the distinct impression they want to go down the route of discussing the merits of hashes/child abuse/child pornography, a subject on which I suspect 99.999% of the population agree, but it's obfuscation, because this is not about that. It's about Apple software on OUR EQUIPMENT where we have no choice, because no doubt at the first security flaw some would not be covered by insurance if they didn't upgrade, and within that upgrade is surveillance. No amount of oscillation or prevarication changes that, nor does citing stats about the error rate of CSAM detection; that's a complete red herring based on what we are told is intended NOW, when even Apple employees have clear reservations about the system being abused.

Those seeking to argue it's all OK have not yet suggested why Apple doesn't reduce its overheads by running ONE set of software on its own servers, rather than effectively forcing a billion-plus Apple users to have potentially intrusive software on devices they own, using their processor capacity and even their power.

What is the cost for 1,000,000,000 users in terms of electricity usage alone, in even downloading the software? Once it is on our systems Apple can modify it, and history shows us the road to hell is paved with good intentions.

Keep it on the server, where Apple have the legal right to do so. There is no need or rational reason to put this software on OUR devices unless the suspicion is that it will receive future modifications.

Albeit we are told what the software is currently designed to do, once it is in our systems, modifications bypass System Integrity Protection, giving carte blanche for more sinister use. Safety-wise, I, and I suggest many others, would prefer it not to be on our systems at all.

Normally, when Apple introduces a feature on the iPhone, it's possible to switch it on or off through Settings, but not this proposed software. Even if we switch to another cloud provider, even via a VPN, the embedded software remains in our systems along with any capability it has at the start, or any capability it finally ends up with.

There is no skirting around the fact that this is surveillance, and constant posts arguing that it is safe surveillance or necessary surveillance don't alter that.

Apple should put out PR announcing they will withdraw the on-device idea and do it like all the other companies do: off our hardware, on the company's own servers, removing the suspicion.

Apple have had to change their account of this several times now.

Not even the newspapers are buying Apple's explanation:


Likewise, the idea that this software on our own devices confers more privacy makes no sense, as Apple could run the neural hash server-side.
 