Thanks for the honesty. I do see what you're saying, but I don't agree that we can divorce those two things (technology and principle) if we're to have a meaningful discussion about this. Apple's proposed process and technology have checks and balances that address many of your concerns, if you would take the time to understand them. I know there are people who do understand the technology and still oppose this, and that's fine. A lot of the discussion on MR, however, is highly emotional and low on facts. (What's new.)
No. Both of those things would be a violation of my privacy. If that were the technology Apple had built here, I'd be objecting to it just as strongly as you. But Apple's so-called 'CSAM scanning' does no analysis of your photo content, no analysis of your on-device behaviour, and no 'reading' of your messages. Are you saying we should judge Apple for something they are not doing? Guilty until proven innocent? Now there's an interesting 'precedent'!
I understand what the word precedent means. I'm not sure what you mean by it though. The precedent of doing image processing on your device? The precedent of referring criminal activity to law enforcement? You realise that tech companies already do these things, right? That they actually have a legal duty to report these kinds of images in their possession? Apple, by doing the processing on-device before the files are uploaded to iCloud, opens the way for stronger privacy measures like end-to-end encryption for iCloud Drive, something they can't do if they process (hash) the images remotely.
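To make that distinction concrete, here's a deliberately simplified sketch. It is not Apple's implementation (which, as I understand it, uses a perceptual hash called NeuralHash, a blinded hash database and threshold cryptography); I'm using a plain cryptographic hash and a Python set purely to show where the comparison happens and why that matters for end-to-end encryption.

```python
import hashlib

# Hypothetical stand-in for the database of known-image hashes shipped inside the OS.
KNOWN_BAD_HASHES: set[str] = set()

def image_hash(image_bytes: bytes) -> str:
    # A real system uses a perceptual hash so re-encoded copies still match;
    # SHA-256 stands in here purely for illustration.
    return hashlib.sha256(image_bytes).hexdigest()

def server_side_scan(image_bytes: bytes) -> bool:
    # Traditional approach: the provider must hold the plaintext photo to hash it,
    # which rules out end-to-end encrypting the photo library.
    return image_hash(image_bytes) in KNOWN_BAD_HASHES

def on_device_check(image_bytes: bytes, user_key: bytes) -> tuple[bytes, bool]:
    # On-device approach: the comparison happens before upload, so what gets
    # uploaded can be encrypted with keys the server doesn't hold. The server
    # only learns about matches (and in Apple's design, only past a threshold).
    matched = image_hash(image_bytes) in KNOWN_BAD_HASHES
    ciphertext = bytes(b ^ user_key[0] for b in image_bytes)  # toy stand-in for encryption
    return ciphertext, matched
```

The whole point is where image_hash runs: put it on the server and the server has to be able to read every photo; put it on the device and it doesn't.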
When you say 'back door' what do you actually mean? I see people parroting those words, but it's often not clear what they are talking about.
I don't find any of this funny. And sorry to be blunt here, but this is more evidence of why you need a basic understanding of the process and technology before getting caught up in all the emotion and outrage. No one is looking at any of your personal photos. For it to get to the stage where a human is involved, you must have uploaded a collection of known child porn images, and if you do, those are the only images the human reviewer sees: low-res versions, just to confirm that yes, you are an offender, so no one gets falsely reported to police. How could any responsible reporting process not have a human reviewer at the end? Please explain this to me.
You have addressed me throughout as though I don't understand or care about personal privacy online, which is quite untrue. Online security and privacy are of paramount importance to me. It's one of the reasons I refuse to install Facebook apps on my phone, and eventually deleted Facebook altogether, despite the personal cost of doing that. I'm an advocate for online privacy and security, and teach friends and family about these issues when I can.
But what I'm seeing here is a lot of anger from people, most of whom don't appear to have taken the time to educate themselves about the technology they're so angry about. This doesn't surprise me, as I've seen what social media does to people—how it stirs them up by feeding incomplete or false information. (Another reason I left Facebook.) I mostly avoid commenting on MacRumors, as I don't find it a positive place for open discussion anymore, but this is an important issue and one that deserves better than getting buried by countless pages of mostly one-sided anger and ignorance.
I think the main difference between the two camps is that your camp argues the technical aspects of NeuralHash and how it won't invade your privacy, while my camp argues that this is dangerous regardless of the checks and balances. I don't think we have to divorce principle and technology, and the reason this is a problem for me is that technology can be changed and tweaked. Some praise Apple for designing this complex system, and so it stands to reason they can design something else entirely, something more intrusive and less complex. That hasn't changed, and it's the technical side I'm taking into account: that tech can later be tweaked and used for different purposes.
What has changed, though, and what is the actual problem for me, is that Apple has now shown willingness to install this on my phone. That's the problem and the precedent I'm talking about: the fact that iPhones will now have scanning software that reports to the outside. Basically, what was sacrosanct yesterday now isn't. I think if this goes ahead, it's over in the sense that the iPhone will be fair game from now on. You may applaud Apple for how this software was designed, and you may well be right that it's impressive, but the technical quality of a solution doesn't necessarily mean it's all going to be good and tamper-proof. What is of utmost importance, far more important than the checks and balances Apple implemented, is Apple's willingness or lack thereof to protect the privacy and security of the device from all outside pressure and inspection.
And since you like to emphasize the technical aspects of this solution, I haven't even mentioned the probability of false positives. Whether it's a million to one or a trillion to one, I don't know, so I won't speculate, but what I can say is: we are supposed to trust Apple with the reliability and integrity of this software? I'm sorry, but Apple can't get their own apps to function without bugs, and iOS has been a demonstration of Apple's lack of competence for the last couple of years. Apple now routinely ships bugs in new devices and new features, recurring bugs that somehow return after being fixed, and bugs that have been there for years without getting a fix. I am not comfortable with Apple guaranteeing that a system like this will be as tamper-proof and as fail-proof as possible.
About the human reviewer part, I'll explain the issue. You focus on the outcome of a review, saying that if the account is flagged, a human will see the photos and if there's a violation, only then will there be prosecution. You are talking about prosecution and I am talking about privacy. I'll repeat what I said in my last post - when someone, a human reviewer, gets to see the photos, that is a violation of privacy. I can't say this more clearly. That reviewer is doing their review based on the account being flagged, which will inevitably happen to people, and inevitably some of those will be false positives. The moment a human reviewer sees the photos, the user's privacy will have been violated because there is no warrant saying a court of law decided that your right to privacy is less important than the interest of the community to prosecute a specific offense that law enforcement has probable cause for. Just out of curiosity - in your opinion, what is the acceptable probability of a false positive? Is one in a trillion good enough and is that a good estimate? With this being a new scanning system, how will Apple gauge this?
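To show why I won't speculate, here's a back-of-the-envelope calculation. Every number in it is made up by me for illustration, the roughly-30-match threshold is just what's being reported, and the assumption that false matches are independent is mine too. The point is how completely the headline 'per account' odds depend on the assumed per-image error rate and on the threshold:

```python
from math import exp, lgamma, log

def p_account_flagged(p_image: float, photos: int, threshold: int) -> float:
    # Chance that at least `threshold` of `photos` images each falsely match,
    # using a Poisson approximation to the binomial (reasonable when p_image
    # is tiny, and assuming false matches are independent).
    # Terms are computed in log space so the huge factorials don't overflow.
    lam = p_image * photos  # expected number of false matches in the library
    return sum(
        exp(-lam + k * log(lam) - lgamma(k + 1))
        for k in range(threshold, min(photos, threshold + 200) + 1)
    )

# Hypothetical: a 1-in-a-million per-image error rate and a 10,000-photo library.
print(p_account_flagged(1e-6, 10_000, 1))   # ~1 in 100 accounts if a single match flagged you
print(p_account_flagged(1e-6, 10_000, 30))  # ~4e-93 once you require 30 matches
```

In other words, the threshold is what buys the impressive-sounding account-level number even if the per-image hash is wrong far more often than one in a trillion, and only Apple knows what the real per-image rate is.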
We can also talk about the actual human reviewer who will be doing this review. Who exactly will this be, and with what background? How will this person have the right to decide whether a user should be reported? I think this will not be one reviewer but several, because it's a tough job and a lot of unpleasant responsibility for one person. So in all likelihood, there will be several people inspecting exactly the photos a user doesn't want other people to see, and they will then have the burden of deciding whether a potential crime should be forwarded to NCMEC for further review or not. Also, what exactly is the point of Apple's human reviewer? NCMEC can do that review as well, and you'd think they'd be more competent at it. So why does Apple have their own people in this chain, when no investigation can begin and no charges can be brought without humans reviewing the case and deciding whether it's CP? This makes sense for Apple only if they expect a good number of false positives, because otherwise, if the system is solid and false hash matches are almost non-existent, their human reviewer seems like an unnecessary part of the process, and the review will in any case be done by NCMEC anyway.
So, you see a responsible reporting process, and I see a reporting process where Apple inserted their human reviewer in an effort to calm suspicions that their software will inevitably produce false positives. If anything, it reassures me less, because it tells me they don't expect the software to be reliable, and the same goes for the fact that they'll need something like 30 matches to flag the account. If this is true, why aren't people talking about the fact that a person who has 25 CSAM-category photos will not be flagged? If this is for the children, isn't that too high a threshold, and why are you praising Apple's system instead of wondering why so many CSAM photos will go undetected? Again, this makes sense only if they expect the system to work poorly and produce a good number of false positives. I fail to see how any of this is reassuring.
About the back door, I don't remember mentioning it (I don't see the post now, it's pages back), and if I did, it was done loosely. Some have said it's a back door, others have said otherwise. I don't know if it should technically be considered a back door (to my understanding, it shouldn't), but I think the term is mostly used as a figure of speech, to refer to a way of finding out the contents of a device rather than a way into the device itself.
Lastly (sorry, this is already too long): yes, there is emotion here, and I think that's normal. This is a sensitive topic. I don't use any social media at all, not even LinkedIn, which is stupid of me because I own a private business and don't advertise there.
Those odds are nonsense. There are people on Reddit who have already dug into this feature from a programming perspective and produced collisions: completely innocuous images that it would falsely flag as matches.
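I can't vouch for any particular Reddit result myself, and NeuralHash is a neural-network hash rather than the toy 'average hash' below, but the basic reason collisions are possible is easy to show: anything that boils a whole photo down to a short code throws away almost all of the image, so very different images can end up with the identical code, and the matching is done on the code, not the pixels.

```python
def average_hash(pixels: list[list[int]]) -> int:
    # Classic "average hash": each pixel becomes a 1 bit if it is brighter than
    # the image's mean brightness. A toy stand-in for a perceptual hash.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# Two 2x2 "images" that look nothing alike...
bright_dot_on_black = [[255, 0],
                       [0,   0]]
nearly_uniform_grey = [[90, 80],
                       [80, 80]]

# ...yet produce the identical hash, so a matcher working on hashes can't tell them apart.
print(average_hash(bright_dot_on_black) == average_hash(nearly_uniform_grey))  # True
```

Whether Apple's specific hash is easy or hard to collide in practice is a separate question, but collisions themselves are not some exotic impossibility.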
That's just it - there is nothing about catching the people who view this stuff that stops the people who make it. That's like saying going after heroin users will stop cartels in Asia that are producing heroin. They're not related activities other than they're both about heroin (or child porn).
This is important. We don't know the odds on this, but what we can reasonably expect is for Apple to misrepresent the odds somewhat to strengthen their argument, because they are the ones who will be coming up with the number. It will be the same as when, for any hardware or software bug that gains enough traction that they have to respond, they write that "a very small number of users experienced..."
About the second paragraph, true. There is so much to unpack here, and those who support this system don't address any of it. Why is the threshold at 30, and how come they're fine with the fact that the system will let those who possess a smaller quantity of CP slip under the radar? Is it legal to possess fewer than 30 CSAM photos, but 30 is where you cross the line into a crime? And why does this focus on users of the material instead of the creators?