@zkap: Hey, first of all, thanks for the respectful and thoughtful discussion. I appreciate it.
Again, there are really two issues here: the ‘install this on my phone’ part, and the ‘reports to the outside’ part. I’ll start with the latter. When someone uploads content to iCloud, that content is in Apple’s possession. It is illegal to possess CSAM, and the issue is deemed so serious that mandatory reporting of child abuse overrules other rights a person would ordinarily have, such as doctor-patient confidentiality (at least in some countries). Now I’m no lawyer, so I can’t really say what Apple’s legal responsibilities are here, but they have been very poor reporters up until now, and part of the problem is a technical one. How do you scan for abusive content when it’s all encrypted? This brings us to the first issue. The obvious solution is to assess an image before it gets encrypted. By doing this on the device, Apple could actually improve privacy and security for all its law-abiding customers by introducing end-to-end encryption for iCloud uploads. That’s only speculation at this point, but it seems perfectly reasonable.
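To make that concrete, here’s a very rough sketch of the general ‘check before encrypting’ idea. It is not Apple’s implementation, and every name and type in it is made up; it’s only meant to show where in the pipeline the assessment would sit.

```swift
import Foundation

// A deliberately simplified sketch of "assess on device, then encrypt and upload".
// Every name and type here is invented for illustration; Apple's real design uses
// a perceptual hash (NeuralHash), a blinded hash database and "safety vouchers",
// none of which is reproduced here.
struct PerceptualHash: Hashable {
    let value: UInt64
}

func perceptualHash(of imageData: Data) -> PerceptualHash {
    // Placeholder: a real perceptual hash is robust to resizing and re-encoding.
    PerceptualHash(value: UInt64(truncatingIfNeeded: imageData.hashValue))
}

func encryptForUpload(_ data: Data) -> Data {
    // Placeholder for end-to-end encryption of the photo itself.
    data
}

struct UploadPackage {
    let encryptedPhoto: Data
    // In the real design the device never learns whether a photo matched; the
    // result is hidden inside a cryptographic voucher. A plain Bool is used
    // here only to show where the check happens in the pipeline.
    let matchedKnownDatabase: Bool
}

// The key point: the plaintext is only ever inspected locally, and only the
// encrypted copy leaves the phone.
func prepareForICloud(_ imageData: Data, knownHashes: Set<PerceptualHash>) -> UploadPackage {
    let matched = knownHashes.contains(perceptualHash(of: imageData))
    return UploadPackage(encryptedPhoto: encryptForUpload(imageData),
                         matchedKnownDatabase: matched)
}
```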
Look, I know I’m talking technology again, and this is where we get stuck. I simply can’t see how we can discuss the pros and cons of this without talking about the way it’s been designed and why. It’s technology that both threatens and protects our digital privacy—we really can’t get away from that.
That’s really not a new phenomenon. I’ve been around for a long time now, and Apple has always had its issues. Snow Leopard was probably the pinnacle of reliability for me, and I would love to see that restored, but I also acknowledge that their OSes are far more complex than they used to be, and that is going to come with some issues, programmers being people and people being fallible.
It’s a valid concern though. Apple does have its work cut out convincing us all that they have got this one right.
Yes, it is a violation of privacy. You’re quite right. If I knew (and I may never know) that someone at Apple saw 30 of my private photos, I would be unhappy about that. But would my life be significantly impacted by it? No. If I knew that the technology had significantly reduced the spread of CSAM, I would accept that personal cost. (See also my previous comment to another commenter on the same topic.)
Good question. I think if innocent users were regularly getting flagged, that would be a serious failure of the system, and one Apple should be forced to address or else shut it down. I guess we can agree on that?
What would be acceptable to me personally... a single-digit figure per year, maybe? Ideally, though, I think it should be less than one per year on average.
I do think Apple should be accountable and transparent here, and report on false positives if and when they happen.
Well, either way, a human has to get involved at some point, even if you have a layer of AI before that. I don’t think it’s unreasonable for Apple to want to take responsibility for the initial review, but you make a fair argument for them handing it over too.
People need to understand that no hash function is perfect. Some hash collisions are unavoidable. Did you see my earlier comment about that? So you do need to set a threshold for the number of matches. But yes, I agree that 30 seems too high, as I also said earlier. This tells me Apple is erring on the side of privacy, whereas you see a premature admission of failure. A case where they’re damned if they do and damned if they don’t?
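To put rough numbers on why a threshold exists at all, here’s a back-of-the-envelope sketch. The per-photo collision rate and library size are assumptions I’ve picked out of thin air, not Apple’s published figures; the point is only that requiring many independent matches makes a purely coincidental flag astronomically unlikely.

```swift
import Foundation

// Back-of-the-envelope only: the collision rate and library size below are
// made-up assumptions for illustration, not Apple's published figures.
let collisionRate = 1e-6     // assumed chance a single innocent photo collides with the database
let libraryPhotos = 10_000   // assumed size of an innocent photo library
let threshold = 30           // matches required before an account is flagged

// Log of the binomial coefficient C(n, k), computed via lgamma to avoid overflow.
func logChoose(_ n: Int, _ k: Int) -> Double {
    lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
}

// Probability of reaching the threshold by pure coincidence (binomial tail),
// with each term computed in log space to avoid underflow.
var tail = 0.0
for k in threshold...libraryPhotos {
    let logTerm = logChoose(libraryPhotos, k)
        + Double(k) * log(collisionRate)
        + Double(libraryPhotos - k) * log(1 - collisionRate)
    tail += exp(logTerm)
}
print(tail)  // roughly 4e-93 under these assumptions, i.e. effectively never
```

Of course, that only covers accidental collisions; deliberately crafted collisions are a separate question.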
You’re right. Some emotion on both sides is perfectly understandable. It’s when emotion overtakes rational debate that there is a problem. People set fire to 5G towers because they are angry, but it’s an anger based on fear based on misinformation. I could use many other examples which would take us into the realm of politics, but the moderators will start deleting posts if we go there. (Believe me, I know! 🙂)
Yeah, likewise, man. I think the discussion improves with the back and forth, as you can get a better picture of the other side's arguments after several replies.
About the technical side of things, I think there are two sides to the coin. I agree we obviously can't ignore the technical aspect of the issue, as we're talking about technology. I don't focus on it for two reasons: 1. I don't have knowledge in that field, so I can't get into it, and 2. any tech is made by people, and this is why some of us worry; we know that Apple can just create a different system for other purposes. I don't object that strongly to this particular scanning system for this particular purpose. I mean, I do object to it, because I don't like it and I think we can poke holes in it and discuss the many questions that need answering, but that in and of itself is not the core of the problem. For the most part I object to the path we're taking, because I am convinced they won't leave it at that. In my opinion, there is absolutely no way they won't come up with a different scanning system for a different purpose later, and this is an introduction to a new paradigm where the iPhone isn't what it was.
They are easing us into this, and it's why people say "slippery slope"; it's why Snowden said, "If they can scan for this today, they can scan for something else tomorrow." I believe the core message is the same. Be it a tweak to the CSAM technology or something else, the conviction is that this will be developed further. Apple even said so themselves: it's in the announcement that this technology will be improved and expanded over time. Imagine how concerning those words can be if you are wary of this tech. This is a discussion to be had. What are they going to expand the technology into? How else can they do it? Does this mean a better system later (which doesn't bode well for their confidence in the current one), or do they plan to include other parts of the iPhone besides just photos?
I believe the only real chance we have of resisting this (at least for some time still), if there is any chance at all given that it's Apple we're talking about, is if we do it now, when they first begin to roll it out. If it goes through now, resisting it becomes hopeless, because next time there will be even less opposition, until it becomes business as usual. When you ask the Chinese what they think of WeChat, they'll say it's perfectly fine if you don't do something illegal. This is why technology, while obviously a big part of it, isn't at the core of the discussion for me; Apple's position and willingness to compromise on this is. So your views on it from a technical standpoint may be completely correct; it may actually be praiseworthy what Apple have come up with in a technical sense, I don't know. Even if you could convince me that the system has as few shortcomings as can reasonably be expected of a piece of tech, it wouldn't calm me on the other aspect of the issue, because I absolutely do not see any way that governments and agencies will resist tampering with the iPhone now.
About bugs, you're right that they're always there; they are an unavoidable part of tech, but there are still degrees. One manufacturer will have some bugs that don't impact any significant function, while another will have crippling problems. Apple is somewhere in between, and I have to say their software lately doesn't inspire me with confidence. Allow me to unpack something here which doesn't really concern my main gripe described above, but rather some details of how this may actually be implemented. As @smoking monkey says below, certain settings can be turned on and off with a software update. I've had this (as far as I know) happen with Contacts in iCloud (switched off without me doing anything) and automatic iOS updates (switched on instead of being off as always). You can see where I'm going with this.
What happens when someone realizes their iCloud settings have Photos turned on when they didn't want it turned on? Remember that Apple attempted to calm people by saying you can opt out if you turn off iCloud Photos, as is regularly emphasized in this thread. Reasonably assuming that Apple can't eliminate all bugs around iCloud activation, one way I see around this is to make iCloud Photos activation require authentication, biometrics or a passcode. That way, the system won't allow a bug to upload Photos to iCloud, and you'd need to opt in by confirming your identity. But then, what does Apple write in the prompt? Let's assume that Apple does the logical thing and states the actual reason the prompt is there to begin with. What paedo would then go ahead with the upload? Any person who possesses CSAM will either cancel the activation, or they are too stupid to use any tech to begin with. Imagine Apple saying "You can opt out of this by choosing not to activate iCloud Photos, unless a software bug changes your setting for you." That's a disaster. On the other hand, include a prompt and you're basically telling paedos who can read that they shouldn't tap Activate.
So this brings me to the next question, and it ties in with your argument about reducing CSAM: who are we catching with this tech? It seems to me we are not catching creators, only users of the content, and not just any users - only the stupidest ones. This is why I don't see a reduction in CSAM: the more we talk about it, the more traction it gets and the smaller the chance of actually achieving something. To address your point, I agree, I would consent to a reduction in my privacy if I thought the following: 1. there was a reasonable chance of reducing CSAM even slightly, and 2. we aren't going anywhere else with this and my consent will not open the door to other things. Like I said, you may be right that this is, tech-wise, an impressive feat by Apple, but everything else concerning this roll-out is most definitely a dumpster fire, and it's why I have very low confidence on both points.
About false positives, I'm going to do an exercise in logic. I may be wrong, but if Apple says you need 30 matches to be flagged... doesn't this mean that you need 30 matches to get an average of 1.00 or more actual CSAM images? I mean, usually you have these margins for a reason, like the ±4% statistical error in polling, or the ±10 km/h error in measuring driving speed. This is a result of empirical data. So when Apple says 30 matches are needed, this is chilling to me. As far as I can tell, it says one of two things: 1. either Apple is fine with fewer than 30 CSAM images (you have CP, but you did not cross our arbitrarily set threshold where we would think you have A LOT of CP, so we'll let it slide), or 2. they determined through testing that, on average, 30 matches mean someone will have one or more actual CSAM images. Option 1 cannot possibly be true, so while I'm risking a false dichotomy, this leaves us with option 2. And following through with the logic, doesn't this mean the hashing system cannot be trusted, because this seems like an unreasonably high error rate? Because ultimately that's what it is: 29 images is the error rate. What else could it be (really asking)?
This is already long, most of this isn't even in reply to something you said, just putting thoughts into text because there's a lot to unpack on this subject. Two more things.
Regarding the human reviewer: I read in one of the links posted here (or in the other thread), about the guy who tested the hashing system and found vulnerabilities, that the point of the human reviewer on Apple's side is that it eliminates the need for a warrant on the government's side. Basically, if Apple as a private entity conducts a search while providing a service to the user, it wouldn't be enough that the hashing system turns up a match; a person would have to review the match. If they do it that way (human reviewer on the private entity's side), then the government can just repeat the same exercise and would not need a warrant, as they'd just be verifying the findings of the private entity who made the report (as they should). Otherwise, if Apple just forwards the information that a match has come up, the government would still need a warrant. I have no idea if this is true; they didn't cite a source, and this is a legal question whereas the rest of the posts concerned the technical aspects, so I don't know how reliable it is. I live in the EU and don't know US law, but if this is actually accurate, it would explain some things.
Lastly, when I read "5G," I thought to myself "Could this be my 5G? Am I going to the same loony school as the 5G nuts, just taking a different class?" I think and hope not, because at least I didn't focus on the technical side of things...
They've done that before! I reckon a couple of times Apple have turned on iCloud Photos on me after a major update. I hate that Apple resets settings after a major update. I def know that Photo Stream was turned on for me recently after an update when I 100 percent didn't have it on. I will say, though, that it seems much better than it was several years ago.
In saying that, I'm not disagreeing with you. I reckon this is a really complicated issue that is way above my 2nd rate, PI pay grade. Honestly though, I'm more concerned with the fact I'm two months behind on the rent on this lousy, third floor office in a building at the ass end of town. So I pour myself a drink, it's 10:53am, almost midday. The first drink of the day always tastes the best. It's like your mind and body forget you're an alcoholic and you can actually taste the bourbon instead of craving it. I flip cards in a game of solitaire. Neat lines of red and black. Just like my life used to be, all my ducks in a row. But now my life is messy, so I pull the drawer and linger on the only clean thing in this office, my iPhone. I contemplate the easy way out, slam the drawer shut and get an Android, but I got unfinished business. So I grab the iPhone from the drawer, turn it on and marry myself to another day in purgatory.
I think it's a bug; the settings sometimes change, but rarely.
About the 11 am drink, how did it progress over the years? 5 pm, then 3 pm, then you stop minding the clock or what?