You seem to be assuming that a corporation must only ever act in its own selfish interests and the interests of its shareholders. I have challenged that very narrow view of capitalism on MR before. Apple is run by people, and people do sometimes have social consciences!

[...]

I didn’t know that. Maybe they just want to make the world a better place? How naive of me to even suggest that possibility!
Bahahahahahahahaha, uhm, okay.

In saying that, I'm not disagreeing with you. I reckon this is a really complicated issue that is way above my 2nd rate, PI pay grade. Honestly though, I'm more concerned with the fact I'm two months behind on the rent on this lousy, third floor office in a building at the ass end of town. So I pour myself a drink, it's 10:53am, almost midday. The first drink of the day always tastes the best. It's like your mind and body forget you're an alcoholic and you can actually taste the bourbon instead of craving it. I flip cards in a game of solitaire. Neat lines of red and black. Just like my life used to be, all my ducks in a row. But now my life is messy, so I pull the drawer and linger on the only clean thing in this office, my iPhone. I contemplate the easy way out, slam the drawer shut and get an Android, but I got unfinished business. So I grab the iPhone from the drawer, turn it on and marry myself to another day in purgatory.
I'm not a hard-boiled gumshoe, but I feel ya. When I did my first startup and was in my early 20s I noticed that all this stress and pressure magically went away and didn't matter after I had drinks with dinner. Being a logical and rational person, I thought to myself that obviously everything would be perfect forever if I just started drinking when I woke up! This worked out great, until it didn't, and then I became an alcoholic, but now I'm not. Life is full of experiences, and you too can enjoy all of them if you just show your papers and prove you've been vaccinated at least 19 times and your iPhone indicates you're not participating in any wrongthink. Just... have a positive attitude like @kalsta and enjoy the all-singing, all-dancing crap of the world ... you have an iPhone! It's just like happiness, only different. Your iPhone is alive -- albeit lacking Samsung's waifu 'cuz Apple is uptight -- it's watching you, listening to you, following you... it's like your special friend, helping protect you from yourself; don't think different. It's not allowed.

Siri says you should go to a 12-step meeting, and would like to provide you with a helpful list of nearby therapists.
 
Okay, gotcha. What you meant was, people can opt out of iCloud Photos and therefore the CSAM hash matching.


It’s the same loaded question again. Again I say, why not now? If not now, when would be a good time?

You seem to be assuming that a corporation must only ever act in its own selfish interests and the interests of its shareholders. I have challenged that very narrow view of capitalism on MR before. Apple is run by people, and people do sometimes have social consciences!


Apple has published a technical summary and responded to criticism with a FAQ document as well as interview comments. Maybe they haven’t answered your presumptuous ‘why now’ question, but they’ve hardly been ’silent’.


Right. And as we just clarified above, this opt-in is still 100% there with the proposed technology.


I didn’t know that. Maybe they just want to make the world a better place? How naive of me to even suggest that possibility!

While those are good answers, you appear to be stuck on “They have what looks like a good technical solution, so why not?”.
That still leaves all the other questions open.
I read the papers Apple put out; however, they don’t connect. The only way their solution works is if they are leveraging the AI built into Photos. There are gaps, and Apple is silent.

You and I are not going to agree. I see the legal and ethical side of this and frankly it leaves me very concerned. The why now, and why this solution when no one else will touch it, is a big gaping hole in this solution. Where’s the ROI? What is the end design?
 
Unlikely ;)

That example fails on two points: First: Children are not free. They are (or used to be) constrained in their freedom by parents and guardians as they're taught the limits of freedom and the rights and responsibilities attendant upon exercising freedom. Second: Freedom doesn't mean doing whatever you want to do, regardless of the outcome or the impact of your actions upon others.

Huh. The Founding Fathers of the U.S. didn't find the concept a gross oversimplification.

This example also fails on two points: First, it's a fallacious argument (straw man) because it clearly involves one individual infringing upon another individual's freedom (and in a most egregious way). Plus it's false equivalence. My not having my images scanned does not infringe on anybody else's freedoms.

You guessed wrong :)

No, what Apple wants to do would be more like Chevy wanting to put passenger scanners in all their cars to preempt the possibility of them being used to kidnap children.

Ok, so: I will agree with you that for a society to function there must be limits on freedom. People are imperfect. Many are callous, selfish, have no conscience, etc. Others are simply unable to foresee the consequences of their actions. So we make laws. Laws are prior restraint. Prior restraint is generally considered a bad thing, but it's an unfortunate necessity. Being antithetical to freedom, I submit that all prior restraint must be carefully considered lest we find ourselves throwing the baby out with the bath water, or employing a cure worse than the disease. In judicial parlance it's called "strict scrutiny."
Wow, all that effort to maintain the appearance of an argument while agreeing with the exact point I was making all along! You could have saved yourself a lot of work by finding common ground several comments ago and moving on from there… but I suspect you enjoy the verbal tussle, and perhaps the sound of your own voice just a little bit. Tell me I’m wrong. ;)

No. I'm never "ok" with anybody pawing through my stuff, at any time, or for any reason, without cause. (In legal terms: Probable cause, or at least reasonable articulable suspicion [RAS].)
Good point. The presumption of innocence is a core principle of our legal system, and this is quite likely the real heart of the issue for many people here. I agree with this principle by the way, but see my response to your next point.

If, by "monitoring," you mean "scanning users' private files and data for it": Yes. I believe their duty is to provide products and services to their customers in exchange for fees. My moral and ethical code would demand I report it if I saw it, but I wouldn't go looking for it and I will not impose my moral code on Apple or its other customers.
Okay… So, ethics would ‘demand’ one take action only if and when one happens to stumble over ‘it’. In other words (and do please rebuke me if I’m putting words in your mouth), one’s moral duty only extends as far as one’s immediate line of sight. You know what image comes to mind when I ponder that?

[image: the ‘three wise monkeys’]

It’s much easier to turn a blind eye to evil than it is to see it and then speak up or take action. But again, we’re talking about a complex subject with many shades of gray. There’s burying one’s head in the sand at one extreme, and Orwellian surveillance at the other, and a very blurry and contentious line somewhere in-between.

Let’s be honest here—Apple knows that CSAM is proliferating on their servers. And up till now, they’ve been playing the ‘wise monkey’ who knows it’s much easier to sell a nice simple marketing message like ‘what happens on your iPhone, stays on your iPhone’ than to publicly admit they have a problem and a responsibility to address it.

So they know it’s there, and they know the ethical thing is to remove it and report it. But how do they do this unless they look?

A bricks-and-mortar retail store has staff, and each of those staff have eyes, and you understand that when you enter the store, those eyes will probably see you if you try and walk out with armfuls of stuff without paying for it. Is that a presumption of guilt? Most people would say no. But then, some stores also check people’s bags on the way out (at least they do in my country). That comes closer to a presumption of guilt, but most people are happy to comply because they understand that collectively, people do steal, and it’s unfair that the store should bear the cost of that. Again, that line is a blurry one.

Apple has proposed a technology here which is more like the shop door scanners. Everyone passes through, but only those carrying a tagged item set off the alarm and come to the attention of staff. Now I fully realise that I’m shooting myself in the foot with that example. Who hasn’t had those bloody alarms go off because an item you paid for wasn’t properly de-tagged or deactivated? We must, of course, hold Apple accountable for the reliability of their technology. If it works as promised though, offenders will be reported while your privacy will be protected (and very likely enhanced in the future). Seems like they got the balance pretty right to me, in principle anyway. (I’d like to see that number much lower than 30, but that’s a technical detail.)
 
Perhaps there is a reason that [NCMEC] don't want really technical people looking at PhotoDNA. Microsoft says that the "PhotoDNA hash is not reversible". That's not true. PhotoDNA hashes can be projected into a 26x26 grayscale image that is only a little blurry. 26x26 is larger than most desktop icons; it's enough detail to recognize people and objects. Reversing a PhotoDNA hash is no more complicated than solving a 26x26 Sudoku puzzle; a task well-suited for computers.
Wow. A proper cryptographic hash should never be reversible. And brute-forcing isn’t really the issue here: a 26 × 26, 8-bit grayscale image has 256^676 possible values, which is an astronomically large space. The damning part is that the hash apparently preserves enough information to project a recognisable image straight back out of it. Shockingly bad if that’s true.
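
(To put a number on just how large that space is, here’s a quick back-of-envelope calculation. Nothing Apple- or Microsoft-specific is assumed; it’s just counting images.)

```swift
import Foundation

// Back-of-envelope: how many distinct 26x26, 8-bit grayscale images are there?
let pixels = 26 * 26                          // 676 pixels
let totalBits = Double(pixels) * 8.0          // 5408 bits of raw image data
let log10Images = totalBits * log10(2.0)      // log10(2^5408)

print("Possible images: 2^\(Int(totalBits)) ≈ 10^\(Int(log10Images))")
// Prints roughly: Possible images: 2^5408 ≈ 10^1627
```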

So that is Microsoft’s hash function, not Apple’s. If Apple’s could be reversed like that, I would agree with you 100%—it would not be fit for purpose.

As a side note, Dr Neal Krawetz is obviously a really smart guy who knows more about cryptographic hash functions than I do, but even he makes a really basic error in that same article when he writes that Apple would need ‘1 trillion pictures for testing’ to validate their claim that there is a ’one in one trillion chance per year of incorrectly flagging a given account’. Apple never said one in a trillion chance of a single false positive. The claim is about a whole account crossing the match threshold in a given year, not about any individual image match, so you don’t need a trillion test pictures to validate it. Such a basic lapse in logic, yet he runs off on quite the rant about it. It just shows that you need to question assumptions no matter who you’re listening to. Even the experts make mistakes sometimes.
 
This is why technology, while obviously a big part of it, for me isn't at the core of the discussion, but rather Apple's position and willingness to compromise on this.
But what is it that they’re compromising on exactly? The willingness to match hashes on your device instead of their own server? See I don’t see that as the monumental shift that opponents of this technology do.

When Siri does her stuff on-device, we all applaud because that’s one less bit of personal data going up into the cloud. Yes, this technology is different in that it is designed to report criminals, and for that to happen, incriminating data needs to move from the personal device to the server at some point. But shifting the first part of the process from the cloud to the device can mean one less bit of personal data going up just the same. If you can get beyond this mental hurdle of saying ‘me iPhone mine! Apple iCloud not mine!’, I think you can find a small win for privacy here.

What happens when someone realizes their iCloud settings have Photos turned on when they didn't want them turned on? Remember that Apple attempted to calm people by saying you can opt out if you turn off iCloud Photos, as is regularly emphasized in this thread. Reasonably assuming that Apple can't eliminate all bugs regarding iCloud activation, one way I see around this is to make iCloud Photos activation require authentication, biometry or passcode. That way, the system won't allow a bug to upload Photos to iCloud and you'd need to opt in by confirming your identity.
Yes, I agree. Personal admission: I’ve always been a little fearful of the cloud. While I now embrace it in many ways, I still refuse to switch on some features including iCloud Keychain, iCloud Photos and My Photo Stream. So yeah, it bugs the hell out of me to constantly get asked by the OS to enable iCloud Photos, or worse, to have it get switched on without my consent. (I think this must have happened to me once—how else can I explain my paranoid checking of Settings > AppleID > iCloud > Photos every so often!)
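
For what it’s worth, the kind of opt-in gate you’re suggesting could plausibly be built on the standard LocalAuthentication framework. A minimal sketch only; the function name and the surrounding flow are hypothetical, and obviously only Apple could wire it into the real iCloud Photos switch:

```swift
import Foundation
import LocalAuthentication

/// Hypothetical sketch: require Face ID / Touch ID (or the device passcode)
/// before an "enable iCloud Photos" toggle is allowed to take effect.
func confirmBeforeEnablingCloudPhotos(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // .deviceOwnerAuthentication falls back to the passcode if biometrics are unavailable.
    guard context.canEvaluatePolicy(.deviceOwnerAuthentication, error: &error) else {
        completion(false)   // no passcode set: refuse to flip the switch silently
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthentication,
                           localizedReason: "Confirm that you want to turn on iCloud Photos") { success, _ in
        DispatchQueue.main.async {
            completion(success)   // only enable the upload path on explicit confirmation
        }
    }
}
```

That way a stray bug (or a stray tap) couldn’t quietly turn the upload path on without the owner explicitly confirming it.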

But then, what does Apple write in the prompt? Let's assume that Apple does the logical thing and writes the actual reason why the prompt is there to begin with. What paedo would then go ahead with the upload? Any person who possesses CSAM will either cancel the activation, or they are too stupid to use any tech to begin with.
No, they don’t need a long-winded prompt—just those little blue dudes shaking hands, prompting you to click if you want more details.

About false positives, I'm gonna do an exercise in logic. I may be wrong, but if Apple says you need 30 matches to be flagged... doesn't this mean that you need 30 matches to get an average of 1.00 or more CSAM images?
… or 2. they determined through testing that, on average, 30 matches mean someone will have one or more CSAM images. The no. 1 option cannot possibly be true, so while I'm risking a false dichotomy, this leaves us with no. 2. And following through with the logic, doesn't this mean the hashing system cannot be trusted because this seems like an unreasonably high error rate? Because ultimately that's what it is, 29 images is the error rate. What else could it be (really asking)?
You’ve got some big assumptions and flawed logic there. If the error rate were anywhere near that high, it wouldn’t be all neatly distributed such that every flagged account has one CSAM image and 29 innocent ones. You’d be getting thousands upon thousands of totally innocent accounts hitting that threshold regularly. Apple have estimated that the odds of an incorrectly flagged account in any year are one in one trillion. Maybe they set the threshold to 30 just so they could hit that magic number and boast about it. Who knows. But it certainly doesn’t imply such a high error rate, or anything close to it.
 
For all the technical jargon some on here push in Apple’s favor for this solution, it still begs the question: why this solution? In comparison to server-side scanning, or scanning off-device via a relay server(?), this is inefficient and requires more front-end work.
 
You and I are not going to agree. I see the legal and ethical side of this and frankly it leaves me very concerned. The why now, and why this solution when no one else will touch it, is a big gaping hole in this solution. Where’s the ROI? What is the end design?
I could totally be convinced that it’s a bad idea if you presented me with some compelling arguments. Instead you just keep repeating that tired old rhetorical question, ‘why now’, as though you think the answer is self-evident and damning.

The ‘return’ or ‘end’ or whatever you want to call it is bleeding obvious: to remove illegal and harmful CSAM images from circulation and to allow the law to deal with offenders. If you can’t accept that much at face value, I can only assume you’ve bought into some kind of conspiracy theory or you assume that Apple leaders (Cook, Federighi, et al) don’t really care about children. Good luck to you if that’s what you believe—I just don’t think it adds much to the conversation.
 
While those are good answers, you appear to be stuck on “They have what looks like a good technical solution, so why not?”.
That still leaves all the other questions open.
I read the papers Apple put out; however, they don’t connect. The only way their solution works is if they are leveraging the AI built into Photos. There are gaps, and Apple is silent.

You and I are not going to agree. I see the legal and ethical side of this and frankly it leaves me very concerned. The why now, and why this solution when no one else will touch it, is a big gaping hole in this solution. Where’s the ROI? What is the end design?
Where is the carrot to make people accept this stick? Saying, “all of our customers could possibly be shifting kiddie abuse and porn pics” is a big bridge to cross. Where’s the carrot to incentivize everyone?
 
I could totally be convinced that it’s a bad idea if you presented me with some compelling arguments. Instead you just keep repeating that tired old rhetorical question, ‘why now’, as though you think the answer is self-evident and damning.

The ‘return’ or ‘end’ or whatever you want to call it is bleeding obvious: to remove illegal and harmful CSAM images from circulation and to allow the law to deal with offenders. If you can’t accept that much at face value, I can only assume you’ve bought into some kind of conspiracy theory or you assume that Apple leaders (Cook, Federighi, et al) don’t really care about children. Good luck to you if that’s what you believe—I just don’t think it adds much to the conversation.
That is the problem; lack of information. I have a couple of basic questions and there is deafening silence. I have been asking since this was first announced; they are also a couple of the basic questions you would ask of any project:
- why this solution?
- what is driving this design?
- - why on device?

You and others are stuck at “CSAM” while I am looking at “Why?”. It is a very simple set of questions, yet you and others, including Apple, are either staying quiet or ducking and dodging.

Perhaps instead of lauding the technical design you might try answering, or attempting to answer, mine. Your claim that I and others are playing the “conspiracy theory card” doesn’t hold water when a simple answer from Apple could lay these concerns to rest; concerns of many here and of a lot of professionals. Silence and “we won’t allow that” as answers from Apple don’t cut the mustard, and should be very concerning unless you are willing to grant Apple carte blanche to do what it wants on your devices.
 
That is the problem; lack of information. I have a couple of basic questions and there is deafening silence. I have been asking since this was first announced; they are also a couple of the basic questions you would ask of any project:
- why this solution?
- what is driving this design?
- - why on device?

You and others are stuck at “CSAM” while I am looking at “Why?”. It is a very simple set of questions, yet you and others, including Apple, are either staying quiet or ducking and dodging.

Perhaps instead of lauding the technical design you might try answering, or attempting to answer, mine. Your claim that I and others are playing the “conspiracy theory card” doesn’t hold water when a simple answer from Apple could lay these concerns to rest; concerns of many here and of a lot of professionals. Silence and “we won’t allow that” as answers from Apple don’t cut the mustard, and should be very concerning unless you are willing to grant Apple carte blanche to do what it wants on your devices.
We can't answer "why?" right now.

Apple hasn't announced E2EE Photos, so I can't say it's that.

I could guess and speculate about what their plans might be, but that doesn't really help.

The only thing we can do is wait. iOS 15 isn't even out yet, there could be something coming in September with an announcement that they don't want to spoil right now.

My advice would be wait for Apple to answer that question as we here do not have an answer that will satisfy you.
 
- why this solution?
- what is driving this design?
- - why on device?

You and others are stuck at “CSAM” while I am looking at “Why?”. It is a very simple set of questions yet you and others, including Apple, are either playing quiet or ducking and dodging.
They’re all the same question more or less, and I’ve stated, a number of times already, why I think they’re doing it on-device. You’re either not paying attention or you really have cooked up a darker theory of your own that you’re being intentionally vague about.

I’ll say it again, one last time… Doing it on-device means that your personal photos can be encrypted to, from, and during their stay on Apple’s server, without Apple ever needing to look at them. This potentially paves the way for true end-to-end encryption for iCloud photos.

While Apple (to my knowledge) hasn’t committed to E2EE for iCloud photos, they have said this:

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching …
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.
This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
Read at face value, they are telling you in pretty straightforward language why they designed it this way—they do not want to look at your personal photos. They are telling you that they won’t decrypt and scan your photos in the cloud, unlike the others. And if they do go totally E2EE, they’ll be able to add a dozen exclamation marks to that statement, because they wouldn’t be able to, even if they wanted to.
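
In case the ‘threshold’ wording sounds like hand-waving, the idea behind it is a well-established one: threshold secret sharing, where a secret can only be reconstructed once enough shares exist, and fewer shares reveal nothing at all. Apple’s actual construction (threshold private set intersection with safety vouchers) is far more involved, so treat the following purely as a toy, Shamir-style illustration of the threshold property, not as their implementation:

```swift
import Foundation

// Toy Shamir secret sharing over GF(p). Illustration only; NOT Apple's actual
// threshold PSI scheme. The "secret" stands in for whatever unlocks the
// safety-voucher contents: below the threshold, the shares reveal nothing.
let p = 7919                                   // small prime field for the demo

func mod(_ a: Int, _ m: Int) -> Int { ((a % m) + m) % m }

// Modular exponentiation; inverses come from Fermat's little theorem.
func powMod(_ base: Int, _ exp: Int, _ m: Int) -> Int {
    var result = 1, b = mod(base, m), e = exp
    while e > 0 {
        if e & 1 == 1 { result = result * b % m }
        b = b * b % m
        e >>= 1
    }
    return result
}
func inv(_ a: Int) -> Int { powMod(a, p - 2, p) }

// Split `secret` into n shares such that any t of them reconstruct it.
func split(secret: Int, n: Int, t: Int) -> [(x: Int, y: Int)] {
    let coeffs = [secret] + (1..<t).map { _ in Int.random(in: 0..<p) }
    return (1...n).map { (x: Int) -> (x: Int, y: Int) in
        var y = 0
        for (i, c) in coeffs.enumerated() { y = mod(y + c * powMod(x, i, p), p) }
        return (x: x, y: y)
    }
}

// Lagrange interpolation at x = 0 recovers the secret from at least t shares.
func reconstruct(_ shares: [(x: Int, y: Int)]) -> Int {
    var secret = 0
    for (i, si) in shares.enumerated() {
        var term = si.y
        for (j, sj) in shares.enumerated() where j != i {
            term = term * mod(-sj.x, p) % p
            term = term * inv(mod(si.x - sj.x, p)) % p
        }
        secret = mod(secret + term, p)
    }
    return secret
}

let shares = split(secret: 4242, n: 10, t: 3)
print(reconstruct(Array(shares.prefix(3))))    // 4242: threshold met
print(reconstruct(Array(shares.prefix(2))))    // almost certainly not 4242: below threshold
```

Swap the toy secret for the key protecting the voucher contents and you have the gist of why ‘below the threshold, Apple learns nothing’ is a mathematical property rather than just a policy promise (in the toy model, at least).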

I’m not saying you have to believe them. You can believe whatever you want. Why don’t you tell us what your theory is? What’s the big secret they are hiding from us?
 
I’m not saying you have to believe them. You can believe whatever you want. Why don’t you tell us what your theory is? What’s the big secret they are hiding from us?

They are obviously doing this so they can scan all on device pics for images related to “trumpers”, images of Winnie the Pooh for the Chinese government and all d-pics for Tim Cook’s personal review.…obviously…. /s
 
But what is it that they’re compromising on exactly? The willingness to match hashes on your device instead of their own server? See I don’t see that as the monumental shift that opponents of this technology do.

You’ve got some big assumptions and flawed logic there. If the error rate were anywhere near that high, it wouldn’t be all neatly distributed such that every flagged account has one CSAM image and 29 innocent ones. You’d be getting thousands upon thousands of totally innocent accounts hitting that threshold regularly. Apple have estimated that the odds of an incorrectly flagged account in any year are one in one trillion. Maybe they set the threshold to 30 just so they could hit that magic number and boast about it. Who knows. But it certainly doesn’t imply such a high error rate, or anything close to it.

What are they hashing exactly? The photos themselves or? Because Apple said the software is designed so that it cannot be circumvented by making alterations to the image. There was a video posted here that explained that the software is using AI to analyze the image and that analysis is hashed. If this is so, they are not comparing image hashes, but hashes of the AI's representation of the images. Doesn't this mean that different photos can have the same hash, if they look too much alike to the AI? If so, this would explain the number of matches they need to flag the account.

About the error rate, my logic may be flawed, but what you wrote doesn't really say anything. They did it so they could boast about the magic number? Seriously? This is a reasonable assumption to you? If that were true, it would be ridiculous. The software can be more or less reliable, and to me the need for 30 matches to flag an account seems very high. I don't see them setting that number randomly, which begs the question how they arrived there. You mention conspiracy theories and wonder why some don't trust Apple, and at the same time I wonder how you can accept this tech so calmly and have confidence in Apple when clearly you don't know how it works, either. If you did, you'd be able to offer a reasonable answer here, and to me it seems like the explanation how they got to 30 is pretty important as it... wait for it... speaks to the technical side of the issue.

Doesn't the fact you don't know how they arrived at the magic number 30 bother you? And you don't have any idea how they determined it's 30? Again, to me that number is a bad sign regarding the reliability of the software. My logic may well be flawed, which is why I proposed a question, but you did not offer any logic at all, because assuming they just picked that number to boast about it is incredible. I won't even get into how that number defeats the purpose of this software.
 
They are obviously doing this so they can scan all on device pics for images related to “trumpers”, images of Winnie the Pooh for the Chinese government and all d-pics for Tim Cook’s personal review.…obviously…. /s
Oh I’d be in full support of them tracking down and terrorizing Trumpers. Nothing too obvious… They could make the battery degrade 8% faster… Cause Safari to ‘randomly’ reload the page just as they’re finishing typing up a long social media post… that sort of thing.
 
What are they hashing exactly? The photos themselves or? Because Apple said the software is designed so that it cannot be circumvented by making alterations to the image. There was a video posted here that explained that the software is using AI to analyze the image and that analysis is hashed. If this is so, they are not comparing image hashes, but hashes of the AI's representation of the images.
I haven't seen that video. Do you have a link? I had assumed that machine learning was somehow involved in creating a sophisticated algorithm that could allow for very minor alterations to an image. You could call that AI, yes, but there can't be any ongoing machine learning on the device—after all, you can't have each person's phone 'learning' a different way to hash the images, right? I'd be interested to watch that video and learn more though.

Doesn't this mean that different photos can have the same hash, if they look too much alike to the AI? If so, this would explain the number of matches they need to flag the account.
Yes of course, it's possible the system could break down there. I imagine it would be quite hard to do well. Not sure what you want me to say apart from what I've said before… If the system performs poorly, Apple should certainly be transparent about that and shut the thing down if they have to.
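
To make the ‘similar photos, similar hash’ idea concrete, here’s a crude toy in the style of a classic ‘average hash’. It is emphatically not NeuralHash (Apple hasn’t published that), but it shows why perceptual matching compares hash distance rather than exact equality, and therefore why near-collisions are possible at all:

```swift
import Foundation

// Crude "average hash" toy. NOT Apple's NeuralHash; just an illustration of how
// perceptual hashing differs from cryptographic hashing: visually similar inputs
// give identical or nearby hashes, so matching compares hash distance, not equality.
func averageHash(_ pixels: [UInt8]) -> UInt64 {            // expects 64 grayscale pixels (8x8)
    let mean = pixels.reduce(0) { $0 + Int($1) } / pixels.count
    var hash: UInt64 = 0
    for (i, p) in pixels.enumerated() where Int(p) > mean {
        hash |= UInt64(1) << i                              // set bit i if pixel is brighter than average
    }
    return hash
}

func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

// Two "images": the second is the first with a slight brightness tweak.
let original: [UInt8] = (0..<64).map { UInt8(($0 * 4) % 256) }
let tweaked: [UInt8]  = original.map { UInt8(min(255, Int($0) + 3)) }

let d = hammingDistance(averageHash(original), averageHash(tweaked))
print("Hamming distance:", d)   // small (0 here) for similar images; a "match" if d is under some threshold
```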

About the error rate, my logic may be flawed, but what you wrote doesn't really say anything. They did it so they could boast about the magic number? Seriously? This is a reasonable assumption to you? If that were true, it would be ridiculous. The software can be more or less reliable, and to me the need for 30 matches to flag an account seems very high. I don't see them setting that number randomly, which begs the question how they arrived there.
I already agreed that the threshold of 30 seems too high. Like you (or someone) said earlier, no one should be able to upload 25 CSAM images and slip under the radar. Again, what do you want me to say? I'm not on the design/development team for this project—how on earth could I know how they arrived at that exact number?

What I do know is the mathematical principle… As that number increases, the odds of incorrectly flagging an account decrease exponentially. So it's not an unreasonable assumption at all to say they set the number high to avoid flagging the accounts of innocent users.
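
If it helps, here’s the shape of that maths as a toy calculation. The library size and per-image false-match rate below are numbers I’ve plucked out of the air, not anything Apple has published; the only point is how quickly the account-level probability collapses as the threshold rises:

```swift
import Foundation

// Toy model: probability that an entirely innocent library of n photos produces
// at least t false matches, if each photo independently false-matches with
// probability q. Both n and q are made-up assumptions, not Apple's figures.
func probAtLeast(_ t: Int, outOf n: Int, perImageRate q: Double) -> Double {
    precondition(t >= 1 && t <= n)
    // Binomial upper tail, summed via the pmf recurrence in log space:
    //   pmf(k) = pmf(k-1) * ((n-k+1)/k) * (q/(1-q)), with pmf(0) = (1-q)^n.
    var logPmf = Double(n) * log(1 - q)      // log pmf(0)
    var tail = 0.0
    for k in 1...n {
        logPmf += log(Double(n - k + 1) / Double(k)) + log(q / (1 - q))
        if k >= t { tail += exp(logPmf) }
    }
    return tail
}

let n = 10_000        // photos in the innocent library (assumption)
let q = 1e-4          // per-image false-match rate (assumption)
for t in [1, 5, 10, 20, 30] {
    print("threshold \(t): P(account flagged) ≈ \(probAtLeast(t, outOf: n, perImageRate: q))")
}
// Each step up in the threshold drops the probability by orders of magnitude.
```

On that toy model, moving the threshold from 1 to 30 takes you from ‘more likely than not’ down to numbers so small they are effectively zero, which is the whole argument for a high threshold.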

Doesn't the fact you don't know how they arrived at the magic number 30 bother you? And you don't have any idea how they determined it's 30? Again, to me that number is a bad sign regarding the reliability of the software. My logic may well be flawed, which is why I proposed a question, but you did not offer any logic at all, because assuming they just picked that number to boast about it is incredible.
'One in one trillion' is a nice round, catchy set of odds to be able to include in the marketing and try to sell this whole thing, so again, it's not an unreasonable guess that this might have factored into their decision. But again, I'm only guessing. You ask me questions about details I couldn't possibly know about, and then you're surprised when I can only make semi-educated guesses? Is that your point… that Apple should tell us exactly how they derived these numbers? Sure, why not.

In Apple's defence I would say this… They can run tests and simulations, but not even they will know exactly how many false positives they're going to get until the system is up and running. It may be that the 30 threshold proves to be way too conservative, and they can safely lower the number over time without flagging innocent users. One would certainly hope so.
 
Wow, all that effort to maintain the appearance of an argument while agreeing with the exact point I was making all along!
I don't think so. Either you stated your original point unclearly, I understood it imperfectly, or you're cherry-picking my responses, removing qualifiers and nuances, to make it appear as if I was agreeing with you.

Bottom line: Freedom means allowing others to do what they want, even if it's something you don't appreciate or with which you don't agree. (Paraphrased from something I read on Vox, years ago.) It is properly constrained only by "Your right to swing your fist stops at my nose." Thus I'm free to pursue my interests as I see fit, so long as they don't infringe on the rights of others to do the same.

We do agree that prior restraint is an unfortunate necessity, "because people." Where we diverge, drastically, is in the end justifying the means.

... I suspect you enjoy the verbal tussle, ...
I do!

... and perhaps the sound of your own voice just a little bit. Tell me I’m wrong. ;)
*tsk* Shall we do a word count of our respective posts to this thread to see which of us has the higher average word count? ;)

Good point. The presumption of innocence is a core principle of our legal system, and this is quite likely the real heart of the issue for many people here. I agree with this principle by the way, but see my response to your next point.
The point I made to which you're responding does not speak at all to the presumption of innocence. It speaks to the right of one to be "secure in their persons, houses, papers, and effects, against unreasonable searches," regardless of who wishes to conduct the searching. More succinctly: The right to be left alone.

Okay… So, ethics would ‘demand’ one take action only if and when one happens to stumble over ‘it’.
I did not claim that at all. I clearly wrote my ethics demand I don't go poking around in other peoples' stuff to see if maybe they're doing something with which I disagree.

An analogy. This may be before your time, but as recently as a couple of decades ago, homosexuality was still widely regarded as an aberration, immoral, etc. in most of the U.S., and engaging in homosexual... uh... "relationships" was illegal in most U.S. jurisdictions. Civil libertarians (among whom I count myself) took the position of "What happens between two consenting adults in the privacy of their own bedrooms is neither the government's nor my business." That did not mean all civil libertarians necessarily had any particular opinion on the morality of homosexuality. It meant, simply, they felt it none of their business.

There’s burying one’s heads in the sand at one extreme, and Orwellian surveillance at the other, and a very blurry and contentious line somewhere in-between.
I take issue with "burying one's head in the sand" as a characterization of people who do not wish to have demanded of them their papers on the off chance they don't have a right to be where they are.

Let’s be honest here—Apple knows that CSAM is proliferating on their servers.
No, they believe or suspect it, but they cannot know it. Likewise, certain U.S. legislators, the NCMEC, and other activists believe or suspect it, but they cannot know it. What they want is for Apple to go on a fishing expedition to validate their assumptions.

In law, believing or suspecting a thing does not rise even to the level of reasonable articulable suspicion, much less probable cause. There are reams of U.S. Court decisions affirming this.

Now you may be inclined to argue "But that applies to law enforcement. Apple is not law enforcement." To which I would respond with two arguments: 1. If Apple is being coerced into the action by elements of the U.S. government, as has been reported (for whatever the hell that's worth), I'd argue it's a distinction without a difference. 2. Is the NCMEC not essentially an agent of the U.S. government?

From Wikipedia:
The National Center for Missing & Exploited Children (NCMEC) is a private, nonprofit organization established in 1984 by the United States Congress. In September 2013, the United States House of Representatives, United States Senate, and the President of the United States reauthorized the allocation of $40 million in funding...

And up till now, they’ve been playing the ‘wise monkey’ who knows it’s much easier to sell a nice simple marketing message like ‘what happens on your iPhone, stays on your iPhone’ than to publicly admit they have a problem and a responsibility to address it.
And, again, I'll argue: It's no more Apple's responsibility to address my use of the products I've purchased from them than it is any other manufacturer's.

An analogy: Driving while impaired by alcohol is a known problem. Would you be comfortable with all automobile manufacturers being coerced by your government into installing breathalyzers in all automobiles to prevent their owners from operating them until they've proven they're not impaired?

A bricks-and-mortar retail store has staff, and each of those staff have eyes, and you understand that when you enter the store, ...
False equivalency. First: You're on their property, not yours. Second (and I concede this is an argument based on assumption, therefore a logical fallacy): They're interested in preventing loss of revenue due to theft, more than actual crime, per se.

Apple has proposed a technology here which is more like the shop door scanners.
No, because they're placing the scanner on my property, not theirs. If they were doing the scanning on their servers your analogy would hold.

[N.B.: Very, very poor analogy follows]

There are people who strongly object to the practices employed at Costco and Sam's Club of checking shoppers' carts on the way out. Some of these people object so strongly they refuse to give these stores their custom. What Apple wants to do would be roughly equivalent to stationing those cart checkers on customers' doorsteps or in their cars.

In both scenarios: If I don't like the scanners on their property, I can avoid them by the simple expedient of staying off their property. But, if they place their scanners on my property, I cannot assuredly avoid them.

(Please don't argue "But if you don't..." That would be begging the question. Which would, in turn, oblige me to make a slippery slope argument. Which would, in turn... I don't think we need to go there again, do we?)
 
We can't answer "why?" right now.

Apple hasn't announced E2EE Photos, so I can't say it's that.

I could guess and speculate about what their plans might be, but that doesn't really help.

The only thing we can do is wait. iOS 15 isn't even out yet, there could be something coming in September with an announcement that they don't want to spoil right now.

My advice would be wait for Apple to answer that question as we here do not have an answer that will satisfy you.

I am hoping we see additional information prior to this going live.
 
I haven't seen that video. Do you have a link? I had assumed that machine learning was somehow involved in creating a sophisticated algorithm that could allow for very minor alterations to an image. You could call that AI, yes, but there can't be any ongoing machine learning on the device—after all, you can't have each person's phone 'learning' a different way to hash the images, right? I'd be interested to watch that video and learn more though.
Apple says it's not Machine Learning https://www.apple.com/child-safety/...del_Review_of_Apple_Child_Safety_Features.pdf page 10
 
They’re all the same question more or less, and I’ve stated, a number of times already, why I think they’re doing it on-device. You’re either not paying attention or you really have cooked up a darker theory of your own that you’re being intentionally vague about.

I’ll say it again, one last time… Doing it on-device means that your personal photos can be encrypted to, from, and during their stay on Apple’s server, without Apple ever needing to look at them. This potentially paves the way for true end-to-end encryption for iCloud photos.

While Apple (to my knowledge) hasn’t committed to E2EE for iCloud photos, they have said this:

Read at face value, they are telling you in pretty straightforward language why they designed it this way—they do not want to look at your personal photos. They are telling you that they won’t decrypt and scan your photos in the cloud, unlike the others. And if they do go totally E2EE, they’ll be able to add a dozen exclamation marks to that statement, because they wouldn’t be able to, even if they wanted to.

I’m not saying you have to believe them. You can believe whatever you want. Why don’t you tell us what your theory is? What’s the big secret they are hiding from us?

While I appreciate your taking the time to give your opinion, factual information on these points from Apple is sparse at best.
At the end of the day, I think we can agree Apple did build a good piece of technology. Where we disagree is on the other aspects.

For me there is no "Big Secret". I have questions regarding this solution design for which there are no answers as of yet. Basic design / implementation questions.

Thanks.
 
They are obviously doing this so they can scan all on device pics for images related to “trumpers”, images of Winnie the Pooh for the Chinese government and all d-pics for Tim Cook’s personal review.…obviously…. /s

For many of us, it is not about looking for "OMG!!! Bad!!!!" from this; it is about basic design / implementation questions.
Once I understand all the pieces, then I can make a decision for myself.
 
For many of us, it is not about looking for "OMG!!! Bad!!!!" from this; it is about basic design / implementation questions.
Once I understand all the pieces, then I can make a decision for myself.
Define "many"?? :p

I would say "many" on here are jumping to conclusions not based on any facts whatsoever and are simply stating what "could" happen no matter how outrageous... a lot of "the sky is falling" type of sentiment.

I'm glad there are a "few" on here that understand what COULD happen, but are erring on the side of trusting the system to work as Apple has stated while also taking into account the tech, what is currently on the phone and Apple's own history around this type of data/tech.
 
I don't understand why people would want to lose the privacy associated with the scanning happening on their phone as opposed to Apple's servers, for the reason 'it's my phone!', blah blah blah... because that's really all I'm hearing from that side. Well, that and paranoia and conspiracy.
 
I'm glad there are a "few" on here that understand what COULD happen, but are erring on the side of trusting the system to work as Apple has stated while also taking into account the tech, what is currently on the phone and Apple's own history around this type of data/tech.
This guy gets it. As of yet, there is no reason to believe this will be expanded on in the future. I'll worry about that if it happens, but for right now all we can do is go by what they say.

I still don't understand why it would be so much better if they decrypted all of your photos in the cloud and scanned them there... on a system that can't be seen or controlled. You don't know what tech they're using or what database they're comparing to or anything.
 