If you point a telescope at a neighbor’s window, but never look through it, is it spying?

Depends on the country and burden of proof.

In the US, legally no. Civilly it could be.

A better example would be to put a baby monitor in your neighbor's house but only turn it on if … (place example here).
It’s not a matter of legality, rather whether or not the person intruded upon wanted the monitor in there at all.
 
If you point a telescope at a neighbor’s window, but never look through it, is it spying?
Apple's CSAM detection is not like pointing a telescope at a neighbor's window; it's more like placing a camera inside your bathroom while "promising" not to look at your nude wife unless hashes collide.
 
You seem to miss the point again. It's the attitude of wanting to enter your device and inspect your private stuff that draws criticism. Plus the later options to expand what is looked for.
It's not about people fearing that their forbidden pictures will finally become known to the police.

It's odd that this has to be explained, even at this stage of the discussion and after so many pages of various threads.

If you point a telescope at a neighbor’s window, but never look through it, is it spying?

No. Go tell the neighbor that the telescope is just there and pointed in the direction of their bedroom, but promise that you're totally never ever going to use it to spy on them or anything, and you guarantee that nobody else will ever look through the telescope (even though some friends have asked, but you're firm in your stance to deny those requests)... unless it happens by chance. Then explain why the telescope must be placed in that spot and pointed in their direction. Be surprised when they take issue with the situation and explain it better to them.
 
It's odd that this has to be explained, even at this stage of the discussion and after so many pages of various threads.



No. Go tell the neighbor that the telescope is just there and pointed in the direction of their bedroom, but promise that you're totally never ever going to use it to spy on them or anything, and you guarantee that nobody else will ever look through the telescope (even though some friends have asked, but you're firm in your stance to deny those requests)... unless it happens by chance. Then explain why the telescope must be placed in that spot and pointed in their direction. Be surprised when they take issue with the situation and explain it better to them.

That’s just where it is…I point it at the sky when I want to look through it, but it always settles into that position pointed at the window…I swear…nothing to see here….
 
"Entitled Apple users?" Srsly? People who are concerned about their personal security and privacy are now "entitled?"
I’m concerned about security and privacy too—more so than most people I know—so consider my little rant at the end there a form of self-deprecation if you will—or hyperbole. I was just trying to put things in perspective. The risk to me of a single personal photo of mine ever being seen by anyone at Apple is vanishingly small—and even if a reviewer did see a photo, my life would go on much as it always had. You know, I can live with that risk, if this technology has the potential to help in the battle against the abuse of children.

Freedom is actually pretty easy and not "nuanced" at all. Claiming freedom is "nuanced" is code talk from a statist for "Here's this delightful new way we're going to infringe on your freedom, but you shouldn't be upset, because it's for the good of society." (Which, in turn, is really code meaning for their good.)
Give me a break. When you were a young child, and you wanted the toy that another kid was playing with, you had to learn a hard truth—that the other kid’s freedoms sometimes encroached on yours. That inconvenient truth doesn’t magically disappear as the issues get more complex.

Waitaminute. Just above you claimed "some" Apple users. Now it's an overwhelming tide of Apple users. Which is it?
Sigh. I’m too old to engage in word games and hairsplitting with you.

And don't look now, but it's a lot more than "some entitled Apple users." Near as I've been able to find, it's been reviled by every security and privacy entity in the world. That is: Unless you know of one that has actually come out in support of Apple's spyware?
I’m aware that the tide of opinion is against Apple on this one. Mostly, it seems to come down to an underlying philosophy that the contents of one’s private possessions are sacrosanct. It’s a gross oversimplification in my opinion.

Consider the house where you live. It’s your private space, a safe place where no one has any right to invade, right? You are free to enjoy this privacy and safety, and the law protects that right. But let’s say you decide what you really want to do with your freedom is have a slave in your basement. Does your right to privacy overrule your slave’s right to freedom? Of course not. So things get messy because people are messed up. I can guess what you’ll say next... That Apple’s technology would be like the government putting 24 hour video surveillance in all our homes. And I would argue that it’s nothing of the sort. But at that point it’s an argument about specific technologies and processes, about finding the right balance of freedom and safety for everyone. The discussion simply can’t live in the world of simplistic black-and-white ideologies forever. It has to grow up and face the world’s ugly realities.
 
….

Consider the house where you live. It’s your private space, a safe place where no one has any right to invade, right? You are free to enjoy this privacy and safety, and the law protects that right. But let’s say you decide what you really want to do with your freedom is have a slave in your basement. Does your right to privacy overrule your slave’s right to freedom? Of course not. So things get messy because people are messed up. I can guess what you’ll say next... That Apple’s technology would be like the government putting 24 hour video surveillance in all our homes. And I would argue that it’s nothing of the sort. But at that point it’s an argument about specific technologies and processes, about finding the right balance of freedom and safety for everyone. The discussion simply can’t live in the world of simplistic black-and-white ideologies forever. It has to grow up and face the world’s ugly realities.

You are using this as an example? You are assuming illegalities. Why?
This is like putting something that isn't yours in your house, something that monitors you in certain situations whether you want it there or not, with no legal reason for it to even be there.
 
Been doing some follow-up on items I found earlier.
Here are a couple worth reading:

The organizations and countries listed are impressive.
 
@zkap: Hey, first of all, thanks for the respectful and thoughtful discussion. I appreciate it.

What has changed, though, and what is the actual problem for me is that Apple has now shown willingness to install this on my phone. That's the problem and the precedent I'm talking about - the fact that iPhones will now have scanning software that reports to the outside. Basically, what was yesterday sacrosanct, now isn't. I think if this goes ahead, it's over in the sense that iPhone will be fair game from now on. ... What is of utmost importance, far more important than the checks and balances Apple implemented, is Apple's willingness or lack thereof to protect the privacy and security of the device from all outside pressure and inspection.

Again there are really two issues here, the ‘install this on my phone’ part, and the ‘reports to the outside’ part. I’ll start with the latter. When someone uploads content to iCloud, that content is in Apple’s possession. It is illegal to possess CSAM, and the issue is deemed so serious, that mandatory reporting of child abuse overrules other rights that a person would ordinarily have, such as doctor-patient confidentiality (at least in some countries). Now I’m no lawyer, so I can’t really say what Apple’s legal responsibilities are here, but they have been very poor reporters up until now, and part of the problem is a technical one. How do you scan for abusive content when it’s all encrypted? This brings us to the first issue. The obvious solution is to assess an image before it gets encrypted. By doing this on the device, Apple can actually improve privacy and security for all its law abiding customers by introducing end-to-end encryption for iCloud uploads. That’s only speculation at this point, but it seems perfectly reasonable.
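To make that concrete, here is a minimal sketch in Python of the "match on the device, then encrypt and upload" flow being described. It is only an illustration of the idea under simplifying assumptions; the hash function, database and upload handling are hypothetical stand-ins, not Apple's actual NeuralHash or private set intersection protocol.

# A minimal sketch of "match locally, then encrypt and upload".
import hashlib

# Hypothetical on-device database of known hashes, shipped as part of the OS.
KNOWN_HASHES = {
    hashlib.sha256(b"example known image").hexdigest(),
}

def image_hash(image_bytes):
    # Stand-in for a perceptual hash. A real system needs something robust to
    # resizing and re-encoding; a plain cryptographic hash like SHA-256 is not.
    return hashlib.sha256(image_bytes).hexdigest()

def prepare_upload(image_bytes, encrypt):
    # Match against the local database first, then encrypt for upload.
    # In a real system the match result would be attached to the upload in
    # some protected form; in Apple's published design it stays hidden from
    # the server until a threshold of matches is reached.
    matched = image_hash(image_bytes) in KNOWN_HASHES
    ciphertext = encrypt(image_bytes)  # encryption happens after the local match
    return ciphertext, matched

# Example run with a do-nothing "encryption" stand-in, just to show the flow.
ciphertext, matched = prepare_upload(b"my holiday photo", encrypt=lambda b: b[::-1])
print(matched)  # False, because this image's hash is not in the database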

Look, I know I’m talking technology again, and this is where we get stuck. I simply can’t see how we can discuss the pros and cons of this without talking about the way it’s been designed and why. It’s technology that both threatens and protects our digital privacy—we really can’t get away from that.

what I can say is that we are supposed to trust Apple with the reliability and integrity of this software? I'm sorry, but Apple can't get their own apps to function without bugs and iOS has been a demonstration of Apple's lack of competence for the last couple of years. Apple now routinely has bugs in new devices and new features, recurring bugs that somehow return after being fixed and bugs that are there for the last few years without getting a fix. I am not comfortable with Apple guaranteeing that a system like this will be as tamper-proof and as fail-proof as possible.
That’s really not a new phenomenon. I’ve been around for a long time now, and Apple has always had its issues. Snow Leopard was probably the pinnacle of reliability for me, and I would love to see that restored, but I also acknowledge that their OSes are far more complex than they used to be, and that is going to come with some issues, programmers being people and people being fallible.

It’s a valid concern though. Apple does have its work cut out convincing us all that they have got this one right.

About the human reviewer part, I'll explain the issue. You focus on the outcome of a review, saying that if the account is flagged, a human will see the photos and if there's a violation, only then will there be prosecution. You are talking about prosecution and I am talking about privacy. I'll repeat what I said in my last post - when someone, a human reviewer, gets to see the photos, that is a violation of privacy. I can't say this more clearly. That reviewer is doing their review based on the account being flagged, which will inevitably happen to people, and inevitably some of those will be false positives. The moment a human reviewer sees the photos, the user's privacy will have been violated because there is no warrant saying a court of law decided that your right to privacy is less important than the interest of the community to prosecute a specific offense that law enforcement has probable cause for.
Yes, it is a violation of privacy. You’re quite right. If I knew (and I may never know) that someone at Apple saw 30 of my private photos, I would be unhappy about that. But would my life be significantly impacted by it? No. If I knew that the technology had significantly reduced the spread of CSAM, I would accept that personal cost. (See also my previous comment to another commenter on the same topic.)

Just out of curiosity - in your opinion, what is the acceptable probability of a false positive? Is one in a trillion good enough and is that a good estimate? With this being a new scanning system, how will Apple gauge this?
Good question. I think if innocent users were regularly getting flagged, that would be a serious failure of the system, and one Apple should be forced to address or else shut it down. I guess we can agree on that?

What would be acceptable to me personally... A single digit figure per year maybe? Ideally though, I think it should be less than one per year on average.

I do think Apple should be accountable and transparent here, and report on false positives if and when they happen.

We can also talk about the actual human reviewer who will be doing this review. Who exactly will this be and with what background? How will this person have the right to decide whether a user should be reported? ... Also, what exactly is the point of Apple's human reviewer? NCMEC can do that review as well and you'd think they'd be more competent at it. So why does Apple have their own people in this chain ...? This makes sense for Apple only if they expect a good amount of false positives, because otherwise if the system is solid and false hash matches will be almost non-existent, then their human reviewer seems like an unnecessary part of the process and the review will in any case be done by NCMEC anyway.
Well, either way, a human has to get involved at some point, even if you have a layer of AI before that. I don’t think it’s unreasonable for Apple to want to take responsibility for the initial review, but you make a fair argument for them handing it over too.

So, you see a responsible reporting process, and I see a reporting process where Apple inserted their human reviewer in an effort to calm suspicions that their software will inevitably produce false positives. It assures me less because it tells me that they don't expect the software to be reliable, and the same goes for the fact they'll need something like 30 matches to flag the account. If this is true, why aren't people talking about the fact that a person who has 25 CSAM-category photos will not be flagged? If this is for the children, isn't that too high a threshold and why are you praising Apple's system instead of wondering why so many CSAM photos will go undetected? Again, this makes sense only if they expect the system to work poorly and produce a good number of false positives. I fail to see how any of this is reassuring.
People need to understand that no hash function is perfect. Some hash collisions are unavoidable. Did you see my earlier comment about that? So you do need to set a threshold for the number of matches. But yes, I agree that 30 seems too high, as I also said earlier. This tells me Apple are erring on the side of privacy, whereas you see a premature admission of failure. A case where they’re damned if they do and damned if they don’t??
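For a sense of the arithmetic, here is a rough back-of-the-envelope calculation of how a match threshold suppresses accidental flags. The per-image false-match rate used below is a made-up number chosen for illustration, not Apple's published figure, and independence between images is a simplifying assumption.

from math import comb

def p_account_flagged(num_photos, p, threshold):
    # Probability that a completely innocent library of num_photos images
    # produces at least `threshold` accidental matches, assuming each image
    # matches independently with probability p.
    p_fewer = sum(
        comb(num_photos, k) * p**k * (1 - p) ** (num_photos - k)
        for k in range(threshold)
    )
    return max(0.0, 1.0 - p_fewer)  # clamp tiny floating-point rounding error

# 20,000 photos with a hypothetical 1-in-a-million per-image false-match rate:
print(p_account_flagged(20_000, 1e-6, 1))   # ~0.02: a threshold of 1 would flag ~2% of such users
print(p_account_flagged(20_000, 1e-6, 30))  # ~0.0: at a threshold of 30 an accidental flag is vanishingly unlikely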

Lastly (sorry, this is already too long), there is emotion here and I think that's normal. This is a sensitive topic. I don't use any social media at all, not even LinkedIn, which is stupid of me because I own a private business and I don't advertise there.
You’re right. Some emotion on both sides is perfectly understandable. It’s when emotion overtakes rational debate that there is a problem. People set fire to 5G towers because they are angry, but it’s an anger based on fear based on misinformation. I could use many other examples which would take us into the realm of politics, but the moderators will start deleting posts if we go there. (Believe me, I know! 🙂)
 
While you make a good point, IMO it misses on two, for me, very critical points:
1. Why is Apple doing this on device instead of post device? It is very simple to bypass in its current form.
I (and others) have already discussed this at length. Firstly, by embedding the hash database in the OS (which Apple controls) it’s less prone to tampering. Why do you say this is simple to bypass? Secondly, doing the initial hash matching on the device opens the way for end-to-end encryption between your device and iCloud, which would be an overall win for privacy-conscious users if that’s where Apple is going and if the technology works as promised.

2. Why now? What is the driver behind using this solution?
That seems like a loaded question. One might just as well ask, why not now?
 
You are using this as an example? You are assuming illegalities. Why?
This is like putting something that isn't yours in your house, something that monitors you in certain situations whether you want it there or not, with no legal reason for it to even be there.
I predicted this kind of response in the very text you quoted. Did you not understand the actual point I was making?

Also, when you choose a laughing emoji as your response to a serious and carefully considered point of view, I consider that offensive, just as it would be offensive if the discussion were face to face. It’s what MR (and social media generally) has become, but I’ll opt out if that’s your ongoing manner of communication.
 
Okay, honest question to those of you whose primary objection is that the hash database and matching software lives on your device…

Would you be okay with Apple checking your photos after they are uploaded to iCloud? Even if this means you can never have end-to-end encryption with iCloud? Or do you believe that Apple (and other service providers) should have no duty whatsoever to monitor its servers for CSAM?
 
Okay, honest question to those of you whose primary objection is that the hash database and matching software lives on your device…

Would you be okay with Apple checking your photos after they are uploaded to iCloud? Even if this means you can never have end-to-end encryption with iCloud? Or do you believe that Apple (and other service providers) should have no duty whatsoever to monitor its servers for CSAM?
Most everyone has already said this. Check the photos on the cloud; it's their property, they can do what they want with it. Not on the local phone.

Google, MS, Facebook, they all do it that way already. What was so wrong about that solution? Apple was a laggard and has already failed the children, but invading the boundary of the user's phone crosses a bright line and doesn't make this any better than existing solutions.

You might be OK with all of the precautions Apple has stated they are taking, but why is there such a rush? If they got this so right, where is the full transparency? Where is the extended beta test? Why haven't they even tested this on their own employees?

Crossing this Rubicon is a legal and technical minefield, and the arrogance that Apple has displayed gives me no confidence that they really stopped to consider how well this would be received, or that they even planned it all correctly.

Why didn't they consult with the EFF and the ACLU before launching this?

The rampant cheerleading that goes into this is troubling. Yes, CSAM is bad, it's evil, but let's also be cognizant of what society does to people labelled as child molesters and predators.

1. There is very little research done on Child Predators and molesters. Treatments available to them are almost nil. Why? Because CSAM is bad, m'kay?
As a society, if we (the West) cannot and will not look for effective ways to reduce negative outcomes with child predators, we may as well just put them on an island to die or put them out of their misery. Additionally, many people who have issues with CSAM were victims of sexual predators when they were children as well. How can any legitimate clinician or researcher even conduct research into CSAM if everything about it is flagged and reported?

2. The definition of CSAM is not universal. In Canada depictions of CSAM can include drawings or illustrations, including anime and manga. The United States started with a similar definition of CSAM but it was struck down by the Supreme Court. However, that doesn't mean it couldn't be expanded again.

3. The NCMEC is a quasi-governmental organization which provides the hashes that Apple will use. Is there any oversight or review of this database? Like a no-fly watchlist, getting a hash on there might be difficult to remove, but also, how will this hash be interpreted in other countries? Apple said the hash is part of the operating system and will not be querying the NCMEC for updates. Yet Apple also deploys the universal OS image across many countries. Will Canada and Australia accept NCMEC hashes or roll their own database? How will this be interpreted? Is China going to say we have our own version of the NCMEC database and Apple must deploy it or else? How could Apple refuse such a 'reasonable' request?

4. The NCMEC image hashes can be reversed (claimed) to provide small greyscale thumbnails. So in effect, every iOS 15 image is carrying around a few thousand CSAM images?

5. Apple claims that hash collisions are 1 in a trillion, yet they require 30 images to be positively hit before they do something about it. That really doesn't sound right. By law, ONE CSAM image should be reported to the authorities, not 30. Holding off until 29 more CSAM images turn up before reporting is a violation of the law. How generous that they are allowing 30 images; it almost seems like they have no faith that their system is going to work well on day 1.

6. What happens when an image hash is positive? If it's under the 30 image limit, does it get published anyway? Does it get held?

7. The scanner as it is enabled is NOT working on behalf of the owner. I could understand if a positive hash is identified and it told the owner, at least they could choose not to upload it or to ask Apple/NCMEC to review said image, but the system doesn't do that. If it silently increments the counter until it hits 30, the owner of the phone has no way of knowing or necessarily stopping the process. The scanner isn't designed to audit a library for the benefit of the user so they can remove any illegal images or see if there is a collision. It's just designed to report on the owner to Apple and then the NCMEC and then the police.
A phone's owner should be able to review and stop such a process in order to have some level of control over potentially bad outcomes like this.

8. The scanning process does not alert you at all about the counter or positive hits. The only feedback you'll probably get is the cops knocking on your door. After that you'll be required to defend yourself in a court of law, but neither Apple nor the NCMEC will make themselves available for cross-examination. This is highly dangerous for any defendant. Getting the system 'right' is paramount, and rushing is not a good idea.

9. CSAM images are supposed to be reported to NCMEC or Law Enforcement. If a positively hashed image is given to Apple for review, are they expecting to see CSAM? Technically that's illegal; they aren't law enforcement officers. The way Facebook etc. are allowed to review this is because they aren't expecting CSAM, they are reviewing images wholesale... kind of problematic there too, unless Apple is assuming the role of government now.

This is in addition to all the technical concerns about attack surface, hijacking, forced collisions, API abuse, use of the phone camera while the phone is locked, foreign government spying, and high-level phone hacks that have all been raised as well.

What is the need for this rush and surprise (not announcing it at WWDC) and limited access? Apple has sat on their hands for years about this, and now this invasive system is going to be deployed in a month or two?

Here's an example of why deploying systems without extended review and testing is not a good idea:
That system had 28-29 levels of security to pass too....
 
Check the photos on the cloud; it's their property, they can do what they want with it. Not on the local phone.
Wait, so now Apple can do whatever they want with our private images, just because they own the server? I'm sure that's not what you meant, but then I'm still unsure why you think there's a night-and-day difference between checking them before they leave your phone and checking them after they hit the iCloud server a moment later??

Google, MS, Facebook, they all do it that way already. What was so wrong about that solution?
I won't comment on Facebook—if you think they are a model of privacy and security that Apple should look to, you need to seek help. This is what Google says it does to identify CSAM on its platform:
We invest heavily in fighting child sexual exploitation online and use technology to deter, detect, and remove CSAM from our platforms. This includes automated detection and human review, in addition to relying on reports submitted by our users and third parties such as NGOs, to detect, remove, and report CSAM on our platforms. We deploy hash matching, including YouTube’s CSAI Match, to detect known CSAM. We also deploy machine learning classifiers to discover never-before-seen CSAM, which is then confirmed by our specialist review teams.
Things like user and third-party reports obviously don't work for private images. And machine-based detection doesn't work if the images are encrypted. So what about your private files on Google Drive? Google does encrypt data, but they hold the keys and can decrypt your files if and when they want to.

Apple offers end-to-end encryption for Messages, keychain, health data, and a bunch of other services that you can see here. Photos are encrypted in transit and in storage, but it's not yet end-to-end, which means that Apple could see your photos if they wanted to. What's intriguing about this recent announcement is that it potentially paves the way for Photos to be end-to-end encrypted too, while still allowing CSAM to get reported.
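To illustrate the difference between those two storage models, here is a toy sketch using the third-party Python cryptography package. It assumes nothing about either company's real key management; it only shows why a provider that holds the key can scan on the server, while end-to-end encryption pushes any matching onto the device.

from cryptography.fernet import Fernet  # pip install cryptography

photo = b"raw image bytes"

# Model A: provider-held key. Data is "encrypted at rest", but the provider
# can decrypt it whenever it wants, so it can scan server-side.
server_key = Fernet.generate_key()              # generated and stored by the provider
stored = Fernet(server_key).encrypt(photo)
scannable = Fernet(server_key).decrypt(stored)  # the provider can do this at will

# Model B: end-to-end. The key never leaves the device, so the provider stores
# ciphertext it cannot read, which is why any matching against known hashes
# would have to happen on the device before upload.
device_key = Fernet.generate_key()              # would live only in the device keychain
uploaded = Fernet(device_key).encrypt(photo)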

Why didn't they consult with the EFF and the ACLU before launching this?
I don't know. I did read the EFF statement and I wasn't impressed by it. But you can judge for yourself.

1. There is very little research done on Child Predators and molesters. Treatments available to them are almost nil. Why? … How can any legitimate clinician or researcher even conduct research into CSAM if everything about it is flagged and reported?
Your comments here show empathy and I can agree with much of what you said. It goes beyond the scope of this discussion, but yes, there need to be pathways to treatment and maybe less judgement for those who choose to come forward. Having said that, the protection of vulnerable children has to come first, which is why reporting of abuse is mandatory.

3. The NCMEC is a quasi-governmental organization … Is China going to say we have our own version of the NCMEC database and Apple must deploy it or else? How could Apple refuse such a 'reasonable' request?
Apple have categorically stated that they would deny all such requests. If they are lying, then we shouldn't really be trusting them with any of our data, period.

4. The NCMEC image hashes can be reversed (claimed) to provide small greyscale thumbnails. So in effect, every iOS 15 image is carrying around a few thousand CSAM images?
Where on earth did you read that?

5. Apple claims that hash collisions are 1 in a trillion, yet they require 30 images to be positively hit before they do something about it. That really doesn't sound right. By law, ONE CSAM image should be reported to the authorities, not 30. Holding off until 29 more CSAM images turn up before reporting is a violation of the law. How generous that they are allowing 30 images; it almost seems like they have no faith that their system is going to work well on day 1.
Yes, I agree it sounds too high. I already discussed this here.

Re your points 7 and 8, if you tip offenders off before any report is made, what's the point? You may as well not bother with any of it. Am I missing something?

I can't comment on 9, not being a lawyer.

Thanks for taking the time to answer my question and share all those points. Sorry if I haven't replied to every point that you made… there was a lot there, but I did my best.

Yes, maybe Apple needs to slow down and make sure they get this right before deployment.
 
I (and others) have already discussed this at length. Firstly, by embedding the hash database in the OS (which Apple controls) it’s less prone to tampering. Why do you say this is simple to bypass? Secondly, doing the initial hash matching on the device opens the way for end-to-end encryption between your device and iCloud, which would be an overall win for privacy-conscious users if that’s where Apple is going and if the technology works as promised.


That seems like a loaded question. One might just as well ask, why not now?

Bypass as in turn off Photo backup to iCloud and do it manually. What is this functionality buying us?
I realize the tech is well done. That still doesn't answer the question of why now and why this solution.

This technically clever solution doesn’t change that. What’s the ROI?
 
I predicted this kind of response in the very text you quoted. Did you not understand the actual point I was making?

Also, when you choose a laughing emoji as your response to a serious and carefully considered point of view, I consider that offensive, just as it would be offensive if the discussion were face to face. It’s what MR (and social media generally) has become, but I’ll opt out if that’s your ongoing manner of communication.

That is the problem with written words like this - they get taken out of context.
You are sticking to the Apple technical solution, or what we know of it, while I am looking at the legal, moral and ethical aspects of it.
Learning more about the technical side - and I came into this knowing very little about hashing - has not changed my concern. I still feel there are serious gaps in how this really works, why Apple is doing this, and what the driver is behind this "solution".

We have a lot of guesswork, some of it likely very good, and a lot of questions, with Apple remaining silent.

The laughing emoji can be used for laughing at, laughing about, and laughing at the general situation.
 
Okay, honest question to those of you whose primary objection is that the hash database and matching software lives on your device…

Would you be okay with Apple checking your photos after they are uploaded to iCloud? Even if this means you can never have end-to-end encryption with iCloud? Or do you believe that Apple (and other service providers) should have no duty whatsoever to monitor its servers for CSAM?

Yes. Most Cloud providers do this today in an effort, I assume, to keep this data off their platforms. When a user elects to use a Cloud, we are using a service that we are opting into.

At this stage, from a legal perspective, we know that these providers have no responsibility to proactively scan for CSAM. That they do so is likely due to pressure from several angles. To help with scanning there is technology from Cloudflare and others who market to this.
 
Probably not where you wanted to go ;) If my wife, neighbor, or the cops observed my telescope (I don't own one, btw) pointed at the female neighbor's bedroom window, do you think they'd believe for a moment that I hadn't been looking?
If you told your neighbor that you have a telescope pointed at her window just to make sure nobody is going to assault her and she agrees that it's okay, then it's perfectly fine.

Also, she has the ability to close her blinds (aka, turn off iCloud Photos).
 
…there was a lot there, but I did my best.

Yes, maybe Apple needs to slow down and make sure they get this right before deployment.

Your last statement is spot on and echoes a concern that has been raised since the announcement. Many of us have asked just that.
Yet we get silence from Apple.
 