What we still don't know is WHAT Apple is doing during the review process. Do they have the raw CSAM images and can they actively compare flagged images against them? How else can they determine false positives? Or are they just sending ANY picture that has kids in it for further review by those who DO have access to the CSAM? Again, if it's the latter, would they be making judgement calls on whether someone is a 15-year-old or a 25-year-old?

Do you understand that false positives are random chance events, and that an image falsely matched could be a picture of absolutely anything?
 
Sorry, but it's clear you have no idea what this procedure is even said to do or how it works. The software on YOUR hardware is operational whether you intend to go to iCloud or not, so your 'innocent' pictures still go through the process, and the same thing would happen if it were on iCloud. There's no additional safety in having it on your hardware at all, and no extra anonymity, as Apple state it's anonymised. So having it on your system confers no benefits but much more potential risk.
Sorry but you have no idea what you’re talking about and you are hard to read.

The additional privacy/safety of Apple's NeuralHash is not (necessarily) intrinsic to the hash itself but in the fact that my pics stay (generally, most of the time, unless they're subpoenaed) encrypted while on iCloud. You keep saying this is not a thing but this is a thing and a glaring advantage compared to on-server dragnet mass scanning. No Apple employee has access to my phone. No government official has (easy) access to my biometrically locked phone. They have access to servers. A cryptographically air-gapped on-device scan is simply better than an on-server scan. Whatever abuse you may imagine on-device, it could be done ten-fold on-server.
 
The best description of gibberish I've seen: "A scan is not a scan if its output is cryptographic gibberish"
A scan is a scan ...simples.
Absolutely nothing is “simple” about this.
That’s the problem with hot takes full of assumptions and inaccuracies.
 
Absolutely nothing is “simple” about this.
That’s the problem with hot takes full of assumptions and inaccuracies.
If someone has already called it a scan in their post, then yes, it is simples. It's a scan!

I wonder if that "A scan is not a scan if its output is cryptographic gibberish" has been edited out?

Let's face it, it wasn't me who described it as a scan. Backtracking can be such fun.
 
While throwing out your scary trigger words ('Infect!' :rolleyes:) you continue to minimise the obvious problem with having that software on Apple's servers - it requires Apple's servers to be able to see the contents of your Photos. That means you need 100% trust in them not to scan them for all kinds of material, or store your photos in perpetuity to be scanned at some future year when the political winds change, and to prevent other agencies hacking into them and doing those things (again). A server with 1,000,000,000+ users' private data is a tempting target for spies, and when it's breached to target one user we are all exposed.

If the software runs on our phones, security researchers can see the code running and raise the alarm if it appears to do something it's not supposed to be doing, or if it appears to be changing suspiciously, or if it's different on some phones in China or Hungary, or if bad actors appear to be targeting it with viruses or trojans. If somebody wants to corrupt the code to spy on us all, they would need to do it out in the open and succeed 1,000,000,000+ times and be caught 0 times. I haven't seen any credible researcher who doesn't concede that advantage.

And please, the idea Apple is doing it to save electricity is just silly. You could make the same complaint right now about Photos generating thumbnails and low-res previews, or any other process happening 'on device' for privacy reasons.
That's the point though: Apple are selling this as being more private, not because it's on your hardware, but because of NeuralHash, whereas others all use servers to make these 'checks'. Now, what you are suggesting about 100% trusting them seems rather a contradiction in circumstances where they have surveillance tools on your hardware and the ramifications of that are far greater than on their server.

So you think software that is downloaded over a billion times doesn't cost much energy, let alone processing time, install time etc. etc., when they could just load it onto their own servers once? God help us, as energy use by the Internet is expected to reach 20% of all world energy by 2025.

"it requires Apple servers to be able to see the contents of your Photos"

Exactly the same as if it's on your hardware, except on your hardware the potential exists for far more intrusive surveillance. On their server the tools would only operate when and IF you decided to use iCloud, unlike the potentially more serious situation where modifications could quite easily interrogate everything on an individual's private device that they own - hence Apple employees raising the same concerns!

And all the other IT organisations, all the media coverage, as well as the employees expressing their concern - they are all wrong, are they?





How many independent experts have expressed how pleased they are with the prospect of our hardware being used like this? Many point out it should be done via iCloud server-based tools.

The above is literally a fraction of the PR disaster Apple has wreaked on itself.
 
That's the point though: Apple are selling this as being more private, not because it's on your hardware, but because of NeuralHash, whereas others all use servers to make these 'checks'. Now, what you are suggesting about 100% trusting them seems rather a contradiction in circumstances where they have surveillance tools on your hardware and the ramifications of that are far greater than on their server.

Not because of NeuralHash. NeuralHash is just a fingerprinting technology, one of many, nothing much special about this one. The interesting part is Private Set Intersection, which is a method for testing whether one set of numbers contains another set of numbers without the software performing the test learning the result of the test, and without anyone, even the architects of the system, learning of a small number of positive matches OR EVEN whether there have been zero matches. That is what is private about it: you are not even 'cleared' by a so-called scan. Nothing is knowable solely from the result of the 'scan' on your phone. Those indecipherable results are then analysed on iCloud servers where Apple has additional keys, but not the entire key. Your phone never knows the scan result, the iCloud component never sees your photos, and will likely never see the result of the 'scan'. A bad actor on your phone obtains zero information. A bad actor on iCloud obtains zero information, unless you have a child porn collection and 30 of your images pop open. The absolute bare minimum of information is being exchanged in order to detect these bad photos.
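To make the threshold property concrete, here is a rough Python sketch of the secret-sharing idea behind it. This is not Apple's code: the field, the number of shares and the secret below are made up purely for illustration, and only the 30-match threshold mirrors the figure Apple has stated. The point it demonstrates is that with fewer than the threshold number of matches, the decryption secret simply cannot be reconstructed, so nobody learns anything from a handful of matches.

```python
# Toy sketch (NOT Apple's implementation) of threshold secret sharing.
import random

P = 2**61 - 1   # a prime field; the parameters here are illustrative only
T = 30          # match threshold, mirroring Apple's stated figure

def make_shares(secret, n):
    """Split `secret` into n shares; any T of them reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0; with fewer than T shares the result is garbage."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret = 123456789                      # stand-in for the voucher decryption key
shares = make_shares(secret, 100)       # pretend 100 photos each produced a share
print(reconstruct(shares[:T]) == secret)      # True: threshold reached, key recoverable
print(reconstruct(shares[:T - 1]) == secret)  # False: below threshold, nothing learned
```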

In the status quo, which you and the FBI would love to preserve, your entire Photo collection is knowable in the cloud. All of it, just sitting there, readable. How is that not worse?

So you think software that is downloaded over a billion times doesn't cost much energy, let alone processing time, install time etc. etc., when they could just load it onto their own servers once? God help us, as energy use by the Internet is expected to reach 20% of all world energy by 2025.

You've downloaded more data reading MacRumors this week. Processing time is irrelevant because it will be happening somewhere anyway, and the face recognition and other classifiers running in the Photos app on every photo you take are doing a massive amount more work than a hashing algorithm. You're just throwing out stuff now to try to generate bad feeling. Climate change is real. Climate change caused by iOS updates? Let's see the figures.

"it requires Apple servers to be able to see the contents of your Photos"

Exactly the same as if it's on your hardware, except on your hardware the potential exists for far more intrusive surveillance. On their server the tools would only operate when and IF you decided to use iCloud, unlike the potentially more serious situation where modifications could quite easily interrogate everything on an individual's private device that they own - hence Apple employees raising the same concerns!

As explained above, it is not exactly the same. A maliciously modified process on iCloud is never going to be caught by anybody other than Apple or a whistleblower aligned with the bad actors. A rogue process running on your phone could be caught by you, if you have the skill, or by any of a number of researchers and hackers. There are massive incentives for anyone to detect that and expose it, and plenty of real-world examples of this happening. And yet again, you are ignoring the fact that this process does not scan through files wantonly, does not perform arbitrary interpretation of data, and does not have privileged access to anything unusual. It has no published capabilities that would be of use to an attacker intent on accessing local data. Your deliberate use of language like 'scan', 'interrogate' and 'surveillance' is just an attempt to fool others and yourself into oversimplifying what this process is equipped with. If somebody wanted to scan your phone for files, CSAM detection offers zero useful features. The attacker could more easily exploit the Files app, or Photos, or autocorrect. These things actually do scan, actually classify and interpret content, and actually do produce accessible results.

And all the other IT organisations, all the media coverage, as well as the employees expressing their concern - they are all wrong, are they?
I see a lot of 'it could lead to', which is just scaremongering. If somebody has a genuinely disturbing attack vector that Apple haven't already considered and addressed, let's see it. But if it's just a story that relies on Apple suddenly shifting from tediously 'woke' into hunting down gays in Hungary, excuse me if I don't reply with more than an emoji.
 
Lots of discussion about how you can keep the software from running since I visited last… It's kinda funny to see people try to alleviate people's concerns by stating how the spyware works in its current iteration. I hope they are all geared up to keep tabs on what code changed after every update, for ever and ever and ever. It remains a Trojan horse: once you're running iOS 15 you have opened the gate. That's the main point people need to know; how that code is implemented today is kinda irrelevant to the big picture. It will be interesting to see how many end up understanding and refusing the iOS 15 update and how it affects iPhone 13 sales… I personally think most won't even blink… most would not blink no matter how nefarious it was. Anyway, no iOS 15 for me or my family; that only costs Apple about 20 grand over 5 years though, so doubtful they will care much.
 
If there's no CSAM on your device, then how would Apple ever see any of your images? I don't feel bad for anyone that has that kind of stuff in their possession. This fact makes this system 100% private to those that don't own CSAM.
I think that you're missing the point. Once you have agreed to let Apple scan all of your photos and match them against a CSAM database, you've opened Pandora's box.

Let's say now that the Government (not necessarily ours, say China, but I'll use ours as an example) decides that they need to find all KKK members. It would be a very simple thing to swap in the hashes of known Klan images, and scan for those instead. You already agreed to let Apple scan your photos.

And it may not even be Apple's doing; they may be ordered by whatever government agency to do this, without you ever knowing it.

Do I see this happening in the US? Well, it's less likely than in China, where the Chinese government has already taken iCloud authority away from Apple so that they can keep it for themselves. Political insurgent? Anti-communist propaganda? Think of all of the ways that this can be abused. Nobody is defending pedos, but this sets a really bad precedent of "voluntary monitoring" that puts everyone's privacy at risk.

... and for that matter, it's not even "voluntary monitoring". Apple isn't giving you a choice of opting out; they're just saying that they're going to do it. For those of you saying "I just won't install iOS15", we know how that works. Your apps will stop working eventually, insisting that you upgrade for them to continue working. Your phone will be vulnerable to attacks and bugs because you haven't upgraded. Stomping your feet and saying "I just won't install it" isn't realistic.
 
Yes, I do understand how it works. I am asking about the HUMAN REVIEW process, not the MATCHING PROCESS. What all goes into the human review process? Will they be checking the subject's driver's license to determine whether they are underage or not? I DON'T KNOW THAT DETAIL. Which is why... I am asking. Seriously, you have had this attitude towards me for weeks when all I am doing is asking questions so I can learn the process. I am not saying I know everything. Which is why I am posting these questions, so I do know.
You still don't understand how it works. If there's a match, it's a match with a KNOWN CHILD ABUSE image. The human review at that point is just a visual confirmation.

The silly thing about this is: who the hell stores downloaded internet porn in their camera roll, then syncs it with iCloud? This is a complete waste of time, and an invasion of privacy "for the children".
 
I think that you're missing the point. Once you have agreed to let Apple scan all of your photos and match them against a CSAM database, you've opened Pandora's box.

Let's say now that the Government (not necessarily ours, say China, but I'll use ours as an example) decides that they need to find all KKK members. It would be a very simple thing to swap in the hashes of known Klan images, and scan for those instead. You already agreed to let Apple scan your photos.

And it may not even be Apple's doing; they may be ordered by whatever government agency to do this, without you ever knowing it.

Do I see this happening in the US? Well, it's less likely than in China, where the Chinese government has already taken iCloud authority away from Apple so that they can keep it for themselves. Political insurgent? Anti-communist propaganda? Think of all of the ways that this can be abused. Nobody is defending pedos, but this sets a really bad precedent of "voluntary monitoring" that puts everyone's privacy at risk.

... and for that matter, it's not even "voluntary monitoring". Apple isn't giving you a choice of opting out; they're just saying that they're going to do it. For those of you saying "I just won't install iOS15", we know how that works. Your apps will stop working eventually, insisting that you upgrade for them to continue working. Your phone will be vulnerable to attacks and bugs because you haven't upgraded. Stomping your feet and saying "I just won't install it" isn't realistic.
You're right, you can only run iOS 14 for so long; it seems inevitable this will separate you from the Apple ecosystem, but we have time to decide the next move. My iPhone mini will probably last at least a couple of years… it's kinda unclear whether others will follow Apple's path on this. As bad as Google is, so far they do their objectionable stuff on the web, so since they currently are not installing spyware in the Android code that we know of, it might be an option. One thing is clear though: Apple is no longer an option without a course correction.
 
i don't think so, it starts with your first upload after you install os15, the csam database sits on the phone and when a photo is added to photos and then uploaded it is hashed and compared against the csam database, it doesn't work on photos already in the cloud since it is only used upon upload

like, if you have csam sitting in icloud now, even when you download and install os15, those images won't be seen or hashed, and even if you upload brand new csam that ncmec or other agencies haven't ever seen, those won't be flagged either
That's assuming Apple doesn't scan in the cloud too -- they never said that. And I suspect they will, since an Apple device isn't the only way a pic can get added to the cloud.
 
That's assuming Apple doesn't scan in the cloud too -- they never said that. And I suspect they will, since an Apple device isn't the only way a pic can get added to the cloud.
I don't care if they scan the cloud… that's acceptable, we have choices about where we upload content… the issue remains the installation of the software on our devices… if they reverse that and keep it in the cloud then all is good; if not, then the only choice is to accept or leave.
 
Not because of NeuralHash. NeuralHash is just a fingerprinting technology, one of many, nothing much special about this one. The interesting part is Private Set Intersection, which is a method for testing whether one set of numbers contains another set of numbers without the software performing the test learning the result of the test, and without anyone, even the architects of the system, learning of a small number of positive matches OR EVEN whether there have been zero matches. That is what is private about it: you are not even 'cleared' by a so-called scan. Nothing is knowable solely from the result of the 'scan' on your phone. Those indecipherable results are then analysed on iCloud servers where Apple has additional keys, but not the entire key. Your phone never knows the scan result, the iCloud component never sees your photos, and will likely never see the result of the 'scan'. A bad actor on your phone obtains zero information. A bad actor on iCloud obtains zero information, unless you have a child porn collection and 30 of your images pop open. The absolute bare minimum of information is being exchanged in order to detect these bad photos.

In the status quo, which you and the FBI would love to preserve, your entire Photo collection is knowable in the cloud. All of it, just sitting there, readable. How is that not worse?



You've downloaded more data reading MacRumors this week. Processing time is irrelevant because it will be happening somewhere anyway, and the face recognition and other classifiers running in the Photos app on every photo you take are doing a massive amount more work than a hashing algorithm. You're just throwing out stuff now to try to generate bad feeling. Climate change is real. Climate change caused by iOS updates? Let's see the figures.



As explained above, it is not exactly the same. A maliciously modified process on iCloud is never going to be caught by anybody other than Apple or a whistleblower aligned with the bad actors. A rogue process running on your phone could be caught by you, if you have the skill, or by any of a number of researchers and hackers. There are massive incentives for anyone to detect that and expose it, and plenty of real-world examples of this happening. And yet again, you are ignoring the fact that this process does not scan through files wantonly, does not perform arbitrary interpretation of data, and does not have privileged access to anything unusual. It has no published capabilities that would be of use to an attacker intent on accessing local data. Your deliberate use of language like 'scan', 'interrogate' and 'surveillance' is just an attempt to fool others and yourself into oversimplifying what this process is equipped with. If somebody wanted to scan your phone for files, CSAM detection offers zero useful features. The attacker could more easily exploit the Files app, or Photos, or autocorrect. These things actually do scan, actually classify and interpret content, and actually do produce accessible results.


I see a lot of 'it could lead to', which is just scaremongering. If somebody has a genuinely disturbing attack vector that Apple haven't already considered and addressed, let's see it. But if it's just a story that relies on Apple suddenly shifting from tediously 'woke' into hunting down gays in Hungary, excuse me if I don't reply with more than an emoji.
Some very poor assumptions in your post? "A bad actor on iCloud obtains zero information". Then of course you mention iCloud having access to all your photos? NO THEY DON'T, not if you CHOOSE not to use iCloud to store photos. But see, the important word there is CHOOSE.

I get no choice if it's on my hardware, let alone any modifications that could take place on my hardware.

You seem to think iCloud can check up on more than what's on my hardware? That's just factually wrong on so many levels, let alone making the assumption that I use iCloud for photo storage. There is a choice in using iCloud for photos, but no choice in what happens with that software on individuals' hardware.

As for comments about climate change: who made them... oh yes, it was Apple. Same as the eco stance they take, so it is clearly relevant to mention 1,000,000,000+ downloads etc. etc.

Again you obfuscate, because I'd accept the software doing the job on iCloud - the same software - but not on my hardware. You seem to be using the 'I see no ships' mentality, when if you'd looked at professionals' views, IT views, and even those from Apple employees, you would see they have the same concerns, and I suggest that when Apple employees do so, they have their finger on the button more than we do.

I've been an Apple user since October 1976 (and it's verifiable), used to be on the developer list, let alone assisting Apple on many occasions (again verifiable), so anyone who has the idea I'm anti-Apple must be crazy. It's because I actually care about Apple that I'm concerned, and that concern seems to be shared by many, many other respectable organisations, IT specialists, media and even Apple employees themselves.

So all the experts, all the media, all the critics have got it wrong and you have got it right. Sorry, I don't think so.

You are entitled to your view, and I'd fight for your right to it... which is why I'm so concerned about the implementation of these tools on OUR hardware, a view I'm pleased to say seems shared by many. But let's hope you always have the right to your view, because surveillance can be a short step away from acting as final arbiter of those freedoms.
 
I don't care if they scan the cloud… that's acceptable, we have choices about where we upload content… the issue remains the installation of the software on our devices… if they reverse that and keep it in the cloud then all is good; if not, then the only choice is to accept or leave.
Those are exactly my thoughts as well, though I'm not sure I'd ever again trust Apple as much as I did a couple of weeks ago.
 
What I know is irrelevant. As I wrote, there is specific data readily available on the limits of what humans can know.

It would be really embarrassing if, instead of responding ad nauseam to me several times, you had googled something and found a variety of completely mainstream virology and microbiology materials that explicitly state, or very obviously imply, that "human science still lacks a reliable method for viral observation".

Are you familiar with Fort Detrick and its reputation for virology? Its relationship with the NIH/CDC? Wonder what kind of nuts they are trying to crack, maybe to improve viral TEM & STEM observations and techniques, at scales even magnitudes larger than those of current popular human concern?

Maybe not…. because if I understand correctly from your reply, you have faith and that’s good enough for you. It is indeed a popular belief among humans that there are benefits just from the power of belief alone.

Well like they say: “Don’t eat that apple, Eve!”

Have a great day!
 
Some very poor assumptions in your post? "A bad actor on iCloud obtains zero information". Then of course you mention iCloud having access to all your photos? NO THEY DON'T, not if you CHOOSE not to use iCloud to store photos. But see, the important word there is CHOOSE.
Except it never asks you if you want to sync iCloud photos -- it's the default and it's automatically turned on. You have to explicitly turn it off and monitor that it stays off.
 


Apple today shared a document that provides a more detailed overview of the child safety features that it first announced last week, including design principles, security and privacy requirements, and threat model considerations.


Apple's plan to detect known Child Sexual Abuse Material (CSAM) images stored in iCloud Photos has been particularly controversial and has prompted concerns from some security researchers, the non-profit Electronic Frontier Foundation, and others about the system potentially being abused by governments as a form of mass surveillance.

The document aims to address these concerns and reiterates some details that surfaced earlier in an interview with Apple's software engineering chief Craig Federighi, including that Apple expects to set an initial match threshold of 30 known CSAM images before an iCloud account is flagged for manual review by the company.

Apple also said that the on-device database of known CSAM images contains only entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions and not under the control of the same government. Apple added that it will publish a support document on its website containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Additionally, Apple said users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the support document. No timeframe was provided for this.
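To illustrate what comparing a root hash could look like in practice, here is a hedged sketch. Apple has not published the exact construction; a Merkle-style fold over the database entries is assumed here purely for illustration, and the entries and the "published" value below are placeholders.

```python
import hashlib

def merkle_root(leaves):
    """Fold a list of database entries up to a single root hash."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Placeholder entries standing in for the encrypted hash database shipped on a device.
on_device_db = [b"entry-1", b"entry-2", b"entry-3", b"entry-4"]

device_root = merkle_root(on_device_db).hex()
published_root = device_root        # stand-in for the value in Apple's support document

# Any substituted or region-specific database would fold to a different root.
print(device_root)
print(device_root == published_root)
```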

In a memo obtained by Bloomberg's Mark Gurman, Apple said it will have an independent auditor review the system as well. The memo noted that Apple retail employees may be getting questions from customers about the child safety features and linked to a FAQ that Apple shared earlier this week as a resource the employees can use to address the questions and provide more clarity and transparency to customers.

Apple initially said the new child safety features would be coming to the iPhone, iPad, and Mac with software updates later this year, and the company said the features would be available in the U.S. only at launch. Despite facing criticism, Apple today said it has not made any changes to this timeframe for rolling out the features to users.

Article Link: Apple Outlines Security and Privacy of CSAM Detection System in New Document
Call this what you want, justify it any way that you want, mollify yourself with "I have nothing to worry about because I don't have child porn". Make yourself feel better by saying "It's for the children".

This is surveillance. Plain and simple. 4th Amendment doesn't apply, because you are agreeing to it.

Apple has no business performing surveillance on 1bn users, nor do they have any business monitoring our CHILDREN to make sure that they are behaving.
 
It's interesting that I can't find any articles of reporters questioning Google on this, but also interesting that Google's PR dept has passed completely… Apple gave them a potential gift… they don't have to talk about their own privacy standards, which we know are lax… but they could just say they would never install something like this on your hardware and it would be a home run… I guess they think it's too risky to draw scrutiny of their online privacy policies.

anyway… missed a rare attack opportunity; Apple does not stumble like this often
 
Missed a rare attack opportunity; Apple does not stumble like this often

Epic and Whatsapp/Facebook attacked viciously, unsurprisingly.

Apple on the other hand was quick to deploy the big guns (Hair Force One interview to WSJ) and mostly recovered from the stumble.
 
Epic and Whatsapp/Facebook attacked viciously, unsurprisingly.

Apple on the other hand was quick to deploy the big guns (Hair Force One interview to WSJ) and mostly recovered from the stumble.
I suppose Apple's response distracted people enough from the real issue, much like some in this thread have done… you have to take note of how many posts here are discussing things that have nothing to do with the main issue of spyware being installed on your hardware. Most media have been duped into the "don't look there, look over here" routine… somewhat successfully burying the story through distraction. I would say a pretty large majority of users still don't understand the distinction between iCloud scanning and spyware installation.
 
As I understand it, Apple has absolutely no idea what is in the database. All they receive is the database of hashes...? So, in a worst-case scenario, anything could be in the database.

And when Apple reviews a flagged phone, I assume they are only confirming the hashes are an exact match?
first as to the issue of the database, apple is creating its own database and to prevent what you are pointing out (not knowing what is in the database) they are using 2 databases (there are databases in the uk and europe) and only using images that match so the images they use are identical and are found in 2 databases

in this way, 2 database holders would have to collude to plant false images, a very unlikely scenario, this gives us reasonable assurance that the images being scanned against are true images of child abuse and not something else
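a trivial sketch of that intersection rule, with made-up placeholder hashes rather than anything from a real database:

```python
# Placeholder hash values; real entries would come from NCMEC and a second
# child-safety organization in a different jurisdiction.
provider_a = {"a1f3", "b2c4", "d9e0"}          # organization in one jurisdiction
provider_b = {"b2c4", "d9e0", "ffee"}          # organization in another jurisdiction

# Only entries BOTH providers independently submitted go into the on-device set,
# so a single provider cannot unilaterally plant an entry.
on_device_set = provider_a & provider_b
print(sorted(on_device_set))                   # ['b2c4', 'd9e0']
```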

as to the issue of review, i am assuming that apple will look at the user's photo and make a first determination, this is or isn't abuse or a false positive and then they would alert ncmec, but again there would need to be 30 or more of these images from the user before any of this would happen

i stand to be corrected on this

more research on what apple does with matched images, apparently they do use humans looking at the "visual derivative" which i assume just means the actual photo ...

"If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.
The reviewers are instructed to confirm that the visual derivatives are CSAM. In that case, the reviewers disable the offending account and report the user to the child safety organization that works with law enforcement to handle the case further."

found more clarification on this, yes, a human will actually review the actual offending photo:

or even in the mathematically unlikely case of a one-in-one-trillion false identification of an account, a human reviewer would see that the visual derivatives for the purportedly-matching images are not CSAM, and would neither disable the account nor file a report with a child safety organization.
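to show how a per-image false-match rate and the 30-image threshold combine into an account-level number, here is a rough back-of-envelope calculation; the per-image rate and library size below are assumptions picked for illustration, not figures apple has published, and the one-in-one-trillion figure quoted above is apple's own claim

```python
from math import comb

per_image_fp = 1e-6     # ASSUMED per-photo false-match rate (illustrative only)
library_size = 10_000   # ASSUMED number of photos uploaded by one account
threshold = 30          # Apple's stated match threshold before human review

def account_flag_probability(p, n, t):
    """P(at least t false matches among n independent photos): a binomial tail."""
    term = comb(n, t) * p**t * (1 - p)**(n - t)   # probability of exactly t matches
    total = term
    for k in range(t + 1, n + 1):
        term *= (n - k + 1) * p / (k * (1 - p))   # ratio of consecutive binomial terms
        total += term
        if term < total * 1e-15:                  # later terms no longer matter
            break
    return total

print(account_flag_probability(per_image_fp, library_size, threshold))
```

even with these deliberately pessimistic assumptions the account-level probability comes out vanishingly small, which is the whole point of requiring a threshold rather than acting on a single match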
 
"Apple are insisting that there's is only one version of iOS because they don't have the means to create multiple different versions."

And yet instead of putting one piece of code in iCloud that will do the same thing, just as anonymously, that doesn't infect your hardware, that doesn't set a dangerous precedent, that doesn't require 1,000,000,000+ users to download said software, that doesn't require server time to deliver that software, doesn't require the user's hardware to use its processing power, its electricity, etc. etc., and doesn't require Apple to go against its own eco credentials, where the overhead of not having it on the server is immense?

And people wonder why the trade is suspicious, why Apple employees are suspicious, why the media in general is suspicious?
go to tineye.com and upload an original image of yours, a tree, a garden, a beach, something you have shot, something common enough that you would expect there to be millions of similar images, maybe a cat :)

tineye.com searches 45 billion images in milliseconds and so far i have tried about 10 images of mine and not gotten a single false positive, this scan is fast, ridiculously fast, you will notice zero resources used i suspect
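for a sense of why such a lookup is cheap, here is a tiny sketch; it uses an ordinary cryptographic hash and an in-memory set purely as a stand-in (neuralhash is a perceptual hash and the real matching is blinded), but the cost profile is the point:

```python
import hashlib, os, time

# Stand-in database: one million random 32-byte "fingerprints".
database = {os.urandom(32) for _ in range(1_000_000)}

photo_bytes = os.urandom(3_000_000)                  # stand-in for a ~3 MB photo
fingerprint = hashlib.sha256(photo_bytes).digest()   # hashing is the only per-photo work

start = time.perf_counter()
flagged = fingerprint in database                    # average O(1) set membership test
elapsed_us = (time.perf_counter() - start) * 1e6
print(flagged, f"{elapsed_us:.1f} microseconds")
```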
 
That's assuming Apple doesn't scan in the cloud too -- they never said that. And I suspect they will, since an Apple device isn't the only way a pic can get added to the cloud.
they do hold the encryption keys to your icloud photos but if they scanned for photos in the cloud they would need to decrypt every icloud photo (owned by american users) and then run the scan, i'm pretty sure that would leave them liable to be sued ... not to mention it would cause a s@#$storm, i think that any kiddie porn sitting in icloud at the moment is probably safe (unfortunately)

apple is setting this up as "from this day forward" "this day" = installing ios 15
 
they do hold the encryption keys to your icloud photos but if they scanned for photos in the cloud they would need to decrypt every icloud photo (owned by american users) and then run the scan, i'm pretty sure that would leave them liable to be sued ... not to mention it would cause a *********, i think that any kiddie porn sitting in icloud at the moment is probably safe (unfortunately)
They do have the keys and can do that easily -- and that's ok for me. I doubt if they can be sued, others do the same thing and it's their servers and their service. That's pretty compelling in U.S. courts...

apple is setting this up as "from this day forward" "this day" = installing ios 15
You got a quote from Apple on that? And how do you account for photos not uploaded from an Apple device?
 