True. This is arguably not the perception of iCloud that Apple is going for with its privacy-focused advertising, but caveat emptor. Might change the public discussion of OS merits in the future though. Apple is often equated with privacy and seamless integration. The reality is, pick one.


"Super smart trained AI" - I work with state of the art machine learning models, and even the best of them make the occasional dumb mistakes, because ultimately it is a dumb method still far away from human thinking.

The system is looking at the content. The NeuralHash component (your step 2) works on "features of the image instead of the precise values of pixels," ensuring that "perceptually and semantically similar images" get similar fingerprints. Semantically similar means content matching: NeuralHash analyses the image content. If it were only about matching slight modifications, perceptual similarity would be sufficient; NeuralHash does more. Thus the fingerprint is, among other things, a content summary. A lot depends on the detail here, which in turn depends on the undocumented features Apple is looking for and the undocumented weights and thresholds of the system. "Two pink shapes" is more generic than "two nude humans," which is more generic than "two people having sex," which is more generic than "a man having sex with a boy," which is more generic than "a grey-haired man..." and so on. The more detailed this goes, the closer we get to pixel-perfect image comparison. We know Apple does not want that, so some level of genericness is preserved. Step 3 is comparing these image content summaries with the image content summaries from NCMEC.
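To make the comparison step concrete, here is a toy sketch of how feature-based fingerprints could be matched. To be clear, this is not Apple's actual NeuralHash (its model, features, and thresholds are undocumented); the feature vectors, the binarisation, and the distance cut-off below are all invented for illustration.

```python
# Hypothetical sketch of perceptual-fingerprint matching. This is NOT Apple's
# NeuralHash; the encoder, fingerprint size and tolerance are all undocumented.
from typing import Sequence

def fingerprint(features: Sequence[float]) -> int:
    """Binarise a feature vector (assumed to come from some neural image
    encoder, not shown here) into a compact fingerprint."""
    bits = 0
    for value in features:
        bits = (bits << 1) | (1 if value > 0.0 else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

MATCH_DISTANCE = 2  # assumed tolerance; the real cut-off is not public

# Toy feature vectors standing in for two "semantically similar" images.
photo_features    = [0.9, -0.2, 0.4, 0.1, -0.7, 0.3, 0.8, -0.1]
database_features = [0.8, -0.3, 0.5, 0.2, -0.6, 0.3, 0.7, -0.2]

match = hamming(fingerprint(photo_features),
                fingerprint(database_features)) <= MATCH_DISTANCE
print(match)  # True: similar feature vectors yield nearby fingerprints
```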


True. The unspecified threshold is interesting, though. We know more than one matching picture is needed (so Apple won't do anything if they have one match, even if it is a perfect match, which is peculiar in its own right), but we do not know how many. Ten? Two?
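In code terms, the publicly stated behaviour amounts to nothing more than a counter compared against an undisclosed cut-off. The value below is a placeholder, not Apple's real threshold.

```python
# Sketch of the account-level threshold logic. The threshold is a placeholder;
# Apple has not published the real number.
MATCH_THRESHOLD = 10  # assumed; could just as well be 2 or 30

def account_flagged(match_count: int, threshold: int = MATCH_THRESHOLD) -> bool:
    """A single match, even a perfect one, does nothing on its own; only
    reaching the undisclosed threshold triggers the next step (human review)."""
    return match_count >= threshold

print(account_flagged(1))   # False
print(account_flagged(12))  # True
```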


First of all: Apple trusts its users so little that it suspects all of them of CSA, and it installs a black box into their personal property to check on them. To Apple, users are potential adversaries, who need to be checked and controlled. Information from Apple to its users must be read with this premise in mind. No claim from Apple should be taken at face value.

Your description of 6a assumes that all of this is perfectly implemented, without bugs or undocumented backdoors, and that the calculation is honest. There is no reason to make these assumptions. The trillion is hyperbole even under the most generous readings, as user accounts can differ by many orders of magnitude. External experts matter little - Apple picked them, and Apple has positioned itself as our adversary. There is no basis of trust to fall back on, not any more. Apple needs to open-source this toolchain, so that we can all see what is going on in there.



The matching is described as taking content into account.

Also, you left out option three - the low-res photo looks like, well, the reviewer is not sure. Is it CSA or not? Are all those people adults? Consenting adults? Might be hard to tell with the blur. Is this a picture of a barely dressed kid or a young adult? If the former, is that legal? The reviewers will have to make decisions that are not nearly as clear cut as you describe. If they decide that they cannot rule out CSA and they would rather have the experts take a look, then we get to...

Step 8 - NCMEC Review
Here all bets are off, as we do not know how this works. If the questionable pics are not variants of those in their database, then they should drop the case. The only damage is several strangers having looked at private pictures. If it is a match, off to the police. What if it is not a match, but the NCMEC reviewer thinks this might be a hitherto unknown case of CSA? Can they ask the police to investigate?
Thanks for your detailed analysis, very informative.

I doubt the person you are quoting will be persuaded, though. He apparently doesn't care, just takes everything at face value and assumes that's about it.

If Apple starts to trust no one, then users have no reason to trust Apple to take care of their own business well. Frankly, I believe Apple can afford to lose customers' trust, just as it could afford to damage its PR in order to force the development of CSAM-scanning software.

I feel Apple's "announcement" is generic enough that most customers think that's the end of it. But database maintenance, internal audits, human oversight issues, etc. aren't being addressed properly.
 
Free scanning energy.
It's NOT free or even close to free! I put my phone on a charger every day, and I pay for the electricity that charges it. At a much higher rate than a business rate that Apple would get charged. If I charge at work, they'll pay it and it's still not free! (And I wouldn't steal from work!)

I get it, you've only dealt with the numbers on the server side, but you'll have to start thinking about those billion phones and how much power they take for you to get my argument. Those numbers are a small pittance compared to a billion phones. There's a reason we still have large data centers! (it's cheaper!)
 

Nicely put.
Other companies have allowed sideloading for years, but I guess in this case the motto "you knew what you bought" prevails, right? Well, we bought iPhones for the privacy they advertised.

But I'm truly curious: why is Apple so willing to help only when it comes to child pornography?

Aren't they sensitive to other types of abuse? Rape? Domestic violence? What about other human rights? What about suicidal kids? We could save them by monitoring their text messages or their searches! Those are scary situations all around us. I find it hard to believe that Apple doesn't care... Do you agree that they should start scanning for such cases as well? I would appreciate an answer.
 
Aren't they sensitive to other types of abuse? Rape? Domestic violence? What about other human rights? What about suicidal kids? We could save them by monitoring their text messages or their searches! Those are scary situations all around us. I find it hard to believe that Apple doesn't care... Do you agree that they should start scanning for such cases as well? I would appreciate an answer.

I'm playing devil's advocate here because I don't know enough about every one of those situations and how they "spread". But I'm guessing CSAM's primary distribution channel is online with a very tangible way of tracking it, so to speak.

I realize the online distribution channel thing falls flat a bit compared to those other horrific situations you mentioned.

I think there's also a general societal thought process that of all crimes, pedophilia is the worst of the worst. There's a reason people make jokes about those people "getting it" once they get to prison... 🤷‍♂️
 
I'm playing devil's advocate here because I don't know enough about every one of those situations and how they "spread". But I'm guessing CSAM's primary distribution channel is online with a very tangible way of tracking it, so to speak.

I realize the online distribution channel thing falls flat a bit compared to those other horrific situations you mentioned.

I think there's also a general societal thought process that of all crimes, pedophilia is the worst of the worst. There's a reason people make jokes about those people "getting it" once they get to prison... 🤷‍♂️
I will assume it doesn't fall flat for detecting messages that spread COVID-19 misinformation. So maybe it should apply to scanning such messages as well, to save lives?

My point is that we have no idea what Apple will do from now on. They are opening Pandora's box!

PS: But my question about Apple scanning for all those horrific situations still applies. If they could, should they?
 
I will assume it doesn't fall flat for detecting messages that spread COVID-19 misinformation. So maybe it should apply to scanning such messages as well, to save lives?

My point is that this thing won't stop... We have no idea what Apple will do from now on. They are opening Pandora's box.

So, a legitimate question, because I still go back and forth on this whole thing.

If they're going to do that on iCloud anyway, doesn't the concern about that issue alone end up being nullified a bit?
 
So, a legitimate question, because I still go back and forth on this whole thing.

If they're going to do that on iCloud anyway, doesn't the concern about that issue alone end up being nullified a bit?

A tech expert would explain it better than me. There are many people who have written very good analyses here in the forum. The whole point is the backdoor such a system leaves behind when it runs at a local level.

On the other hand, for example, I don't use iCloud Photos because I store medical material on my device, which is not supposed to be uploaded online! Who gives them permission to scan it?

We purchased iPhones based on their ads and their promotional material about privacy.

If you have time, you can read this article
 
Thanks for taking the time to reply.

Of course software fails, but in this case we are pretty much talking about "spyware" intentionally installed on iPhones, running at a local level.

Many people are worried about the political implications and how governments will push Apple in the future. I admit I share the same concern. We have already seen how Apple stepped back in China and in Russia (I think).
Apple is a corporation, not a humanitarian organization.

Regarding trust: in general I don't trust companies; I just buy the products that I believe are the best purchase at a specific moment. Right now I have invested in the Apple ecosystem and I don't even have the time or energy to bother moving my files somewhere else. I will do it if I have to.

On a personal level, Apple lost my trust (a lot!) when they started selling devices with iCloud Photo Stream ON by default, without asking users' permission. Many people who had medical documents or sensitive material in their photo album found out that their files were in the cloud without their consent! But Apple once again knew better for all of us.

Regarding Apple's statistics: how many times have we heard about problems that affect "only a small number of users," where we can assume that the number is actually much higher?
Even though spyware isn't a fair name, it's fine if you'd like to label it that way. Apple's software has a very specific function and they aren't hiding it, which by definition is different from spyware. They are also saying what it does, how it's supposed to work, and what it's not intended for. Again, this is a trust issue.

It is a corporation for sure. A corporation is made up of lots of people, so after a while it comes down to what the culture of that corporation is. Executives guide a lot of the decisions, but they still take the feedback of their staff and also still have to get the software developers to write the code. That's a lot of heads you have to convince to do something malicious. Plus, once implemented, you have to maintain it. It's hard to get employees to continually do something they find morally wrong; not impossible, just not sustainable. That doesn't mean developers haven't been influenced or asked to do something they didn't agree with because they needed to keep their job. We are all human, and all of us have made poor decisions we later wish we hadn't.

It's also smart not to unconditionally trust anything, let alone a company. This is why I welcome your perspective. It's important to say, 'Okay Apple, you're doing this thing that raises all kinds of alarms for me because you say you want to do a thing for your customers. I want to know a few things and have some assurances of x, y, z, etc. to be even remotely okay with it.'

When you try to predict the future based on the past and other historical situations, you are being wise. When you demand that an external entity, whether a company, a person, or something else, not do something because of past experience, without checks, it's not wrong, but it's probably, and very likely, not right either.

For me, and maybe for you but maybe not, I'm okay with taking a wait-and-see approach to this. One, because I can't stop it even if I didn't trust it, and two, because I do have concerns but it's not yet time for me to hit the brakes. Right now is the time for me to ask questions, get answers, and see how it goes. If it turns out to be a terrible thing at some point, I will be vocal about it.
 
So, a legitimate question, because I still go back and forth on this whole thing.

If they're going to do that on iCloud anyway, doesn't the concern about that issue alone end up being nullified a bit?
No, because iCloud belongs to Apple: they are scanning their own computers. My iPhone belongs to me: they are scanning my computer. Even if your iPhone is not connected to the internet, the CSAM hash list and scanning ability will still be lying dormant in the OS.

Honestly, I'm feeling quite hopeless about this. What are the odds that Apple backs down?
 
No, because iCloud belongs to Apple: they are scanning their own computers. My iPhone belongs to me: they are scanning my computer. Even if your iPhone is not connected to the internet, the CSAM hash list and scanning ability will still be lying dormant in the OS.

Honestly, I'm feeling quite hopeless about this. What are the odds that Apple backs down?
Near zero, until the bottom line is really hurt.

They need something like a 20%-plus hit: people simply saying NO, turning off their services, not spending the money, not upgrading their phones, signing up for alternative services, and removing their Apple subscriptions, all while providing the feedback on why.

Honestly, if Apple did a huge about-face right now, they would get so much respect and more loyalty from me. But this action has completely removed all of the goodwill I had towards them.

I just can't see why they would do something like this. It is only going to produce backlash, especially now, in this climate, with all the other infringements happening around the world.
 
Wow, really? Just read through the thousands of comments on the gazillion threads about it on this forum. You have people who think they're going to be thrown in prison for uploading their baby's "first bath" picture to iCloud, or that Apple employees will be perusing their photo libraries on a regular basis, looking through each of their pictures. You have people just certain that Apple is preparing the way for dictatorial governments to throw you in jail for having dissenting political images on your phone, when of course this has absolutely nothing to do with that. Slippery slope arguments are only valid if the negative consequences the arguer proposes are logically inevitable, which of course isn't true here; therefore it's a logical fallacy. You also have people confusing the on-device CSAM detection with the parental safety measures they're implementing in Messages for inappropriate photos (they're two separate things entirely).
OK, so apparently you think there are three misrepresentations/misunderstandings. First, that there will be false positives and that these false positives (e.g., a baby blowing bubbles in a bath) will be similar to the CSAM material. Apple hasn't exactly been transparent about the algorithm used, but the hash is likely to be some sort of compressed summary that captures visual features of the image, and essentially Apple is matching your image to that template using some threshold for similarity. That means false positives are certain, and that the false positives are likely to look like CSAM material - namely, sensitive photos of people. So people might not go off to jail for innocent baby pictures, but it is possible Apple's process will trigger an Apple employee reviewing your baby's/partner's/selfie pictures. Second, the slippery slope idea is not certain, but what Apple has just demonstrated is both the feasibility of installing surveillance software locally on phones, powered by AI-like processes, and its willingness to do so. We shouldn't be flirting with this. I think the odds are very high - almost certain - that this kind of process will be misused at some point. Third, I don't think people's main concern is about the filtering for kids in iMessages. That appears to be entirely local. People would object if somebody at Apple was reviewing their iMessages, though.

I am a psychologist/neuroscientist. I have had a long interest in machine learning and machine perception. People worry about what happens when these processes go wrong. That is a concern (e.g., algorithms learning racism), but we should be more concerned about the potential consequences when these machine algorithms go right.
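To illustrate how much the false-positive question hinges on numbers Apple has not published, here is a back-of-envelope sketch. The library size, per-image false-match rate, and threshold below are all invented; the only point is that the account-level flag rate swings wildly with the per-image error rate of the (undisclosed) matching model.

```python
# Back-of-envelope: how a per-image false-match rate turns into an account-level
# flag rate. Every number here is invented for illustration; Apple has not
# published the per-image rate or the threshold.
from math import exp, factorial

def p_account_flagged(n_photos: int, p_false_match: float, threshold: int) -> float:
    """P(at least `threshold` false matches) under a Poisson approximation
    with mean n_photos * p_false_match (photos treated as independent)."""
    lam = n_photos * p_false_match
    term = exp(-lam) * lam ** threshold / factorial(threshold)  # P(X == threshold)
    total = 0.0
    for k in range(threshold, threshold + 10_000):
        total += term
        if 0.0 < term < total * 1e-12:  # remaining tail is negligible
            break
        term *= lam / (k + 1)           # P(X == k + 1)
    return total

# Assumed: a 20,000-photo library and a threshold of 10 matches before review.
print(p_account_flagged(20_000, 1e-6, 10))  # ~3e-24: one-in-a-million per-image rate
print(p_account_flagged(20_000, 1e-3, 10))  # ~0.995: one-in-a-thousand per-image rate
```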
 
Objection!

It will be misused, that is 100% sure. Imagine what happens to dissidents in China/Russia/Your preferred authoritarian state if they are snitched out.

I believe the probability that the feature is going to have people killed (not to mention unlawfully imprisoned) is higher than the probability that it won't.
I get what you're getting at, but that's a stretch of the imagination and really could be applied to anything. For example, if I buy an electric car, on a long enough timeframe someone in the oil industry could likely die because of oil barons not getting what they want from some political figure pushing the clean energy agenda. It's unfair for me to make this argument, because I didn't directly kill that individual by buying an electric car. I wanted to make a purchase that I figured was good for the environment.

Conversely, now some worker collecting lithium in South America is going to die because more and more people every year are buying electric vehicles, and they are working longer hours and get chemical poisoning because of exposure. Then when my battery needs to be replaced, they will take the old one and dispose of it, and 50 years from now it will contaminate the groundwater and kill off a species of plant, which kills another thing, and so on and so forth. Look, it's obvious those things won't happen just as I stated, but something will 100% negatively happen from that purchase of mine. It's a very indirect comparison. Quite a bit of mental gymnastics to get there as well. That's the same thing you are doing. But let's address it.

Apple's software will 100% have a negative effect on the folks it catches who are collecting the bad photos, and I'm glad for that. You talk about the future of other countries with this feature, which Apple has stated it hasn't rolled out, but I agree it's fair to think they will try to bring it to those countries as well, but maybe not. One, Apple says it will consider how to best implement such features in those countries. In its current state Apple has a human review, so it would need to trigger all the other stuff first and still have Apple review it. Since Apple doesn't want to be connected with someone's death, I'm sure they will handle this with a rigorous review, or else they won't implement it in said country until they are comfortable with it. Second, countries that want to get rid of dissidents don't need something like this to get rid of them illegally. They do that already, I'm guessing, but sure, let's let them use this in our mental exercise. Given the process Apple currently has in place, it's already difficult for them to compromise the system. You can try to insert photos into the hash system so it will flag Apple's check, but Apple has human review, so if they aren't bad photos nothing gets reported. You can put the bad photos on their devices, but if you can do that you could really do anything at this point, so it's kind of a useless argument.

The more scenarios I throw at this mental exercise, the more often they fail with the current system. Apple isn't going to implement it if they can't control it. It's currently hard to misuse it and get away with it.

Now let's say a political figure is harboring bad photos, is using iCloud Photos, triggers all the checks Apple has, and human review verifies they have illegal photos. Further, this political figure has a lot of power, money, and resources to kill people. I think you're right in a scenario like this. This really sucks, but I still think it's the right thing to do. Just because people who have power are guilty of this crime does not mean the system should not be in place. Long term I think this is right, because if effective it will curb them from even doing it, or at least from using iCloud at all, and this will save so many children in those countries from this type of predatory behavior. It's a tough choice, because in your example I'm choosing between saving literally millions of children around the world or powerful people ruining the lives of people in the fallout, to reference the warhead comparison, of their getting caught. Again, in this example, which sucks, I'm choosing the kids. I don't like it, but I, and maybe not you, am going to choose the kids it will save over the not always likely but still very possible chance that a powerful figure is going to ruin lives for getting caught.

I really don’t want it to play out that way. Yet if you’re going to make me choose I choose the children every time. Your concerns and points are valid and should be discussed. Thank you.
 
I'm confused as to why Apple is going so hard on this. Apple already scanned images in the cloud, so why the big push for on-device scanning? Something more is going on here.

Right now all we can do is turn off iCloud completely. I have been using iCloud on and off for some time, but this is the final excuse to bin it for good. If Apple doesn't want the responsibility of running cloud storage (which has never been that good and relies on Google etc.), it should pull the plug and retain its integrity as a brand for privacy. This idiotic move they have made will cost them more. My screeching voice will never stop. You have become too arrogant to see why this is dumb.
 
It's NOT free or even close to free! I put my phone on a charger every day, and I pay for the electricity that charges it. At a much higher rate than a business rate that Apple would get charged. If I charge at work, they'll pay it and it's still not free! (And I wouldn't steal from work!)

I get it, you've only dealt with the numbers on the server side, but you'll have to start thinking about those billion phones and how much power they take for you to get my argument. Those numbers are a small pittance compared to a billion phones. There's a reason we still have large data centers! (it's cheaper!)


You don't get it. Those billions of phones are going to be on whether or not it is scanning photos because users keep them on to receive calls/play games/use the internet/etc, so you only count the extra energy required to scan 1 photo on an iPhone. You don't include the energy that's needed to keep iOS running. With servers, you include the energy required to run the OS, the entire machines, the cooling room, the network switches, the extra writes/reads from iCloud storage, etc...

So with that said: the overhead energy used to scan 1 photo on an iPhone is less than the energy for 1 photo scan on a server when you count the rest of the setup. Multiply both sides by 1 billion, and servers still come out on top as more energy hungry than phones.
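For what it's worth, here is the shape of that comparison as a back-of-envelope, with made-up per-photo figures, since nobody outside Apple has real numbers. The argument above only needs the marginal on-device cost to be smaller than the fully loaded per-photo server cost, whatever the exact values turn out to be.

```python
# Rough back-of-envelope for the comparison above. Both per-photo figures are
# invented placeholders; only the structure of the argument matters.
PHOTOS = 1_000_000_000  # one newly scanned photo per phone, a billion phones

JOULES_MARGINAL_ON_DEVICE = 0.5  # assumed: extra energy to hash one photo on an already-running phone
JOULES_LOADED_ON_SERVER = 5.0    # assumed: per-photo cost with servers, cooling, network and storage I/O amortised

def to_kwh(joules: float) -> float:
    return joules / 3_600_000  # 1 kWh = 3.6 MJ

print(f"on-device, marginal:  {to_kwh(JOULES_MARGINAL_ON_DEVICE * PHOTOS):,.0f} kWh")
print(f"server, fully loaded: {to_kwh(JOULES_LOADED_ON_SERVER * PHOTOS):,.0f} kWh")
```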

If you don't understand this, I can't help you anymore. This is as clear as I can be.
 
You don't get it.
Back at yah. :)

Those billions of phones are going to be on whether or not it is scanning photos because users keep them on to receive calls/play games/use the internet/etc, so you only count the extra energy required to scan 1 photo on an iPhone.
Yes, they'll be on anyway, but this new tech is going to use more battery, so even if nothing else changes, they'll have to charge that much more at night. Since we don't know exactly how much energy that scan will take, we can't say, but scanning 1 photo on every phone that took a picture that day is still going to take up more energy than your servers that do the scanning. Doing it with a central resource vs. a lot of devices with a lot of redundancy in scanning is so inefficient I can't stand it. As an IT guy, if I can do a background task on my servers vs. doing it on all my user PCs, I'll pick the server every single time.

If you don't understand this, I can't help you anymore. This is as clear as I can be.
lol, you still aren't counting everything.

But anyway, this is just one of my complaints, and not even the most important one (that would be the loss of privacy), so let's just agree to disagree; there is no common ground here.
 
OK, so apparently you think there are three misrepresentations/misunderstandings. First, that there will be false positives and that these false positives (e.g., a baby blowing bubbles in a bath) will be similar to the CSAM material. Apple hasn't exactly been transparent about the algorithm used, but the hash is likely to be some sort of compressed summary that captures visual features of the image, and essentially Apple is matching your image to that template using some threshold for similarity. That means false positives are certain, and that the false positives are likely to look like CSAM material - namely, sensitive photos of people. So people might not go off to jail for innocent baby pictures, but it is possible Apple's process will trigger an Apple employee reviewing your baby's/partner's/selfie pictures. Second, the slippery slope idea is not certain, but what Apple has just demonstrated is both the feasibility of installing surveillance software locally on phones, powered by AI-like processes, and its willingness to do so. We shouldn't be flirting with this. I think the odds are very high - almost certain - that this kind of process will be misused at some point. Third, I don't think people's main concern is about the filtering for kids in iMessages. That appears to be entirely local. People would object if somebody at Apple was reviewing their iMessages, though.

I'm sure there are more than just three - those are just the ones that came to my mind right away.

You are making very bold assertions without any evidence to back them up. I highly doubt Apple is going to implement a technology that's going to cause a gigantic workload for them reviewing tens of thousands of innocent pictures falsely flagged as CSAM, as you claim will happen (no you didn't say tens of thousands, but both you and I know that's a conservative estimate of how many of those innocent pictures that show partial or full nudity are synced to iCloud). That would just be silly.

I simply disagree with your cue-the-spooky-music approach to the on-device scanning. "Surveillance" implies that every single image on your phone is being perused by people, just salivating looking for a violation they can pin on somebody. Of course that's not at all what's happening. It's a completely impersonal, private process that only reveals scan data about images that match known CSAM to Apple. If you aren't uploading anything illegal, then Apple knows nothing about your photos. Well, unless you're that less than 1 in 1 trillion account that theoretically could be falsely flagged. I think you're safe to take your chances 🤣

I understand what the Messages thing is about, but many don't.
 
It's good to see this spreading and being discussed by the media.

Plus it seems most are not comfortable with this move either.

Maybe screeching majority rather than minority?
 
It's good to see this spreading and being discussed by the media.

Plus it seems most are not comfortable with this move either.

Maybe screeching majority rather than minority?

I admit I was slightly disappointed with the ATP guys last night on this.
It's not that there wasn't some discomfort (really mainly just Casey), but it sort of ended up going down the path of:

If you're worried about government, well they can force whatever, so "meh - what can we really do?!"

To me, not enough was made of wanting Apple to fight these sorts of changes, with their scale and power and rabid customer base, and push all of this up to a public policy debate BEFORE implementation.

It's really really really hard to roll this sort of change in approach "back", once you do it just once. Once the norm is set of scanning for ANYTHING on devices...the push will come from various entities to do it for other things of interest.

Before you know it - you've built a society wide surveillance apparatus.


Obviously nobody likes or wants CSAM to proliferate, but...

1. It's really debatable if this change will have any meaningful impact (particularly with such an easy opt-out -- for now)

2. Regardless of what is being searched for..

it needs to be a society-wide debate about changing to a model of "we search for bad stuff ON your phone," with no cause, no suspicion, and no warrant.

This is the start of going down the path of pre-emptively looking for stuff law enforcement wants "looked for" -- inside the data of people we have no reason to suspect of anything.

If that's not uncomfortable to you - I don't know what to tell you.
 
Even for children below the age of 13, I find it odd when parents see the need to monitor their children in such a way; this might be a cultural difference.

But it leaves the question of how Apple checks the children's ages. Who prevents the parents of older children from entering a false age so they can monitor them?
If you don't see the need, you don't have to use the feature. I believe the default will be on, but you can disable it.

AFAIK (based on experience), Apple doesn't "check" a child's age. When a child under 13 is added to an iCloud Family account, their age gets locked in and can't be changed, but it is entered voluntarily; there is no verification. If a child or parent wants to enter a false age to avoid that, there's nothing stopping them.
 
It's good to see this spreading and being discussed by the media.

Plus it seems most are not comfortable with this move either.

Maybe screeching majority rather than minority?
Whoever wrote that memo was clearly in a state of internal delusion. How on earth did it get signed off on, given that Apple is always banging on about privacy? A PR disaster.
 
Alex Stamos elaborated on some ideas on how he thinks Apple should proceed here...

On iMessage scanning:
1) Ship the child ML nudity detector as planned
2) Build a new in-app abuse reporting system that provides cryptographic security and integrity
3) Staff the backend team to handle the new reports
4) Build more client ML to assist abuse targets with reporting

Abuse reporting in messaging products almost universally sucks, and nobody really encourages victims to utilize the tools available. Apple's famous product and UX teams could make them a real leader in the industry.

On CSAM detection:
1) Announce that you are not shipping in iOS 15
2) Come clean on the long-term plan (are you encrypting all iCloud now?)
3) Host some public workshops to explain your goals, threat model, how you are considering trade-offs
4) Announce clear limiting principles

A good principle is "Client-Side ML will be deployed only to the benefit of the device's user".

If the big risk is iCloud shared albums, then just don't end-to-end encrypt that feature. It is reasonable to say "the risk of abuse here is too high" and use cloud-side scanning.

While leaving iCloud shared albums non-E2EE reduces the privacy protections for that one feature, I think it is a reasonable tradeoff to maintain a strong rule against scanning local files against the interests of the user.

In my dreams, the overall guiding principles for how to move forward would be worked out between multiple platforms and announced by the Technology Coalition. Standing against mission-creep for child safety technologies will require a unified stand by US tech around the world.

That might require Tim Apple and Mark Facebook to have a phone call and make up. Probably better than the two of them trying to shiv each other with regulation while both autocrats and democracies erode personal privacy.

BTW, these are my suggestions and I don't speak for Matt. We probably have different visions on how to move forward, but we are able to agree that the announced balance of equities wasn't the right trade-off.



You can follow @alexstamos.
Thread is here: https://threader.app/thread/1425819813939609605
 
but scanning 1 photo on every phone that took a picture that day is still going to take up more energy than your servers that do the scanning.


Nope. This is exactly where you're 100% wrong. The fact that you continue to say this tells me we're done talking. Have a good one.
 
I'm confused as to why Apple is going so hard on this. Apple already scanned images in the cloud, so why the big push for on-device scanning? Something more is going on here.

Right now all we can do is turn off iCloud completely. I have been using iCloud on and off for some time, but this is the final excuse to bin it for good. If Apple doesn't want the responsibility of running cloud storage (which has never been that good and relies on Google etc.), it should pull the plug and retain its integrity as a brand for privacy. This idiotic move they have made will cost them more. My screeching voice will never stop. You have become too arrogant to see why this is dumb.
I don't know if your comment is accurate. The quote you are referencing, "screeching voice," was not made by Apple. You should go back and re-read the original article, which was also written poorly, as if to make it seem like it came from Apple. But that's how media works.

The screeching quote was made by an employee of the National Center for Missing and Exploited Children, Marita Rodriguez. She wrote it as a memo to Apple employees. It's wrong to assume this is Apple's voice or thoughts. The fact that an Apple employee who received it leaked it leads me to believe that it is not their belief.

I could be wrong or misinformed, so please update me, as I too would like to read up on this, but when has Apple said that they scanned iCloud Photos in the past? Also, device-side hash comparison is not the same as photo scanning. It's not wrong to make the comparison, but it's also not the same thing. It's a way for Apple to check for illegal hashes without actually looking at the images. A thing called "coupons" is involved, and once the count of flagged hashes meets the threshold, there is a human review to verify the images are actually illegal. A ****** job to have, in my opinion. Also, before the human review part, the function that compares hashes has a 1 in 1 trillion chance of error in one year. That's an incredibly high bar. Like, wrap your brain around it. The chance of winning the lottery is higher, and even with those odds Apple still wanted a human to check first before anything ever went off to any official organization.

Your concerns are warranted and I appreciate your perspective. Please contribute to the discussion. Thank you.
 