Well you got the answer to your question I think! A huge amount of technical ignorance and fear on display here, as in all the other discussions on this. The ‘outraged’ seem to fall into one of these categories:

“Apple has no right to look at photos on my device! If they start ‘scanning’ my private photos, who knows where it will end?”

“This is a huge violation of my privacy! Apple has betrayed us all after telling us they take privacy seriously!”

“This opens a back door for the bad guys to get all sorts of information from my device!”

I don’t understand all the anger over this one, and the personal attacks on Apple leadership.

Technical ignorance? For sure, at least on my part. I don't pretend to understand the tech, but I do understand the precedent and the general principle. You appear to claim to understand the tech, but I'm not sure you appreciate the precedent it sets.

About on-device AI, if I search the photo library for something, I don't expect a law enforcement agency to potentially be notified of it. Can you tell the difference? There is also software on the phone that allows you to type messages and e-mail. Do you expect that Apple is reading those messages and e-mail just because they sold you a phone that has their apps on it?

About child pornography, it's about that for now. Hence the precedent part I mentioned, which you clearly didn't get despite all of the pages of these threads.

About the back door, again, precedent. Also, I find it very funny that you mention an Apple employee reviewing flagged images as some sort of a positive. That is exactly the opposite of what people want. The moment another person gets to see your photos is the moment your privacy is done for. This is not a tech question and it isn't difficult to understand. You don't grasp the problem we have with this, because if you did, you wouldn't mention human review as something reassuring. Incredible.

Sorry, but you absolutely don't understand all the anger. You're right about one thing, there needs to be trust and it's not there anymore.
 
Also, not everyone uses iCloud. I use parts of iCloud, but my personal photos are only stored on my phone; I don't upload them to iCloud. Photos that come through Messages go through iCloud for me, but not anything from the camera app.
There’s a hole in your logic… Apple is scanning hashes of photos as they are uploaded to iCloud and comparing them to the existing hashes of known child porn in the CSAM database. So your camera photos have little chance of triggering THAT particular alert, since they won't be in that database to begin with. It's the stuff you get from other sources, save to your phone, and then try to share or store that would trigger it,

unless you are producing such content and the feds are already watching you and have tagged some of your content, in which case I suspect you’d have already been contacted.

they usually get folks for sharing existing stuff or coming across it during a criminal investigation of some sort.

personally I have no problem with them scanning for it and at least they are up front about it. We scream that the tech companies aren’t doing enough to combat this sort of thing, but when they do, people complain about that.

just my 2 cents
 
People just live for outrage nowadays. There’s always something.
Damn. Ya got me. All this time I was just waiting for Apple to give me something over which to be outraged, give me an excuse to sell my iThings at a loss, and go through the PITA of switching platforms. I've been dreaming of this day. And now it's here. I'm so happy I can barely contain myself.
 
“This opens a back door for the bad guys to get all sorts of information from my device!”
My reply: No it doesn’t. It only allows matching of images against a database of known CSAM images. The final step in the process is human review by Apple to guard against the extremely unlikely event of a false positive.
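For anyone who wants to picture what "matching against a database of known images" means in practice, here is a toy sketch (in Python) of the matching step only. It is not Apple's system: NeuralHash is a proprietary perceptual hash and the on-device database is blinded, so a plain SHA-256 lookup like this is purely illustrative.

```python
import hashlib

# Toy stand-in for a database of known-image hashes. In Apple's design the
# hashes are perceptual (robust to resizing and re-encoding) and blinded so
# the device can't read them; none of that is modelled here.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def image_hash(path: str) -> str:
    """Hash the raw bytes of a file (a crude stand-in for a perceptual hash)."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def matches_known_database(path: str) -> bool:
    """True only if this file's hash appears in the known-hash set."""
    return image_hash(path) in KNOWN_HASHES
```

A lookup like this reveals nothing about an image that isn't already in the set, which is the point the objection above misses.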

“Oppressive regimes could potentially use this to find and abuse dissidents.” This is about the only valid concern I’ve heard, but it would require them to (1) find a way to add other non-CSAM images to the database, and (2) convince Apple to hand over the matching records. Apple has responded by making clear the process of maintaining the database, and categorically stating that they will refuse all such requests. Based on their solid record in the past (and the absolute PR disaster that such a failure would be) I see no reason to suspect that they would lie about this.
Seems you lack imagination. How about someone hijacks DNS records so when the checker on the phone tries to compare to the CSAM database it ends up comparing to a database someone else has prepared with images not associated with CSAM but with what they want to look for? This would be a perfect way for China or other countries to do this. Anyone inside their country (which, by necessity are using their internet and cellular infrastructure) could have this behavior without anyone at Apple even knowing it was being done and with no one able to avoid it inside the country. They could modify the DNS records for everything at Apple too, so no report home behaviors could function.
 
That's totally wrong; that was yesterday. By this afternoon Apple has 3 and a half technologies for smoking hash, and right at this very moment there's a group huddle of PR people doing mass quantities, trying to figure out the next part of the narrative.

Yes, please go ahead, perhaps we can ask for a creative-writing forum and share plots!

As a reference point, for the past decade+ that I've been participating here, this has been a super-useful resource for Mac Pro esoterica -- for me anyway, since that's primarily where I used to engage -- and otherwise an echo-chamber for how much we all love Apple. I believe the level of ♥️ and trust has significantly decreased, to the point where sentiment here is nearly identical to what I'm reading on Ars Technica, Reddit, and Hacker News. I think the fact that Apple is being called out by the ACLU ... kinda points to them having totally lost control of their own narrative and hitting a new low. I don't see how any creative contributions here could be worse than what their PR department has already done.

It was a dark and stormy night...
Pork Chop Express :cool:
 
After a lot of digging, reviewing, listening, watching, reading, and thinking, I find myself very uncomfortable with this direction Apple is taking. This was handled very poorly IMO by Apple and I fail to see the relevance for this method. Most concerning is the lack of communication from Apple.

Will this affect my use of Apple? Not at this time. I am already in the process of scaling back Apple in my life and will continue to do so. In the meantime I will continue to monitor this issue and see what happens.

You just listen to the old Pork Chop Express here now and take his advice on a dark and stormy night when the lightning's crashin' and the thunder's rollin' and the rain's coming down in sheets thick as lead.
 
Technical ignorance? For sure, at least on my part. I don't pretend to understand the tech, but I do understand the precedent and the general principle. You appear to claim to understand the tech, but I'm not sure you appreciate the precedent it sets.
Thanks for the honesty. I do see what you're saying, but I don't agree that we can divorce those two things (technology and principle) if we're to have a meaningful discussion about this. Apple's proposed process and technology has checks and balances which address many of your concerns, if you would take the time to understand them. I know there are people who do understand the technology and still oppose this, and that's fine. A lot of the discussion on MR however is highly emotional and low on facts. (What's new.)

About on-device AI, if I search the photo library for something, I don't expect a law enforcement agency to potentially be notified of it. Can you tell the difference? There is also software on the phone that allows you to type messages and e-mail. Do you expect that Apple is reading those messages and e-mail just because they sold you a phone that has their apps on it?
No. Both of those things would be a violation of my privacy. If that was the technology Apple had built here, I'd be objecting to it just as strongly as you. But Apple's so called 'CSAM scanning' does no analysis of your photo content, no analysis of your on-device behaviour, and no 'reading' of your messages. Are you saying we should judge Apple for something they are not doing? Guilty until proven innocent? Now there's an interesting 'precedent'!

About child pornography, it's about that for now. Hence the precedent part I mentioned, which you clearly didn't get despite all of the pages of these threads.

About the back door, again, precedent.
I understand what the word precedent means. I'm not sure what you mean by it though. The precedent of doing image processing on your device? The precedent of referring criminal activity to law enforcement? You realise that tech companies already do these things, right? That they actually have a legal duty to report these kinds of images in their possession? Apple, by doing the processing on-device before the files are uploaded to iCloud, opens the way for stronger privacy measures like end-to-end encryption for iCloud Drive, something they can't do if they process (hash) the images remotely.
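To make the end-to-end encryption point concrete, here is a minimal sketch of the general flow, assuming the third-party cryptography package and a hypothetical prepare_upload helper of my own naming. Because the match is computed on the device, the photo can be encrypted before it leaves; only ciphertext plus a small voucher reaches the server. (Apple's published design additionally wraps the voucher in threshold cryptography so individual match results stay unreadable below the threshold; that part isn't modelled here.)

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def prepare_upload(image_bytes: bytes, user_key: bytes, known_hashes: set) -> dict:
    """Hypothetical sketch: hash locally, encrypt locally, then upload only the
    ciphertext plus a match voucher. SHA-256 is a crude stand-in for NeuralHash."""
    digest = hashlib.sha256(image_bytes).hexdigest()      # computed on device
    ciphertext = Fernet(user_key).encrypt(image_bytes)    # client-side encryption
    voucher = {"matched": digest in known_hashes}         # derived entirely on device
    return {"payload": ciphertext, "voucher": voucher}

# Example:
# key = Fernet.generate_key()
# packet = prepare_upload(open("IMG_0001.jpg", "rb").read(), key, known_hashes=set())
```

A purely server-side hashing scheme would require readable photos on the server, which is exactly what rules out end-to-end encryption.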

When you say 'back door' what do you actually mean? I see people parroting those words, but it's often not clear what they are talking about.

Also, I find it very funny that you mention an Apple employee reviewing flagged images as some sort of a positive. That is exactly the opposite of what people want. The moment another person gets to see your photos is the moment your privacy is done for. This is not a tech question and it isn't difficult to understand. You don't grasp the problem we have with this, because if you did, you wouldn't mention human review as something reassuring. Incredible.
I don't find any of this funny. And sorry to be blunt here, but this is more evidence why you need a basic understanding of the process and technology before getting caught up in all the emotion and outrage. No one is looking at any of your personal photos. For it to get to the stage where a human is involved, you must have uploaded a collection of known child porn images, and if you do, these are the only images the human reviewer sees—low-res versions just to confirm that yes, you are an offender, so no one gets falsely reported to police. How could any responsible reporting process not have a human reviewer at the end? Please explain this to me.

Sorry, but you absolutely don't understand all the anger. You're right about one thing, there needs to be trust and it's not there anymore.
You have addressed me throughout as though I don't understand or care about personal privacy online, which is quite untrue. Online security and privacy are of paramount importance to me. It's one of the reasons I refuse to install Facebook apps on my phone, and eventually deleted Facebook altogether, despite the personal cost of doing that. I'm an advocate for online privacy and security, and teach friends and family about these issues when I can.

But what I'm seeing here is a lot of anger from people, most of whom don't appear to have taken the time to educate themselves about the technology they're so angry about. This doesn't surprise me, as I've seen what social media does to people—how it stirs them up by feeding incomplete or false information. (Another reason I left Facebook.) I mostly avoid commenting on MacRumors, as I don't find it a positive place for open discussion anymore, but this is an important issue and one that deserves better than getting buried by countless pages of mostly one-sided anger and ignorance.
 
I don't find any of this funny. And sorry to be blunt here, but this is more evidence why you need a basic understanding of the process and technology before getting caught up in all the emotion and outrage. No one is looking at any of your personal photos. For it to get to the stage where a human is involved, you must have uploaded a collection of known child porn images, and if you do, these are the only images the human reviewer sees—low-res versions just to confirm that yes, you are an offender, so no one gets falsely reported to police.
This guy (and possibly others) has figured out how to create false positives en masse that are innocent pictures but still will/would get flagged by Apple. It doesn’t seem as foolproof as you’re trying to make it sound…

 
Seems you lack imagination. How about someone hijacks DNS records so when the checker on the phone tries to compare to the CSAM database it ends up comparing to a database someone else has prepared with images not associated with CSAM but with what they want to look for?
DNS records? Sounds like you're making a case for storing the hash database on the user's device?

This guy (and possibly others) has figured out how to create false positives en masse that are innocent pictures but still will/would get flagged by Apple. It doesn’t seem as foolproof as you’re trying to make it sound…
Okay. Let's give these researchers credit for putting Apple's hash function through its paces. But let's also give Apple some credit for allowing them to. It can only make the final product stronger.

So, we can agree that hash collisions are not a good thing. But let's put this in perspective… A perfect hash function is impossible, because it distills a virtually infinite set of data (the possible arrangements of pixels in an image) down to a finite set of data (all the possible hashes of a certain length). Collisions will occasionally happen. There's no avoiding that. The real question is, what is the chance of one innocent photo creating a collision? Neither of us knows the answer to that, but it's sure to be very small. But wait… Apple hasn't left it at that. They have said that only a collection of matching images will get flagged. I think the number 30 was mentioned somewhere. Honestly, that number seems surprisingly high to me. If they have chosen such a number, it means they are playing it conservative and wanting to avoid false positives. If there ever is a false positive, meaning all 30 images (or whatever the final number is) turn out to be innocent, personal photos, then and only then is a human reviewer brought in, who would spot the error immediately. Yes, that is unfortunate for that person, and it tells Apple something is wrong with their algorithm. It is not the end of the world though. Not by a long way.
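As a rough sanity check on why a threshold matters, here is a back-of-envelope sketch. The per-photo false-match rate and library size are invented purely for illustration (no real per-photo rate has been published), and matches are treated as independent, which is a simplification:

```python
from math import lgamma, log, log1p, exp

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """Log-probability of exactly k false matches out of n photos (binomial)."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def prob_at_least(threshold: int, n: int, p: float) -> float:
    """P(at least `threshold` false matches) out of n photos, each matching
    falsely with probability p. Terms shrink so fast past the threshold that
    summing a few hundred of them is plenty."""
    return sum(exp(log_binom_pmf(k, n, p))
               for k in range(threshold, min(n, threshold + 500) + 1))

# Invented numbers: a 1-in-a-million false match rate per photo, a library of
# 100,000 photos, and a threshold of 30. The expected number of false matches
# is 0.1; the chance of hitting 30 of them is on the order of 10^-63.
print(prob_at_least(30, 100_000, 1e-6))
```

Whatever the real per-photo rate turns out to be, requiring a pile of independent matches drives the account-level false-positive rate down dramatically; presumably that is the whole point of the number 30.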

Now ask yourself what the human cost of the free storage and sharing of abusive images is. Surely, it's immeasurable. It's certainly greater than the upset caused to some entitled Apple users who view everything in black and white—who have some ideological obsession with the false notion that freedom isn't a nuanced balancing act between the conflicting desires of people within our society. Yes, we need to have these discussions, which is why I'm swimming against the overwhelming tide of anger on MR and standing up for what I believe is a praiseworthy move by Apple. It's all the more praiseworthy because they look like pushing ahead with it despite what it might cost them financially.
 
Thanks for the honesty. I do see what you're saying, but I don't agree that we can divorce those two things (technology and principle) if we're to have a meaningful discussion about this. Apple's proposed process and technology has checks and balances which address many of your concerns, if you would take the time to understand them. I know there are people who do understand the technology and still oppose this, and that's fine. A lot of the discussion on MR however is highly emotional and low on facts. (What's new.)


No. Both of those things would be a violation of my privacy. If that was the technology Apple had built here, I'd be objecting to it just as strongly as you. But Apple's so called 'CSAM scanning' does no analysis of your photo content, no analysis of your on-device behaviour, and no 'reading' of your messages. Are you saying we should judge Apple for something they are not doing? Guilty until proven innocent? Now there's an interesting 'precedent'!


I understand what the word precedent means. I'm not sure what you mean by it though. The precedent of doing image processing on your device? The precedent of referring criminal activity to law enforcement? You realise that tech companies already do these things, right? That they actually have a legal duty to report these kinds of images in their possession? Apple, by doing the processing on-device before the files are uploaded to iCloud, opens the way for stronger privacy measures like end-to-end encryption for iCloud Drive, something they can't do if they process (hash) the images remotely.

When you say 'back door' what do you actually mean? I see people parroting those words, but it's often not clear what they are talking about.


I don't find any of this funny. And sorry to be blunt here, but this is more evidence why you need a basic understanding of the process and technology before getting caught up in all the emotion and outrage. No one is looking at any of your personal photos. For it to get to the stage where a human is involved, you must have uploaded a collection of known child porn images, and if you do, these are the only images the human reviewer sees—low-res versions just to confirm that yes, you are an offender, so no one gets falsely reported to police. How could any responsible reporting process not have a human reviewer at the end? Please explain this to me.


You have addressed me throughout as though I don't understand or care about personal privacy online, which is quite untrue. Online security and privacy are of paramount importance to me. It's one of the reasons I refuse to install Facebook apps on my phone, and eventually deleted Facebook altogether, despite the personal cost of doing that. I'm an advocate for online privacy and security, and teach friends and family about these issues when I can.

But what I'm seeing here is a lot of anger from people, most of whom don't appear to have taken the time to educate themselves about the technology they're so angry about. This doesn't surprise me, as I've seen what social media does to people—how it stirs them up by feeding incomplete or false information. (Another reason I left Facebook.) I mostly avoid commenting on MacRumors, as I don't find it a positive place for open discussion anymore, but this is an important issue and one that deserves better than getting buried by countless pages of mostly one-sided anger and ignorance.

I think the main difference between the two camps is that your camp argues the technical aspects of NeuralHash and how it won't invade your privacy, while my camp argues that this is dangerous regardless of the checks and balances. I don't think we have to divorce principle and technology and the reason this is a problem for me is that technology can be changed, tweaked. Some praise Apple for designing this complex system, and so it stands to reason they can design something else entirely, something more intrusive and less complex. This hasn't changed and this is the technical side that I'm taking into account - that tech can later be tweaked and used for different purposes.

What has changed, though, and what is the actual problem for me is that Apple has now shown willingness to install this on my phone. That's the problem and the precedent I'm talking about - the fact that iPhones will now have scanning software that reports to the outside. Basically, what was yesterday sacrosanct, now isn't. I think if this goes ahead, it's over in the sense that iPhone will be fair game from now on. You may applaud Apple for how this software was designed, and you may well be right that this is impressive, but the technical quality of a solution doesn't necessarily mean it's all going to be good and tamper-proof. What is of utmost importance, far more important than the checks and balances Apple implemented, is Apple's willingness or lack thereof to protect the privacy and security of the device from all outside pressure and inspection.

And since you like to emphasize the technical aspects of this solution, I haven't even mentioned the probability of false positives. Whether it's a million to one or a trillion to one, I don't know so I won't speculate, but what I can say is that we are supposed to trust Apple with the reliability and integrity of this software? I'm sorry, but Apple can't get their own apps to function without bugs and iOS has been a demonstration of Apple's lack of competence for the last couple of years. Apple now routinely has bugs in new devices and new features, recurring bugs that somehow return after being fixed and bugs that are there for the last few years without getting a fix. I am not comfortable with Apple guaranteeing that a system like this will be as tamper-proof and as fail-proof as possible.

About the human reviewer part, I'll explain the issue. You focus on the outcome of a review, saying that if the account is flagged, a human will see the photos and if there's a violation, only then will there be prosecution. You are talking about prosecution and I am talking about privacy. I'll repeat what I said in my last post - when someone, a human reviewer, gets to see the photos, that is a violation of privacy. I can't say this more clearly. That reviewer is doing their review based on the account being flagged, which will inevitably happen to people, and inevitably some of those will be false positives. The moment a human reviewer sees the photos, the user's privacy will have been violated because there is no warrant saying a court of law decided that your right to privacy is less important than the interest of the community to prosecute a specific offense that law enforcement has probable cause for. Just out of curiosity - in your opinion, what is the acceptable probability of a false positive? Is one in a trillion good enough and is that a good estimate? With this being a new scanning system, how will Apple gauge this?
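On the "is one in a trillion good enough" question, a little arithmetic helps put the stated figure in perspective. The flag rate below is the roughly one-in-a-trillion-per-account-per-year figure Apple has quoted; the number of accounts is my own round assumption, not an Apple number:

```python
# Back-of-envelope only. The flag rate is Apple's quoted figure; the account
# count is an assumed round number for illustration.
p_false_flag_per_account_per_year = 1e-12
icloud_accounts = 1e9                       # assumption: roughly a billion accounts

expected_false_flags_per_year = p_false_flag_per_account_per_year * icloud_accounts
print(expected_false_flags_per_year)        # 0.001, i.e. about one false flag every 1,000 years
```

Of course, whether that quoted figure can be trusted, and how it could ever be verified for a brand-new system, is exactly the question being asked here.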

We can also talk about the actual human reviewer who will be doing this review. Who exactly will this be and with what background? What gives this person the right to decide whether a user should be reported? I think this will not be a reviewer singular, but plural, because it's a tough job and a lot of unpleasant responsibility for one person. So in all likelihood, there will be several people inspecting the photos that are exactly the ones a user doesn't want other people to see, and then they will have the burden of deciding whether a potential crime should be forwarded to NCMEC for further review or not. Also, what exactly is the point of Apple's human reviewer? NCMEC can do that review as well and you'd think they'd be more competent at it. So why does Apple have their own people in this chain, when no investigation can begin and no charges can be brought forward without humans reviewing the case and deciding if it's CP? This makes sense for Apple only if they expect a good amount of false positives, because otherwise, if the system is solid and false hash matches are almost non-existent, their human reviewer seems like an unnecessary part of the process and the review will in any case be done by NCMEC anyway.

So, you see a responsible reporting process, and I see a reporting process where Apple inserted their human reviewer in an effort to calm suspicions that their software will inevitably produce false positives. It assures me less because it tells me that they don't expect the software to be reliable, and the same goes for the fact they'll need something like 30 matches to flag the account. If this is true, why aren't people talking about the fact that a person who has 25 CSAM-category photos will not be flagged? If this is for the children, isn't that too high a threshold and why are you praising Apple's system instead of wondering why so many CSAM photos will go undetected? Again, this makes sense only if they expect the system to work poorly and produce a good number of false positives. I fail to see how any of this is reassuring.

About the back door, I don't remember mentioning it (I don't see the post now, it's pages back) and if I did, it was done loosely. Some have said it's a back door, others have said otherwise, I don't know if it should technically be considered a back door (to my understanding, not) but I think the term is mostly used as a figure of speech, to reference a way into finding out the contents of a device rather than a way into the device itself.

Lastly (this is already getting too long, sorry), there is emotion here and I think that's normal. This is a sensitive topic. I don't use any social media at all, not even LinkedIn, which is stupid of me because I own my own business and don't advertise there.

Those odds are nonsense. There are people on Reddit who have already interacted with this feature from a programming perspective and have fooled it into falsely alerting to completely innocuous images.


That’s just it - there is nothing about catching the people who view this stuff that stops the people who make it. That’s like saying going after heroin users will stop cartels in Asia that are producing heroin. They‘re not related activities other than they‘re both about heroin (or child porn).

This is important. We don't know the odds on this, but what we can reasonably expect is for Apple to misrepresent the odds somewhat to strengthen their argument, because they are the ones who will be coming up with the number. This will be the same as when they write, for any hardware or software bug that gains enough traction that they had to respond, "a very small number of users experienced..."

About the second paragraph, true. There is so much to unpack here and those who support this system don't address any of it. Why is the threshold at 30, and how come they're fine with the fact that the system will let those who possess a smaller quantity of CP go under the radar? Is it legal to possess fewer than 30 CSAM photos, but 30 is where you cross the line into a crime? And why does this focus on users of the material instead of the creators?
 
Damn. Ya got me. All this time I was just waiting for Apple to give me something over which to be outraged, give me an excuse to sell my iThings at a loss, and go through the PITA of switching platforms. I've been dreaming of this day. And now it's here. I'm so happy I can barely contain myself.

Perfect example - outrage over nothing. Seems warranted haha.
 
It's certainly greater than the upset caused to some entitled Apple users ...
"Entitled Apple users?" Srsly? People who are concerned about their personal security and privacy are now "entitled?"

I'd like to tell you where you can put that attitude, but it'd certainly get me banned on MR.

... who have some ideological obsession with the false notion that freedom isn't a nuanced balancing act between the conflicting desires of people within our society.
Freedom is actually pretty easy and not "nuanced" at all. Claiming freedom is "nuanced" is code talk from a statist for "Here's this delightful new way we're going to infringe on your freedom, but you shouldn't be upset, because it's for the good of society." (Which, in turn, is really code meaning for their good.)

... I'm swimming against the overwhelming tide of anger on MR...
Waitaminute. Just above you claimed "some" Apple users. Now it's an overwhelming tide of Apple users. Which is it?

And don't look now, but it's a lot more than "some entitled Apple users." Near as I've been able to find, it's been reviled by every security and privacy entity in the world. That is: Unless you know of one that has actually come out in support of Apple's spyware?

Perfect example - outrage over nothing. Seems warranted haha.
You don't really get sarcasm, do you?
 
This guy (and possibly others) has figured out how to create false positives en masse that are innocent pictures but still will/would get flagged by Apple. It doesn’t seem as foolproof as you’re trying to make it sound…


I suspect Apple is well aware of the potential for false positives. I can see no other reason for setting the initial threshold at 30.
 
This guy (and possibly others) has figured out how to create false positives en masse that are innocent pictures but still will/would get flagged by Apple. It doesn’t seem as foolproof as you’re trying to make it sound…

Few things about this.

1. Apple said the code found in iOS 14.3 isn’t final. Whether you believe that or not is up to you.

2. There’s another server side check that looks at the perceptual hash of the image and nobody knows what that hash is.

3. The real CSAM NeuralHashes aren’t public information, and you could get into a lot of trouble trying to find them.

The point here is, even if you have 30 photos with these false matches uploaded to iCloud, the perceptual hash process will notice that they’re visually different from the real photos, and they won’t pass to human review.
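To illustrate what a second, independent "does it actually look like the known image" check can do, here is a toy perceptual hash plus a distance comparison, assuming Pillow is installed. This is a generic average-hash, not Apple's secret server-side hash; the point is only that an image crafted to collide under one hash is very unlikely to also land near the target under a different, unknown one.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to 8x8 greyscale, then set one bit per pixel
    depending on whether it is brighter than the image's mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A crafted NeuralHash collision that looks nothing like the reference image
# would land far away under this independent hash and be rejected.
# is_visual_match = hamming_distance(average_hash("candidate.jpg"),
#                                    average_hash("reference.jpg")) <= 5
```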
 
DNS records? Sounds like you're making a case for storing the hash database on the user's device?


Okay. Let's give these researchers credit for putting Apple's hash function through its paces. But let's also give Apple some credit for allowing them to. It can only make the final product stronger.

So, we can agree that hash collisions are not a good thing. But let's put this in perspective… A perfect hash function is impossible, because it distills a virtually infinite set of data (the possible arrangements of pixels in an image) down to a finite set of data (all the possible hashes of a certain length). Collisions will occasionally happen. There's no avoiding that. The real question is, what is the chance of one innocent photo creating a collision? Neither of us knows the answer to that, but it's sure to be very small. But wait… Apple hasn't left it at that. They have said that only a collection of matching images will get flagged. I think the number 30 was mentioned somewhere. Honestly, that number seems surprisingly high to me. If they have chosen such a number, it means they are playing it conservative and wanting to avoid false positives. If there ever is a false positive, meaning all 30 images (or whatever the final number is) turn out to be innocent, personal photos, then and only then is a human reviewer brought in, who would spot the error immediately. Yes, that is unfortunate for that person, and it tells Apple something is wrong with their algorithm. It is not the end of the world though. Not by a long way.

Now ask yourself what the human cost of the free storage and sharing of abusive images is. Surely, it's immeasurable. It's certainly greater than the upset caused to some entitled Apple users who view everything in black and white—who have some ideological obsession with the false notion that freedom isn't a nuanced balancing act between the conflicting desires of people within our society. Yes, we need to have these discussions, which is why I'm swimming against the overwhelming tide of anger on MR and standing up for what I believe is a praiseworthy move by Apple. It's all the more praiseworthy because they look like pushing ahead with it despite what it might cost them financially.

While you make a good point, IMO it misses two points that are, for me, very critical:
1. Why is Apple doing this on device instead of server-side, after upload? It is very simple to bypass in its current form.
2. Why now? What is the driver behind using this particular solution?

If folks had the answers and understood what is driving this specific solution, then, while not happy, it would likely calm a lot of the concern and anger. I have tried educating myself on this issue since it arose and find it strange that other platforms have considered this route and decided not to go down it. Did what I learned lessen my concern? Not really. It answered some questions, raised others, and left me with an overall feeling that Apple is not giving us the true story. Call it lying by omission.
 
DNS records? Sounds like you're making a case for storing the hash database on the user's device?


Okay. Let's give these researchers credit for putting Apple's hash function through its paces. But let's also give Apple some credit for allowing them to. It can only make the final product stronger.

So, we can agree that hash collisions are not a good thing. But let's put this in perspective… A perfect hash function is impossible, because it distills a virtually infinite set of data (the possible arrangements of pixels in an image) down to a finite set of data (all the possible hashes of a certain length). Collisions will occasionally happen. There's no avoiding that. The real question is, what is the chance of one innocent photo creating a collision? Neither of us knows the answer to that, but it's sure to be very small. But wait… Apple hasn't left it at that. They have said that only a collection of matching images will get flagged. I think the number 30 was mentioned somewhere. Honestly, that number seems surprisingly high to me. If they have chosen such a number, it means they are playing it conservative and wanting to avoid false positives. If there ever is a false positive, meaning all 30 images (or whatever the final number is) turn out to be innocent, personal photos, then and only then is a human reviewer brought in, who would spot the error immediately. Yes, that is unfortunate for that person, and it tells Apple something is wrong with their algorithm. It is not the end of the world though. Not by a long way.

Now ask yourself what the human cost of the free storage and sharing of abusive images is. Surely, it's immeasurable. It's certainly greater than the upset caused to some entitled Apple users who view everything in black and white—who have some ideological obsession with the false notion that freedom isn't a nuanced balancing act between the conflicting desires of people within our society. Yes, we need to have these discussions, which is why I'm swimming against the overwhelming tide of anger on MR and standing up for what I believe is a praiseworthy move by Apple. It's all the more praiseworthy because they look like pushing ahead with it despite what it might cost them financially.
In regards to the bolded part… before a human is brought in, the 30 images are sent through another perceptual hashing algorithm that nobody has access to, just to make sure they’re not false positives. Only if the photos actually make it through that as well (unlikely) are humans brought in.

Seems like a lot of people are missing the server-side double check.
 
I suspect Apple is well aware of the potential for false positives. I can see no other reason for setting the initial threshold at 30.
Which means Apple employees will be routinely inspecting (and saving?) your private photos…

Few things about this.

2. There’s another server side check that looks at the perceptual hash of the image and nobody knows what that hash is.

And I guess it didn’t occur to you that Apple supposedly has this secondary server-side portion of this tactic that they just conveniently left out when they were having the tactic peer reviewed for safety and security?

In regards to the bolded part… before a human is brought in, the 30 images are sent through another perceptual hashing algorithm that nobody has access to, just to make sure they’re not false positives. Only if the photos actually make it through that as well (unlikely) are humans brought in.

Seems like a lot of people are missing the server-side double check.
What‘s different about the additional hashing and why isn’t it simply part of the initial scan?
 
Which means Apple employees will be routinely inspecting (and saving?) your private photos…



And I guess it didn’t occur to you that Apple supposedly has this secondary server-side portion of this tactic that they just conveniently left out when they were having the tactic peer reviewed for safety and security?


What‘s different about the additional hashing and why isn’t it simply part of the initial scan?

If one in a trillion per year is “routinely”, sign me up for other one-in-a-trillion stuff.
 
What‘s different about the additional hashing and why isn’t it simply part of the initial scan?
I'm guessing that if it were part of the initial scan, people would reverse engineer it and try to trick that as well. Because the double-check is on the server and is ONLY performed if 30 photos are already matched in the initial process, it seems even more secure than doing both scans on device. It's a safeguard you can't fool, because you don't have the code or the pictures needed to fool it.

I'm sure people will try like hell to overload this system, but I'm sure Apple is prepared for it.
 
Even if I don't upload any photos, Apple downloads their search database to my device. This is where they start to not respect my privacy. To treat all of their customers and users like child porn criminals in need of a search is very offensive.
 
Even if I don't upload my photos, Apple downloads their search database to my device.
Which is encrypted and can't be viewed by anyone and the software will do nothing if you're not uploading pictures. If you have iCloud Photo Library disabled, nothing is checked for CSAM (or anything else).
 