These two articles somewhat contradict each other. However, I was really interested in this article, which actually has some video around the 2-minute mark of Jane Horvath speaking. In the video she doesn’t say they scan photos in the cloud, yet many news sites quoted her as saying so. All of the news sites speculate that Apple is using Microsoft’s system for the hashes.

I’d like to see the whole panel, but ultimately it only confirms that Apple doesn’t want to scan your stuff on their servers; they’d rather it be done on device, as something the user can turn off, keeping the CP from reaching their servers in the first place.
 
They are real images; whether you like them or not is irrelevant.
Any random collection of pixels is a “real image”. :) I was specifically looking for one pair of images where both:
a. look visibly normal, and
b. don’t look like someone was trying quite hard to make the hashes match.

But, I understood before asking that it would be quite impossible to do so.

The fact of the matter is, these researchers worked VERY hard to make images that would match a hash THEY created, a trivial task for someone of their skill. They also proved that there’s an extremely low likelihood that any photo that anyone has taken would ever match their hash.
Did you even try the desert-image matches vs. the real porn-image matches?
You mean the ones you said weren’t a neural hash? No, because they weren’t a neural hash.
You just don't agree with world-leading experts; it's not the end of the world.
Nope, I AGREE with world-leading experts that it’s simple to create fake matches if you created the hash and know what it’s looking for, and that it’s exceedingly difficult to find fake matches in the wild.
 
I’d like to see the whole panel, but ultimately it only confirms that Apple doesn’t want to scan your stuff on their servers; they’d rather it be done on device, as something the user can turn off, keeping the CP from reaching their servers in the first place.
I see it differently. I think if Apple weren’t required by law to REPORT CP (isn’t everyone required to report child endangerment?), I doubt they’d care much about the images on their servers. There are loads of images of criminal activity, nudity and other things being uploaded to their servers with no effort to reject them.

I don’t see it as being about Apple not wanting the stuff on their servers. I see it more as: they WANT to encrypt everything, but with this government requirement to report CP, they can’t encrypt everything until they have a solution the government will agree to that still flags CP.
 
I see it differently. I think if Apple weren’t required by law to REPORT CP (isn’t everyone required to report child endangerment?), I doubt they’d care much about the images on their servers. There are loads of images of criminal activity, nudity and other things being uploaded to their servers with no effort to reject them.

I don’t see it as being about Apple not wanting the stuff on their servers. I see it more as: they WANT to encrypt everything, but with this government requirement to report CP, they can’t encrypt everything until they have a solution the government will agree to that still flags CP.
Not sure it matters who is the one that cares. Apple is made of people, and so are governments. People around the world agree that CP is bad. So whether it’s government people who dislike it forcing Apple and others to act, or Apple’s own people taking it seriously, the result is the same.

Whether Apple is being forced to do this or wants to do this, the result is almost the same. However, Jane H does state they want to keep data private. If they are/were using a system like Microsoft’s, they likely weren’t happy with the results, or only used it until they found something that met their standards (my opinion). On-device hashing is a lot more private than server-side. It’s also entirely in the user’s control.

One other person made the argument that someone who isn’t an offender but doesn’t want Apple hashing their photos can’t use iCloud Photos with hashing turned off, or choose to opt in to server-side scanning instead; they were just uncomfortable with on-device hashing. I’m guessing that building this would make scaling the feature more difficult, or make it impossible for developers to use as an API in their software (also my opinion).
 
Apple is not required by law to scan for this. They are required to report it if they notice it. Apple is doing all this of their own volition, likely because they thought it was good marketing.

However, Apple is well within their rights to refuse to store anything they don't like on their servers. If all the on-device scanner did was prevent uploading illegal images, it would meet Apple's stated desire not to have it.

That is not their plan. Their current plan is to permit uploading known illegal images until they have enough to support a legal case against the uploader. They are not preventing it on their servers; that is not their actual goal, getting the bad actor arrested is. Apple wants to play cop, using spyware on the user's device to assist in gathering evidence of a crime.
 
I see it differently. I think if Apple weren’t required by law to REPORT CP (isn’t everyone required to report child endangerment?), I doubt they’d care much about the images on their servers. There are loads of images of criminal activity, nudity and other things being uploaded to their servers with no effort to reject them.

I don’t see it as being about Apple not wanting the stuff on their servers. I see it more as: they WANT to encrypt everything, but with this government requirement to report CP, they can’t encrypt everything until they have a solution the government will agree to that still flags CP.
There's important nuance here. Apple is required to report. But they are not required to scan. What does this mean to you?
 
No, I get your point. Computer scientists haven’t defeated the system; they just created something that will flag a match for an algorithm they created (notably not Apple’s, but one they built themselves that we’re to ‘assume’ is functionally similar to Apple’s). That’s like me making a lock, then using the information I used to build the lock to create a key for it. That doesn’t mean I’ve defeated my lock. It STILL works as a lock and the door can’t be opened without a key. I’ve done nothing more than create another key that unlocks the lock (which is exactly how locks are expected to work).

Any computer scientist who COULDN’T fool a system they created themselves wouldn’t be a very good computer scientist. And, for the ones that did, I’m sure they’re enjoying the links to their work. :)
Sorry, but it seems to me that you do misunderstand (unless I am missing something; apologies if I am). Essentially, the researchers I cited created a system for minimally editing images so that they do not match the relevant perceptual hash of the kind Apple proposed to use, yet still look very similar to the human eye. Their research is not about creating false positives (fooling the system into flagging an image that is not a target). It is about creating false negatives (evading detection of an image that is a minimally edited target). And they do not need the code for the matching process to do this; its properties can be inferred by submitting pictures to it and observing the resulting perceptual hash (a black-box attack).

In order for Apple to combat the modification of images, the company would have to make the threshold for a 'match' much less stringent, causing a massive increase in false positives (flagging pictures as targets when they are not). So Apple would be faced with either letting edited CSAM material through undetected, or creating so many false positives that they would be manually screening many non-target images, thereby violating customers' privacy for nothing.
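To make that threshold trade-off concrete, here is a minimal sketch. It is not Apple's NeuralHash or the researchers' code; it uses a toy difference hash and an assumed Hamming-distance threshold purely for illustration.

```python
# Toy perceptual-hash matching, for illustration only (NOT NeuralHash).
from PIL import Image  # pip install Pillow


def dhash(path: str, hash_size: int = 8) -> int:
    """Toy difference hash: compare adjacent pixels of a tiny grayscale copy."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def is_match(a: int, b: int, threshold: int = 4) -> bool:
    # A tight threshold tolerates re-compression or resizing but lets a
    # deliberately nudged image slip past it (false negative). Loosening the
    # threshold catches more edited copies but also flags unrelated pictures
    # (false positives) -- the trade-off described above.
    return hamming(a, b) <= threshold
```

With a tight threshold, an image edited just enough slips through; loosen it and innocent pictures start colliding, which is the dilemma being described.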
 
  • Like
Reactions: IG88 and BurgDog
But, I understood before asking that it would be quite impossible to do so.
It isn't; you just refuse to accept reality and try to make your point. I get it, you're a butthurt Apple fanboy and try to justify anything they do; the forum is full of users doing the same. After all, this is an Apple forum. As I said, desert pictures produce neural-hash matches against actual porn images. I won't change your mind, and that's totally OK. I also won't argue with you; it's like arguing with anti-vaxxers, flat-earthers and Trump voters.

Here's how you contribute: get a PhD, get a teaching position at a top university, then make your arguments at conferences. The problem is, world-leading experts say something different from what you do, so you won't get far. But that's OK; not everyone needs a PhD. I probably would have stayed in industry without one, or retired completely and spent 50 years traveling around the world.
 
  • Like
Reactions: IG88
Not sure it matters who is the one that cares.
No, it doesn’t; it only changes how it’s characterized. It’s not that “Apple doesn’t want it on its platform”, it’s “NO one wants it on their platform.”
One other person made the argument that someone who isn’t an offender but doesn’t want Apple hashing their photos can’t use iCloud Photos with hashing turned off, or choose to opt in to server-side scanning instead; they were just uncomfortable with on-device hashing. I’m guessing that building this would make scaling the feature more difficult, or make it impossible for developers to use as an API in their software (also my opinion).
I think you’re right; the easy route is “encrypt everything on the server, use on-device hashing, done”. They are likely taking the more arduous path of making it configurable per account and redefining the iCloud experience. Apple will just make it clear that any legal request for images in the cloud will be accommodated only for accounts with on-device hashing turned off.
 
Apple is not required by law to scan for this. They are required to report it if they notice it. Apple is doing all this of their own volition, likely because they thought it was good marketing.
You are not required by law to scan for it, either. However, if someone happens across your machine and finds it on your HD, you can try to say “WELL I DIDN’T KNOW IT WAS THERE!” but that wouldn’t go well. :)

So, sure, they’re not required to scan for it. BUT, since just being in possession of it is bad whether you put it there or someone else put it there, scanning is the obvious solution.
 
The road to hell is paved with good intentions. I agree. However, it could also have been worded:

"Surreptitiously scanning a users photos to analyze them for potential nefarious purposes under the guise of protecting children" is put on hold permanently. Hopefully anyway.

So sad that people fall for this kind of nonsense.
 
  • Like
Reactions: IG88
Sorry, but it seems to me that you do misunderstand (unless I am missing something; apologies if I am). Essentially, the researchers I cited created a system for minimally editing images so that they do not match the relevant perceptual hash of the kind Apple proposed to use, yet still look very similar to the human eye.
Yes, I misunderstood. It’s not that they’re defeating a hash; they’re defeating the goal of being able to detect the images. In their black-box study, though, there was no downside to evaluating multiple images to determine which ones hadn’t been altered enough and which ones had. In the real world, it’s unlikely that someone has a large cache of CP images they’re willing to upload to a service that’s performing CSAM checks to see which ones have been edited enough and which ones haven’t… making additional minor changes (once they get out of jail… if they get out of jail?) and trying again. I understand the methodology of their academic work, but it’s still academic. It’s like the group that showed it was possible to defeat FaceID or TouchID… if you first assume a set of circumstances that are highly unlikely in the real world. There are many things that are possible yet still very unlikely.
 
  • Like
Reactions: VulchR
It isn't; you just refuse to accept reality and try to make your point. I get it, you're a butthurt Apple fanboy and try to justify anything they do; the forum is full of users doing the same.
“I really have no response… sooo BUTT HURT FANBOY!” :)
As I said, desert pictures produce neural-hash matches against actual porn images.
And? Image gets flagged, manual review, it’s a desert… no action. I could understand if a single match immediately led to the account holder being put through an arduous legal process. THAT would be a cause for concern. There are absolutely ZERO legal professionals that would attempt to bring a case against someone because they have a picture of a desert. Unless they want to be disbarred, I guess.
 
“I really have no response… sooo BUTT HURT FANBOY!” :)
OK, live with it. A response has been made; you just ignore it. All good. You're acting like one of my first-semester students: knowing it all, but failing horribly and then moving on to other places that hand out degrees for free. At least you're in good company.

Not that it matters what's posted in this forum by anyone. The scientific community, all the experts out in the world, have spoken out against Apple's system for the reasons mentioned. Live with it, or don't, with your head in the sand.
 
  • Like
Reactions: KindJamz
You are not required by law to scan for it, either. However, if someone happens across your machine and finds it on your HD, you can try to say “WELL I DIDN’T KNOW IT WAS THERE!” but that wouldn’t go well. :)

So, sure, they’re not required to scan for it. BUT, since just being in possession of it is bad whether you put it there or someone else put it there, scanning is the obvious solution.
Apple has better lawyers than I can afford, so the "I didn't know it was there" defense might actually work for them.

Still, Apple's plan doesn't prevent uploading kiddy porn images anyway, and they knowingly have them on their servers, something I definitely can't do. They tag them as illegal with the device scanner and still upload them. They only act to report the uploader when they get enough of them; below that threshold they are cool with knowingly hosting them.

And Apple is also pretty sure they won't get in legal trouble for knowingly hosting those images, nicely pre-tagged as kiddy porn, because they're just gathering evidence to help convict the bad guys. They get a pass on a law that would put me in jail if I tried it.
 
Yes, I misunderstood. It’s not that they’re defeating a hash; they’re defeating the goal of being able to detect the images. In their black-box study, though, there was no downside to evaluating multiple images to determine which ones hadn’t been altered enough and which ones had. In the real world, it’s unlikely that someone has a large cache of CP images they’re willing to upload to a service that’s performing CSAM checks to see which ones have been edited enough and which ones haven’t… making additional minor changes (once they get out of jail… if they get out of jail?) and trying again. I understand the methodology of their academic work, but it’s still academic. It’s like the group that showed it was possible to defeat FaceID or TouchID… if you first assume a set of circumstances that are highly unlikely in the real world. There are many things that are possible yet still very unlikely.
Most pedophiles who get caught have hundreds if not thousands of images, and I think the manuscript said something about intercepting the output of the matching process. However, it is true the manuscript made many assumptions. Just remember the people involved are highly motivated criminals. Only one way to circumvent the detection needs to be discovered and distributed.

I think Apple should just scan server-side, or better yet, not at all until a search warrant is received for a specific account, after which I would hope they would scan with great gusto.
 
I've read up on the "issue". And it's nonsense. People on here parroting the same tired "OMG! What if China used this!" over and over again. If China wanted to use it, they would be using it already.

Or, my other favorite line: it's like they put a camera in your house to look for domestic abuse. No! This is like you wanting to beat your wife at the mall -- and complaining that the mall has you on tape and the cops are called when you do.

If you don't want to have your photos scanned, don't UPLOAD THEM to Apple's iCloud server -- Apple doesn't want your child porn on its servers! This isn't complicated.

Or, better yet, don't have child porn on your phone!

Every time I see someone go crazy over this, I just assume you're trying to model your life after Josh Duggar.

Apple wants to do end-to-end encryption of photos and this will allow them to do so. I *welcome* this new CSAM scanning as it will protect my data better than the current situation.

Time to say it: you don't understand the issue at all.
 
Apple has better lawyers than I can afford, so the "I didn't know it was there" defense might actually work for them.
No, it wouldn’t. :) It might have back when content providers were not responsible for the content on their servers, but that ship sailed long ago (and led to the shutdown of many sites, increased moderation on others, and others being re-hosted outside US jurisdiction).
They only act to report the uploader when they get enough of them; below that threshold they are cool with knowingly hosting them.
Which, again, is the point I’m making…below that threshold they’re cool with it because the authorities are cool with it. Once it passes a threshold, they have to report it. Because the relevant authorities say they have to.
 
Most pedophiles who get caught have hundreds if not thousands of images, and I think the manuscript said something about intercepting the output of the matching process. However, it is true the manuscript made many assumptions. Just remember the people involved are highly motivated criminals. Only one way to circumvent the detection needs to be discovered and distributed.
Right, when you read into the details of any academic paper, it contains assumptions, some of which hold in the real world and some that don’t. In this case, they intercepted the output of the matching process (which they were able to do because they had control of the “black box”). Rewrite the paper so that they don’t have access to the output… well, I’m assuming the paper then either wouldn’t get published OR would get FAR less attention, two things that would be against the best interests of the folks writing it.

In the real world, they wouldn’t have access to the output, which is a critical part of the whole exercise. Remove the method they used to actually test and tweak the images (and who knows how many times they had to adjust their images in the black-box test), and you completely remove their ability to confirm that an edited image evades a match. Correction: there IS a way they could get access to the output in the real world, and that’s if someone gets sent to jail (indicating that the changes they made to the images weren’t good enough to avoid detection). But since they would only be investigated after several images had been uploaded, they wouldn’t even have full confidence about which images in the set were the ones that weren’t good enough. Not very efficient or effective, even for the most motivated criminal.
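As a purely hypothetical sketch of that dependency (none of this is Apple's system or the researchers' actual code; perturb, query_hash and the threshold are stand-ins invented for illustration), the tweak-and-test loop only works if the matcher's output is available as feedback:

```python
# Illustrative only: the evasion loop needs the hash output as feedback.
import random
from PIL import Image  # pip install Pillow


def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def perturb(img: Image.Image, strength: int = 4) -> Image.Image:
    """Nudge a few pixels so the picture still looks the same to a person."""
    out = img.convert("RGB")
    px = out.load()
    w, h = out.size
    for _ in range(max(1, (w * h) // 20)):  # touch roughly 5% of pixels
        x, y = random.randrange(w), random.randrange(h)
        r, g, b = px[x, y]
        px[x, y] = tuple(max(0, min(255, c + random.randint(-strength, strength)))
                         for c in (r, g, b))
    return out


def tweak_until_it_evades(img, target_hash, query_hash, threshold=4, max_tries=100):
    """Only possible because `query_hash` (the matcher's output) is observable.

    Take that feedback away -- as it is for someone uploading to a real
    service -- and there is no way to know whether any given edit was enough.
    """
    for _ in range(max_tries):
        if hamming(query_hash(img), target_hash) > threshold:
            return img  # this edit no longer matches
        img = perturb(img)
    return None  # never confirmed an evasion within the budget
```

Without query_hash, the loop has no stopping condition, which is the point being made above.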
 
Student: But, aren’t these images, relatively speaking, easy to alter in this way? Like, are there any examples that use high res, high color images?
Teacher: YOU’RE JUST BUTT HURT! Get outta my class!
 
For those interested, more natural image collisions: this time there are also some near-collisions, which help identify what kinds of images to look for to find more collisions beyond the already-mentioned deserts vs. actual porn and the ImageNet collision. There are plenty more out there. There are also over 6,000 natural images with collisions in category #1 of the JFT-300M set. Apple also found many collisions, but of course they say it's all within acceptable limits. More work on filter visualization has to be done to make this easier and to reduce the processing time for finding collisions.

Apple is currently adding published collisions to a blacklist manually so they won't match anymore. But luckily, many research groups have now agreed to build a, for now, non-public dataset to keep research going until Apple deploys the system. When publications are made, they will have more of an impact, similar to the 100+ million iOS users affected by malware when the XcodeGhost publication came out, and the log4j issues we're seeing right now.

On a side note, maybe it's time for a more restricted forum for this type of discussion. There are already some minor restrictions for political news, which honestly aren't helping much. So a research/science section with posting restrictions would make sense; say, no posting without a verified PhD in a relevant field.
 
  • Like
Reactions: snek
Here's another interesting read by Susan Landau, who holds a professorship in cybersecurity after her education at Princeton, Cornell and MIT.
https://www.lawfareblog.com/normalizing-surveillance

She pretty much agrees with things being said here: https://www.computerworld.com/artic...troversial-csam-scanning-back-to-the-lab.html

Many interesting things are mentioned there, including the issue that even with E2EE, data is no longer secure when scanning happens on the device. This opens a whole new can of worms.
 