haha okay then
No @svenning. I've done a bit of recreational work in this area and this is my knowledge...

The device could make a blueprint describing the image's shapes, lines, and colors. That is coded into a hash that is compared with other images. Someone could change the pixels to a certain degree, but think of it this way... imagine a circle with a few pixels missing. Your mind can fill in the blank and still understand that it's a circle, right?

Image processing and machine learning work the same way. Computers have been trained to analyze shapes and patterns and to fill in altered or missing content so the result is extremely close to the original.

But I'm guessing that for privacy, the photo hash is one-way and cannot be reversed to restore or even approximate the original photo.
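
To make that concrete, here is a toy sketch of the idea in Python. This is just a simple average-hash style fingerprint I'm using for illustration, not Apple's actual NeuralHash:

```python
# Toy "perceptual" fingerprint: a crude average hash over a tiny grayscale
# grid. Purely illustrative; Apple's NeuralHash is far more sophisticated.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's average.
    return ''.join('1' if p >= avg else '0' for p in flat)

# A crude 4x4 "shape"...
original = [
    [ 10, 200, 200,  10],
    [200,  10,  10, 200],
    [200,  10,  10, 200],
    [ 10, 200, 200,  10],
]

# ...and the same shape with one pixel nudged.
tweaked = [row[:] for row in original]
tweaked[0][0] = 30

print(average_hash(original))  # 0110100110010110
print(average_hash(tweaked))   # 0110100110010110 -- the fingerprint survives
```

Small pixel changes don't move the overall shape the hash describes, which is why changing a few pixels isn't enough to dodge this kind of matching.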

But I have no idea what Apple is actually doing.
Thanks for sharing (and educating me :)). Really helpful and interesting
 
It's so interesting they didn't bring this up - anywhere - at WWDC

If this feature is so noble and well conceived and implemented... and "increases privacy"

Boy you'd sure think they would have wanted to shout about it from the rooftops...not backdoor slide it in right before new iOS and macOS releases.
They knew there would be backlash (or in Apple-speak, "misunderstandings"). They figured they could just slip it into a larger update and most people would be distracted by other features. Then it got leaked, so they figured they could put out a PR statement and it would blow over by the next day. People are still angry more than a week later, so they figure good ol' Hair Force One can put a nice spin on it ("on-device spyware is actually more private!"). Keep up the anger folks. If the outrage continues into next week we'll see the new FUD they put out.
 
The public are also very angry that Apple are refusing to come clean about all of this. The PR spin that Apple is still pushing is being easily seen through. People realise no one is perfect and accept apologies for mistakes. What people will not stand for is stubbornly trying to justify your mistakes with PR spin.

I agree with everyone here. Keep up the anger. Force Apple to deal with the situation. We all know Apple will not willingly do this, only by force.
 
I'm blown away - not in a good way - how many people I'm running into that are ok with basically all/any means to find "illegal content" people might have.

Do we really have so many people ready to start living in a police state (more than we already have in so many places)?

It's so naive to align with that view... all it takes to have a "problem" is a differing view on what is "illegal" or even "objectionable", down to simply "not what a certain entity wants to exist".

I just honestly don't think people think it through to the endpoints enough.
 


Apple's senior vice president of software engineering, Craig Federighi, has today defended the company's controversial planned child safety features in a significant interview with The Wall Street Journal, revealing a number of new details about the safeguards built into Apple's system for scanning users' photo libraries for Child Sexual Abuse Material (CSAM).


Federighi admitted that Apple had handled last week's announcement of the two new features poorly, relating to detecting explicit content in Messages for children and CSAM content stored in iCloud Photos libraries, and acknowledged the widespread confusion around the tools.

The Communications Safety feature means that if children send or receive explicit images via iMessage, they will be warned before viewing them, the images will be blurred, and there will be an option for their parents to be alerted. CSAM scanning, on the other hand, attempts to match users' photos with hashed images of known CSAM before they are uploaded to iCloud. Accounts that have had CSAM detected will then be subject to a manual review by Apple and may be reported to the National Center for Missing and Exploited Children (NCMEC).
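
As a rough illustration of the Communications Safety flow described above (purely a sketch, with a made-up classifier result standing in for whatever on-device model Apple actually uses):

```python
# Sketch of the decision flow on a child's account; illustrative only.

def present_image_to_child(looks_explicit: bool, parental_alerts_enabled: bool) -> dict:
    """Decide how an incoming image is shown on a child's account."""
    if not looks_explicit:
        return {'blurred': False, 'warned': False, 'parents_alerted': False}
    return {
        'blurred': True,                             # image is blurred
        'warned': True,                              # child is warned before viewing
        'parents_alerted': parental_alerts_enabled,  # optional parental alert
    }

print(present_image_to_child(looks_explicit=True, parental_alerts_enabled=True))
# {'blurred': True, 'warned': True, 'parents_alerted': True}
```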

The new features have been subject to a large amount of criticism from users, security researchers, the Electronic Frontier Foundation (EFF), Edward Snowden, Facebook's former security chief, and even Apple employees.

Amid these criticisms, Federighi addressed one of the main areas of concern, emphasizing that Apple's system will be protected against being taken advantage of by governments or other third parties with "multiple levels of auditability."


Federighi also revealed a number of new details about the system's safeguards, such as the fact that a user will need to meet a threshold of around 30 matches for CSAM content in their Photos library before Apple is alerted, whereupon it will confirm whether those images appear to be genuine instances of CSAM. He also pointed out the security advantage of placing the matching process on the iPhone directly, rather than having it occur on iCloud's servers.
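
As a rough sketch of how a match threshold can gate any review (the structure below is a guess at the general idea, not Apple's implementation; only the "around 30" figure comes from Federighi):

```python
# Count how many of an account's photo fingerprints appear in the known-CSAM
# hash list, and only surface the account once the count crosses a threshold.
# Hashes here are placeholder strings, not real data.

MATCH_THRESHOLD = 30  # Federighi's "around 30" figure

known_hashes = {f'known-{i:04d}' for i in range(1000)}  # placeholder database

def should_flag_for_review(photo_hashes):
    matches = sum(1 for h in photo_hashes if h in known_hashes)
    return matches >= MATCH_THRESHOLD

# A library with only a handful of matches stays below the threshold...
print(should_flag_for_review([f'known-{i:04d}' for i in range(5)]))   # False
# ...while one with 30 or more matched images would be surfaced for review.
print(should_flag_for_review([f'known-{i:04d}' for i in range(35)]))  # True
```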

When asked if the database of images used to match CSAM content on users' devices could be compromised by having other materials inserted, such as political content in certain regions, Federighi explained that the database is constructed from known CSAM images from multiple child safety organizations, with at least two being "in distinct jurisdictions," to protect against abuse of the system.

These child protection organizations, as well as an independent auditor, will be able to verify that the database of images only consists of content from those entities, according to Federighi.
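
One way to read the "distinct jurisdictions" safeguard is that only hashes supplied by more than one organization make it into the shipped database. A hypothetical sketch of that intersection idea (my own construction, not Apple's published process):

```python
# Hypothetical: only hashes provided by at least two independent
# child-safety organizations survive into the shipped database.

org_a = {'hash-1', 'hash-2', 'hash-3', 'hash-9'}  # e.g. a US organization
org_b = {'hash-1', 'hash-2', 'hash-3', 'hash-7'}  # e.g. one in another jurisdiction

# A hash unique to a single country's list (say, politically motivated
# content slipped in by one government) never survives the intersection.
shipped_database = org_a & org_b

print(sorted(shipped_database))  # ['hash-1', 'hash-2', 'hash-3']
```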

Federighi's interview is among the biggest PR pushbacks from Apple so far following the mixed public response to the announcement of the child safety features, though the company has also repeatedly attempted to address users' concerns by publishing an FAQ and responding to questions in interviews with the media.

Article Link: Craig Federighi Acknowledges Confusion Around Apple Child Safety Features and Explains New Details About Safeguards

Federighi can spin this any way he wants. I don’t care how “good” of a cause this is. This is still warrantless searching. If Apple or anyone else wants to know if there is anything illegal on someone’s phone, they can do it the proper, legal way: by getting a search warrant from a judge. End of story. Otherwise, get out!
 
"This is literally only matching on the exact fingerprints of specific known child pornographic images."

OK so does this do an exact match or not? If it does an exact match, good luck with that as pedophiles avoid the system by changing pixels. Also if that were true, then why the need for the threshold? What Apple has released already doesn't really match Federighi's statement. It sounds like they are doing an approximate match based on perceptual features, so the question is not whether there will be false positives, or how often false positives will be flagged up, but whether false positives will always look like the CSAM images (e.g., having certain poses, exposed skin, etc.), in which case they will be sensitive pictures of people. And thirty false positives is potentially nothing given that people often take multiple pictures of the same event/scene.
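
For what it's worth, the difference between an exact match and a perceptual match boils down to comparing fingerprints with a tolerance. A toy sketch (my own made-up fingerprints and threshold, not Apple's numbers):

```python
# Exact vs. approximate (perceptual) matching; everything here is made up.

def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

known_fingerprint = '0110100110010110'

candidates = {
    'exact copy':        '0110100110010110',  # byte-for-byte identical image
    'edited copy':       '0110100110010111',  # a few pixels changed, 1 bit differs
    'unrelated picture': '1010010101101000',  # a different photo entirely
}

TOLERANCE = 2  # hypothetical: how many differing bits still count as a match

for name, fp in candidates.items():
    d = hamming(known_fingerprint, fp)
    print(f'{name}: distance {d} -> {"match" if d <= TOLERANCE else "no match"}')

# An exact-match system would only flag the first case; a perceptual system
# also catches the edited copy, which is why false positives are even possible.
```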

I am not confused*, I am concerned. And the concerns I have covered here apply only if the system works as planned, and is not corrupted, as it almost certainly will be, for far less noble purposes than detecting child porn. Apple has just demonstrated to every authoritarian government on the planet that their new chips plus a software framework can be used as an extension of an AI agent that can perform surveillance of virtually anything. Good job Apple. Idiots.

*Other than by seemingly self-contradictory statements Apple has made and their lack of transparency about the algorithm.
 
We understand it just fine. That's not the problem. So sad to see Federighi trotted out to defend this cluster. They know that he is the favorite of Apple fans. Sad to see his rep tarnished. The sooner they backtrack, the less damage there will be. However, at this point, the damage will be substantial no matter what.

I can't say it any better than this post on social media: "So Apple still tries to defend their on-device surveillance with the standpoint that people just don’t understand what #spyphone is about. We do understand and you’re wrong."
 
No, like I’m sorry. I’ve debated this over and over in my head. But Apple cannot be this naive, this dumb, to have focused this long and this much manpower on a tool aimed at combating CSAM only to then alert pedos that they can just disable iCloud Photos and they’ll be good. It’s almost so dumb that it makes the very notion look suspect. Regardless of the tech details, anyone can read the NeuralHash tech papers and understand what they’re setting out to do; it just seems completely unbalanced to sacrifice user privacy, or degrade encryption in messaging at all, for a CSAM goal when practically anyone hiding these kinds of photos will have long since disabled iCloud Photos or be thinking about other storage options by now.

What does make more sense is that Apple is merely trying to meet legal obligations and get this content off iCloud and wash its hands clean of it. But to think this will truly combat child abuse, I really don’t think so.
“Degrade encryption”. That’s false. End-to-end encryption isn’t broken at any point. They use on-device intelligence to keep encryption intact.
 
Rene Ritchie did a really good explanation on his YouTube channel. I can see why people are confused and upset about it. Maybe the explanation will help.
That explanation only told us how the system works. It is a good video.
However the video says little about how the issues with the system will be sorted out. Rene in this exact same video even tried to defend Apple by saying, "E2EE was never possible because people keep forgetting their iCloud passwords, so CSAM is not at all different to that".

Rene's videos are amazing for the technical content they deliver. However he does not hold Apple to account when needed. He will defend Apple to the bitter end. Anyone who watches his videos needs to realise that the technical information in them is great, but Rene has his biases as well.
 
I agree.
I guess it would be too "on the nose" to ask them to just say that?

I don't even see the harm at this point.
They're already going out of their way to tell actual pedophiles out there how to avoid any issues here.

It begs the question - why do this? What's really in play here? (to your points)

It's a slap in the face for them to honestly say this will do much about CSAM...
...equally, it's an even bigger slap in the face to have them defend "this is even more privacy!" as a concept now.
It obviously won’t stop that many pervs. But even if they only catch a few, I’ll be happy. The real point is for Apple to keep their immunity from liability intact. Nobody can expect them to forfeit immunity; that would be business suicide.
 
I'm blown away - not in a good way - how many people I'm running into that are ok with basically all/any means to find "illegal content" people might have.

Do we really have so many people ready to start living in a police state (more than we already have in so many places)?

It's so naive to align with that view... all it takes to have a "problem" is a differing view on what is "illegal" or even "objectionable", down to simply "not what a certain entity wants to exist".

I just honestly don't think people think it through to the endpoints enough.

Child s*x abuse is HUGE, a lot bigger than people seem to think. Most women (like 70%+) have been s*xually abused as children, and almost half of men. Child s*x trafficking is bigger than adult s*x trafficking, and barely legal teen p*rn is the most popular.

Child labor is bad but it doesn’t compare at all to how bad child s*x abuse and trafficking are.

Do I agree with the CSAM scanning in general? I’m iffy on the whole thing, but it’s not surprising at all that people are ok with it.
 
OK so does this do an exact match or not? If it does an exact match, good luck with that as pedophiles avoid the system by changing pixels. Also if that were true, then why the need for the threshold? What Apple has released already doesn't really match Federighi's statement. It sounds like they are doing an approximate match based on perceptual features, so the question is not whether there will be false positives, or how often false positives will be flagged up, but whether false positives will always look like the CSAM images (e.g., having certain poses, exposed skin, etc.), in which case they will be sensitive pictures of people. And thirty false positives is potentially nothing given that people often take multiple pictures of the same event/scene.

I am not confused*, I am concerned. And the concerns I have covered here apply only if the system works as planned, and is not corrupted, as it almost certainly will be, for far less noble purposes than detecting child porn. Apple has just demonstrated to every authoritarian government on the planet that their new chips plus a software framework can be used as an extension of an AI agent that can perform surveillance of virtually anything. Good job Apple. Idiots.

*Other than by seemingly self-contradictory statements Apple has made and their lack of transparency about the algorithm.
They’ve been scanning your photos with on-device intelligence for years. And foreign governments know that already.
 
No. Apple will be scanning your hashed photos against a table of hashed photos of child porn.

Would you prefer the government scan your photo JPEGs on Apple's servers, by government mandate, should push come to shove?
How about neither. The better option is for the police to actually do their job and get search warrants before they search anything, hashed or otherwise.
 
Apple says that this feature is more private because they don't have to scan all the photos in iCloud, like the other cloud storage providers do. They are ignoring something big here: there is no law that they must proactively scan iCloud for CSAM. They don't have to do any of this, and they shouldn't. They should E2EE iCloud, and if the FBI is what is stopping them, it just proves that Apple can't be trusted to stand up to the government.
 
Child s*x abuse is HUGE, a lot bigger than people seem to think. Most women (like 70%+) have been s*xually abused as children, and almost half of men. Child s*x trafficking is bigger than adult s*x trafficking, and barely legal teen p*rn is the most popular.

Child labor is bad but it doesn’t compare at all to how bad child s*x abuse and trafficking are.

Do I agree with the CSAM scanning in general? I’m iffy on the whole thing, but it’s not surprising at all that people are ok with it.

How does something that is so narrowly scoped help at all really?

This will catch - perhaps...

Pedophiles who use Apple products and store CSAM in their camera roll and also use iCloud to collect known CSAM, who also have more than 30 photographs of known child pornography, and who are going to read all of this news and still carry on as usual.
 
I know exactly how it works from an Engineering standpoint and I still don’t like it.

A back door is a back door, is a back door.

Your phone will snitch on you, will snitch on you, will snitch on you.

A personal device must be absolutely trustworthy to you, to hell and back.

This is a bad precedent and will have unimaginable consequences down the road.

It will be like slowly boiling a frog in warm water.

Rights that are taken away, for whatever justified reasons there may be, even temporarily, are seldom given back.
 
Google does this, Samsung does this, EVERYONE DOES THIS. So people saying it degrades Apple somehow aren’t paying attention. I don’t remember this uproar when Google started their version of this program.
 