

Apple's senior vice president of software engineering, Craig Federighi, has today defended the company's controversial planned child safety features in a significant interview with The Wall Street Journal, revealing a number of new details about the safeguards built into Apple's system for scanning users' photo libraries for Child Sexual Abuse Material (CSAM).


Federighi admitted that Apple had poorly handled last week's announcement of the two new features, one detecting explicit content in Messages for children and the other detecting CSAM content stored in iCloud Photos libraries, and acknowledged the widespread confusion around the tools:
It's really clear a lot of messages got jumbled pretty badly in terms of how things were understood. We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing.

[...]

In hindsight, introducing these two features at the same time was a recipe for this kind of confusion. By releasing them at the same time, people technically connected them and got very scared: what's happening with my messages? The answer is...nothing is happening with your messages.

The Communications Safety feature means that if children send or receive explicit images via iMessage, they will be warned before viewing it, the image will be blurred, and there will be an option for their parents to be alerted. CSAM scanning, on the other hand, attempts to match users' photos with hashed images of known CSAM before they are uploaded to iCloud. Accounts that have had CSAM detected will then be subject to a manual review by Apple and may be reported to the National Center for Missing and Exploited Children (NCMEC).
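
In rough terms, the pre-upload matching and the review threshold can be pictured with the short Swift sketch below. This is only an illustration: the hash strings and function names are made-up placeholders, and Apple's actual system relies on its NeuralHash algorithm and a cryptographic private set intersection protocol rather than plain string comparison.

Code:
// Toy model of "match against known hashes, flag only past a threshold".
struct MatchResult {
    let matchedCount: Int
    let thresholdReached: Bool
}

func evaluateLibrary(photoHashes: [String],
                     knownHashes: Set<String>,
                     threshold: Int = 30) -> MatchResult {
    // Count photos whose hash appears in the known-CSAM database.
    let matched = photoHashes.filter { knownHashes.contains($0) }.count
    // Nothing is surfaced for human review unless the count crosses the threshold.
    return MatchResult(matchedCount: matched,
                       thresholdReached: matched >= threshold)
}

// Usage with made-up hash strings.
let result = evaluateLibrary(photoHashes: ["aa01", "bb02", "cc03"],
                             knownHashes: ["bb02"])
print(result.matchedCount, result.thresholdReached) // 1 false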

The new features have been subject to a large amount of criticism from users, security researchers, the Electronic Frontier Foundation (EFF), Edward Snowden, Facebook's former security chief Alex Stamos, and even Apple employees.

Amid these criticisms, Federighi addressed one of the main areas of concern, emphasizing that Apple's system will be protected against being taken advantage of by governments or other third parties with "multiple levels of auditability."


Federighi also revealed a number of new details around the system's safeguards, such as the fact that a user's Photos library will need to contain around 30 matches for known CSAM content before Apple is alerted, whereupon Apple will confirm whether those images appear to be genuine instances of CSAM.
If and only if you meet a threshold of something on the order of 30 known child pornographic images matching, only then does Apple know anything about your account and know anything about those images, and at that point, only knows about those images, not about any of your other images. This isn't doing some analysis for did you have a picture of your child in the bathtub? Or, for that matter, did you have a picture of some pornography of any other sort? This is literally only matching on the exact fingerprints of specific known child pornographic images.
He also pointed out the security advantage of placing the matching process on the iPhone directly, rather than it occurring on iCloud's servers.
Because it's on the [phone], security researchers are constantly able to introspect what’s happening in Apple’s [phone] software. So if any changes were made that were to expand the scope of this in some way —in a way that we had committed to not doing—there's verifiability, they can spot that that's happening.

When asked if the database of images used to match CSAM content on users' devices could be compromised by having other materials inserted, such as political content in certain regions, Federighi explained that the database is constructed from known CSAM images from multiple child safety organizations, with at least two being "in distinct jurisdictions," to protect against abuse of the system.

These child protection organizations, as well as an independent auditor, will be able to verify that the database of images only consists of content from those entities, according to Federighi.
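
As a rough illustration of that "intersection of two jurisdictions" idea, here is a small Swift sketch. The organization names and hash values are invented placeholders; the real database is built from perceptual image hashes supplied by the child safety organizations, not short strings like these.

Code:
// Hypothetical hash lists from two child safety organizations in
// distinct jurisdictions (values are made up for illustration).
let organizationAHashes: Set<String> = ["a1f3", "9bd2", "77c0", "5e19"]
let organizationBHashes: Set<String> = ["9bd2", "77c0", "e410"]

// Only hashes that both organizations independently supplied end up in the
// on-device database, so no single government or group can slip extra
// entries in on its own.
let shippedDatabase = organizationAHashes.intersection(organizationBHashes)
print(shippedDatabase) // contains "9bd2" and "77c0" in this toy example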

Federighi's interview is among the biggest PR pushbacks from Apple so far following the mixed public response to the announcement of the child safety features, but the company has also repeatedly attempted to address users' concerns, publishing an FAQ and responding to questions in interviews with the media.

Article Link: Craig Federighi Acknowledges Confusion Around Apple Child Safety Features and Explains New Details About Safeguards
 
Because it's on the [phone], security researchers are constantly able to introspect what’s happening in Apple’s [phone] software. So if any changes were made that were to expand the scope of this in some way —in a way that we had committed to not doing—there's verifiability, they can spot that that's happening.
How does this change the fact that there's now essentially a new backdoor installed in iOS 15, just waiting to be abused?

Stop defending and get rid of this BS, Apple.
 
Thank you, Craig! I wonder if Craig woke up and read my comment from the previous article haha!


All jokes aside, let's talk business, Craig: you don't know what you're talking about when all you've ever done is praise how important privacy is. Stay away from my PRIVACY, please. It is my HUMAN RIGHT. Craig, please give us an opt-out option for CSAM scanning. Let our voices be heard. I will not appreciate Apple scanning my iCloud photos, whether it's through AI or hashes.

It sounds like Apple is using "protecting children" as a pretext for spying on consumers.

STOP this mass surveillance before it launches. This needs to be SHUT DOWN.

Apple, you are not a law enforcement organization. Stop acting like one. Apple, how are you not getting the point? You are violating our PRIVACY rights. Over 7,000 signatures have been collected. Stop playing with our PRIVACY and HUMAN RIGHTS.

 

Objectively, this is a good PR response to this issue. But he said this: "security researchers are constantly able to introspect what's happening in Apple's [phone] software." Is there any truth to that? Apple's software is closed source and supposedly very secure. So how will security researchers introspect it?
 
One word about "authoritarian governments": regimes like those don't need Apple's CSAM solution to lock up their opponents. They can just plant any image or video on your device with zero-day-exploit-based tools, and use the plain old "hand-controlled" police or secret police to conduct a made-up investigation that uncovers the evidence they planted.
 
It is confusing... but they are gaslighting us into thinking the confusion is universal, when there is a large subset of people who understand it clearly and still dissent.

Indeed. The hubris is astounding. They STILL have not at all explained why they can't just scan on their iCloud servers instead of insisting it has to be done on the device, and they just keep piling on the obfuscation.
 
Craig! How are you defending Apple when you said "privacy is a fundamental human right"? Come on, man. Did you wake up on the wrong side of the bed too, or what?

“At Apple, we believe privacy is a fundamental human right,” said Craig Federighi, Apple’s senior VP of software engineering. “We don’t think you should have to make a tradeoff between great features and privacy. We believe you deserve both.”
 
It’s the local scanning that bothers me. I see it as a weapon of mass surveillance. Even mighty Microsoft had to let China examine their Windows source code.

I fear the day when that local scanner looks beyond its scope and I can never trust Apple to do the right thing with this specific technology.
 
Indeed. The hubris is astounding. They STILL have not at all explained why they can't just scan on their iCloud servers instead of insisting it has to be done on the device, and they just keep piling on the obfuscation.
That part is not confusing: they're trying to hammer home the fact that the code doing the scanning is on your device, which means it can be explored by anyone. Server-side scanning is "scarier" because it means we have no idea what they're actually doing. Of course, I believe they're already doing that for many things.
 
After reading Craig's interview, it sounds like he's here to brainwash us as well. Very disappointed, when all Craig ever talked about was how important privacy is.

Craig Federighi! How about we perform a scan on your iPhone and see what we find inside? Let's go over your iPhone and talk about exposing privacy before you ask us to give ours up.
 


In our enlightened age you can't disagree, you just don't understand.

Buy a NAS and store your pictures there.
 
By having a threshold. One picture isn't enough. Maybe they have set the threshold to 50.
[...]
Apple has stated that the false positive probability is "1 in 1 trillion accounts per year", which leads me to believe it's 10 or more.
(50 + 10) / 2 = 30

Nailed it :)
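
For anyone wanting to sanity-check how a threshold of around 30 could produce a "1 in 1 trillion accounts per year" figure, here is a back-of-the-envelope Swift sketch. The per-image false-match rate and library size below are assumed placeholder numbers, not figures Apple has published, so this only shows the shape of the math (a binomial tail), not Apple's actual analysis.

Code:
import Foundation

// Log of the binomial coefficient C(n, k), via lgamma to avoid overflow.
func logChoose(_ n: Int, _ k: Int) -> Double {
    return lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
}

// P(X >= t) for X ~ Binomial(n, p): the chance an account with n photos
// accumulates at least t false matches purely by accident.
func probabilityOfAtLeast(_ t: Int, outOf n: Int, perImageRate p: Double) -> Double {
    var total = 0.0
    for k in t...n {
        let logTerm = logChoose(n, k) + Double(k) * log(p) + Double(n - k) * log(1 - p)
        total += exp(logTerm)
    }
    return total
}

// Assumed numbers: 100,000 photos and a 1-in-a-million per-image false-match
// rate. With a threshold of 30, the accidental-flag probability is vanishingly small.
print(probabilityOfAtLeast(30, outOf: 100_000, perImageRate: 1e-6))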


 
He's lying to us, dude. He's trying to defend Apple. Don't get played by Craig's comments. 😂

It's all wishy-washy stuff again.

Why was this not announced at WWDC 2021?
 
Why do this "great feature" image analysis at all, even if it is local? I noticed the other day that even grain varieties are hashed. Or pets.
Who is pushing this idea? We customers are not interested in it, so we are not the stakeholders in this game.
It would be good if Apple asked its customers who is actually excited about this functionality. Technically, I'm impressed with how well facial recognition works, but suddenly I realize that a hash has already been prepared for every face in my library. This can end badly, because it arouses the curiosity of people who are not interested in privacy. Then it can easily happen that hashes of terrorists, for example, get folded into the CSAM system on every private device, and with just one such precedent the dam is quickly broken.

Edward Snowden is not a crank, and he warned about exactly such scenarios a few days ago, because he knows this world pretty well.

I don't think Apple will be strong enough to put this demon in its place either.
Never play with fire. They should know that in Cupertino.

Unfortunately, Apple has currently proven too weak to protect its future areas of development (which depend heavily on customer trust). This stupidity is really hard to understand. In my company, anyone instigating such scanning would have been warned off as damaging to future business.

It's high time to turn back, and please don't forget an apologetic genuflection to us customers, Mr. Craig, Erik Neuenschwander, the Ministry of State Security of the People's Republic of China, the Federal Security Service of the Russian Federation (FSB), etc.
I would have been impressed by a straight statement like this:

"We at Apple have been forced by law to perform image scans, which we are supposed to justify with the protection of children. Since we are too weak to enforce consumer interests against the institutions by pushing for a general ban on image analysis, we want to be a little better than Google and the like, and have created an instrument that is difficult to explain."

This would put the ball back in the court of those actually responsible; Apple would not have lost any trust and would not be attempting crude and funky interviews…
 