I hope so; I’m actually really disappointed with MacRumors’ coverage of this. I just read the Apple overview and its linked white papers.
They are fairly well done, but I’m seriously convinced that only two of the three Child Safety features were ever actually going to be announced (the first is the on-device warning that your child is sending or receiving potentially sensitive material, and the third is the apparent hard block on using Siri and Search for material involving child sexual abuse).
I have some nitpicks about both (especially the restrictions on Siri and Search), but nothing earth-shattering.
The on-device / client versus iCloud / server CSAM techniques, though definitely elegant in design, are an absolute joke in terms of their application to this problem. And the focus on cryptography ignores both the very real risk of scope creep (e.g. geographic creep) and the potential for abuse of the process (as opposed to the individual techniques).
I didn’t know until reading the CSAM Detection white paper that they’re not even comparing directly against the NCMEC database; instead it’s a database created by Apple from images provided by NCMEC (and apparently other organizations, but I haven’t found those names yet).
In addition, that database, once loaded onto my phone, will be updated as a black box during unspecified iOS updates. Neither Apple nor we as end users will have any way of knowing what is in this “NeuralHash” database.
And while I’m sure Apple will have a process to ensure the integrity of that pipeline (i.e. that [set] can be used to correctly and accurately reproduce pdata), this is a very easy spot where a bad actor OR a bad government could force Apple (or, even easier, just bribe a few key employees) to introduce non-CSAM material or otherwise abuse the process (not the protocols or techniques), with neither NCMEC / the other orgs nor us as end users being any the wiser.
At minimum, this could be used to justify immediately, and very likely permanently, locking someone out of their account. Bribe or co-opt just one or two people in the process, whether at a law enforcement org or at NCMEC / the “other orgs”, and you can easily destroy someone’s life.
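To make the integrity worry concrete: the kind of safeguard I’d want is a published, auditable digest of the database, so that any silent change to the on-device set would at least be detectable by outside parties. Here’s a minimal toy sketch of that idea in Python; the function and field names are mine and this is not anything Apple has described, just an illustration of what end-to-end verifiability could look like.

```python
# Toy illustration only (not Apple's format): an auditable digest over a hash database.
# If Apple, NCMEC, and independent auditors each published this root digest for a given
# database version, a silent black-box change to the on-device set would be detectable.
import hashlib

def database_root_digest(entries: list[bytes]) -> str:
    """Digest over the sorted entries: order-independent, and it changes
    whenever any entry is added, removed, or altered."""
    h = hashlib.sha256()
    for entry in sorted(entries):
        h.update(hashlib.sha256(entry).digest())
    return h.hexdigest()

# Hypothetical entries standing in for perceptual-hash values.
published_version = [b"hash-aaa", b"hash-bbb", b"hash-ccc"]
tampered_version = published_version + [b"hash-quietly-injected"]

print(database_root_digest(published_version))
print(database_root_digest(tampered_version))   # differs, so the injection is visible
```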
I also did not then, do not now, and never will consent to Apple assigning a persistent “safety voucher” to my images on-device. I could understand a hash being persistently attached, but not a safety voucher. Putting aside the NeuralHash database, these vouchers are even more problematic to me precisely because of (NOT despite) the TSS technique. Frankly, this entire process seems explicitly designed not for privacy but to allow or enable entrapment and blackmail. I do applaud the steps forward in applying these techniques in the interest of user safety (I really do, and with modifications I can see this process working; it could, for instance, be used to combat revenge porn!), but the process as it currently stands is ripe for abuse.
More to the point, the only non-malicious reason I can ascribe to Apple’s decision to run this cryptography on-device is that, while the approach really is elegant in design, every one of these steps (the PSI, the TSS, the “safety vouchers”, and the NeuralHash matching) should be server side, and the only reason Apple isn’t doing that is performance and resource constraints. In other words, Apple is deliberately crippling its own commitment to security and privacy because iCloud uploads would take longer if all of this were done server side. The fact that the synthetic vouchers are server side makes me a bit skeptical of this argument, though, but I’m definitely not an expert.
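For what it’s worth, here’s how I read the client-side flow, boiled down to a toy Python sketch. Every name in it is mine, the hash is a stand-in for the proprietary NeuralHash, and the PSI and TSS steps are empty placeholders rather than real cryptography; the point is only to show which steps currently run on the device and could, in principle, run on the server instead.

```python
# A very rough, toy-level sketch of the client-side flow as I understand it.
# All names are hypothetical; the real PSI and TSS steps are actual cryptography,
# not the placeholders below.
import hashlib
import os
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    match_token: bytes           # result of the PSI-style blinded lookup (placeholder)
    encrypted_derivative: bytes  # encrypted "visual derivative" payload (placeholder)
    secret_share: bytes          # one threshold share of the account key (placeholder)

def perceptual_hash(image_bytes: bytes) -> bytes:
    # Stand-in for the proprietary NeuralHash. A real perceptual hash is robust to
    # resizing/recompression; an ordinary cryptographic hash like this one is not.
    return hashlib.sha256(image_bytes).digest()

def psi_lookup(image_hash: bytes, blinded_db: frozenset[bytes]) -> bytes:
    # Stand-in for the private-set-intersection step. The placeholder ignores the
    # blinded database; the real construction produces output the server can only
    # interpret if the hash was actually in the database.
    return hashlib.sha256(image_hash + b"|psi-placeholder").digest()

def make_voucher(image_bytes: bytes, blinded_db: frozenset[bytes]) -> SafetyVoucher:
    # These steps all run on-device today; my point above is that, elegance aside,
    # they could run server side, at the cost of slower iCloud uploads.
    h = perceptual_hash(image_bytes)
    return SafetyVoucher(
        match_token=psi_lookup(h, blinded_db),
        encrypted_derivative=os.urandom(32),  # placeholder ciphertext
        secret_share=os.urandom(32),          # placeholder share
    )

voucher = make_voucher(b"example image bytes", blinded_db=frozenset())
print(voucher)
```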
But to be a bit more cynical: this second “CSAM detection” technique, the notable lack of transparency around its rollout, the obvious potential for it to be exploited by bad governments (or even just bad actors within good governments; FISA abuse happened here in the USA, after all), and the fact that very, very few people use iCloud to swap CSAM in the clear all make me think this process has been deliberately designed to be exploited, but with maximum plausible deniability for every party involved (an Apple employee or contractor? someone at NCMEC or another org? local law enforcement? someone else?), and that someone(s) completely blindsided Tim Cook and senior management. There is no way Tim Cook would have signed off on this if the potential for abuse had been made clear to him. And I say this as someone who is not really a “fan” of his.
So this is my long-winded way of saying that the obfuscation of the fact that NeuralHash is a proprietary Apple database, especially the total lack of discussion around the black-box updating of that database and the equally black-box “safety vouchers”, together with what I assume is the complete and total unacceptability of this process to Apple’s senior management, are why I believe this was never going to be announced to end users and that someone forced Apple to disclose all three child safety “features”.
I may also be reading WAY too much into this, but there is a very obvious typo in the “CSAM Detection” white paper (it should be a 9 instead of a 10 in the TSS section), and certain aspects of the risk assessments don’t match my general expectations (though, again, I’m not an expert).
In fact, the risk assessments explicitly note that we as end users will see no impact, will have no insight into the process, the techniques, or the protocols, and will be entirely unaware of it all, and these points are cited as benefits!
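Back to that TSS typo for a second: for anyone who hasn’t met threshold secret sharing before, the 9-versus-10 number is exactly the parameter that matters. A toy Shamir-style example (tiny field, nothing like Apple’s actual construction) shows why: with a threshold of t, any t shares reconstruct the secret, while t-1 shares reveal essentially nothing, so whether the paper means 9 or 10 changes how many matching vouchers are needed before anything becomes readable.

```python
# Toy Shamir-style (t, n) threshold secret sharing over a small prime field.
# Illustration only; real deployments use vetted libraries and different parameters.
import random

P = 2**61 - 1  # prime modulus for the toy field

def split_secret(secret: int, threshold: int, num_shares: int) -> list[tuple[int, int]]:
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret = 123456789
shares = split_secret(secret, threshold=10, num_shares=30)

print(reconstruct(shares[:10]) == secret)  # True: the threshold count of shares suffices
print(reconstruct(shares[:9]) == secret)   # False: one share short reveals nothing useful
```

So a difference of one in that number isn’t cosmetic; it is the entire margin on when the decryption gate opens.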
Sorry for brain-vomiting on you, but I’m just absolutely floored by what happened. I may write my “baby’s first scathing letter to Tim Apple” tomorrow. Thank you for responding, by the way; it was very helpful!
Expanded Protections for Children (www.apple.com)