Okay, honest question to those of you whose primary objection is that the hash database and matching software lives on your device…
Would you be okay with Apple checking your photos after they are uploaded to iCloud? Even if this means you can never have end-to-end encryption with iCloud? Or do you believe that Apple (and other service providers) should have no duty whatsoever to monitor its servers for CSAM?
Most everyone has already said this: check the photos in the cloud; it's their property, so they can do what they want with it. Not on the local phone.
Google, MS, and Facebook all do it that way already. What was so wrong with that solution? Apple was a laggard and has already failed the children, but invading the boundary of the user's phone crosses a bright line, and it doesn't make this any better than the existing solutions.
You might be OK with all of the precautions Apple says it is taking, but why is there such a rush? If they got this so right, where is the full transparency? Where is the extended beta test? Why haven't they even tested this on their own employees?
Crossing this Rubicon is a legal and technical minefield, and the arrogance Apple has displayed gives me no confidence that they really stopped to consider how this would be received, or that they even planned it all correctly.
Why didn't they consult with the EFF and the ACLU before launching this?
The amount of cheerleading that goes into this is troubling. Yes, CSAM is bad, it's evil, but let's also be cognizant of what society does to people labelled as child molesters and predators.
1. There is very little research done on child predators and molesters. Treatments available to them are almost nil. Why? Because CSAM is bad, m'kay?
As a society, if we (the West) cannot and will not look for effective ways to reduce negative outcomes with child predators, we may as well just put them on an island to die or put them out of their misery. Additionally, many people who have issues with CSAM were victims of sexual predators when they were children as well. How can any legitimate clinician or researcher even conduct research into CSAM if everything about it is flagged and reported?
2. The definition of CSAM is not universal. In Canada, depictions of CSAM can include drawings or illustrations, including anime and manga. The United States started with a similar definition, but it was struck down by the Supreme Court. However, that doesn't mean it couldn't be expanded again.
3. The NCMEC is a quasi-governmental organization, and it provides the hashes that Apple will use. Is there any oversight or review of this database? Like a no-fly watchlist, a hash that gets on there might be difficult to remove. And how will this database be handled in other countries? Apple says the hash database is part of the operating system and the device will not query the NCMEC for updates, yet Apple also deploys a universal OS image across many countries. Will Canada and Australia accept NCMEC hashes or roll their own databases? Is China going to say it has its own version of the NCMEC database and Apple must deploy it or else? How could Apple refuse such a 'reasonable' request?
4. The NCMEC image hashes can (it is claimed) be reversed to produce small greyscale thumbnails. So in effect, every iOS 15 device is carrying around a few thousand CSAM images?
5. Apple claims the odds of falsely flagging an account are 1 in a trillion, yet they require 30 images to hit positively before they do anything about it. That really doesn't sound right. By law, ONE CSAM image should be reported to the authorities, not 30. Holding out for 29 more CSAM images before reporting sounds like a violation of the law. How generous that they are allowing 30 images; it almost seems like they have no faith that their system is going to work well on day 1. (See the rough arithmetic sketch after this list.)
6. What happens when an image hash comes back positive? If the count is under the 30-image threshold, does the photo get uploaded anyway? Does it get held?
7. The scanner, as deployed, is NOT working on behalf of the owner. I could understand if, when a positive hash is identified, it told the owner; at least they could choose not to upload the image, or ask Apple/NCMEC to review it. But the system doesn't do that. If it silently increments the counter until it hits 30, the owner of the phone has no way of knowing about, or necessarily stopping, the process. The scanner isn't designed to audit a library for the benefit of the user, so they can remove any illegal images or see whether there is a collision. It's designed to report on the owner to Apple, then the NCMEC, then the police.
A phone's owner should be able to review and stop such a process in order to have some level of control over potentially bad outcomes like this.
8. The scanning process does not alert you at all about the counter or positive hits. The only feedback you'll probably get is the cops knocking on your door. After that you'll be required to defend yourself in a court of law, but neither Apple nor the NCMEC will make themselves available for cross-examination. This is highly dangerous for any defendant. Getting the system 'right' is paramount, and rushing is not a good idea.
9. CSAM images are supposed to be reported to the NCMEC or law enforcement. If a positively hashed image is given to Apple for review, are they expecting to see CSAM? Technically that's illegal; they aren't law enforcement officers. The reason Facebook etc. are allowed to review this content is that they aren't expecting CSAM, they are reviewing images wholesale... which is kind of problematic too, unless Apple is assuming the role of government now.
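A side note on the arithmetic in point 5, since the two numbers aren't measuring the same thing: the "1 in a trillion" figure is Apple's claimed per-account rate, while the 30-image threshold is the mechanism that's supposed to produce it from a much worse per-image rate. Here's a rough back-of-the-envelope sketch in Python. The per-image false-match rate and library size are completely made up (Apple hasn't published a per-image figure), so treat it as an illustration of the trade-off, not a statement of what Apple's numbers actually are:

# Back-of-the-envelope: how a 30-match threshold turns a noticeable
# per-image false-match rate into a vanishingly small per-account rate.
# Both inputs below are ASSUMED for illustration, not Apple's numbers.
from math import comb

p = 1e-6        # assumed chance that one innocent photo false-matches
n = 10_000      # assumed number of photos in one iCloud library
t = 30          # Apple's stated review threshold

# Chance that at least one photo in the library false-matches:
p_any = 1 - (1 - p) ** n
print(f"P(>= 1 false match):  {p_any:.2e}")   # ~1e-2, i.e. about 1 library in 100

# Chance of 30 or more false matches (binomial tail; the terms shrink so
# fast that summing a short window past the threshold is enough):
p_thirty = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(t, t + 50))
print(f"P(>= {t} false matches): {p_thirty:.2e}")  # well below 1e-90, effectively never

None of that excuses holding 29 known hits without reporting; it just shows where a headline number like "one in a trillion" can come from when the per-image matching is nowhere near that good.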
This is in addition to all the technical concerns that have already been raised: attack surface, hijacking, forced collisions, API abuse, use of the phone camera while the phone is locked, foreign government spying, and high-level phone hacks.
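And on the forced-collision point specifically: these are perceptual hashes, not cryptographic ones. They are deliberately built so that visually similar images (recompressed, resized, lightly edited) produce the same or nearly the same hash, and that tolerance is exactly the opening the published NeuralHash collision demos reportedly exploited. Here's a toy sketch using a plain average hash, which is not NeuralHash and is far cruder, just to make the property concrete. It assumes the Pillow library is installed:

# Toy perceptual hash (average hash). NOT NeuralHash -- just a minimal
# stand-in to show the property that matters: small pixel changes leave
# the hash essentially unchanged, which is what makes crafting collisions
# against a target hash feasible in the first place.
from PIL import Image, ImageFilter  # assumes Pillow is installed

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to size x size greyscale; each bit = (pixel > mean)."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Self-contained demo: a synthetic horizontal gradient and a blurred copy.
base = Image.new("L", (64, 64))
base.putdata([4 * x for y in range(64) for x in range(64)])
tweaked = base.filter(ImageFilter.GaussianBlur(radius=1))

# The pixels changed, but the 64-bit hashes should come out identical
# or within a bit or two of each other.
print(hamming(average_hash(base), average_hash(tweaked)))

The point isn't that average hash equals NeuralHash; it's that any matcher tolerant enough to survive recompression and cropping is, by construction, tolerant enough for someone to nudge an unrelated image toward a flagged hash.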
What is the need for this rush, the surprise (not announcing it at WWDC), and the limited access? Apple has sat on its hands for years about this, and now this invasive system is going to be deployed in a month or two?
Here's an example of why deploying systems without extended review and testing is not a good idea:
Stanislav Petrov tells the BBC how a decision he made 30 years ago may have prevented a nuclear war.
That system had 28-29 levels of security to pass too....