They addressed this in their material. See pages 5 and 6 of their support article here, but I've copied the relevant paragraphs below.
Also covered below: all reports require human review. So if hashes of MAGA hats or firearms were somehow injected into the database, a human reviewer would see the flagged images, verify they are not child pornography, and decline to report them to the authorities. That would also raise red flags that someone is trying to manipulate the database, which Apple would report while tightening its protocols. Finally, as also covered below, there is no automated reporting to the authorities.
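To make that flow concrete, here is a minimal sketch (my own illustration with made-up names, not Apple's code) of a pipeline where nothing is reported without a human confirming the match first:

```python
# A minimal sketch of the review flow described above: nothing reaches
# NCMEC automatically, and a human gate sits in front of every report.
# All names and structures here are my own illustration, not Apple's.

from dataclasses import dataclass
from typing import List


@dataclass
class FlaggedAccount:
    account_id: str
    matched_derivatives: List[bytes]  # low-res visual derivatives unlocked after the threshold


def human_review_confirms_csam(account: FlaggedAccount) -> bool:
    """Stand-in for a human reviewer looking at the matched images.

    If non-CSAM hashes (MAGA hats, firearms, etc.) were somehow injected,
    the reviewer would see harmless images here and return False.
    """
    return False  # placeholder: a person, not code, makes this call


def file_report_to_ncmec(account: FlaggedAccount) -> None:
    print(f"Report filed for {account.account_id} after human confirmation.")


def escalate_suspected_database_tampering(account: FlaggedAccount) -> None:
    print(f"Non-CSAM matches on {account.account_id}: investigate possible hash-list manipulation.")


def process_flagged_account(account: FlaggedAccount) -> None:
    # There is no automated reporting path; a report is only filed
    # after a human reviewer confirms the matches are known CSAM.
    if human_review_confirms_csam(account):
        file_report_to_ncmec(account)
    else:
        escalate_suspected_database_tampering(account)
```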
“This was proven by several independent researchers.” This was
hypothesized (not proven) by those same researchers using other hash systems. Apple has stated it is not using that exact version of the hash, and those articles note that the researchers were not testing against the same system either. Keep in mind that those researchers also stand to gain from discrediting Apple, since it adds credibility to them or the companies they represent.
So while I agree their findings should be considered, they are not testing the same version that their arguments rest on. That's like saying a cup with holes at the bottom will leak water (hash collisions), so Apple's cup, which only has holes near the top (the 30-match threshold), will also leak water at the bottom. I do think it would be fair to give security experts access to Apple's protocols and hash system for verification.
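To put the cup analogy in plainer terms: a handful of forged collisions does nothing on its own, because nothing surfaces until an account crosses the match threshold, and even then a human reviews it. A rough sketch, using 30 as the assumed threshold from Apple's public statements and my own simplified counting logic (Apple's published design describes threshold secret sharing rather than a plain counter):

```python
# Rough illustration of a match threshold: individual collisions do nothing
# on their own; only when an account crosses the threshold does anything
# become visible for human review. The value 30 and the counting logic are
# my simplification of Apple's public description, not their implementation.

MATCH_THRESHOLD = 30  # assumed value from Apple's public statements


def account_surfaces_for_review(matched_hash_count: int) -> bool:
    """An account only becomes visible to reviewers past the threshold."""
    return matched_hash_count >= MATCH_THRESHOLD


# A few adversarially crafted collisions (the "holes at the bottom of the
# cup" in the analogy) are not enough to surface an account:
print(account_surfaces_for_review(3))    # False
print(account_surfaces_for_review(29))   # False
print(account_surfaces_for_review(30))   # True, and even then human review follows
```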
Quoted from Apple’s public materials:
Security for CSAM detection for iCloud Photos
Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?
Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by at least two child safety organizations. There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos. In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.
Could governments force Apple to add non-CSAM images to the hash list?
No. Apple would refuse such demands and our system has been designed to prevent that from happening. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. The set of image hashes used for matching are from known, existing images of CSAM and only contains entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions. Apple does not add to the set of known CSAM image hashes, and the system is designed to be auditable. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system identifies photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.
Can non-CSAM images be “injected” into the system to identify accounts for things other than CSAM?
Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by at least two child safety organizations. Apple does not add to the set of known CSAM image hashes. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design. Finally, there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. In the unlikely event of the system identifying images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
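The "two or more child safety organizations operating in separate sovereign jurisdictions" requirement quoted above amounts to keeping only hashes that were independently submitted from at least two different jurisdictions. A rough sketch of that idea, with data structures and organization names of my own invention:

```python
# Rough sketch of the "two or more organizations in separate jurisdictions"
# rule quoted above: a hash only enters the matching set if independently
# submitted by organizations in at least two distinct jurisdictions.
# Data structures and names are my own illustration, not Apple's process.

from typing import Dict, Set


def build_matching_set(submissions: Dict[str, Set[str]],
                       jurisdictions: Dict[str, str]) -> Set[str]:
    """Keep only hashes submitted from at least two distinct jurisdictions."""
    matching_set = set()
    all_hashes = set().union(*submissions.values()) if submissions else set()
    for h in all_hashes:
        submitting_jurisdictions = {
            jurisdictions[org] for org, hashes in submissions.items() if h in hashes
        }
        if len(submitting_jurisdictions) >= 2:
            matching_set.add(h)
    return matching_set


# Example: a hash pushed by a single government-aligned organization
# never makes it into the on-device database.
submissions = {
    "NCMEC": {"hashA", "hashB"},
    "OtherOrg": {"hashA", "hashC"},   # hypothetical second organization
}
jurisdictions = {"NCMEC": "US", "OtherOrg": "UK"}
print(build_matching_set(submissions, jurisdictions))  # {'hashA'} only
```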