What I’m afraid of is that I have tons and tons of family pictures from vacations over the years, and my kids got in pools and on beaches, obviously, and I’m scared that CSAM detection or whatever the hell it’s called would easily mistake them for child P…n.
Then you should educate yourself about how the mechanism works: it has a table of hashes of specific CSAM pictures that are already known to be circulating among pedophiles. If you have one or more of those pictures on your phone, it will throw up a red flag. So unless you've been taking pics of your kids in the pool and uploading those pics to pedophile forums, you should have absolutely zero problems with this system mistakenly red-flagging your pics. It's not looking at the content of the pics; it's looking for exact matches with pics already known to be bad.
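If it helps to see the shape of it, here's a rough sketch of that lookup in Python - ordinary SHA-256 standing in for whatever image-hashing scheme the real system uses, with made-up placeholder values in the table:

```python
import hashlib

# Placeholder values standing in for the table of hashes of known CSAM
# images that ships with the system - not real hashes of anything.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03",
}

def image_hash(image_bytes: bytes) -> str:
    """SHA-256 here as a stand-in for whatever image hash the real system uses."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_csam(image_bytes: bytes, known_hashes: set) -> bool:
    """Flag only on an exact match against the table of already-known images.

    A brand-new photo of your kids in the pool hashes to a value nobody
    has ever seen before, so it can't be in the table and never matches.
    """
    return image_hash(image_bytes) in known_hashes

# Example: a family photo that exists nowhere else will not be flagged.
# with open("pool_photo.jpg", "rb") as f:
#     print(is_known_csam(f.read(), KNOWN_BAD_HASHES))   # -> False
```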
(By the way, "CSAM" is the name used for the bad content itself, "Child Sexual Abuse Material", not for any of the various mechanisms that have been designed to detect it.)
They also have an entirely different mechanism, which can optionally be turned on, that looks at the content of pics sent to kids, using machine learning algorithms. If, say, your 8yo daughter's iPad detects a dick pic arriving in the Messages app, it'll pop up a message on her device only, saying something along the lines of, "this picture appears to show a sensitive part of the body, you may want to check with mom or dad before viewing it" - it doesn't report anything to the parents or to Apple, it doesn't keep the kid from viewing the picture, it just gives the kid an age-appropriate dialog-box equivalent of the "NSFW" tag that adults might find on a pic or forum post.
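To make the "nothing leaves the device" point concrete, here's a rough sketch of that flow - the classifier, the dialog function, and the threshold are all hypothetical stand-ins, not Apple's actual API:

```python
SENSITIVE_THRESHOLD = 0.9   # illustrative cutoff, not Apple's actual value

def handle_incoming_image(image_bytes: bytes, nudity_score, show_dialog) -> bool:
    """Decide, entirely on the device, whether to show the warning.

    `nudity_score` stands in for the on-device ML classifier and
    `show_dialog` for the Messages UI - both are hypothetical here.
    Returns True if the warning was shown.
    """
    if nudity_score(image_bytes) >= SENSITIVE_THRESHOLD:
        show_dialog("This picture appears to show a sensitive part of the "
                    "body. You may want to check with mom or dad before "
                    "viewing it.")
        return True
    return False

# Note what is *not* in this flow: no report to the parents, no report to
# Apple, no network call at all, and the picture itself stays viewable.
```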
Apple made a rather big PR mistake by announcing these two separate mechanisms on the same day, and people started conflating the two.
People also got upset that the CSAM detection mechanism was "scanning all their pictures" - putting aside that "computing a hash" is entirely different from what people think of when you say "scanning"... guess what, code in iOS is already scanning all your pictures (and has been, for years) in order to locate/tag human faces as well as pets/animals, objects, etc. - if you go into Photos and search for "dog" or "car", it'll show you pictures that contain dogs or cars. It can do that, at a reasonable speed, because it has already built an index as the pictures came in.
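That kind of index is the ordinary, boring sort of "scanning": classify each photo once when it arrives, then answer searches straight from the index. A toy version (the classifier is a hypothetical stand-in):

```python
from collections import defaultdict

# label -> set of photo IDs, built up incrementally as photos arrive.
photo_index = defaultdict(set)

def index_photo(photo_id: str, image_bytes: bytes, classify) -> None:
    """Run the (hypothetical) on-device classifier once, at import time."""
    for label in classify(image_bytes):      # e.g. {"dog", "beach", "person"}
        photo_index[label].add(photo_id)

def search(label: str) -> set:
    """Answer 'show me my dog pictures' without re-scanning a single image."""
    return photo_index.get(label, set())
```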
And, again, the CSAM detection isn't "scanning" your pictures in the sense of looking at an image and trying to figure out what it is (this is the part that people worry is going to incorrectly flag pics of their kids in the pool) - the CSAM detection only computes a hash (a checksum) for the picture and compares that hash against a table of hashes of already-known-to-be-circulating CSAM pics - your random pool pics are not going to be listed in that table (unless you've been uploading them to pedophile forums).
People also complained, "but it's doing the CSAM detection on my phone!" - well, yes, yes it is; Apple decided doing everything on your phone was better at preserving your privacy. The alternative would be to scan your pics once they're uploaded to iCloud - which would mean that your pics on iCloud could not be encrypted - and that's what many other services are already doing. And, for those saying "well, but Apple shouldn't be doing any of this in the first place"... yeah, well, the government is working on making some sort of CSAM detection mandatory everywhere - so Apple devised the most privacy-protecting mechanism they could to deal with that.
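For what it's worth, that trade-off is easy to see in sketch form. This is emphatically not Apple's actual protocol (the real one is far more elaborate) - just an illustration, with hypothetical names, of why doing the check on the device lets the uploaded copy stay encrypted, while server-side scanning requires the service to be able to read your photos:

```python
import hashlib
from cryptography.fernet import Fernet   # pip install cryptography

def prepare_upload(image_bytes: bytes, known_hashes: set, key: bytes):
    """On-device flavor: check the hash locally, upload only ciphertext.

    The service storing the photo never sees its content; the user keeps
    the key. (Again, a sketch - Apple's real protocol is far more
    elaborate - but it shows why the check and encrypted storage can
    coexist.)
    """
    matched = hashlib.sha256(image_bytes).hexdigest() in known_hashes
    ciphertext = Fernet(key).encrypt(image_bytes)
    return ciphertext, matched

# Server-side flavor, by contrast, only works if the service can read the
# photos it stores - i.e. they can't be end-to-end encrypted - so that it
# can scan them there.
```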