Making a long story short: Apple's proposed CSAM scanning runs locally on your device, whether you like it or not, whenever you upload an image to iCloud. It will produce both false positives and false negatives; false positives would be reviewed by a human being who would see your private pictures; and the matching could easily be circumvented by editing images. Also, any system that can be used to identify CSAM material can also identify political material like flags, memes, written statements and even words in audio files, and could therefore be abused. Apple's system opens the door to real-time local monitoring/surveillance by machine algorithms - a digital Big Brother in your pocket. Honestly, I don't think Apple should do any scanning unless there is a search warrant, and then it should do it server-side, not on the iPhone. Otherwise it is a slippery slope. Just my 2 cents.
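For context on why both the false positives and the easy circumvention happen: this kind of scanning typically compares perceptual hashes of images against a database. The sketch below uses a toy average hash purely as an illustration - it is not Apple's actual NeuralHash algorithm, and the images and thresholds are made up - but it shows how such matching tolerates some edits (causing collisions and false positives) while a trivial edit like a flip defeats it entirely.

```python
# Toy perceptual "average hash" - an illustration only, NOT Apple's NeuralHash.
# Each bit records whether a pixel is brighter than the image's mean.

def average_hash(pixels):
    """pixels: 2-D list of grayscale values (0-255). Returns a list of bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; small distance = 'match'."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "image" and a lightly edited copy (every pixel +10 brightness).
img = [[ 10,  20, 200, 210],
       [ 15,  25, 190, 205],
       [220, 230,  30,  40],
       [215, 225,  35,  45]]
edited = [[p + 10 for p in row] for row in img]

h1, h2 = average_hash(img), average_hash(edited)
print(hamming(h1, h2))  # 0: uniform brightening leaves the hash unchanged

# But a simple vertical flip changes every bit, so the "match" is lost -
# which is why editing images can defeat this kind of scanning.
flipped = img[::-1]
print(hamming(h1, average_hash(flipped)))  # 16: all 16 bits differ
```

The same tolerance that lets the hash survive a brightness tweak is what makes unrelated images occasionally collide (false positives), and the same fragility to flips and crops is what lets real offenders slip through (false negatives).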
I do agree, though, that this possible new emergency feature for iPhones would be amazing and potentially life-saving. I would be tempted to break my boycott if Apple put emergency satellite communication in the next iPhone.
I disagree, because I think the trade-off in privacy is worth it to protect children. We make lesser trade-offs every day just for “free” services.
But I understand your considered point of view. I care a great deal about privacy, and loathe Facebook and Google and the data-harvesting industry. I have friends who couldn’t care less, and it upsets me. So I also understand there must be others like you who care more about other types of scanning which don’t bother me.
These are interesting debates, and finding the right balance for all of us is hard. Thanks for sharing your thoughts.