Who's not going to upgrade their Apple devices because of this CSAM scanning?
I have an Android phone on order. And no, I don't trust Google as far as I can throw them, but they don't do on-device scanning (yet); Apple is leading the way. But at least that Android phone is truly different (a Z Flip 3), unlike years of "faster" and "better camera." I was already getting tired of that; this just pushed me WAY over the edge.
If they don’t cancel this I’m seriously going to have to look at alternative products, which saddens me.
Screw them - I'm going to save approx. £1,000 in annual expenses and get a feature phone.
Which “alternative products”? Android flavours? If you feel strongly about it, why not do something? You could write an email to Cook to voice your concerns and ask your social media circle to do the same. This might actually help.
This is why I think whoever recommended this to TC was an idiot. This renders "What happens on your iPhone, stays on your iPhone" fake news.

I doubt that's the reason. I believe they actually believe they are using their market power to do something altruistic and good: snuff out CP.
It is conceivable if China got their hands on the key.
Respected university researchers are sounding the alarm bells over the technology behind Apple's plans to scan iPhone users' photo libraries for CSAM, or child sexual abuse material, calling the technology "dangerous."
Jonathan Mayer, an assistant professor of computer science and public affairs at Princeton University, and Anunay Kulshrestha, a researcher at the Princeton University Center for Information Technology Policy, penned an op-ed for The Washington Post outlining their experiences with building image detection technology.
The researchers started a project two years ago to identify CSAM in end-to-end encrypted online services. The researchers note that, given their field, they "know the value of end-to-end encryption, which protects data from third-party access." That concern, they say, is why they are horrified that CSAM is "proliferating on encrypted platforms."
Mayer and Kulshrestha said they wanted to find a middle ground for the situation: build a system that online platforms could use to find CSAM and protect end-to-end encryption. The researchers note that experts in the field doubted the prospect of such a system, but they did manage to build it and in the process noticed a significant problem.
Since Apple's announcement of the feature, the company has been bombarded with concerns that the system behind detecting CSAM could be used to detect other forms of photos at the request of oppressive governments. Apple has strongly pushed back against such a possibility, saying it will refuse any such request from governments.
Nonetheless, concerns over the future implications of the technology being used for CSAM detection are widespread. Mayer and Kulshrestha said that their concerns over how governments could use the system to detect content other than CSAM had them "disturbed."
Apple has continued to address user concerns over its plans, publishing additional documents and an FAQ page. Apple continues to believe that its CSAM detection system, which will occur on a user's device, aligns with its long-standing privacy values.
Article Link: University Researchers Who Built a CSAM Scanning System Urge Apple to Not Use the 'Dangerous' Technology
But it's not using AI. It's using hash matching followed up by manual review.

Privacy matters; the switch to using AI to go through each and every photo/video in the cloud is a dangerous slippery slope. Gone will be privacy, and different governments can force more of this. Apple ought to stop this activity immediately.
and bring out a mini version...

Shame it wasn't called HomeCSAM. Then they'd drop it immediately.
All good and fair; you are describing one feature that Apple plans to add to iOS 15, but the rage is about a completely different feature, which Apple unfortunately presented at the same time, leaving a lot of people confused.

From what I've read here: they don't. The problem with their service was that it used an external server to scan for content. That's not the case in Apple's implementation at all. All communication is completely end-to-end encrypted. Malicious users can still send offensive material to whomever they want, and no one except the receiving user will know about it.

However, with the new service, if parents choose to enable it, children's accounts will scan received images after decrypting them but before displaying them and present the minor with a content warning. If the kid is below the age of thirteen, the parents can choose to get a warning that improper material was sent to their child. None of this is enabled by default. No external parties are alerted; the service (iMessage) and its provider (Apple) don't get a notification at all. So the E2E messaging is still safe, but children get an optional layer of protection from creeps. Also, older minors can avoid unsolicited dick pics without their parents knowing about it (just in case some moronic parents try to blame their kids merely for receiving that kind of harassment; sadly, victim blaming is not unheard of).
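To make that description concrete, here is a minimal sketch of the decision flow as the comment above describes it. The type names, the age cutoff handling, and the `looksExplicit` classifier stand-in are hypothetical illustrations, not Apple's actual iMessage code:

```swift
import Foundation

// Hypothetical model of the opt-in communication-safety flow described above.
// Nothing here mirrors Apple's real implementation; it only encodes the stated rules.
struct ChildAccount {
    let age: Int
    let safetyFeatureEnabled: Bool   // enabled by the parents; off by default
}

enum IncomingImageAction {
    case displayNormally
    case warnChild                   // blur and warn; the child can still choose to view
    case warnChildAndNotifyParents   // parental notification only for children under 13
}

/// `looksExplicit` stands in for an on-device check run after decryption and
/// before display. In the described design, no result leaves the device and
/// neither Apple nor the message sender is ever notified.
func handleIncomingImage(for account: ChildAccount, looksExplicit: Bool) -> IncomingImageAction {
    guard account.safetyFeatureEnabled, looksExplicit else {
        return .displayNormally
    }
    return account.age < 13 ? .warnChildAndNotifyParents : .warnChild
}
```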
They clearly don't know that Cloudflare has been offering "fuzzy hash" CSAM scanning to all its customers for nearly two years, and it seems neither do you: https://blog.cloudflare.com/the-csam-scanning-tool/

They clearly don't know how the technology works... oh wait.
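For readers unfamiliar with the term, a "fuzzy" (perceptual) hash is compared by distance rather than exact equality, so a resized or re-compressed copy of an image can still match. The sketch below only illustrates that idea; it is not Cloudflare's tool or Apple's NeuralHash, and the 64-bit hash values and threshold are made up:

```swift
import Foundation

// Illustration of fuzzy-hash matching: two perceptual hashes "match" when their
// Hamming distance is small. The hash values and the threshold are invented examples.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

func isFuzzyMatch(_ imageHash: UInt64, against knownHashes: [UInt64], maxDistance: Int = 5) -> Bool {
    knownHashes.contains { hammingDistance(imageHash, $0) <= maxDistance }
}

// A hash that differs from a known one in only two bits still counts as a match.
let known: [UInt64] = [0xDEAD_BEEF_CAFE_F00D]
let slightlyEdited: UInt64 = 0xDEAD_BEEF_CAFE_F00D ^ 0b101   // flip two bits
print(isFuzzyMatch(slightlyEdited, against: known))           // prints "true"
```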
A dumb phone - something that makes phone calls and the occasional text, with a battery life measured in weeks. Then buy a decent camera every 5 years or so...

What's a feature phone?
Let's say that will work (I'm not sure of the intricacies or reality of it). All it will take now is for the malicious app to do those exact same things and trigger some sort of communication to the authorities that you are in possession of CSAM material. Said malicious app could also post messages as you on various forums to get you in trouble with authoritarian regimes. The issue would exist whether there was "fuzzy hashing" or not.

All it will take to frame someone is one malicious app with some encrypted CSAM hidden in it calling UIImageWriteToSavedPhotosAlbum() to put 30 incriminating images into someone's photo library. If they have iCloud Photos enabled, the on-device scanning will then pick those images up and report the device's owner to Apple despite that person being innocent.
Worse still, if Apple is declared a monopoly and forced to open up the walled garden, it will be much easier to get apps like this onto devices from cowboy app stores, as there would be none of the checks Apple performs on the App Store.
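For reference, the call named in the framing scenario above is a real UIKit function. Here is a minimal sketch of how any app with photo-library add access writes an image into the user's library; the asset name is a placeholder, and on current iOS the app must declare NSPhotoLibraryAddUsageDescription and the user must grant permission:

```swift
import UIKit

// Minimal sketch: writing an image into the user's photo library with the UIKit
// function mentioned above. Requires the NSPhotoLibraryAddUsageDescription key
// in Info.plist and user consent; "bundled-picture" is a hypothetical asset name.
func saveToPhotoLibrary() {
    guard let image = UIImage(named: "bundled-picture") else { return }
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
```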
Well - I guess we all could... tcook@apple.com would be my guess. Of course he most likely will not read it. But perhaps when he hears that his inbox has melted, it may cause a pause?

With respect, he will never read it. There are hundreds of thousands, if not millions, of people vocally opposing this, and prominent people, groups, researchers, and foundations are already VERY vocal in asking Apple to stop, with some big names attached. Apple still has not expressed any intent to change course.
Some random guy writing a letter isn't going to change Apple's mind here. It's a waste of time, honestly.
There is the Librem Phone that runs on PureOS [Linux]. They even have a US-made version with, I believe, a vetted and secure supply chain.

True, but at least you won't get Craig and Tim lying to you on video with smug looks.
"A foreign government could, for example, compel a service to out people sharing disfavored political speech. That's no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials."
Without taking a side in this, none of the things in the above warning could be forced on Apple with what Apple says it is doing. Apple has a list of hashes and checks images uploaded to iCloud to see whether they match those hashes. WeChat's content matching would require text content analysis, which is totally different. The India example appears to be the same kind of thing (or perhaps a requirement for human pre-screening, which is further still from Apple's system). And the Russia example is one where Russia identified posts or pictures and demanded they be removed, which is absolutely not the same thing.
So... this article seems to be people urging Apple not to proceed with its plans, based on warnings that have little to do with what Apple is actually doing.
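To make that distinction concrete, here is a minimal sketch, with hypothetical names and an empty placeholder hash list, contrasting the two capabilities: matching uploads against a fixed list of image hashes versus analyzing the content of messages. The exact-hash check is a simplification of the perceptual matching Apple describes, and only the second capability could be pointed at "disfavored political speech":

```swift
import Foundation
import CryptoKit

// Hypothetical illustration of the distinction drawn in the comment above.
// Matching against a provided hash list can only answer "is this a known image?";
// it cannot tell what an arbitrary photo or message says.
let knownImageHashes: Set<Data> = []   // placeholder for a supplied hash database

func matchesKnownList(_ imageData: Data) -> Bool {
    knownImageHashes.contains(Data(SHA256.hash(data: imageData)))
}

// Content analysis, by contrast, inspects the material itself, e.g. scanning
// message text against a blocklist. That is a different capability entirely.
func containsDisfavoredSpeech(_ message: String, blocklist: [String]) -> Bool {
    blocklist.contains { message.localizedCaseInsensitiveContains($0) }
}
```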
They're not?

The next thing they would do is accuse these researchers of being pedos or anti-vaxxers.