Now there's an automatic path for reporting CP, which can be abused or, even more likely, have bugs.
It's not automatic. Humans get involved in the last steps, which are manual.
Unless I'm missing something here.
1) On-server searches by cloud hosts are as much of a black box for us users as on-device searches, so I'm not sure about your point about “hasn’t been reviewed by anyone”; the process doing it is just like any processing Apple already does to your Photo Library, and it's not spyware if they make you agree to it.
2) Common sense suggests that on-device pre-labeling is less invasive than straight on-server searches (since negative matches aren't even “bothered” once they're on the cloud); that would be enough of a “why”. Also, the very article you linked explains that these hash matches can nowadays sometimes be done on encrypted data as well (depending on the type of encryption), so I'm not sure what your point is about being able to encrypt before upload.
The iPhone user already has one crucial defense against this: just disable iCloud Photos (formerly known as iCloud Photo Library) if you feel Apple is overreaching. If anything, this whole drama is giving us more awareness about all of this.
Apple did admit that there is no silver bullet answer as it relates to the potential of the system being abused, but the company said it is committed to using the system solely for known CSAM imagery detection.
Turns out 1 in a trillion isn't quite as reassuring as you might think...
Yes, I absolutely prefer that, because it leaves me a choice of not uploading anything or encrypting it first where possible. Once they start snooping through my data on my own device I will no longer be in control, and the end-to-end encryption for services like iMessage that they market so proudly will be a farce.
With on-server scanning, the dissident in question simply doesn't upload to Google's cloud. Apple adding on-device scanning gives governments a way to monitor what is actually on the population's iPhones, iPads and Macs, whether they are connected to iCloud or not; all they have to do is tell Apple to do it, and Apple will do it if they want to continue to sell devices in that country (I don't see them giving up on China any time soon).
It's not a backdoor. A backdoor would require it to happen without either Apple or the user's knowledge.
I'm no Edward Snowden fan but this quote and the one from the EFF are perfect:
"No matter how well-intentioned, Apple is rolling out mass surveillance to the entire world with this," said prominent whistleblower Edward Snowden, adding that "if they can scan for kiddie porn today, they can scan for anything tomorrow." The non-profit Electronic Frontier Foundation also criticized Apple's plans, stating that "even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor."
I've read enough to understand this system won't false-flag personal photos on a phone, etc. I do trust that the only people getting busted for child abuse photos will be true criminals. However, the bottom line is that Apple has stated, over and over, that backdoors are unacceptable to them because of the potential for abuse. They have even defended not making backdoors when pressed to help the government recover data from known terrorists planning to attack US citizens on our home soil. Now, suddenly, backdoors are acceptable, and despite our universal revulsion for child abusers, this is the top of a very, very slippery slope where the protections so unique to Apple begin to slip away.
Still on Apple.com: "Apple has never created a backdoor or master key to any of our products or services. We have also never allowed any government direct access to Apple servers. And we never will."
Well, clearly you don't understand how these hashes work. Knowing the hashes in the database, you could easily manipulate cat pics to be flagged. Get enough flags and you have cops at your door. It could be your son's photos or your gf's photos…
Anybody here good with statistics and Boolean logic?
Let's suppose the threshold to raise a red flag is 10 offences, and the hashes have a one-in-a-million chance of a false positive (it's probably even smaller). What are the odds of getting 10 false positives by sheer bad luck?
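Here's a back-of-the-envelope answer using the commenter's own figures plus some assumptions of my own (a 20,000-photo library, a flat one-in-a-million false-positive rate per photo, and independence between photos), treating it as a binomial tail probability:

```python
from math import comb

p = 1e-6        # assumed per-photo false-positive rate ("1 in a million")
n = 20_000      # assumed photo-library size (illustrative only)
threshold = 10  # assumed number of matches needed to raise the red flag

# P(at least `threshold` false positives) under a Binomial(n, p) model.
# Terms past threshold + 40 are vanishingly small, so the sum is truncated there.
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(threshold, threshold + 40))
print(f"P(>= {threshold} false positives in {n} photos) ~ {prob:.2e}")
# roughly 3e-24 with these inputs
```

With those assumptions the odds come out around 3 in 10^24. Change the assumptions and the number moves, but it stays astronomically small as long as matches on different photos really are independent.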
But I think that we're all arguing at cross-purposes. Folk ask what is to stop this system being abused, and other folk reply that it can't be abused because the chance of a false positive is one in a trillion. That's not answering the question that's being asked.
What is to stop this system from being abused? If anyone knows then they should drop a note to Apple, because they've already admitted that it's open to abuse.
Apple can’t even stop scam apps in their App Store. So to answer your question, yes, I don’t trust them to be less incompetent in managing this new “feature” than the way they handle the App Store.
It’s nearly impossible to escalate to human review if you don’t have multiple pedo pics on your device.
Not understanding this is some anti-vax level of failing at statistics and computer science.
Now, one could still argue that the system is a black box and not fully transparent, but is Apple really incompetent enough to create a system that generates tons of false positives and unnecessary human reviews? With their track record of championing privacy?
You can't. I was speaking more in general terms. There are 3rd party tools and services that allow you to encrypt files before uploading them to the cloud (e.g. Cryptomator and various E2E encrypted file syncing services). For example, you could use Cryptomator to upload pictures to iCloud Drive in encrypted form. But if Apple starts scanning the files on our devices, that undercuts any kind of E2E encryption. That's what alarms many security experts about Apple's proposal.
So how would you encrypt the iCloud Photo Library today?
I can't really think of an easy way to do it and still have all the features intact.
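For what it's worth, the "encrypt before upload" idea for ordinary files (not the Photo Library, which as noted loses its features) is conceptually simple. A minimal sketch using the Python cryptography package, with placeholder file names:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key only on devices you control
f = Fernet(key)

# Encrypt locally; only the ciphertext is ever handed to the cloud provider,
# so nothing on the server side (or a server-side scanner) can read it.
with open("IMG_0001.jpg", "rb") as src:
    ciphertext = f.encrypt(src.read())

with open("IMG_0001.jpg.enc", "wb") as dst:
    dst.write(ciphertext)   # this .enc file is what you'd sync to iCloud Drive
```

Tools like Cryptomator automate this kind of workflow; the concern raised above is that on-device scanning happens before any such encryption can be applied.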
No. They're putting the scanning ON your iPhone. Your iPhone, even in airplane mode, is searching your photos for kiddie porn or whatever else the government wants Apple to look for.
This will go to the SCOTUS.
If it wasn't, how come you call the police when I walk into your house? I don't steal anything, I just watch and observe what you do. Just that! 24/7.
It's your house? Well, it's your iPhone too. I'm just watching you, just that. Where's your problem?
You came to that conclusion because 1.4 trillion > 1 trillion?
Yeah comparing a yearly flow of millions of apps to review to the number of instances of frickin’ child p0rn escalated to human review is so relevant and edgy.
Say that you're an official with the CCP. You have huge stacks of anti-CCP brochures. You've got them scanned and hashed. Next you call Apple and say, "please alert us if any similar imagery appears on your customers' devices". Apple would say, "Sure, we're just following your law"... Hence, when a Chinese person photographs such a brochure "in the wild" using an iPhone, someone from "the government" will knock the next day and "strongly enquire" about yesterday's photo.
Yeah thanks for totally ignoring the fact that Apple sucks at their gatekeeping job. We haven't even gotten to how bad Apple is at fixing known security flaws.
I read the article; thanks for the recap.
Apple says the scan happens right before you send the photo to iCloud.
Anyhow, it's just a technical detail. Let's say it scans independently of uploading to iCloud. If a match is found, a safety voucher is created with this information. But it's marked in such a way that you can't know, Apple can't know, and anyone who gets hold of your iPhone can't know either. So it's completely secret from everyone.
When these markers, called safety vouchers, are uploaded to iCloud, Apple can't read their contents. But can't they just count them? Yes, but that doesn't indicate anything: Apple also creates fake safety vouchers, even for people with no matches. So the presence and number of safety vouchers doesn't mean anything until the number of real safety vouchers reaches a threshold.
Only then does Apple have enough pieces of the secret key to determine which safety vouchers are real and which are fake, and to read which images were matched.
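That threshold mechanism is essentially threshold secret sharing. As a toy illustration (a generic Shamir construction in Python, not Apple's actual protocol, with made-up numbers): the decryption key is split so that any 10 shares reconstruct it, while 9 or fewer reveal essentially nothing.

```python
import random

PRIME = 2**61 - 1  # toy prime field modulus

def make_shares(secret, threshold, count):
    """Split `secret` into `count` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, poly(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0; meaningful only with >= threshold shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

device_key = 123456789                      # stand-in for the per-account key
shares = make_shares(device_key, 10, 30)    # imagine one share per real match

print(recover(shares[:10]) == device_key)   # True: 10 real matches unlock the key
print(recover(shares[:9]) == device_key)    # False (w.h.p.): 9 matches reveal nothing
```

In this picture, the fake vouchers the comment mentions would just carry shares of garbage, so counting vouchers tells an observer nothing about how many real matches exist.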
Except Apple calls itself the beacon of user privacy. That's their entire business motto. "If others are doing it, then Apple should be able to do it too" is hypocritical.
How many times do we have to go down this road? If another company does it, nobody cares. If Apple does it, the internet is on fire. In this particular case: multiple tech companies already do image scanning for CSAM. Google, Twitter, Microsoft, Facebook, and others use image hashing methods to look for and report known images of child abuse.
China: Hey Apple, we want you to tell us whenever someone stores a picture of protests in Hong Kong
Apple: Err, we could do that, it’s as easy as adding a list of hashes to an index, but we’d rather not
China: <raises eyebrow>
But you can add hashes for media relating to the protests in HK that allow China to contextually identify the protestor in other ways. You can also do it in the messaging system, as Apple explained: "Apple, monitor hashes that match with X symbol or feature that is a common denominator in the photos, i.e. facial tracking and expression recognition (anger, happiness, sadness, all learned by the AI), or signposts or locational data based on specific building frames, etc." There's ALWAYS a way. And it's probably already happening. If the AI can identify based on a database, it can also identify based on expressions that can be curated from other protests to "teach" the AI what anger and inciting information look like. You know, like from the show Person of Interest.
You can't create a set of hashes which covers "looks like protests in Hong Kong", even if you have thousands of pictures of a particular protest in Hong Kong to work from. Photos from the same event create vastly different hashes.
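To make the "vastly different hashes" point concrete, here is a toy perceptual hash (a simple difference hash, not Apple's NeuralHash) in Python; the file names are placeholders. Small transformations of the same image (recompression, resizing) barely move the hash, while a different photograph of the same scene produces a hash that is far away, which is why a hash list targets specific known images rather than a topic.

```python
from PIL import Image

def dhash(path, size=8):
    """Toy difference hash: 64 bits recording where brightness increases left-to-right."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (right > left)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage (placeholder file names):
# hamming(dhash("leaflet.jpg"), dhash("leaflet_recompressed.jpg"))    -> small (near-duplicate)
# hamming(dhash("leaflet.jpg"), dhash("different_protest_photo.jpg")) -> large (no match)
```

Adversarially crafted collisions (the "manipulated cat pics" mentioned earlier in the thread) are a separate concern: they attack the hash function itself rather than relying on the pictures looking alike.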
Soon, billions, not millions, of comparisons will be made on users' phones. What can go wrong?
This just seems like a major contradiction. How can Apple's system get a match on a cropped, color-adjusted, pixel-modified, distorted image, yet claim false positives will be so low? There must be some leeway in the hash matching if a unique Photoshop edit can be caught.