Now that governments around the world are aware of Apple's ability to do this, they will absolutely pass laws requiring Apple to look for images of protests, controversial political figures, critical memes, etc., or Apple won't be able to sell iPhones in that region. Apple has already caved in China.

That being said, at this point you should assume that anything on a device like a smartphone that connects to the internet isn't truly private anyway, regardless of what Apple says they do or don't do. If we aren't going to have privacy, then scanning for images of child porn is most definitely a worthwhile use. I don't want some kid to suffer because I'm worried that Apple is going to judge me for looking a bit drunk at some wedding.
 
If your device (not Apple) analyses an image using AI algorithms and converts the outcome to an encrypted ticket that is then compared to a database of indecent images, then where is the unreasonable invasion of privacy?
I'm not arguing that they have not come up with an elegant technical solution. Simply put, I don't want them doing it on MY device. The same type of argument is made for "anonymous" geolocation tracking. As if it is really that hard to put some pieces together (frequent locations: work, home...) to "decrypt" who that anonymous person really is.
 
Precisely, a scenario. Just more "what if/maybe/could" et al.

You didn't even have time to read it... the scenario is one that matches, in concept, what we're dealing with here. And it's why many of us have concerns about the situation - and legitimate concerns, not FUD-based concerns. You seem awfully quick to dismiss anything that remotely questions your stance on this topic.
 
You are correct…no discussion is needed…I, along with many others on here, have tried to explain it/dumb it down as much as possible, but you continue to give false and incorrect examples/possibilities despite claiming you read the white paper. Read it again.

I’ve done my best to explain to you why an image cannot be altered to match an image in the database, and I even gave you the odds of how unlikely it is to happen (nearly impossible, by the way). And even if an image was altered but found NOT to be a match, or even a modified version of an image in the database, you will never know, as you technically did nothing wrong.

For your latest example…IT DOESN’T MATTER!! If the image is in the database and hashed (modified or not…and btw, they can easily detect if the image is modified from an original source), any version of that image (modified or not) will set off alarms via IDENTICAL hashes. This happening to one image alone will not flag your account….ONE IMAGE WILL NOT FLAG YOUR ACCOUNT.

Multiple images (again, not one) must have an identical match to the hashes of a database image (in part or in whole, modified or unmodified) for your account to be flagged and reviewed by Apple. If this happens, you either a) are in possession of child pornography and should be reported, or b) are one of the luckiest people in the world, since it is a one in one trillion chance that the images are innocent shots that happen to identically match the hashes associated with a child pornography image….again, ONE IN ONE TRILLION CHANCE!
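For anyone who wants the threshold idea spelled out, here is a rough toy sketch in Python. It is NOT Apple's actual code; the hash values, the database, and the threshold number are all made up for illustration. The only point is that one coincidental match does nothing on its own; an account is only flagged once many independent matches pile up.

```python
# Toy sketch of threshold-based flagging. Not Apple's implementation:
# the hashes, the database, and FLAG_THRESHOLD are invented for this example.

KNOWN_HASH_DB = {"9f2ab7", "4c01de", "77e5a9"}   # stand-in for the known-image hash list
FLAG_THRESHOLD = 30                               # hypothetical number, chosen for the demo

def count_matches(photo_hashes):
    """How many of a library's photo hashes appear in the known database."""
    return sum(1 for h in photo_hashes if h in KNOWN_HASH_DB)

def account_flagged(photo_hashes):
    """An account is only flagged once the match count reaches the threshold;
    a single coincidental match on its own changes nothing."""
    return count_matches(photo_hashes) >= FLAG_THRESHOLD

# A library where exactly one hash happens to collide with the database:
library = ["110203", "9f2ab7", "bb8812"]
print(count_matches(library))    # 1
print(account_flagged(library))  # False
```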

Now, please respond with more examples of how photoshopped images can be flagged…

But what you and others are not explaining well is Apple’s claim that any modification to the image will result in it being flagged as a match, yet a similar photo doesn’t get flagged. This is the discrepancy I am struggling with. If I modify a picture and it’s still a match, how can another one that looks identical not be a match?
 
So a heavily modified image in photoshop will NOT get flagged? Thank you. This is ALL I WANTED TO KNOW.

So I can take a flagged image, turn the subject into the Hulk, and it will NOT get flagged?
Here’s the key thing with your example that is bothersome to this discussion…you are taking the view of the child pornographer, not the innocent person who is wrongfully worried about their personal innocent photos being tagged.

I stated this in another response to you: your example assumes that you are in possession of the original image (modified or not), which means you are in possession of child pornography.

As long as you do not save that image to your camera roll and upload to iCloud, yes, you can have all the child porn you want on your phone and even share with others. It will only get tagged if it is part of the database.

This will not stop all child pornographers, but it will definitely help reduce the spread of materials and help catch bad people in possession of these materials.

People who take awful pictures like this, keep them on their camera roll, and even upload them to iCloud will NOT be tagged, as those images need to be added to the database, and the only way for that to happen is for them to be caught some other way, be prosecuted, and the ”new” images in their possession added to the database so they can be flagged when other people have them.

Apple is looking for people who already have known child pornography, not people necessarily creating it.
 
I still think Apple is looking at public reaction. If the technology used to scan for CSAM could be forced to expand to look for other things by any government agency, then Apple may just drop the idea altogether.
 
I think you might be incorrect about #5. Or maybe it's a semantic issue. The hash matching happens on your device whether you have iCloud Photos enabled or not. I've read everything, including some of the deeper technical articles, and seen nothing to show me otherwise.

Hence the quoted “(or better, it’s like the sound of a tree falling when nobody is listening)” parentheses.

The “CP or not” security voucher attached to each photo will forever remain cryptographically encrypted nonsense unless the photos are uploaded to iCloud Photos and at least some unknown threshold number of them are positive matches.

Hence they may as well not exist until the user makes the conscious decision to submit his pictures to Apple’s servers. Until then they’re a glorified “EXIF” parameter nobody in the universe knows about, not much of a mass surveillance tool when they’re in this “inactivated” state.
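In case the "may as well not exist" part sounds like hand-waving, here is a toy Shamir-style secret-sharing sketch in Python that shows the general idea behind a threshold scheme. It is only a conceptual stand-in, not Apple's actual PSI/threshold construction, and the prime, threshold, and secret below are made up: with fewer than the threshold number of shares, the hidden value simply cannot be recovered.

```python
# Toy Shamir-style secret sharing over a prime field. Conceptual only,
# NOT Apple's voucher scheme; prime, threshold, and secret are invented.
import random

PRIME = 2**61 - 1      # a large prime for the demo field
THRESHOLD = 3          # hypothetical: this many shares are needed to recover anything

def make_shares(secret, n, k=THRESHOLD):
    """Split `secret` into n shares; any k reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0; only meaningful with >= THRESHOLD shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, n=5)
print(reconstruct(shares[:3]))  # 3 shares (the threshold) -> 123456789
print(reconstruct(shares[:2]))  # 2 shares -> an unrelated value; the secret stays hidden
```

Apple's technical summary layers this kind of threshold idea on top of the encrypted vouchers, which is roughly why a handful of matches is supposed to stay undecryptable even on their servers.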
 
You seem awfully quick to dismiss anything that remotely questions your stance on this topic.
Not at all. But after sifting through 19 pages of pure waffle, I've come to the conclusion that very few people have actually taken the time to read and thus understand what Apple is doing and how they're implementing it. It's just been a series of knee-jerk reactions and tin-foil hat comments that, often, are baseless.

If the discussion were objective, fewer people would be discussing the ifs/buts/maybes and more would be discussing the potential to help prevent a crime.
 
I think I’ve finally gathered my thoughts on this:

1. This whole situation is fishy to me. We know the U.S. government has put pressure on Apple to weaken security and privacy before. Given that CSAM scanning can be thwarted by changing a single setting, it feels like this is malicious compliance.

2. This has put smartphones back in perspective for me. We’ve known that they’re privacy nightmares for over a decade now, yet we seem to have deluded ourselves into thinking that we were secure.

3. I do appreciate the tightrope Apple is trying to walk here; using hashes to scan for known photos is quite ingenious. Even if I believe that they shouldn’t scan a single ****ing thing, I can appreciate it from a technical perspective, trying to keep the neural matching off of their own servers.

For me, I think I’ll continue using Apple devices, but be more attentive to what I upload. I’m not a pedo so I have nothing to worry about currently, and if the U.S. gov comes after me for wrongthink, it’s probably because I’ve publicly stated my sentiments about them before. (Glowies can suck it)

I’m not gonna apologize for Apple at all, but I do think we had an unrealistic view of privacy from them that’s now been shattered.
 
Not at all. But after sifting through 19 pages of pure waffle, I've come to the conclusion that very few people have actually taken the time to read and thus understand what Apple is doing and how they're implementing it. It's just been a series of knee-jerk reactions and tin-foil hat comments that, often, are baseless.

If the discussion were objective, fewer people would be discussing the ifs/buts/maybes and more would be discussing the potential to help prevent a crime.

Then why are you still here? Someone attempts to engage in a rational discussion about this, and genuinely asks your take on a concern that hasn't been addressed by you or others who are all for what Apple is doing, and you simply dismiss it? C'mon now, you're better than that...
 
Here’s the key thing with your example that is bothersome to this discussion…you are taking the view of the child pornographer, not the innocent person who is wrongfully worried about their personal innocent photos being tagged.

I stated this in another response to you: your example assumes that you are in possession of the original image (modified or not), which means you are in possession of child pornography.

As long as you do not save that image to your camera roll and upload to iCloud, yes, you can have all the child porn you want on your phone and even share with others. It will only get tagged if it is part of the database.

This will not stop all child pornographers, but it will definitely help reduce the spread of materials and help catch bad people in possession of these materials.

People who take awful pictures like this, keep them on their camera roll, and even upload them to iCloud will NOT be tagged, as those images need to be added to the database, and the only way for that to happen is for them to be caught some other way, be prosecuted, and the ”new” images in their possession added to the database so they can be flagged when other people have them.

Apple is looking for people who already have known child pornography, not people necessarily creating it.

No, I’m just not understanding the discrepancy, and I’m providing examples. The person might not have the original picture but a modified picture. But that still gets flagged. But a similar picture with an adult/legal subject won’t? How? I thought modifications still result in a match. Remember, that person doesn’t have the original, but ONLY the modified one. But it’s still a match? Is that correct?

And it would be extremely helpful if it weren’t for the patronizing responses. I said from the beginning that I didn’t fully understand and was looking for help.
 
I think you might be incorrect about #5. Or maybe it's a semantic issue. The hash matching happens on your device whether you have iCloud Photos enabled or not. I've read everything, including some of the deeper technical articles, and seen nothing to show me otherwise.

Have you read this? It's an interesting take from someone who knows their stuff (you've probably already seen it): http://www.hackerfactor.com/blog/index.php?/archives/929-One-Bad-Apple.html

I agree with you, and others, completely that there is some serious FUD and misinformation being spread about this. However, I don't think you can be as quick to dismiss some of the rational concerns being brought up here, and I'd be curious to hear your take on this hypothetical scenario:

There is a rampant problem with theft in the world. Because of that, my homeowner's association has a new device that they're going to require all homeowners in our neighborhood to have in their homes. That device can scan every barcode that comes into the house and label it as legit, or match it up with barcodes in a database of stolen items housed elsewhere. As long as the stolen item stays in my home, the device stays quiet. But if the item leaves my house, the device immediately reaches out to local police to let them know it's out there. By the way, if you don't want to participate in this, simply never take any items out of your house and you'll be fine; the device will remain dormant and quiet.

While I realize that's not a perfect analogy, it's very similar in concept. I think the problem that I (and others trying to stay rational here) have with this change, is that they're stepping into my personal space, so to speak, to implement this change. To stick with that analogy, I don't believe I should be required to allow that device in my house in order to take things I own wherever I want to take them. Why not let the police do their job and track down the stolen devices, get a warrant to search my house, and then come find it/me?

I'm 100% fine if Apple decided to implement this exact same process with everything uploaded to iCloud. It's on their servers at that point, so they have every right to take steps to find that filth, and should be able to implement the exact same secure process. Yes, they would have the encryption keys, but they have them already, and we already trust them with that. So why do this massive encryption workaround just to do the scanning on my device?

I wonder if it is to enable scanning prior to encryption (like sending via Messages)?
 
All this verbal outcry isn't going to change anything. These people are still going to buy iPhones. (Although I wish they wouldn't; the Apple ecosystem is getting more and more contaminated with Android users etc. and whiny companies that want to ruin the Apple Way.)
 
Hence the quoted “(or better, it’s like the sound of a tree falling when nobody is listening)” parentheses.

The “CP or not” security voucher attached to each photo will forever remain cryptographically encrypted nonsense unless the photos are uploaded to iCloud Photos and at least some unknown threshold number of them are positive matches.

Hence they may as well not exist until the user makes the conscious decision to submit his pictures to Apple’s servers. Until then they’re a glorified “EXIF” parameter nobody in the universe knows about, not much of a mass surveillance tool when they’re in this “inactivated” state.

Yes, I get that, 100%, and agree 100%. But that's not the point... please make sure to read my entire post, or simply disengage from the thread completely if you're going to lump all counterpoints together and dismiss them.
 
Whenever you hear about this sort of crime on the news, it seems to involve heavily encrypted hard drives and the dark web, not iCloud! I can't see this catching anyone, especially now with all the press this is getting. So if Apple doesn't find anything, will they remove this? No. This will creep in one direction only.
 
Some folks like to view things based on principles and not just emotional ends justify means rationale.
So where do the principles end?

When one leaves the house, are they immediately suspicious that their neighbour - who glances at them - is going to steal their BBQ? Was the store assistant being a bit too nice when they offered help?

You can endlessly apply this logic to Apple's services or to random strangers, but there has to be a point at which reason overcomes emotion.

And, as has been discussed, reason has established that an AI algorithm that converts image data into a secure, encrypted ticket, which is then compared against a database of known indecent images, isn't invasive. It's binary data that cannot even be recreated as an image (a rough toy sketch of that kind of one-way fingerprint is below).

Emotion says "BUT WHAT IF (...)", and debates a whole host of unreasonable, baseless scenarios.
 