The way I heard it described on a podcast was like this:

Apple is not doing a visual scan of your photos. They're not looking at the actual contents of your photos.

They are, instead, comparing hashes of *known* CSAM images. These are photos that have already been labeled as child porn.

So there's no danger of Apple flagging a photo of your child in the bathtub or whatever.

With all that said... no one knows what else Apple could do in the future. Perhaps they could start scanning the actual contents of your photos. So I can see why people are freaked out.

But as others have said... all of the big companies are doing similar things. So I dunno.
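
To make the mechanism described above concrete, here is a minimal sketch of the "compare against known hashes" idea in Python. Everything here is simplified and hypothetical: the hash entries and file name are placeholders, and Apple's real system uses a perceptual hash (NeuralHash) rather than a cryptographic hash like SHA-256, precisely so that re-encoded or lightly edited copies still match.

```python
import hashlib

# Hypothetical, truncated placeholder entries standing in for the real
# database of hashes of already-identified images.
known_hashes = {"3f79bb7b435b0532...", "9c56cc51b374c3ba..."}

def photo_is_known(path):
    # Only the digest is compared against the list; the photo's visual
    # content is never interpreted or classified.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in known_hashes
```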
My concern with this has never really been addressed, other than with patronizing responses, even though I am ASKING what I am missing here. Here are two pictures.

Picture 1: I stand in front of my house, thumbs up next to my mailbox.
Picture 2: I keep the pose because my eyes were a bit closed and they took another picture seconds later.

Picture 1 and Picture 2 are VERY, VERY close in terms of a pixel match.

Okay, let's walk through the scenario. Assume Picture 1 made it into the CSAM database. Apple's claims and documentation state that any modification of Picture 1 would still result in a collision; it's not a 1:1 hash match to the database entry. Are we on the same page so far? Picture 1 gets flagged but Picture 2 does not. Is that assumption correct? From what people have said, it is.

Okay, so I launch Photoshop and adjust Picture 1 to match Picture 2 exactly. I spend hours on it, to the point where it matches Picture 2 pixel for pixel. Still with me? I now have three pictures: Picture 1, modified Picture 1, and Picture 2.

According to Apple, any crops, color adjustments, rotations, transforms, and edits to the picture will still result in a collision. So are they essentially saying that my modified Picture 1 will result in a collision? Yet people are somehow stating that Picture 2 will NOT be a collision, that the odds of it colliding are one in a trillion?

This is a clear contradiction between Apple's own statements (modification of an image still results in a collision) and people's own explanation of the logic (Picture 2 CANNOT be matched).

The reason this type of scenario is concerning is that there may be quite a few legitimate pictures that share, let's say, the same "pixel attributes" and so might produce the same hash as a matched image.

And please, do not respond with just "you don't understand." That is EXACTLY why I am posting this scenario: if someone DOES know why Picture 2 truly will not match while modified Picture 1 does, and there is no concern here, then please explain it. I looked through Apple's document (modifications will be a match, and it's using AI to get the match). I am trying to understand this scenario. How can two nearly identical images not match, yet any modification of the matched image will match?!
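
For what it's worth, one part of this is settled by definition: a hash is a deterministic function of the pixels, so a Picture 1 edited until it is pixel-for-pixel identical to Picture 2 has exactly Picture 2's hash, and the two files cannot behave differently. The open question is how far "any modification" stretches. The toy perceptual hash below (dHash in Python, assuming Pillow is installed; this is NOT Apple's NeuralHash, which is a learned model, and the file names are hypothetical) shows the general behavior: mild edits barely move the hash, bigger changes move it further, and a re-take of a nearly static scene can land close too, which is exactly the gray area this scenario probes.

```python
from PIL import Image, ImageEnhance

def dhash(image, hash_size=8):
    # Resize to (hash_size+1) x hash_size grayscale, then compare each pixel
    # with its right-hand neighbour; the comparisons form the hash bits.
    img = image.convert("L").resize((hash_size + 1, hash_size))
    px = img.load()
    bits = 0
    for y in range(hash_size):
        for x in range(hash_size):
            bits = (bits << 1) | (px[x, y] < px[x + 1, y])
    return bits

def hamming(a, b):
    # Number of bits on which two hashes differ.
    return bin(a ^ b).count("1")

pic1 = Image.open("picture1.jpg")    # hypothetical file names
pic2 = Image.open("picture2.jpg")    # the re-take, seconds later
edited = ImageEnhance.Brightness(pic1).enhance(1.1)  # a mild edit of pic1

print(hamming(dhash(pic1), dhash(edited)))  # typically 0-2 bits: still a match
print(hamming(dhash(pic1), dhash(pic2)))    # usually larger, but CAN be small
                                            # if the scene barely changed
```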
 
The three questions that did not get a yes/no answer are the biggest problem with this detection system. It's still not too late, Apple: please cancel these plans.
 
Well, how do we know that this latest move isn't already mandated by the government? They sure do seem to pass a lot of 3,000-plus-page bills that get turned into law. I mean, they have to pass the bill so we can find out what is in it…
 
Could governments force Apple to add non-CSAM images to the hash list?
Apple will refuse any such demands…

Yeah, well, the NCMEC (being well connected to the government) may not.

Once the hash is in the database, it becomes merely a question of having a way (an exploit) into the user's device to check for any matches. The three-letter agencies probably have that already, or will buy it (see Pegasus).
Exactly
 
As has been stated before (several times), Apple currently will not view photos, just scan for the hashes of photos that some poor law-enforcement agent had to view, flag as illegal, and add to a database of similar images. But this has so many ramifications:

1. It doesn't matter, for the purposes of this conversation, what Google, Adobe, or other companies are doing. This is about Apple rolling out this 'feature' on your Apple device.

2. Chester the Molester isn't going to keep his illegal collection of photos in iCloud anyway. The percentage of offenders who do this must be vanishingly small. I'm sure Apple already has these statistics and is aware of this. Why subject 99.999% of the population to this? Do police stop all cars on the road and check IDs to try to catch the 0.001%?

3. For the last two decades, we've seen tremendous losses of freedom begin with the 'protect the children' line... You can't really rally against that; are you a monster?! They know it, we know it, they know we know it. It's common practice because it works.

4. This doesn't even 'actively protect children,' as the photos must already be cataloged and residing in a database of known abuse images. For active abusers, there won't be any known hash to scan for. So they're only looking for images that are already widely shared.

5. Many think that what's on our phone is an extension of what's in our brain: it holds our thoughts (if you write them down), photos, locations, preferences, searches, contacts, and the feelings and life experiences of the friends who communicate with us. While no court order can get into your brain, one can get into your devices. But with this new 'feature,' a court order isn't even needed; they automatically scan for illegal hashes.

6. This isn't really a 'slippery slope'; we've seen this technology used again and again in nefarious ways by government agencies. Allowing a company to scan your private data 24/7 for whatever the government deems illegal today is a big gaping hole in the floor called 'the future.' Who is to say what the government will find illegal in two, five, or ten years?

7. The trend line isn't good. We already see people who promote views against the government's line getting removed from social platforms (at the government's request) and labeled 'extremists.' Didn't want a vaccine in 1999? You were just a fool; today, you're an extremist. Where will it end?

8. Imagine waking up one day and finding that your iPhone updated and iCloud was turned on by default (oops, we're sorry). Or that additional 'spyware' was installed, or that the government sent Apple new things to search for. Law enforcement needs a warrant to search your property today; well, that just flew out the window, as all your data has already been captured.

9. Does anyone really believe that the NSA would not utilize every bit of data it can get its hands on for its own purposes? Do you really think they're Girl Scouts selling cookies for merit badges? Will scanning of your hard drive come next? In the coming years we probably won't even have hard drives; everything will be cloud-based (we're already headed that way), so essentially you've just handed over your privacy for the future.

10. Apple has made a big deal of privacy in the last few years. Funny (but sad) to see it was all BS.

1. It does matter since it gives you context. Apple is doing it in a more privacy-minded way.

2. You don't know what you're talking about; Facebook had more than 20 million CSAM incidents reported last year alone. Chester may not always be as bright as you think.

3. That doesn't mean you can weaponize "think of the children" for the opposite argument, either.

4. You don't know; the NGOs fighting child abuse know better than you.

5. They will ask you to consent. They're leaving the possibility to opt out. They're doing this scan ONLY, EXCLUSIVELY, SOLELY on data you're about to PUT ON THEIR FRICKIN' SERVERS anyway. (Or better: it's like a tree falling in a forest with no one listening, until you upload them.)

6. "Scan your private data 24/7" is a dumb catch-all buzzword. Do you disable the local file indexing of Windows Search or Spotlight? Do you disable the Photos app scanning your library for dogs, cats, trees, faces, and birthday cakes? That's 24/7 scanning. That could be "slippery-sloped" as well. Just use only open-source OSes and disable file indexing; go ahead.

7. I see, so you’re one of those guys…

8. What a bunch of "**** could happen" crap. Again, by this logic you should only use open-source hardware and open-source software and self-host your personal cloud.

9. “stuff could happen”

10. Yeah, so funny for Apple bashers and companies with a grudge against Apple. Except that Apple, while not perfect, is still the champion of consumer privacy.
 
The previous post was posted for all the lovers of Reynolds products everywhere. We all know that our government would never do anything like that.
 
How long before the USA and/or China demand that Apple start scanning for content other than CSAM on all iPhones (not just the ones with iCloud Photo Library enabled), or else it will not be allowed to sell devices in those markets? These two countries represent two-thirds of Apple's revenue and therefore have a lot of leverage over the company.

In addition to the above:

Despite constitutional prohibitions against such things, the US has a well-documented history of secret warrantless wiretaps and abuse of the FISA court system.

Essentially every technology has, in its time, been illegally subverted by US law enforcement.

As for China, it doesn't even have bedrock constitutional protections limiting the behavior of its state actors.

Apple has clearly demonstrated a proof of concept that could be applied to anything: face comparison, words, word combinations, or proper nouns, all aided by AI, and all increasing in speed and scope as AI and on-device processor speeds improve.

These systems aren't static; they will continually be refined and possibly expanded. Just as Apple gave none of its explanations until folks started complaining, Apple has given no commitment to notifying the public of any significant or incremental changes to the scope, spread, speed, or use of this tech.

Apple has stated that it follows local laws, clearly both to protect itself against prosecution and, without a doubt, to protect market share and access, and possibly even to develop goodwill with enforcement entities as a hedge against monopoly-breakup threats. (What justice department would want to break up a "cooperative" partner that makes it look good to its political leaders? This is what mutually beneficial, aggressive back-scratching could look like.)

Even if Apple's actions here are pure and not the result of some current accommodation, appeasement, or acquiescence to government pressure or demands, at some point in the future, maybe soon, maybe a generation or two down the road, Apple's management may see refusal as unnecessary or impossible, or as a hindrance to profit, and the slippery slope becomes a vertical drop.

As a thought experiment (I don't know if it exists or is technically feasible now), consider "AI sub-hashing": parts of photos, videos, or recorded or live voice audio are extracted and analyzed, with an AI twist that adjusts for positioning, lighting, the aging of faces, or acoustic effects (spatial audio, anyone?). Such a system could compare the picture of the driver's license in your Wallet, the faces and voices in your photos, or key words and text strings elsewhere on the device against sub-hashed faces and voices from public videos shot or provided by some kind of known or unknown security personnel.

Powerful AI will be a force for future good, but as with all past human endeavors and technologies, it can also amplify the abilities of totalitarians and evildoers.

Apple has shown us a glimpse of a self-surveillance future, one where these extensions of our brains, persons, private papers, and effects are capable of reporting us to a private, non-law-enforcement entity and eventually, possibly, directly to a state security entity.

While the stated goals of Apple's efforts here appear reasonable and laudable, that doesn't change the fact that these surveillance techniques, their application, and their expansion don't exist in a vacuum.

Similarly, Apple's assurances today are not absolutes. The future ability and use of this tech will be shaped by confluences of social pressure and AI performance increases that we cannot fully see or imagine, and we have only the assurances of today's Apple management that they and their distant successors, people they may never meet, won't relent to what will be relentless government demands for expansion.

Doing the wrong thing for the right reason is still doing the wrong thing, because in the world of coercion and appeasement, the road to hell is paved with short-sighted accommodations and good intentions naïvely assumed to be absolute and enduring. That is what gives me unease.

(As for assurances of security and accuracy, I'm reminded, respectively, of the NSO Group intrusions and the iOS spellcheck that makes odd errors all the time.)
 
My concern with this… How can two nearly identical images not match, yet any modification of the matched image will match?!
What is a "collision"? A match or not a match?
 
My concern with this… How can two nearly identical images not match, yet any modification of the matched image will match?!
My simple answer to those questions: explain to me how you could have a picture so close to one that has been marked as child pornography without it actually being child pornography. Now explain how you could have more than one picture like this, to the point where it would actually trigger a review by an Apple employee. And then try to imagine that this will happen one in one trillion times, and comprehend what that number means in real-world terms.
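
A rough back-of-the-envelope sketch of why a multi-match threshold collapses the error rate. The numbers below are assumptions, not Apple's published parameters: Apple has not disclosed its per-image false-match probability, and the threshold of 30 is only a reported figure. Under an independence assumption, the account-level false-flag probability is a binomial tail:

```python
from math import exp, lgamma, log

def log_binom_pmf(n, k, p):
    # log P(Binomial(n, p) == k), kept in log space to avoid overflow.
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def p_account_flagged(n_photos, p_match, threshold):
    # P(at least `threshold` false matches among n_photos independent photos).
    return sum(exp(log_binom_pmf(n_photos, k, p_match))
               for k in range(threshold, n_photos + 1))

# Assumed numbers: a 10,000-photo library, a 1-in-a-million per-image
# false-match rate, and a reported threshold of 30 matches.
print(p_account_flagged(10_000, 1e-6, 30))  # on the order of 1e-93
```

Even with a per-image false-match rate far worse than one in a trillion, requiring ~30 simultaneous false matches pushes the per-account probability to effectively zero, which is the intuition behind Apple's headline figure.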
 
Apple did put their foot down and refused the FBI's request to unlock the iPhone of the San Bernardino mass murderer...

But that was a few years ago.

Do you think their stance has changed now?

I think what’s missing is context.

Apple fought the FBI's request then because the case was about setting precedent, and the last thing Apple wanted was for the courts to establish that the FBI could compel it to unlock a criminal's iPhone whenever necessary, because that would likely have required Apple to install some sort of back door or knowingly leave a security flaw unpatched (which could in turn be discovered and abused by other parties).

I don't think Apple's stance has changed. They are still a design-led consumer-electronics company intent on making the best products for their users (based on their own interpretation thereof), and this scanning feature simply leverages their control over hardware and software.
 
Again, you do realize that it would take more than one matching picture to flag an account… and that the chance of more than one picture "matching," being reviewed by an Apple employee, and NOT being child pornography is one in one trillion?

Oh… and for the other idiots up above (not you): they review only the flagged images. They do NOT have access to every pic in your iCloud account.
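
For what it's worth, Apple's technical summary describes this with "safety vouchers" and threshold secret sharing: each uploaded photo carries an encrypted voucher, and the server can reconstruct the secret needed to decrypt the low-resolution visual derivatives only once the number of matches crosses the threshold. Below is a toy Shamir secret-sharing sketch that shows the threshold property; it illustrates the general technique, not Apple's exact construction.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the field for this toy scheme

def make_shares(secret, t, n):
    # Random degree-(t-1) polynomial with f(0) = secret; shares are (x, f(x)).
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(PRIME).
    total = 0
    for xj, yj in shares:
        num = den = 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        total = (total + yj * num * pow(den, -1, PRIME)) % PRIME
    return total

secret = 123456789                      # stands in for the review key
shares = make_shares(secret, t=3, n=10)  # one share per "matched" photo
print(reconstruct(shares[:3]) == secret)  # True: threshold reached
print(reconstruct(shares[:2]) == secret)  # almost surely False: below threshold
```

Below the threshold the server holds shares that reveal essentially nothing, which is the mechanism behind the claim that Apple cannot look at anything until enough matches accumulate.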
Apple can't even get iTunes Match right, yet everyone buys this "1 in a trillion" like it's gospel, as if Apple can do no wrong.
 
Step 1. Apple does something new, but in a different way than the rest of the industry.

Step 2. The internet cries that it's evil and vows to vote with its wallet and leave the ecosystem, completely ignoring that Apple does it in a better, more privacy-oriented way.

Step 3. Apple explains in more detail what it did.

Step 4. Everybody shuts up and the industry follows Apple’s lead.
This is spot on. What's more, Apple is the only tech company I would trust to execute something like this while maximizing privacy.

People should probably read more about it before peeing into their shoes. It's tied to a youth's phone and login being part of a family plan. It can be deactivated. Everyone just calm down.
 
Apple can't even get iTunes Match right, yet everyone buys this "1 in a trillion" like it's gospel, as if Apple can do no wrong.
For your lame example, I would say that would mean child pornographers getting away under the system, not innocent people getting flagged incorrectly.
 
The way I heard it described on a podcast was like this… So there's no danger of Apple flagging a photo of your child in the bathtub or whatever.

Until:

A POS you have as a friend on a social media site downloads the picture you posted of your kid in the bath. He or she then includes it in a batch of CSAM photos traded online for other CSAM images. Your photo's hash gets collected and distributed by NCMEC. Now you have CSAM material on your phone.

or

Someone wanting to harm you does the same and posts your photo on a CSAM forum, knowing it will be distributed and the hash will be logged.
 
This is spot on. What's more, Apple is the only tech company I would trust to execute something like this while maximizing privacy… Everyone just calm down.
People on here aren't upset about the iMessage thing… and yes, those who may be should read more and stop commenting.
 