@farewelwilliams

This is so simple

People don't want ANY tool built into their phone whose purpose is to compare their private data against black box third party databases.

It's literally as simple as that.

There are also people in this very thread who don't want any software that can perform scanning installed on the device.

Under any circumstances, they say.

If these people are OK with scanning software that is an advantage to them, then we have ONE circumstance where it is OK.
 
I’m not very good with words.

That's not true! Don't be so hard on yourself!
I appreciate your perspectives

I'm mostly just focused on this type of tool and the move to this sort of scanning on our devices, as opposed to getting into the weeds of what is being looked for and by whom.

Once the tool and infrastructure are there, it's essentially too late: all that's stopping the goals and uses of that tooling from expanding to "who knows what" is Apple, their policies, and whatever pressures those get subjected to (a lot of pressure would be a very safe bet).
 
These are all the steps spelled out, go to Step 7 for manual/human review.

Step 0
- Apple releases iOS 15
- iOS 15 contains the hashes database, downloaded in full to every iPhone
- subsequent updates to the database can only happen via an iOS update, so the system cannot be abused for time-sensitive searches (like targeting dissidents who were at a march “last week”)

Step 1
- the user activates iCloud Photos, basically surrendering his photos to Apple servers, like calling the police and saying “I am going to bring the whole content of my home to the local police station”
- then and only then the local scanning process begins
- said scanning process by itself has no way to phone home to Apple HQ

Step 2
- the scanning process creates fingerprints of the user photos (the photos that the user has already promised to surrender to Apple servers, not photos that the user “hasn’t shared with anyone” like some privacy advocate said, so not a contradiction of “what happens on iPhone stays on iPhone”)

Step 3
- said fingerprints are compared by a super smart trained AI to the fingerprints in the database
- the AI is needed not to look at the content of the picture (the content is no longer part of the equation since Step 2) but to have some leeway, some wiggle room to be able to catch slightly modified (cropped, etc.) versions of the known offending picture
- the system is engineered to only match the known offending old photos from the NCMEC repository; it can’t look for new/personal children-related content (a toy sketch of this kind of fuzzy matching follows this step)
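
(To make the “wiggle room” concrete, here is a toy sketch of fuzzy fingerprint matching. This is not Apple’s NeuralHash or its private-set-intersection protocol; in the real system the tolerance comes from the neural network that produces the hash, and the comparison itself happens blindly under encryption. Every name, bit pattern, and distance threshold below is invented purely for illustration.)

```python
# Toy illustration of "fingerprint with wiggle room" matching. Not Apple's
# NeuralHash pipeline: there the tolerance comes from the neural network that
# produces the hash, and the comparison happens blindly via cryptography.
def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def matches_known_fingerprint(photo_fp: int, known_fps: set, max_distance: int = 4) -> bool:
    """A photo 'matches' if its fingerprint is within a small bit distance of any
    fingerprint in the known database -- the leeway that survives cropping,
    recompression, etc. of the same known picture."""
    return any(hamming_distance(photo_fp, k) <= max_distance for k in known_fps)

known = {0b1011_0110, 0b1111_0000}
print(matches_known_fingerprint(0b1011_0111, known))  # True: one bit away from a known fingerprint
print(matches_known_fingerprint(0b0000_1111, known))  # False: a genuinely different picture
```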

Step 4
- the output of the matching process is a label called a “security voucher”, attached to every photo
- this label only says 2 possible things
1) YES, this photo matches a known CSAM photo from the NCMEC repository
2) NO, this photo doesn’t match a known photo from the NCMEC repository
- at this stage though this label is still cryptographically secured GIBBERISH, no human on Earth can read it, not even someone having physical access to your phone
- embedded in the security voucher there’s also a low-res version of the user’s photo (a rough sketch of what such a voucher conceptually carries follows this step)
- the label will remain gibberish till the end of time if Step 6b (see below) doesn’t happen
- (is a search whose output is gibberish on a post-it note that just sits there till the end of time an actual search?)
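
(A rough sketch of what such a voucher conceptually carries, with invented field names; the real format, and the cryptography that keeps the payload unreadable, are far more involved.)

```python
from dataclasses import dataclass

@dataclass
class SecurityVoucher:
    # Ciphertext of (match_result, low_res_derivative_of_photo). Unreadable by
    # the device, by Apple, or by anyone holding the phone, unless/until the
    # Step 6b threshold condition is met.
    encrypted_payload: bytes
    # A fragment ("share") of the key material needed to open the payloads;
    # a single fragment, or any number below the threshold, reveals nothing.
    key_share: bytes
```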

Step 5
- the user uploads the photos to iCloud Photos just like HE promised to do in Step 1
- now and only now the company known as Apple Inc. is involved in any way
- at this time, Apple Inc. can do one thing and one thing only: count the positive-match security vouchers (a conceptual sketch of this counting flow follows this step)
- now 2 things can happen
1) the number of positive security vouchers is smaller than the threshold —> go to step 6a
2) the number of positive security vouchers is bigger than the threshold —> go to step 6b
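
(A conceptual sketch of this server-side flow, with placeholder names and a made-up threshold value; in the real system the counting is enforced by threshold cryptography rather than by an if-statement someone at Apple could quietly edit.)

```python
THRESHOLD = 30  # placeholder only; the post above treats the real threshold as unknown

def try_extract_positive_share(voucher: dict):
    """Placeholder: the server obtains a usable key share only from vouchers
    whose photo matched the database; non-matching vouchers contribute nothing."""
    return voucher["share"] if voucher["matched"] else None

def process_uploaded_vouchers(vouchers: list) -> str:
    shares = [s for v in vouchers if (s := try_extract_positive_share(v)) is not None]
    if len(shares) < THRESHOLD:
        return "Step 6a: below threshold, all vouchers stay undecryptable gibberish"
    return "Step 6b: threshold reached, positive vouchers can be decrypted for Step 7 human review"

# An account with a single match (or zero matches) never gets past Step 6a:
print(process_uploaded_vouchers([{"matched": True, "share": b"..."}]))
```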

Step 6a
- the security vouchers remain unreadable gibberish till the end of time, well after we are all dead
- not even Tim Cook, the Pope, God, Thanos with all the stones, etc. can crack their multi-factor encryption; it’s like Grampa Abe Simpson’s Hellfish unit treasure in that classic Simpsons episode: you need a set number of keys to open the vault. That’s why the “threshold” system is not a policy decision that could be changed easily by Apple Inc. but a technical safeguard built into the system: no one could ever end up in Step 6b and Step 7 because of a single unlucky wrong match (or “false positive”); a toy secret-sharing sketch of this “set number of keys” idea follows this step
- Apple Inc. says that a good ballpark estimate of the chance of getting enough false positives to surpass the threshold is 1 in 1 trillion per year; some people dismiss this as “yeah, how do I know they’re not being too optimistic”, but it should be pointed out that Apple Inc. has given 3 external experts some access to the system, and that even if that estimate were off by tenfold (1 in 10^11 instead of 1 in 10^12) it would still be an extremely rare event (one innocent account flagged every 117 years); moreover, the order of magnitude of said estimate is perfectly plausible since we’re talking about the compound probability of multiple rare events (as an example, it would be easy to get to 1 in 10^12 as the compound probability of six 1-in-10^2 rare events)
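
(To illustrate why the “set number of keys” threshold is a cryptographic property rather than a switch Apple could flip, here is a minimal Shamir-style secret-sharing toy: no security hardening, and not Apple’s actual construction, but the same “t shares or you get nothing” idea. The last two lines also redo the back-of-the-envelope false-positive arithmetic; the ~850 million accounts figure there is my own assumption, chosen because it makes the “one every 117 years” number above come out.)

```python
import random

PRIME = 2**127 - 1   # a Mersenne prime; fine for a toy finite field
T = 5                # threshold: number of shares needed to reconstruct the key

def make_shares(secret: int, n: int, t: int = T):
    """Split `secret` into n shares; any t of them reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret (the polynomial's constant term)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = make_shares(key, n=10)
assert reconstruct(shares[:T]) == key       # at the threshold: the vault opens
assert reconstruct(shares[:T - 1]) != key   # one share short: still gibberish

# The compound-probability point, as plain arithmetic:
print((10**2) ** 6 == 10**12)  # True: six 1-in-100 events compound to 1 in 10^12
print(10**11 / 850e6)          # ~117.6 years between false flags at 1-in-10^11/yr, assuming ~850M accounts (my assumption)
```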

Step 6b
- if the number of positive security vouchers is above the threshold, finally Apple Inc. has enough cryptographic keys to decrypt the positive security vouchers
- now and only now said security vouchers stop being gibberish; basically any user that only reaches Step 6a has his privacy completely preserved (compare this to server-side searches of decrypted data on servers that equally invade the privacy of both innocent and not-so-innocent users)

Step 7 - HUMAN REVIEW
- now and only now the positive security vouchers, no longer gibberish, can be looked at by a human reviewer at Apple Inc. HQ
- the human reviewer will be able to look at a low-res version of the user’s supposedly offending photo
- if the low-res photo is something innocuous like a sunset, a bucket of sand, a cat, a goldfish, etc., (and remember: the matching is based on hashes, not content, so the content won’t necessarily be children-related, that’s not the kind of similarity the AI would catch, don’t worry about the pics of your kids, they have no more probability of being accidentally flagged than any other subject), the human reviewer will acknowledge the system made an error and discard it, no automatic calls to the cops
- if the low-res photo actually looks like actual kiddie p0rn (that’s got to be the worst job on Earth, and these reviewers are sometimes psychologically scarred), then Apple Inc. will disable your iCloud account and maybe report you or maybe not (depending on the follow-up internal investigation)

(phew that was long…imagine people trying to encapsulate the complex implications of all of this in a buzzword, a meme with dogs, or a simplistic hot take or an imprecise real-world analogy)

Users with 0 matches (likely 99%+ of iCloud accounts) will never go past Step 6a and never reach human review.

Users with 1 match will never go past Step 6a and never reach human review.

Users with 2 matches will never go past Step 6a and never reach human review.

Users with n-1 matches (with “n” being the unknown threshold Apple has chosen to use) will never go past Step 6a and never reach human review.


Hope this long recap post helps as many people as possible make an informed evaluation of how this works and what the actual privacy implications are.

I dedicate this to the “it’s not about the tech” crowd who’d wish to bury all this under blind buzzwordy outrage.
Thanks for your time and effort to write this highly educational meme. I appreciate that.
 
They could do this today with the iCloud backup software that is already on your iPhone. Its purpose is to copy all your data files. It's just one boolean value away from backing up your phone to iCloud with an encryption key Apple has access to.

How do you stop Apple from misusing this software?

You can't. You just have to trust Apple.
Unfortunately now, many people simply no longer trust Apple.
 
I do disagree with it quite honestly.

With my device and my data, I don't want to be a part of a dragnet.

My phone and my data are not here to be combed through to "catch predators" (or any other goal of some third party).

Data already surrendered to Apple’s servers.

Dragnet routinely performed by most other cloud hosts.
 
Care to elaborate?
Other cloud hosts have been treating us like suspects for the better part of a decade now, sifting thru our data with a dragnet.
All cloud hosts, including Apple, have been doing this since Microsoft gave everyone PhotoDNA. Apple has been doing it server side all along just as others have. This new implementation is completely different.
 
All cloud hosts, including Apple, have been doing this since Microsoft gave everyone PhotoDNA. Apple has been doing it server side all along just as others have. This new implementation is completely different.
Sure, but what is the only big tech company that up until last year reported a ridiculously low number (265 in 2020) of CSAM incidents compared to the others? (like Facebook’s 20M incidents)
They’re the last to get serious about treating every user as a suspect. (to use the words of people against this principle)
Maybe they are doing it in preparation to legislation that would force them to do it anyway.
 
They’re the last to get serious about treating every user as a suspect. (to use the words of people against this principle)
Maybe they are doing it in preparation to legislation that would force them to do it anyway.
That's certainly possible, maybe even probable -- it doesn't make it any more palatable though. Turning off iCloud Photos is about all I'll do about it this round and I'll wait for what comes next. (and switch to an Android phone as my primary)
 
These are all the steps spelled out, go to Step 7 for manual/human review. […]
Places where this can all go badly wrong:

Step 0: Apple gets pressured by local government to release a version of the OS that allows the phones of its citizens to be scanned using political hashes (flags, memes, etc.) rather than child pornography. Is Apple really going to forego business in the PRC or the rich Gulf states to stick to its morals? I no longer trust them to do so. I object to what they are doing now, but I fear even more what they might do in the future.

Step 1: Apple decides someday in the future to scan all files, which is presumably a minor change, and, in combination with the issue raised in Step 0, could lead to repressive scanning of more than just photos. Apple's scheme is a generic blueprint for spyware, plain and simple.

Step 2: Apple uses an approximate match, and hence false positives are inevitable. What we do not know now is the extent to which the hash explicitly or implicitly represents perceptual features, like the amount of exposed skin. If it does, the false positives won't be random - they will be of people, and possibly sensitive. AI is never perfect. Never.

Step 5: A user takes a series of similar pictures, as people often do. If one in the series is flagged as a false positive, it is likely that others will be too, as the rough arithmetic below illustrates. So then Apple sees a number of apparent hits, and a human employee decrypts and examines your pictures, which could be of your kids, or of you, or of your partner.
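
(Rough arithmetic for why correlation matters here; the per-photo probability and the threshold below are invented numbers, purely to illustrate the shape of the problem.)

```python
# Back-of-the-envelope arithmetic for the "burst of similar photos" worry above.
# All numbers are invented for illustration; the point is about independence.
p = 1e-6      # assumed chance a single photo is a false positive (made up)
t = 30        # assumed threshold (made up)

# If false positives were independent, needing t of them is astronomically unlikely:
independent = p ** t
# But if one false positive in a burst of >= t near-identical shots makes the
# rest nearly certain to match too (perfectly correlated), the account-level
# chance collapses to roughly the single-photo chance:
correlated = p

print(f"independent: ~{independent:.1e}   correlated burst: ~{correlated:.1e}")
```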

How many times does this have to go wrong to be an unacceptable invasion of privacy? Once in my opinion. That's all. Do we know if Apple will even inform people if their pictures have been inspected by a human observer? Can you imagine the reaction to that?
 
I don’t care how well designed the tool is. As soon as it exists, it WILL be abused. Easy to use or not doesn’t matter.
You know face scanning has existed on your phone for a long time, right? Why hasn't this been abused, yet?

The latest update to Photos on iOS included a lot of features that specifically scan your photos
From Apple's Photos page:
Using on‑device machine learning, the Photos tab hides similar photos and reduces clutter by removing screenshots and receipts, so you can easily enjoy your best shots. Photos also uses intelligence to find and focus on only the best part of your photo for better previews.
Intelligent face recognition and location identification make it easy to find the exact photos you’re looking for, based on who you were with or where you were when you took them. You can even search for general categories, like “Japanese restaurant,” or get more specific, like “Aspen Ideas Festival.”
Using advanced machine learning, scene and object recognition lets you search your photos for things like motorcycles, trees, or apples. You can also combine multiple search terms — like “beach” and “selfies” — without having to tap each word in search.

It would be much easier to abuse one of those already existing systems than this CSAM one, which would require you to completely remove it and rebuild something else in its place to do what everyone seems to be worried about. Just saying, if you're going to be paranoid about this, at least look at the things that are actually tools capable of surveilling you in any meaningful way, tools which already are on your device and have been for a long time.
 
That's certainly possible, maybe even probable -- it doesn't make it any more palatable though. Turning off iCloud Photos is about all I'll do about it this round and I'll wait for what comes next. (and switch to an Android phone as my primary)
Just know that in 2020 Google reported 546000 CSAM incidents to NCMEC.
Apple reported 265 incidents in the same period, less than Adobe.
You can read about it here


Apple was the last big company standing, up until 2020.

Some people here talk like they’re the first.
 
You know face scanning has existed on your phone for a long time, right? Why hasn't this been abused, yet?

The latest update to Photos on iOS included a lot of features that specifically scan your photos
From Apple's Photos page: […]
It would be much easier to abuse one of those already existing systems than this CSAM one, which would require you to completely remove it and rebuild something else in its place to do what everyone seems to be worried about. Just saying, if you're going to be paranoid about this, at least look at the things that are actually tools capable of surveilling you in any meaningful way, tools which already are on your device and have been for a long time.
Yes, I know it has been around for a long time. That’s why I held off on any upgrades until my iPhone 6s Plus died because I accidentally dropped it into water. After upgrading, I just assume Apple already has my facial data anyway and have decided to move on.

That doesn’t mean I agree with what Apple is doing here, scanning all my photos, which not only have my face in them but also the faces of friends and family, photos of special moments, and so on. It’s none of their business to even potentially know the full picture of my family, friends, etc.

Oh btw, facial recognition technology has already been abused so much in certain countries people no longer care.
 
These are all the steps spelled out, go to Step 7 for manual/human review. […]

This is a great breakdown of the process. I'm a little curious about a few of the nitty-gritty details and whether this is all sourced elsewhere. But it's not worth wasting time nitpicking those items because the general idea and process is here.

For those of us that understand this process in general, Step 1 is where, I think, we've been getting hung up (the idea of this scanning process having to start on my device - at least for me anyway).

I had written out a "rebuttal" to your police analogy, and upon finishing it, I realized it needed tweaking... and continued tweaking... until I ended up with the two scenarios below. I'm going to go ahead and leave them there (italicized), but have some new thoughts below them.

Your police analogy over-simplifies it a bit. Try this. The city has a centralized storage facility that allows you to store any of your stuff there. Before you take your stuff there, the city sends you a scanner and requires that you scan the barcodes of everything you plan to store. They won't know the items you have, but they'll run the codes against a database of stolen items (also just codes), and if you have more than 20, someone will double check the items (seeing what they actually are) to ensure a match, and then call the police and lock up all of your stuff. That process takes a little time, so you'll still be able to store your items there. You won't know the results of the database comparison unless you get a call that your account has been locked up.

Compare that to this scenario.

Take the same analogy, and instead of them sending you a scanner, they just let you store everything in their facility. But once it's there, they can unlock your storage room and dig through your stuff to scan and compare to the stolen database. If there are problems, the results are the same. The difference is, they had to rummage through your stuff in this scenario. In the first scenario, they didn't.

Now that I've typed through that, I'll admit that doesn't seem quite as unreasonable. I don't like the lack of transparency involved in that process in the first analogy, but that concern exists in the second analogy as well.

Now, I could see how someone could still maintain the mentality of, "It's my stuff, you have no right to look through it before I send it to you. Why should I go to the effort of doing this for you?" Or, "Why would I trust some scanner you're sending me to do this?" Or even, "How do I know this scanner isn't opening me up to some other database?"

I think those are all valid perspectives to have here. Speculative? Sure. Possible? Absolutely.

For me personally, I don't lack trust in Apple... right now. They've given me enough reason over the years to give them a chance at handling this correctly. So for me personally, something about the way it's been framed here has me bending a bit in my original stance.
 
Just know that in 2020 Google reported 546000 CSAM incidents to NCMEC.
Apple reported 265 incidents in the same period, less than Adobe.
You can read about it here


Apple was the last big company standing, up until 2020.

Some people here talk like they’re the first.
So now what the minority of companies do defines 'moral'? We need to assess where AI is going to take us ethically rather than blithely assuming it'll all be OK.

In any case, I think you'll find that people's strong reactions here come from Apple's repeated advertising and marketing that it would never do invasive surveillance like this, which created the expectation they wouldn't - not that people think Apple is the first. I won't show Apple's 1984 commercial again, but it seems sadly ironic now.
 
Just know that in 2020 Google reported 546k CSAM incidents to NCMEC.
Yep, know that.

Looks like they are serious about it and might actually make a dent in the problem. (and I'm no fan of sexual exploitation of children!)

Apple was the last big company standing, up until 2020.
Yep. They stood for privacy and I trusted my iPhone -- they no longer do, so I'll use the better phone as my primary now. (I never trusted google or samsung in the first place)

Some people here talk like they’re the first.
They *are* the first to do on-device scanning.
 
For me personally, I don't lack trust in Apple... right now.

The key thing that's changing here is that Apple is offloading the trust component by building tools like this into the device.

They want to move to focusing on "iCloud is E2EE - you are safe!" and simply not talk about the ability they built in to scan things on your device, completely outside the scope of their E2EE.

They want to get back to people fully trusting Apple and iCloud, while we just don't think or talk about the “safety scanning tools” built right into your device and what third parties will ask them to scan for (different things in different regions).

They are trying to stop having to fight law enforcement and government requests.
I can understand that a bit from their view.

They really just want to make a lot of money and get rid of the hassles.

They were kind of our "white knight" company and they seem to want to move on from that role.
It's a bummer
 
Good for them. It would be pretty spineless to kowtow to negative press from vocal minorities who don't seem to even understand what they're doing and why - or misrepresent it.
 
It's like people don't understand that a hammer can be used to:

1. Pound nails to build a house
2. Break a window to save someone
3. Murder someone by bashing their head over and over

Tools don't have opinions or even necessarily specific things they are "meant for"

How they are used and by whom and for what is all that matters.
This is exactly it.

Since this tool is capable of matching near-exact hashes against the internal iOS database of child porn hashes, it's only a matter of time before it's used to match far-from-close hashes against an internal iOS database of MAGA hats. Then, when Apple reviews a pixelated version of your picture of a cat's turd at a beach, they can forward it along to the authorities.
 
Good for them. It would be pretty spineless to kowtow to negative press from vocal minorities who don't seem to even understand what they're doing and why - or misrepresent it.
How has it been misrepresented or misunderstood?
 
Yes, I know it has been around for a long time. That’s why I held off on any upgrades until my iPhone 6s Plus died because I accidentally dropped it into water. After upgrading, I just assume Apple already has my facial data anyway and have decided to move on.

That doesn’t mean I agree with what Apple is doing here, scanning all my photos, which not only have my face in them but also the faces of friends and family, photos of special moments, and so on. It’s none of their business to even potentially know the full picture of my family, friends, etc.

Oh btw, facial recognition technology has already been abused so much in certain countries people no longer care.
Hmm, my first iPhone was an iPhone 5. It already did face scanning of all my photos back then. The issue that many in this thread are complaining about is that their device now, all of a sudden, will be scanning their photos! The thing is, that's simply not true. All of your photos have already been scanned for many reasons, and tools have been in place for a long time to detect people in your photos.

If people want to be paranoid about tools on iOS being abused for surveillance, then they really should look at and complain about the tools that are well-equipped for doing so -- tools which have been on their phones for a long time. This new tool is not even close to a feasible target for being abused for surveillance.
 
Hmm, my first iPhone was an iPhone 5. It already did face scanning of all my photos back then. The issue that many in this thread are complaining about is that their device now, all of a sudden, will be scanning their photos! The thing is, that's simply not true. All of your photos have already been scanned for many reasons, and tools have been in place for a long time to detect people in your photos.

If people want to be paranoid about tools on iOS being abused for surveillance, then they really should look at and complain about the tools that are well-equipped for doing so -- tools which have been on their phones for a long time. This new tool is not even close to a feasible target for being abused for surveillance.
Except complaining about tools solves nothing. They should have complained about all the people involved in creating those tools. We have yet to live in an age where tools build themselves.

People are always the issue behind this. Tools just serve their purpose and nothing more.
 