The way I heard it described on a podcast was like this:

Apple is not doing a visual scan of your photos. They're not looking at the actual contents of your photos.

They are, instead, comparing hashes of your photos against hashes of *known* CSAM images. These are photos that have already been labeled as child porn.

So there's no danger of Apple flagging a photo of your child in the bathtub or whatever.
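Very roughly, the matching idea looks like the sketch below. It uses an ordinary cryptographic hash (SHA-256) purely to keep the example simple; Apple's actual system uses a perceptual hash called NeuralHash plus extra cryptography on top, and the database entries and file path here are made-up placeholders, so treat it as a loose illustration rather than their implementation:

```python
import hashlib

# Hypothetical database of digests for already-catalogued images.
# These entries are placeholders, not real values from any organization.
KNOWN_CSAM_HASHES = {
    "placeholder-digest-0001",
    "placeholder-digest-0002",
}

def file_digest(path: str) -> str:
    """SHA-256 of the file's raw bytes; the digest itself says nothing about
    what the photo shows."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def matches_known_image(path: str) -> bool:
    """True only if this exact file is already in the known-image database."""
    return file_digest(path) in KNOWN_CSAM_HASHES

print(matches_known_image("IMG_0001.jpg"))  # hypothetical photo, prints False
```

The wrinkle, which comes up later in the thread, is that Apple's hash is perceptual, so resaves and small edits of a known image still match; a brand-new photo of your own kid simply has nothing in the database to match against.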

With all that said... no one knows what else Apple could do in the future. Perhaps they could start scanning the actual contents of your photos. So I can see why people are freaked out.

But as others have said... all of the big companies are doing similar things. So I dunno.
You don't seem to get it. If a threshold is reached, a person at Apple will review the images. Moreover, how is any perceptual algorithm going to classify an image as child porn without assessing the amount of skin exposed as a feature? Thus, a human reviewer might be looking at sensitive photographs of you, your partner, or some innocent photo of your kid swimming, as a false positive. And how, exactly, is Apple going to keep pedophiles and pervs out of that job of reviewing your photographs?

Apple in 2022: "We have reviewed a photo in your iCloud for matching with the CSAM system. We inform you that the photo doesn't match any known child abuse images. Nice bikini."
 
In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.

Apple will refuse any such demands

So in the first case they say they are obligated to follow the law. In the second they say they will refuse any demand to search for any other kind of images. But when that request becomes a law they will simply switch to the Apple is obligated to report line.

This is a pandora's box they won't be able to control.
100% this. This has mission creep written all over it.
 
You don't seem to get it. If a threshold is reached, a person at Apple will review the images. Moreover, how is any perceptual algorithm going to classify an image as child porn without assessing the amount of skin exposed as a feature? Thus, a human reviewer might be looking at sensitive photographs of you, your partner, or some innocent photo of your kid swimming, as a false positive. And how, exactly, is Apple going to keep pedophiles and pervs out of that job of reviewing your photographs?

Apple in 2022: "We have reviewed a photo in your iCloud for matching with the CSAM system. We inform you that the photo doesn't match any known child abuse images. Nice bikini."
I’ll repeat it until I’m dead…it is a one in one trillion chance of multiple innocent pictures being flagged and causing a review by an Apple employee.

One in one trillion.

One in one trillion.

Do you really want to add to that the odds of that particular employee being a pedophile or some other deviant?? Your head might explode.

EDIT: You also do not seem to understand how the hash system works…there are no visual cues such as “amount of skin” that are taken into consideration.
 
Maybe it’s just me, but I’m starting to get creeped out that your constant examples are from the pedo point of view….wondering if the illegal image will still get matched if it is altered in one way or another.

Maybe think of better examples to give as I’m sure you are just trying to understand the tech…but seriously…stop using that kind of example.

And I’m getting a bit irritated about the constant hate and patronizing attitude against me. All I’m trying to do is learn how it works and understand the “modifications are also flagged” portion. So perhaps be a little nicer next time someone is struggling to understand.
 
Mostly a made up stat from Apple

See here:

Hah…it took me one minute to read that and then determine he is not comparing apples to apples.

The one in one trillion number is not based on single pictures, which he tries to utilize in all of his examples. It is the odds of multiple pictures being flagged on a single account, all of them being innocent false matches to the hashed pictures. Apple has taken the odds of a single picture matching and compounded that over the number of pictures they actually need before they flag an account. All we know is that that number is more than 1.

It is totally possible to imagine the odds as being that high if you understand the context in which they are setting up the review process.

Not surprising that your so-called “expert” is no better at reading what Apple wrote than more than half of the people on this forum. He is correct…it is not a one in one trillion chance that a single image might match…but that is NOT what Apple said/claimed.
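To make that concrete, here is a rough back-of-the-envelope version of the compounding. The per-image false-match rate, library size, and thresholds below are invented for illustration (Apple has not published its exact parameters); the point is just how fast requiring several independent matches drives the account-level odds down:

```python
from math import exp, lgamma, log

def flag_probability(n_photos, per_image_rate, threshold, tail_terms=200):
    """Chance that at least `threshold` independent false matches occur among
    n_photos images, i.e. the upper tail of Binomial(n_photos, per_image_rate),
    summed in log space so the huge binomial coefficients never overflow."""
    def log_pmf(k):
        return (lgamma(n_photos + 1) - lgamma(k + 1) - lgamma(n_photos - k + 1)
                + k * log(per_image_rate)
                + (n_photos - k) * log(1 - per_image_rate))
    top = min(n_photos, threshold + tail_terms)
    return sum(exp(log_pmf(k)) for k in range(threshold, top + 1))

# Purely illustrative numbers -- NOT Apple's real parameters.
n = 20_000   # photos in a hypothetical library
p = 1e-6     # assumed per-image false-match rate
for t in (1, 5, 10, 30):
    print(f"need {t:>2} matches: {flag_probability(n, p, t):.3e}")
```

With those made-up numbers, the chance of at least one false match somewhere in the library is about 2%, but needing 30 of them on the same account works out to roughly 10^-84. That compounding across a threshold, not any claim about a single image, is what a figure like one in a trillion would be describing.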
 
I bet you the Chinese government will be very interested in this technology.

Notice how Apple keeps changing the wording and being very careful with the document. I hope this backfires and Apple ends up in a massive lawsuit.

Privacy matters. Apple: let us (the consumers) decide if we want you to scan our iPhones.

Apple, you are a TECH company. You are not law enforcement. Don’t lose your vision. Stop chasing the $.

Reports like this will be out left and right…



Easiest way to avoid this spying technology.

1. Turn off iCloud Photos and Messages.
2. Do not log in to your iPhone using your iCloud credentials.
3. Possibly use a fake account to continue using your iPhone. Or simply do not log in with your Apple ID at all.

I live in China, have been an Apple fan ever since 1996, but my friends and I are also shocked by what Apple is planning to do with this new scheme.

As you mentioned, Apple works with every government differently and always complies with local regulations, so it's not unlikely that our government will ask Apple to use this new "SCANNING and REPORT" technology to scrutinize content other than child porn, say, people who have saved tons of Winnie the Pooh mockery pictures on their phones...
 
And I’m getting a bit irritated about the constant hate and patronizing attitude against me. All I’m trying to do is learn how it works and understand the “modifications are also flagged” portion. So perhaps be a little nicer next time someone is struggling to understand.
Your example pattern: “So, if I have a picture of a child being violated in a bathtub and I photoshop the Incredible Hulk over him/her, will the image get flagged?”

Better example: “If I had a picture of a baseball laying in the grass and I photoshopped a picture of a basketball over it, would the image still get flagged?”

See the difference?
 
Child advocacy groups always use the line 'think of the children' when they want to push/force their agenda onto whoever they can. Adults are being expected to give up their privacy rights for the rights of children.

If Apple ignored CSAM then they would be accused of ignoring the abuse of children, but as we can see, if they scan for CSAM, then they are accused of interfering with the privacy of adults.

A very good example of how the 'think of the children' agenda is pushed through is illegal immigration. The US and the UK have been in the media for many reasons for trying to tackle illegal immigration, and what did we see: a photographer took photos of a dead migrant child washed up on a US shore. The same happened to the UK, a dead migrant child washed up on a UK beach, a photographer took numerous photographs, and boom, both incidents made headlines around the world and forced the UK and the US to soften their policies on illegal immigration, with the result being more and more adults, not children, exploiting the system.

People need to learn: when something affects both adults and children, the welfare and safety of children is always put first and adults have to put up with it. Technology is no different, as we are seeing ALL tech companies get hit with the same line from child advocacy groups.
I guess everyone's a potential pedophile until their photos/messages are scanned by big tech and proven SAFE.
 
Your example pattern: “So, if I have a picture of a child being violated in a bathtub and I photoshop the Incredible Hulk over him/her, will the image get flagged?”

Better example: “If I had a picture of a baseball laying in the grass and I photoshopped a picture of a basketball over it, would the image still get flagged?”

See the difference?

No, because I wanted to know how far the edits could go and still be flagged. A basketball over the baseball is a small edit, which will probably still get flagged as a match to the original. How far an edit can go before it stops matching has a direct link to how much legit images need to differ.
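For what it's worth, the behaviour being asked about is a general property of perceptual hashing. The sketch below is a toy "average hash", not Apple's NeuralHash, and the file names are hypothetical; it just shows why a small edit usually flips only a few bits of the hash while a different photo flips roughly half of them:

```python
from PIL import Image  # Pillow

def average_hash(path, hash_size=8):
    """Toy perceptual hash: shrink to an 8x8 grayscale thumbnail, then record
    a 1 for every pixel brighter than the thumbnail's mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of bits that differ between two hashes (out of 64 here)."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical files for illustration.
original = average_hash("baseball_in_grass.jpg")
edited   = average_hash("baseball_with_basketball_pasted_over_it.jpg")
other    = average_hash("completely_different_photo.jpg")

print(hamming_distance(original, edited))  # small edit: only a few bits flip
print(hamming_distance(original, other))   # unrelated photo: about half flip
```

A matching system then just picks a distance cutoff, and anything closer than the cutoff counts as "the same image", which is exactly why "how far can the edit go before the match breaks" is the interesting question.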
 
1. The problem is them doing scanning on our own devices.

2. Apple has gone out of their way to brand their approaches as "more private" than the other big Corps. It's not unreasonable for people to be upset when finding out it was mostly a facade.
The reason Apple is scanning on the phone is because:

1. They don't want CSAM on their servers
2. They can't add end-to-end encryption to iCloud in the future otherwise, because they wouldn't be able to scan E2E encrypted files
3. They don't want to scan the entire iCloud library for CSAM, as that would mean the same privacy violation as Google and Microsoft on their "private online drives"
4. They are scanning hashes of iCloud photos in transit to iCloud, not the photos themselves. They are not, and cannot, scan hashes of non-iCloud data
5. Google scans private Gmail accounts and reports them to police frequently. From a procedural standpoint, that is a bigger violation of privacy than decrypted iCloud photos in transit to Apple's servers
 
People do realise that companies such as Google, Adobe, Facebook et al. already use some form of automated technology to scan for and detect CSAM? Adobe does it with Creative Cloud:


That's just one example.
Not saying it’s a bad thing, but why do people think bringing others into it is a good argument?

Yes, privacy is already gone in today’s world, but that doesn’t mean people can’t still try their best to use or choose something that respects users’ privacy.
Because by your reasoning, since governments and tech already monitor us, sure, let’s just let them do whatever they want.
 
I’ll repeat it until I’m dead…it is a one in one trillion chance of multiple innocent pictures being flagged and causing a review by an Apple employee.

One in one trillion.

One in one trillion.

Do you really want to add to that the odds of that particular employee being a pedophile or some other deviant?? Your head might explode.

EDIT: You also do not seem to understand how the hash system works…there are no visual cues such as “amount of skin” that are taken into consideration.
I assume the hash system is a function of the content of the image, so the content of the image would be represented implicitly even if the hash system performs considerable dimension reduction. My point is that false positives would look like the offending photos, so false positives are likely to be of a sensitive nature. And Apple's estimate of 1 in a trillion is just that - an estimate, based on what??? Are they assuming random properties of pixels, or does their estimate actually take into account the statistics of real digital photos, which tend to be alike (think of the number of sunset/sunrise photos that all look similar)?

In regard to template-matching, which is essentially what this is, I remember a story from one of the founders of machine learning. Apparently somebody developed a robotic system for picking oranges, with the hope it would work 24/7. Initially the results were good. Then one night they found the robot aimlessly wandering around the trees waving its arms in the air. It was trying to pick the moon. Not sure if the story is true or not, but it does make a point.
 
Not saying it’s a bad thing, but why do people think bringing others into it is a good argument?

Yes, privacy is already gone in today’s world, but that doesn’t mean people can’t still try their best to use or choose something that respects users’ privacy.
Because by your reasoning, since governments and tech already monitor us, sure, let’s just let them do whatever they want.
Apple's solution to CSAM is the only one that works if you want privacy, because

1. Apple will not scan entire cloud libraries like Google and Microsoft do
2. Apple will not scan private non-iCloud photos on your device
3. Apple will only scan hashes, using on-device AI, as photos are in transit to iCloud
4. Apple wants to ensure that its iCloud servers are not a hub for CSAM
 
I assume the hash system is a function of the content of the image, so the content of the image would be represented implicitly even if the hash system performs considerable dimension reduction. My point is that false positives would look like the offending photos, so false positives are likely to be of a sensitive nature. And Apple's estimate of 1 in a trillion is just that - an estimate, based on what??? Are they assuming random properties of pixels, or does their estimate actually take into account the statistics of real digital photos, which tend to be alike (think of the number of sunset/sunrise photos that all look similar)?

In regard to template-matching, which is essentially what this is, I remember a story from one of the founders of machine learning. Apparently somebody developed a robotic system for picking oranges, with the hope it would work 24/7. Initially the results were good. Then one night they found the robot aimlessly wandering around the trees waving its arms in the air. It was trying to pick the moon. Not sure if the story is true or not, but it does make a point.
All I can say is read the white paper from the original article. It explains how hashes work and why Apple does not need to “see” the image, pixels, etc.
 
The reason Apple is scanning on the phone is because:

1. They don't want CSAM on their servers

Too bad
I don't want it on my device

My data isn't theirs to scan through, regardless of the stated intent.

Re: E2E - let's just see if they actually ever do that or not. It's all speculation that seems to have started with theories from Gruber and Ritchie.
 
The reason Apple is scanning on the phone is because:

2. They can't add end-to-end encryption to iCloud in the future otherwise, because they wouldn't be able to scan E2E encrypted files

This is a solid point. But they've now built a way to see E2E stuff. I know, they can't actually see it (unless it's escalated to iCloud, and yes I know that even then it's not a free for all).

But in an effort to be dogmatic about saying they can't access your stuff, they've actually opened up a way to access your stuff. It's a workaround so they can stick to their claims, but it's a workaround that might not be worth the trade-offs. One of the biggest "perks" of E2E is that nobody can access it but the sender and receiver. That's no longer completely true... but hey, at least Apple can't "see" it.

I'm not necessarily against this either. But it's not a "black and white" issue like so many are making it out to be (all bad or all good).
 
Too bad
I don't want it on my device

My data isn't theirs to scan through, regardless of the stated intent.

Re: E2E - let's just see if they actually ever do that or not. It's all speculation that seems to have started with theories from Gruber and Ritchie.
The iCloud data isn't yours alone either. Apple has shared iCloud data with the feds and with foreign governments because it is not end-to-end encrypted. Apple wants to get out of the business of hosting content that can be requested with warrants. Apple will legally be able to say “we cannot”, just like WhatsApp and Telegram do when the server content is end-to-end encrypted.

As soon as you choose iCloud Photos, that data is eligible to be scanned for the feds because the data becomes decrypted immediately. Because iCloud Photos is NOT end-to-end encrypted, the iCloud photos on your device cannot be either; that is the fundamental flaw which requires iCloud to be E2E.
 
I’ll repeat it until I’m dead…it is a one in one trillion chance of multiple innocent pictures being flagged and causing a review by an Apple employee.

One in one trillion.

One in one trillion.

Do you really want to add to that the odds of that particular employee being a pedophile or some other deviant?? Your head might explode.

EDIT: You also do not seem to understand how the hash system works…there are no visual cues such as “amount of skin” that are taken into consideration.
Ok, trillion claim number one we have here - that went quick - wish I could play these numbers in a lottery...

 
All I’m trying to do is learn how it works and understand the “modifications are also flagged” portion.
This information is available here: https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

It's complicated (although lucid). I suppose there are two ways to take that: either (a) the folks working on this at Apple are pretty smart and are doing due diligence or (b) complicated concepts exist to hide information. I read it as (a) and, admittedly, have no particular reason to believe that Apple would attempt to lie about the implementation.
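One of the pieces that summary describes is threshold secret sharing, which is what enforces the "nothing is reviewable until enough matches accumulate" property. Here is a toy sketch of the general cryptographic idea (Shamir-style sharing over a prime field), not Apple's actual construction:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, big enough for a toy demo

def make_shares(secret, threshold, n_shares):
    """Split `secret` into n_shares points on a random polynomial of degree
    threshold-1; any `threshold` shares reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def evaluate(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# The "secret" stands in for a key that should only become recoverable once
# enough matching vouchers (shares) exist for one account.
shares = make_shares(secret=123456789, threshold=3, n_shares=5)
print(reconstruct(shares[:3]) == 123456789)  # True: three shares are enough
print(reconstruct(shares[:2]) == 123456789)  # False (almost surely): two are not
```

As I read the summary, the real system wraps this in private set intersection and encrypted safety vouchers, but the threshold behaviour is the same basic idea.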

Eventually they were going to be strong-armed into doing something; this really seems like a measured approach.
 