You are reading too much into those words...it is quite clear that they are talking about the SAME IMAGE, just slightly modified by cropping, color levels, etc.

Well, actually, they say "for example cropping etc." and don't say what other examples might be. As you read further, the text slowly degrades the criteria from "identical" to "nearly identical" to "visually similar" - those words do not mean the same thing, so please choose one. Their only visual example (which you'd reasonably expect to demonstrate the power of the system) contrasts two images of a palm tree that differ only in one having been converted to monochrome, with a third, completely unrelated one of a (tree-free) cityscape.

It is easy to - quite mechanically and 100% reliably - detect whether two images are identical. It is easy to use a cryptographic hash (the type people are most likely to be familiar with) to do that without having to keep an identifiable copy of the reference image, with a tiny (and well understood) probability of a false match. But as soon as you allow for the possibility that the image may have been cropped, resized, blurred, resampled, watermarked - or probably several of those (which would otherwise easily defeat the checking process) - you're into the realms of "intelligent" image analysis and pattern recognition... which is exactly what some people here are claiming the process isn't.
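For what it's worth, the "easy" case really is just a few lines of Python using the standard library (a generic sketch - the file names are made up):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's raw bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Byte-identical copies match; any edit (crop, re-save, recompress)
# produces a completely different digest.
print(sha256_of("photo.jpg") == sha256_of("photo_copy.jpg"))     # True for an exact copy
print(sha256_of("photo.jpg") == sha256_of("photo_cropped.jpg"))  # False after any edit
```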

Exactly. If a photo is slightly cropped, it should still result in a match.

...and that one little "if" changes a simple, transparent process with a mathematically well-defined probability of a false match into a far more complex AI problem.

It is not just hashing, it is Apple® NeuralHash™ perceptual hashing, which is not the sort of hash you might have used to check that a download is identical to the original.
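To make the distinction concrete, here is a minimal "average hash" - about the simplest perceptual hash there is, nowhere near as sophisticated as NeuralHash, but it shows the idea. This sketch assumes the Pillow library is installed, and the file names are made up:

```python
from PIL import Image

def average_hash(path, size=8):
    """Shrink to an 8x8 grayscale thumbnail, then set one bit per pixel:
    1 if the pixel is brighter than the mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def hamming(a, b):
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

# A monochrome or mildly recompressed copy lands within a few bits,
# while an unrelated image differs in roughly half of its 64 bits.
print(hamming(average_hash("palm.jpg"), average_hash("palm_mono.jpg")))   # small, often 0
print(hamming(average_hash("palm.jpg"), average_hash("cityscape.jpg")))   # large, around 32
```

A cryptographic hash would treat all three files as completely unrelated; a perceptual hash is designed to survive exactly the kinds of edits (grayscale conversion, recompression) the document describes.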
 
Just because they dumbed it down with the tree example so idiots could understand it, doesn't mean the system isn't way more complex than that.

They were simply showing that despite the changes, the numerical value applied to the image was the same.

Again, you are overthinking it. And now they've clarified that it takes around 30 images being marked as matches before an account is flagged. The one in one trillion number is entirely plausible (if anything it now seems conservative)... it is practically impossible for enough innocent images to accidentally be flagged as matches to images in the database to even trigger a review.
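For a feel of what a 30-match threshold does to the odds, here is some back-of-the-envelope arithmetic. The per-image false-match rate below is a made-up illustrative figure, since Apple has not published one:

```python
from math import comb

p = 1e-6            # assumed per-image false-match rate (illustrative, not Apple's figure)
n = 10_000          # photos in a hypothetical library
threshold = 30      # matches required before an account is reviewed

# P(at least `threshold` false matches among n independent photos).
# The binomial terms shrink so fast that a short partial sum is effectively exact.
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(threshold, threshold + 20))
print(f"{prob:.3e}")  # on the order of 1e-93 - far beyond one in a trillion
```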
 
Interesting interview -

Thx.
Decent interview, but it still fails to answer why they elected to do it client-side.
 
Just because they dumbed it down with the tree example so idiots could understand it, doesn't mean the system isn't way more complex than that.

They were simply showing that despite the changes, the numerical value applied to the image was the same.

Again, you are overthinking it. And now they've clarified that it takes around 30 images being marked as matches before an account is flagged. The one in one trillion number is entirely plausible (if anything it now seems conservative)... it is practically impossible for enough innocent images to accidentally be flagged as matches to images in the database to even trigger a review.

That could be part of the problem. You have two audiences: the basic consumer and the educated consumer. One would think that Apple would have an appropriate level of information for both, especially with all the angst and concern over this feature.
 
That could be part of the problem. You have two audiences: the basic consumer and the educated consumer. One would think that Apple would have an appropriate level of information for both, especially with all the angst and concern over this feature.
You are forgetting about the "I don't care how they explain it, I think there were children being molested in the back of a pizza parlor" crowd as well, who want to believe that anything that CAN happen WILL happen.
 
Interesting interview -
Great interview. He explained it exactly as it was explained in the white papers, but in a way that was easy to understand. This feature is restricted to photos as they're being uploaded to iCloud: each photo is sent along with a safety voucher, and those vouchers are then checked on Apple's own servers to determine whether there are enough of them to warrant further investigation.
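The flow he describes looks roughly like this toy model. It is a deliberate oversimplification: the real system uses private set intersection and threshold secret sharing, so the device never learns the match result and the server can read nothing below the threshold. All names and values here are illustrative:

```python
# Toy model of the device-voucher-server flow described above.
# NOT the real cryptography: in the actual system the match result is
# blinded from the device and vouchers are unreadable below the threshold.

BLOCKLIST = {0x1F3A, 0x77B2}   # stand-in hashes of known database images
THRESHOLD = 30                 # matches required before human review

def make_voucher(user_id, image_hash):
    """Device side: a voucher accompanies every photo uploaded to iCloud."""
    return {"user": user_id, "matched": image_hash in BLOCKLIST}

def accounts_to_review(vouchers):
    """Server side: count matching vouchers per user; flag at the threshold."""
    counts = {}
    for v in vouchers:
        if v["matched"]:
            counts[v["user"]] = counts.get(v["user"], 0) + 1
    return {user for user, n in counts.items() if n >= THRESHOLD}

# 31 uploads matching the blocklist crosses the 30-voucher threshold.
uploads = [make_voucher("user123", 0x1F3A) for _ in range(31)]
print(accounts_to_review(uploads))  # {'user123'}
```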
 
That could be part of the problem. You have two audiences: the basic consumer and the educated consumer. One would think that Apple would have an appropriate level of information for both, especially with all the angst and concern over this feature.
Bingo. You've got a couple of paragraphs stuffed with technical image-recognition jargon, followed up by a picture example that Big Bird would find insulting and that a 14-year-old could match with a few lines of Python. They'd already talked about coping with cropping/resizing and reduced quality, so the obvious example would have been a cropped version of the palm tree picture.

You'd expect a document released by Apple on such a sensitive subject to have been through a lot of scrutiny by several people, and that they'd want to make sure the language was clear and consistent and the examples convincing. "Who is the audience?" is one of the first questions you should ask when writing an important paper. Someone should have read it from the perspective of a critic of the system and highlighted the ambiguities and contradictions... or at least pointed out that the palm trees example wouldn't make sense if someone printed the document in B&W...

I'm a great believer in Hanlon's Razor: never attribute to malice what can be adequately explained by incompetence - but the best case here is that Apple have done a sloppy job of communication and deserve the push-back... and when the summary paragraph (the bit the managers read) says "identical" but that changes to "visually similar" after the tech-dump, it's time to start being skeptical.

Likewise, I don't give much credence to unlikely evil conspiracies by the Powers That Be, but rather to the proven inability of said Powers to find their backsides with both hands and their tendency to value politics over reality: over-zealous application of so-called "AI" and over-statement of its accuracy, confirmation bias when faced with horrible subjects like child abuse, misrepresentation of statistics (by people who should know better) in criminal cases, and large companies/organisations trying to conceal their cock-ups at the expense of ruining people's lives are not the stuff of fantasy.
 
Bingo. You've got a couple of paragraphs stuffed with technical image-recognition jargon, followed up by a picture example that Big Bird would find insulting and that a 14-year-old could match with a few lines of Python. They'd already talked about coping with cropping/resizing and reduced quality, so the obvious example would have been a cropped version of the palm tree picture.

You'd expect a document released by Apple on such a sensitive subject to have been through a lot of scrutiny by several people, and that they'd want to make sure the language was clear and consistent and the examples convincing. "Who is the audience?" is one of the first questions you should ask when writing an important paper. Someone should have read it from the perspective of a critic of the system and highlighted the ambiguities and contradictions... or at least pointed out that the palm trees example wouldn't make sense if someone printed the document in B&W...

I'm a great believer in Hanlon's Razor: never attribute to malice what can be adequately explained by incompetence - but the best case here is that Apple have done a sloppy job of communication and deserve the push-back... and when the summary paragraph (the bit the managers read) says "identical" but that changes to "visually similar" after the tech-dump, it's time to start being skeptical.

Likewise, I don't give much credence to unlikely evil conspiracies by the Powers That Be, but rather to the proven inability of said Powers to find their backsides with both hands and their tendency to value politics over reality: over-zealous application of so-called "AI" and over-statement of its accuracy, confirmation bias when faced with horrible subjects like child abuse, misrepresentation of statistics (by people who should know better) in criminal cases, and large companies/organisations trying to conceal their cock-ups at the expense of ruining people's lives are not the stuff of fantasy.
This will all blow over once people realize that there isn't some mass of people getting arrested for false positives. Nobody except the sickest of people will even have their privacy invaded.

So yeah, switch to some other device or turn airplane mode on. Whatever you wanna do for that 1 in a trillion chance your 30 innocent photos are looked at by an Apple employee.

Panic much?
 
I laid out the reasoning for you just a few posts above. Conveniently, you have neither acknowledged nor refuted my post.

Oh, I understand it quite well. But frankly, how the tech works is irrelevant.

It doesn’t matter if the search is done using hash-matching algorithms, bloodhounds, or black magic. A search is a search. And a search should only be conducted on my personal device if there is a warrant.

Innocent citizens, who are not suspected of committing a crime, should not be subjected to mass surveillance. The technical implementations of the search/surveillance do not matter.

This is all the more frustrating, because I really don't care if Apple searches my photos. I don't have anything to hide, so I'd be happy to consent to a search in iCloud, in exchange for accessing the iCloud services. However, I must object to this implementation of on-device scanning in principle. Once surveillance technology is built into our personal devices, the devices are no longer personal.

And no, using the "moments" feature is not the same thing. I consent to that image analysis, and it is done for my benefit. It is not a surveillance feature that phones home to report citizens to the authorities for criminal behavior.
You use iCloud? It’s already being searched there. Your photos are already being scanned, so where this happens is irrelevant. If you don’t use iCloud Photos, the images are not subjected to CSAM scanning.

Now Apple can E2E encrypt your photos in iCloud as a result of doing on device what they would otherwise do in iCloud. I’d argue it’s more secure against abuse if it’s done on device too, and it effectively will be (if they implement E2E, your photos in the cloud become more secure) - since there are tin foil hats here thinking folks are going to inject kiddy porn into their photos.
 
I’ve had enough. I have reverted to iOS 14 and macOS 11 from the dev betas, and I will stay there until the hardware stops working. I have also stopped working on a macOS dev project and will just focus on command-line stuff to get my work done. When my Apple hardware quits working I’ll move to a flip phone and Linux. You all have fun arguing.
Bye 👋🏻
 
You use iCloud? It’s already being searched there. Your photos are already being scanned, so where this happens is irrelevant. If you don’t use iCloud Photos, the images are not subjected to CSAM scanning.

Now Apple can E2E encrypt your photos in iCloud as a result of doing on device what they would otherwise do in iCloud. I’d argue it’s more secure against abuse if it’s done on device too, and it effectively will be (if they implement E2E, your photos in the cloud become more secure) - since there are tin foil hats here thinking folks are going to inject kiddy porn into their photos.

No it isn’t. That is the surprise. Apple today only searches when required by a subpoena or similar legal request. Apple does not do cloud-side scanning like Amazon, Google, and others.
 
No it isn’t. That is the surprise. Apple today only searches when required by a subpoena or similar legal request. Apple does not do cloud-side scanning like Amazon, Google, and others.
Read the EULA:

“However, Apple reserves the right at all times to determine whether Content is appropriate and in compliance with this Agreement, and may screen, move, refuse, modify and/or remove Content at any time, without prior notice and in its sole discretion, if such Content is found to be in violation of this Agreement or is otherwise objectionable.”
 
Read the EULA:

“However, Apple reserves the right at all times to determine whether Content is appropriate and in compliance with this Agreement, and may screen, move, refuse, modify and/or remove Content at any time, without prior notice and in its sole discretion, if such Content is found to be in violation of this Agreement or is otherwise objectionable.”
This right here. The device may be paid for by you, but you still legally agree to everything they install on it when you accept that agreement.

Don't like it? Move on to another device or scrap smartphones altogether.

Funny how people are willing to carry around a device that has constant access to the internet and GPS, yet still care about how private they are.

You wanna be truly private and not part of that 1 in a trillion statistic? Then ditch your electronic devices and stop complaining.
 
That depends on the country you live in. My country, for example, has laws that guarantee your privacy and the inviolability of your home and private life - up to the level of constitutional guarantees. A judge could still let the police come to your place if he has a reason; Apple has no such authority and is not a government entity at all, but a business.

My country, Germany, learned this lesson the hard way through our horrible Gestapo and Stasi past: protect privacy. The Stasi even had a "personal odor/smell" database of people - I am not kidding.

The legal side will be very interesting when or if they should rollout this "new service" beyond the US.
 
Read the EULA:

“However, Apple reserves the right at all times to determine whether Content is appropriate and in compliance with this Agreement, and may screen, move, refuse, modify and/or remove Content at any time, without prior notice and in its sole discretion, if such Content is found to be in violation of this Agreement or is otherwise objectionable.”

Doesn’t mean they do it. If Apple is scanning iCloud for CSAM, how are they only reporting 265 instances last year?
Apple won’t, so far, come out and say specifically what they are / are not doing. However, the numbers don’t support a scanning claim.

For the year 2020 - CSAM reports:
Total: 21.4 million
Facebook: 20,307,216
Google: 546,704
Snapchat: 144,095
Microsoft: 96,776
Twitter: 65,062
Imgur: 31,571
TikTok: 22,692
Dropbox: 20,928
Apple: 265
 
Doesn’t mean they do it. If Apple is scanning iCloud for CSAM, how are they only reporting 265 instances last year?
Apple won’t, so far, come out and say specifically what they are / are not doing. However, the numbers don’t support a scanning claim.
The low numbers suggest to me that they truly are keeping your things private, but hey, that's just me reading into things.
 
I don't use Apple technology so they can be the moral police. I don't have anything to worry about on my phones, but... what if a few years from now they decide to do something similar to find curse words in your messages?
 
I don't use Apple technology so they can be the moral police. I don't have anything to worry about on my phones, but... what if a few years from now they decide to do something similar to find curse words in your messages?
Or.... what if... they don't?

It's best not to get caught up in "what ifs" and "maybe they could".

I trusted Apple before this and I'll continue trusting them as there's no reason for me not to.

Edit: About your curse-word thing: that would be completely separate from what we're talking about here. You're thinking of parental controls, where if a kid swore, the iPhone could catch it before it's sent and notify the parent who set up the child safety features on the kid's device. That would be completely fine in my opinion if it were an optional parental-control thing.
 
I don't use Apple technology so they can be the moral police. I don't have anything to worry about on my phones, but... what if a few years from now they decide to do something similar to find curse words in your messages?
If Apple is allowed to do this without any blowback, you can be assured that other technology providers will start down the same path.
 
Doesn’t mean they do it. If Apple is scanning iCloud for CSAM, how are they only reporting 265 instances last year?
Apple won’t, so far, come out and say specifically what they are / are not doing. However, the numbers don’t support a scanning claim.

For the year 2020 - CSAM reports:
Total: 21.4 million
Facebook: 20,307,216
Google: 546,704
Snapchat: 144,095
Microsoft: 96,776
Twitter: 65,062
Imgur: 31,571
TikTok: 22,692
Dropbox: 20,928
Apple: 265
“U.S.-based Electronic Service Providers report instances of apparent child pornography that they become aware of on their systems to NCMEC’s CyberTipline.”

So they are reporting what they find. Given the services that the above companies provide, I don't find it all that shocking that Apple's numbers are low.
 
Read the EULA:

“However, Apple reserves the right at all times to determine whether Content is appropriate and in compliance with this Agreement, and may screen, move, refuse, modify and/or remove Content at any time, without prior notice and in its sole discretion, if such Content is found to be in violation of this Agreement or is otherwise objectionable.”
But what that doesn't include is "send that content and your name to the police."

Having said that, CSAM tends to be subject to mandatory reporting laws that would override the terms of service anyway. But if this were to expand to other material besides CSAM, the question of whether Apple has the right to report you to authorities starts to get more complicated.
 