Apple created this system because of veiled threats from (a) Republican senator(s).

If Europe doesn't want this system, I don't think Apple would force it. GDPR probably forbids it right now, but EU parliament just voted this summer to make an exception for child pornography scanning.

Can you cite proof of that? I don't recall seeing that as the driver.
Thx!
 
  • Like
Reactions: KindJamz
Microsoft PhotoDNA uses similar algorithms. This software is used by many others as well, since Microsoft donated it.

Although Apple's implementation is their own, the types of algorithms they are using weren't invented by them.

Convolutional neural networks have decades of history: https://en.wikipedia.org/wiki/Convolutional_neural_network#History

Locality sensitive hashing can be traced back to the late nineties, but a lot of work happened 2008-2012 in particular: https://en.wikipedia.org/wiki/Locality-sensitive_hashing

Hyperplane LSH, the type of LSH algorithm Apple is using, goes back 15 if not 20 years. Here is a Dutch paper from 2017 on its effectiveness at solving the near-neighbour problem: https://drops.dagstuhl.de/opus/volltexte/2017/8092/pdf/LIPIcs-MFCS-2017-7.pdf
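For anyone unfamiliar with the technique, here is a minimal sketch of hyperplane (SimHash-style) LSH in Python. It is purely illustrative, not Apple's NeuralHash code; the descriptor dimension, number of hyperplanes, and perturbation size are made up. The point is that vectors pointing in nearly the same direction fall on the same side of almost every random hyperplane and so get the same bit pattern, while unrelated vectors almost never do.

```python
import numpy as np

def hyperplane_hash(vec, planes):
    """One bit per random hyperplane: which side of the plane the vector falls on."""
    return tuple(bool(b) for b in (planes @ vec) > 0)

rng = np.random.default_rng(0)
planes = rng.standard_normal((16, 128))      # 16 random hyperplanes in a 128-dim descriptor space

original  = rng.standard_normal(128)                      # stand-in descriptor of a database image
perturbed = original + 0.01 * rng.standard_normal(128)    # the same image after minor re-encoding
unrelated = rng.standard_normal(128)                      # a different image's descriptor

print(hyperplane_hash(original, planes) == hyperplane_hash(perturbed, planes))   # almost always True
print(hyperplane_hash(original, planes) == hyperplane_hash(unrelated, planes))   # almost always False
```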

What Apple did was innovation: taking known algorithms and technologies from others and putting them to a semi-new use.
The algorithms are all similar, the way the set of algorithms is applied and parameterized is different for Apple. Again, all discussed before in past threads, not going to repeat it. I have to repeat myself all day long for my computer science and math students, don't need to do it for MR members. ;)
 
Doesn't need to be trained on CP pictures, just needs to work on similar features. https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

I'm not going to go over all this again, have done so on past threads. Probably worth it to read the leading papers on adversarial attacks that have been published.

No, it doesn't recognise similar features in a picture.

The convolutional neural network has two outputs:
1) floating point descriptors of original/perturbed pairs and
2) floating point descriptors of original/distractor pairs.

1) is for recognising photos but 2) is for not recognising similar photos.

The hash is produced by feeding the hashing algorithm all the floating point descriptors.

It is 2) which makes the CSAM Detection System poorly suited to find pictures of similar nature.

Let's say you took a picture of a police car, then moved one foot to either side and took another picture. If picture 1 was in the database but not picture 2, the CSAM Detection System would not recognise picture 2 even though it was almost the same picture. Again, because of 2).
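To make those two training outputs concrete, here is a hedged, toy sketch of a triplet-style objective in numpy. The descriptor vectors and the margin are stand-ins I invented; this is not Apple's actual training code. It only shows the shape of the objective: the original/perturbed pair is pulled together while the original/distractor pair is pushed apart, which is exactly what leaves the resulting hashes blind to merely similar photos.

```python
import numpy as np

def triplet_loss(original, perturbed, distractor, margin=0.5):
    """Toy triplet-style objective: keep the perturbed copy close to the original
    while pushing a merely similar (distractor) photo at least `margin` further away."""
    d_pos = np.linalg.norm(original - perturbed)    # original vs. lightly edited copy of itself
    d_neg = np.linalg.norm(original - distractor)   # original vs. a different-but-similar photo
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(1)
orig = rng.standard_normal(128)                     # stand-in descriptor (not real network output)
pert = orig + 0.05 * rng.standard_normal(128)       # e.g. a recompressed/resized copy
dist = rng.standard_normal(128)                     # e.g. the police car photographed one foot over

print(triplet_loss(orig, pert, dist))               # ~0: these descriptors already separate the two cases
```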
 
“Apple has also said it would refuse demands by authoritarian governments to expand the image-detection system”

Apple cannot refuse such demands if they are written into a nation’s law, so this is a worthless promise. The UK government has the power (since 2016) to compel Apple – amongst others – to provide technical means of obtaining the information they want. But, worse than that, Apple are not permitted to divulge the fact that any such compulsion order has been made. They must, by law, keep those measures secret. It’s all very very Big Brother.
More to the point, Apple already DOESN'T refuse demands by Chinese, Arab and Russian authorities now (and probably a lot of other countries), and hides behind the 'we have to obey the laws of that country.'
 
  • Like
Reactions: Philip_S and VulchR
Can you cite proof of that? I don't recall seeing that as the driver.
Thx!

“You’re going to find a way to do this or we’re going to go do it for you,” said Senator Lindsey Graham. “We’re not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion.”

 
Ran into this article, which gives some great links and summarizes it fairly well.

#1
"CSS currently relies on one of two methods for image scanning: perceptual hashing, in which an algorithm generates a hash (a numeric identifier) that functions as a digital fingerprint and produces the same hash if the image is altered in a minor way;"

#2
"and machine learning, in which a machine learning model is trained to recognize target content, even images it hasn't seen before. Both methods, the paper points out, generate false positives, may rely on proprietary technologies that limit auditing, can be subverted, and can be evaded."

#1 is the CSAM Detection System, and it's ill-equipped to be misused compared to #2.

Yet, almost everyone is concerned about #1 when they really should be concerned about #2.

Why would governments use method 1 when method 2 is already in all phones and can just be changed slightly to report its findings to the government?
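As a hedged illustration of the difference (the names, values, and the 0.9 threshold are all invented for the example), method #1 can only ever match against a fixed list of known hashes, while method #2 scores arbitrary, never-before-seen content:

```python
import numpy as np

# Method 1: perceptual hashing -- can only flag images whose hash is already in a database.
blocklist = {0x1F3A9C42}                             # hashes of known images (values invented)

def method_1(image_hash: int) -> bool:
    return image_hash in blocklist

# Method 2: machine learning -- a trained model can flag content it has never seen before.
weights = np.array([0.8, -1.2, 0.5])                 # stand-in for a trained classifier

def method_2(image_features: np.ndarray) -> bool:
    score = 1 / (1 + np.exp(-image_features @ weights))   # "how much does this look like the target class?"
    return score > 0.9

print(method_1(0x1F3A9C42), method_1(0xDEADBEEF))    # True False -- only exact known images match
print(method_2(np.array([4.0, -2.0, 3.0])))          # True -- a never-seen image can still be flagged
```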
 
If Europe doesn't want this system, I don't think Apple would force it. GDPR probably forbids it right now, but EU parliament just voted this summer to make an exception for child pornography scanning.

That exception is a temporary measure which AFAIK is considered legally flawed and unlikely to hold up in court even by the lawmakers who worked on it, who stated it was done quickly and under pressure and that there is a need for a significantly better solution which addresses privacy considerations.

If they want to legislate something that holds in court they would likely need to add an explicit clause to the GDPR, or find a way to successfully argue that the scanning is necessary, which from a legal point of view is actually difficult.
 
I went looking and was unable to find the specific language that says Apple has the right to scan files on my device that are intended to be uploaded to iCloud. Sure, you could potentially read that into it via interpretation, but the specific verbiage? Could not find that.

You won't find it because Apple hasn't implemented the feature yet.

But you'll find Apple can scan iCloud content in the iCloud agreement.
 
  • Like
Reactions: dk001
Apple has repeatedly used the constitutional argument with the US government that it can't be forced to write new code and add it to iOS to provide a backdoor around end-to-end encryption. Once the code is written and included in iOS, Apple has lost control over how a government might dictate what images it must scan for. The only solution now is to never install the code, and Apple may lose the argument against a government order compelling it to install the code in the future, because the code already exists and Apple has demonstrated the technology integrated into iOS.

So what kind of images could the government force into such a database to catch unwanted elements?

It would have to be images which are shared directly on a large number of devices to be useful.
 
#1
"CSS currently relies on one of two methods for image scanning: perceptual hashing, in which an algorithm generates a hash (a numeric identifier) that functions as a digital fingerprint and produces the same hash if the image is altered in a minor way;"

#2
"and machine learning, in which a machine learning model is trained to recognize target content, even images it hasn't seen before. Both methods, the paper points out, generate false positives, may rely on proprietary technologies that limit auditing, can be subverted, and can be evaded."

#1 is the CSAM Detection System, and it's ill-equipped to be misused compared to #2.

Yet, almost everyone is concerned about #1 when they really should be concerned about #2.

Why would governments use method 1 when method 2 is already in all phones and can just be changed slightly to report its findings to the government?
The capability to do #1 foretells the ability to do #2. Remember that what is new on phones are the chips optimised to do AI computations.

I am pretty sure those of us who pointed out the problems of Apple's proposed CSAM surveillanceware and were accused of being ignorant have the right to say 'We told you so'. And our concern that governments would take up systems like Apple's and use them for surveillance seems to be coming true.

We really do need to think about where technology is going, both with surveillance and robotics. Leaving important decisions about how technology is used to engineers won't do. We are at a crossroads now, and the decisions we make now will have an overwhelming impact on the quality of life of our descendants. Time to think more carefully about whom we elect to office.

 
  • Like
Reactions: PC_tech and dk001
That exception is a temporary measure which AFAIK is considered legally flawed and unlikely to hold up in court even by the lawmakers who worked on it, who stated it was done quickly and under pressure and that there is a need for a significantly better solution which addresses privacy considerations.

If they want to legislate something that holds in court they would likely need to add an explicit clause to the GDPR, or find a way to successfully argue that the scanning is necessary, which from a legal point of view is actually difficult.

So EU is safe for this feature. That's good news, right?
 
Doesn't need to be trained on CP pictures, just needs to work on similar features. https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf

I'm not going to go over all this again, have done so on past threads. Probably worth it to read the leading papers on adversarial attacks that have been published.

It seems like a cop-out.

The generation of floating point descriptors for original/perturbed pairs by the convolutional neural network, which are then fed into the hashing algorithm, is exactly why NeuralHash is poor at recognising similar images.

You seem to think otherwise.
 
  • Like
Reactions: dk001
So EU is safe for this feature. That's good news, right?

Yes, as far as I understand, but it's clear there is some political push to try to erode the privacy protections or introduce exceptions for some types of surveillance. The first attempt was flawed and likely ineffective, but I'm sure there will be other attempts.

That's why the more coverage and backlash these kinds of initiatives get, the better, even if the initiative currently might not impact your jurisdiction directly.
 
Good.

If Apple want to promote themselves as privacy focused, they deserve every bit of criticism for this ridiculous idea.

They brought it on themselves.

We know already; these researchers know; everybody in the world knows the score. They also know Apple postponed it without stating anything further due to the already 'more than enough' backlash!

Yet still none of them care to say anything about the how, the why, and the lack of communication from Amazon, Microsoft, Facebook, and Google on this very same topic.

I find that to be like paid agents out to level Apple, turning their focus on it simply as an excuse. Maybe researchers should be like scientists, in that they too should follow the scientific method and not exclude observation of existing parties.
 
  • Sad
Reactions: dk001
The use of an image blaring "CHILD SAFETY" as the featured image for this article on MacRumors shows where the sympathies of the site's editorial staff lie. Not good, folks.
 
The algorithms are all similar, the way the set of algorithms is applied and parameterized is different for Apple. Again, all discussed before in past threads, not going to repeat it. I have to repeat myself all day long for my computer science and math students, don't need to do it for MR members. ;)

I haven't seen any discussion about the distractor generation.

But even Microsoft's PhotoDNA, just like Apple's CSAM Detection System, is poor at detecting similar images.

The purpose of the two systems is twofold:

1) Detect images which are exact copies (or close derivatives) of the images in the CSAM database
2) Not detect images which are similar (and even dissimilar)

It's 2) which makes the system such a poor candidate for misuse when there already exist technologies on the iPhone that do detect similar images.
 
  • Like
Reactions: nt5672
No, it doesn't recognise similar features in a picture.

The convolutional neural network has two outputs:
1) floating point descriptors of original/perturbed pairs and
2) floating point descriptors of original/distractor pairs.

1) is for recognising photos but 2) is for not recognising similar photos.

The hash is produced by feeding the hashing algorithm all the floating point descriptors.

It is 2) which makes the CSAM Detection System poorly suited to find pictures of similar nature.

Let's say you took a picture of a police car, then moved one foot to either side and took another picture. If picture 1 was in the database but not picture 2, the CSAM Detection System would not recognise picture 2 even though it was almost the same picture. Again, because of 2).
So then a pedophile could edit a few aspects of a CSAM photo and evade the system? If so, this will just lead to an arms race between image-editing pedophiles and those providing hashes, leading to increased storage requirements on the phone.
 
The capability to do #1 foretells the ability to do #2. Remember that what is new on phones are the chips optimised to do AI computations.

#2 technology is already implemented on iPhones and has been for several years.
 
"And from the looks of the idiots here at MacRumors that want this scanning to save the children I think we are all in trouble."

Ah, yes, by all means let's save "The Children" from the scanning... 4th Amendment be damned. But not so much when it comes to saving "The Children" from the full metal jackets at school... 'cos that little issue would be like touching the 3rd rail...
 
Are you sure?
Announced in August, the planned features include client-side (i.e. on-device) scanning of users' iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.
They’re talking about scanning the photos that are about to be uploaded to iCloud Photos (and apparently it’s disabled if you disable the iCloud Photo Library). Would you be happier if exactly the same photos were scanned, but the scanning was happening on Apple’s servers after the photos were uploaded?

The impression I get is that every service is already scanning, or will soon be required to scan, photos that end up on their servers for CSAM, and Apple figured it improved users' privacy if that required scanning happened client-side before the photos were uploaded rather than server-side after they're uploaded.

Baking it into the OS makes it harder for a government to come along and ask for additions - it’d mean changing iOS rather than just changing a script that runs on the server. Seems like Google and others that do it all server-side would be easier for a government to arm-twist into scanning more, since they can just change one script that customers never see - and security researchers / privacy advocates can’t analyze - rather than having to add code to the OS that then needs to be updated on every user device.
 
So then a pedophile could edit a few aspects of a CSAM photo and evade the system? If so, this will just lead to an arms race between image-editing pedophiles and those providing hashes, leading to increased storage requirements on the phone.

We don't know how much editing is needed to avoid detection. Probably changing to black and white, changing colours, hue, or contrast, and some minor cropping won't avoid detection. Maybe even mirroring is supported.

But yes, adding enough new elements to the image is a way to avoid detection.

It's not as big a problem as you might think when it comes to reducing the spread of CP through iCloud, but it clearly illustrates why it's such a stupid system for governments to use to catch "unwanted elements".
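As a rough illustration, here is a toy average-hash sketch in Python (using Pillow and numpy). This is not NeuralHash, and the gradient image just stands in for a real photo, but it shows the general behaviour of perceptual hashes: a brightness/contrast tweak barely moves the fingerprint, while pasting a large new element into the picture changes it enough to break the match.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageEnhance

def average_hash(img, size=8):
    """Toy perceptual hash: grayscale, downscale, threshold at the mean -> 64-bit fingerprint."""
    pixels = np.asarray(img.convert("L").resize((size, size)), dtype=float)
    return (pixels > pixels.mean()).flatten()

def hamming(a, b):
    return int(np.sum(a != b))

original = Image.radial_gradient("L").convert("RGB")              # stand-in for a photo in the database

tweaked = ImageEnhance.Brightness(original).enhance(1.2)          # minor edit: brightness change
edited = original.copy()
ImageDraw.Draw(edited).rectangle([0, 0, 200, 200], fill="white")  # major edit: a large new element

print(hamming(average_hash(original), average_hash(tweaked)))     # small distance: still a match
print(hamming(average_hash(original), average_hash(edited)))      # much larger distance: match is lost
```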
 
  • Like
Reactions: nt5672
We know already; these researchers know; everybody in the world knows the score. They also know Apple postponed it without stating anything further due to the already 'more than enough' backlash!

Yet still none of them care to say anything about the how, the why, and the lack of communication from Amazon, Microsoft, Facebook, and Google on this very same topic.

I find that to be like paid agents out to level Apple, turning their focus on it simply as an excuse. Maybe researchers should be like scientists, in that they too should follow the scientific method and not exclude observation of existing parties.

Sorry, but I am not seeing your point. None of the aforementioned companies do CSS.
 