You are wrong. Derivative and similar are, in the grand scheme of things, the same when the number of source images is high enough. In Apple's white paper they literally show how they match a single-colour image to a grayscale image. This information is used in the AI training process, which consists of approximately 200,000 images. You do understand that the whole CSAM database which Interpol uses has over 2.7 million images. Do you honestly believe that A) Apple is only looking for 200,000 images because those are the images that were used to train the AI? Tell me why they would only look for such a small number of images and leave out 2.5 million. B) Or that after 200,000 child porn images the AI has enough information to recognise the other 2.5 million images as well? C) This would mean the AI can recognise and correctly identify material outside the training material. D) Why wouldn't they search for child porn outside the base set? This might actually save children. With adjustments to the NeuralHash threshold this could easily be achieved.
Reread the white paper... you really don't understand it at all. Also, if you used the full 2.5 million as the training dataset, then you'd overfit your network. The goal is to be able to catch near-matches, not just kinda-similar. You might also benefit from some basic deep RNN training if you're going to respond with nonsense as if it's fact.
 
Because if Apple wants to start doing needle-in-a-haystack searches on iCloud, they pay the price of having no probable cause, and that's fair.

But this is in the context of governments pressuring Apple to do this.

Are you saying there are governments who are powerful enough to force Apple to do this and yet have to abide by probable cause?
 
How does any layman know what's in the database? Who decides? Are Sally Mann's photos in there? If so, how many of them?
The decision of what to include is made by NCMEC and presumably either its European counterpart or the UK's Internet Watch Foundation. Apple will only include images that appear in at least two databases; this way, Apple can avoid including non-CSAM images, since it is very unlikely that a non-CSAM image would appear in more than one database.

This should ensure that the images are actually CSAM.

Apple then creates a database by merging the two CSAM databases and installs it with the installation of iOS 15.

Below is Apple's explanation of how that database will be secured:

The perceptual CSAM hash database is included, in an encrypted form, as part of the signed operating system. It is never downloaded or updated separately over the Internet or through any other mechanism. This claim is subject to code inspection by security researchers like all other iOS device-side security claims.

Since no remote updates of the database are possible, and since Apple distributes the same signed operating system image to all users worldwide, it is not possible – inadvertently or through coercion – for Apple to provide targeted users with a different CSAM database. This meets our database update transparency and database universality requirements.

Apple will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature.

Additionally, users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the Knowledge Base article. That the calculation of the root hash shown to the user in Settings is accurate is subject to code inspection by security researchers like all other iOS device-side security claims.
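As a rough sketch of the kind of check that paragraph describes (the file path, file format and the exact root-hash construction below are assumptions, since Apple has only said that a root hash will be published), comparing a locally computed digest of the shipped database against the published value might look like this:

```python
import hashlib

# Hypothetical path and placeholder hash value -- Apple has not published
# the on-device file location or the exact root-hash construction.
DB_PATH = "/path/to/encrypted_csam_hash_db.bin"
PUBLISHED_ROOT_HASH = "0123abcd..."  # value from the Knowledge Base article

def compute_root_hash(path: str) -> str:
    """Hash the encrypted database blob shipped with the OS."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if compute_root_hash(DB_PATH) == PUBLISHED_ROOT_HASH:
    print("On-device database matches the published root hash.")
else:
    print("Mismatch: the database differs from what Apple published.")
```

The point is only that the check is reproducible: anyone can hash the shipped blob and compare it with the published figure, which is what makes the "same database for everyone" claim auditable.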
 
You are wrong. Derivative and similar are, in the grand scheme of things, the same when the number of source images is high enough. In Apple's white paper they literally show how they match a single-colour image to a grayscale image. This information is used in the AI training process, which consists of approximately 200,000 images. You do understand that the whole CSAM database which Interpol uses has over 2.7 million images. Do you honestly believe that A) Apple is only looking for 200,000 images because those are the images that were used to train the AI? Tell me why they would only look for such a small number of images and leave out 2.5 million. B) Or that after 200,000 child porn images the AI has enough information to recognise the other 2.5 million images as well? C) This would mean the AI can recognise and correctly identify material outside the training material. D) Why wouldn't they search for child porn outside the base set? This might actually save children. With adjustments to the NeuralHash threshold this could easily be achieved.

1. I haven't found any figure directly from Apple for how many images were used to train the convolutional neural network.

2. Again, I haven't found any figure directly from Apple for how many images will be in the hash database in iOS. But if you look at the bottom of page 6 of the technical summary, you will see that there is one hash per image and these are put into a blinded hash table. Detection only occurs against this table, using the hash of the user's photo as an index into it. This shows that they will only check against hashes in this table.
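A minimal sketch of that lookup idea, with the cryptographic blinding deliberately left out (the real protocol blinds the table entries and uses private set intersection, so the device never learns whether a photo matched); the only point here is that a photo's hash is used solely as a key into a fixed table of known hashes:

```python
# Simplified membership check: the known-image hashes form the table,
# and a photo's hash is only ever used as a lookup key into it.
# The blinding/PSI cryptography of the real system is omitted.

known_hashes = {
    0x1A2B3C4D,  # placeholder NeuralHash values for known images
    0x5E6F7A8B,
}

def is_known(photo_hash: int) -> bool:
    # Anything whose hash is not already in the table can never produce a hit.
    return photo_hash in known_hashes

print(is_known(0x1A2B3C4D))  # True: the hash is in the table
print(is_known(0x99999999))  # False: hashes outside the table are never matched
```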

A) Apple hasn't said at all that they're using Interpol's database. One reason they won't be looking for all images might be space restrictions on the device. Another might be that they only include the images seeing the most active distribution at this time.

B) The NeuralHash is designed to only find photos that match a known list. From the technical summary: "Apple does not learn anything about images that do not match the known CSAM database" and "The hashing technology, called NeuralHash, analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value."

Nearly identical is a far cry from similar.
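To make "nearly identical" concrete, here is a hedged illustration (the hash values and image bytes are placeholders, not real NeuralHash output): an ordinary cryptographic hash breaks as soon as a single byte changes, whereas a perceptual hash is built so that a resized or transcoded copy yields the same value, and the match is still a plain equality check rather than a loose similarity score:

```python
import hashlib

def exact_match(image_a: bytes, image_b: bytes) -> bool:
    """Cryptographic hashing: any change to the bytes breaks the match."""
    return hashlib.sha256(image_a).digest() == hashlib.sha256(image_b).digest()

def neuralhash_match(hash_a: int, hash_b: int) -> bool:
    """Perceptual hashing as described in the summary: a resized or transcoded
    copy is designed to produce the *same* hash value, so the comparison is
    still plain equality on the hash, not a fuzzy 'looks similar' score."""
    return hash_a == hash_b

original = b"...jpeg bytes of a photo..."
resized = b"...the same photo, re-encoded at a different size..."

print(exact_match(original, resized))            # False: the bytes differ
print(neuralhash_match(0x1A2B3C4D, 0x1A2B3C4D))  # True: both copies map to one value
```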

C) They could create an algorithm that recognises images of a similar nature. Apple has already done so for the Photos app, and now also in the Messages app.

Apple is using the convolutional neural network to create multi-dimensional, floating-point descriptors for original/distractor pairs, which is to say they train the AI not to match merely similar images. This is described in the technical summary.

The AI is trained to provide two properties: 1) finding exact and nearly identical images, and 2) not finding images which are merely similar.
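A hedged sketch of what that training objective generally looks like (a standard contrastive loss on descriptor pairs; Apple has not published its actual loss function or code), where original/perturbed pairs are pulled together and original/distractor pairs are pushed apart:

```python
import numpy as np

def contrastive_loss(desc_a: np.ndarray, desc_b: np.ndarray,
                     same_image: bool, margin: float = 1.0) -> float:
    """Generic contrastive loss over floating-point descriptors.

    same_image=True  -> original vs. a perturbed copy: pull descriptors together.
    same_image=False -> original vs. a distractor (merely similar image):
                        push descriptors at least `margin` apart.
    """
    d = float(np.linalg.norm(desc_a - desc_b))
    if same_image:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Toy descriptors; the real ones come out of the convolutional network.
original = np.array([0.90, 0.10, 0.30])
perturbed = np.array([0.88, 0.12, 0.29])    # e.g. a resized/transcoded copy
distractor = np.array([0.20, 0.70, 0.50])   # a different but similar-looking image

print(contrastive_loss(original, perturbed, same_image=True))    # near zero: copies match
print(contrastive_loss(original, distractor, same_image=False))  # penalised while too close
```

Trained this way, near-identical copies land on the same descriptor (and hence the same hash), while merely similar images are actively pushed away from it.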

D) It would increase the number of false positives dramatically, I imagine. It would create many more situations where Apple had to decide whether something is child pornography. By only chasing the worst of the worst, they reduce this problem to a minimum. It's an extremely pragmatic approach.

Another point is to stop child pornography from spreading. Material with little or no spread might be set aside until the material which is shared extensively has been contained.
 
if you used the full 2.5 million as the training dataset, then you'd overfit your network.
Hm. Not sure, but since the network is not supposed to generalize, only to find images in the training set, wouldn't overfitting kind of be a desired feature?
 
1. Apple routinely updates its OS so that regularly updating this database and pushing it to users is trivial.

2. Apple does not have access to the original database, only the hashes. If the hash for a photo of BLM was added, then Apple wouldn't have a clue and would just push it to users.

2. They would need to provide at least 30 images.

Also, how would that help? Are there certain iconic images which people who support BLM have in their photo library? Are there more than 30 of them?
 
Didn't note that, but I definitely wouldn't support that either, unless there is an option to fully disable it.

I don't think there is and it captures the entire cabin, not only the driver.
You can of course turn off sharing data with Tesla, just as you can with Apple.
 
macOS currently does both: generate a face hash very accurately, and at the same time characterize abstract structures (e.g. grain fields). Apple is very good at this.

I agree, so why aren't people up in arms about these features?
 
You mean like when Apple forced people who do not use iCloud to upload their photos without their consent, by enabling iCloud photo stream by default (without asking the user's permission)?

It doesn't matter how you try to twist it. Apple spent all those years pushing a very specific message about privacy.

If you do not mind, please answer this. Why do you think Apple should protect children from child pornography? Do you think it's Apple's role?

If yes, should it also protect them from searching other topics, like suicide, guns, etc.? I would really appreciate it if you would answer those questions.
They will not answer those questions because they can't. They know Apple's role in society is not to police the world. They know that role should fall on the shoulders of society itself and, when things fall within the scope of the law, on the police. But they cannot bring themselves to admit any of this, because on the internet, winning and losing arguments with complete strangers is akin to life and death, for some inexplicable reason.
 
Today I spoke to a family friend who happens to be a lawyer specialising in IT law and who deals with big companies all the time. He said that, to him, it looks highly unlikely that this new general scanning of user devices would be permissible under European law. However, he said some European divisions of US companies would be permitted to use US standards for their European businesses as well. So it's not yet clear to him whether Apple could practically roll this out in the EU.

True, but just last month the EU parliament voted for a three-year exception to the GDPR (data protection) when it comes to child pornography.
 
I agree, so why aren't people up in arms about these features?
Because everyone is chemically addicted to their devices. And when you're chemically addicted to anything, you either come up with your own rationalization to protect your ego, or you're forced to accept a harsh reality.
 
What is interesting is that the protestations from Apple's spokesperson that it could not be used for anything else don't seem to be borne out by the concerns of a significant number of Apple employees, who, like users, are concerned that it's a very dangerous move for Apple.

I have seen a few articles where Apple suggests that having their proposed software on over a billion devices makes privacy safer for users?

That's ridiculous; it just means there is a backdoor waiting to be used for something else. If employees fear it themselves, the idea that it can't happen isn't borne out by their fears either.

That argument about protecting privacy even more is unfathomable, as they've been assuring everyone it's only hashes anyway. If the system were policed via iCloud, a billion-plus users wouldn't have the overhead on their personal computers in the first place, which is how it should be!

The concern now must be that the decision to put it on personal devices rather than on iCloud may say something different, something not safeguarding privacy at all.

Keep it on iCloud and there's no argument. Keep it out of the software on devices Apple does not own and there's no argument, as Apple can do what they wish on their own servers, and at least users will know. But under no circumstances should operating systems use computing power users have paid for, and electricity users have paid for, on computers they have paid for. Logistically, the idea that individual software on a billion-plus devices spread over many continents is safer or more efficient than servers on iCloud seems nonsense.

Many will hold the opinion, rightly or wrongly, that Apple wanting all users to carry the burden on their own equipment is just the starter, as that software can be amended to do all sorts of things; hence the concern of employees.

They know that Apple has a free pass to change operating systems, as they can bypass System Integrity Protection, allowing Apple to completely change the parameters of the software, change the targets, copy unique system identifiers, and so on, all initially in the name of protecting children: a very emotive subject, sadly perhaps chosen for a reason other than altruism, and one which may put obstacles in the way of organisations mandated by democratic governments that are trying to eradicate child pornography.

But to suggest that putting this operating system on billions of devices Apple does not own in some way protects users' privacy seems perverse.

I have to say this situation has genuinely sickened me, and I feel rather betrayed that Apple has cast aside its platform of safeguarding privacy, and is even doing so in the most damaging way, via the operating system rather than their own servers; if it's anonymised as Apple suggests, then doing it on their servers makes so much more sense. Doing it on a billion-plus devices raises much more significant concerns, where access to system information, unique identifiers and so on makes any idea of safeguarding privacy untenable, and some users, like myself, would not be allowed to use Apple equipment with that sort of software at the front end rather than off the device.

Apple appears to have gone from privacy hero to privacy zero in a matter of a few days, and those who have pointed the finger at Facebook, Google etc. now see Apple doing what those companies didn't even dare do: having USERS' OWN EQUIPMENT interrogate users' material, with no choice given. If you know it's via iCloud and the software runs on iCloud, then you know that is the extent of the situation, without the potential overhead on your own kit, your own electricity and your own processing power that you paid for, let alone Big Brother put on your own machine where you have NO choice, since no doubt, as with any security update, you will have no choice but to update. And you don't have to use iCloud, where, by the way, I'm sure every paedophile out there has by now made other arrangements, making the whole situation a rather damaging farce.


I will be surprised if Mr Zuckerberg doesn't gloat over this, or if Epic doesn't turn the tables in the court case over it, as Apple suggests apps need to go via Apple to protect users' privacy and has even thrown off some apps that apparently didn't. I'd be surprised if Intel doesn't jump on the bandwagon over it too, along with all the others Apple has resisted in the past in the name of not compromising users' privacy.

I've been a great admirer of Apple and a user of Apple kit for decades and decades, seen so many beneficial changes and innovations from Apple, been an ambassador for Apple products, and even helped thwart one of the first multi-million-dollar frauds for non-existent Apple kit advertised on eBay, back when there wasn't proper cross-border cooperation, liaising with Apple, Western Union, the UK fraud squad, Italian authorities and US agencies, and certain services in the UK, the USA and Europe (all mirrored to government via liaising with an MP, so it's verifiable). Perhaps I'm taking it to heart too much. I wish Apple well and hope it finds itself again, as it seems to have got lost, lost its fundamental values. Really sad. Perhaps it's the way of the world, but it's not the world I want to be in. It's not the free world I want for my grandchildren, and whilst I applaud real efforts to clamp down on child abuse and child pornography, I don't believe this does so, and the downside is much, much greater than the upside.
 
Nope - don't have access to their code or 100% understand every last detail of the technology (just like most of the software and tech on the iPhone already). Do YOU? YOU'RE the one accusing them of wrongdoing, not I, so the onus is on YOU to prove so or at least provide some evidence. In the real world, we don't convict and hang people based on irrational suspicion fueled by paranoia, at least not in the civilized parts of the world.
I don't have access to their code either, but I'm a software engineer with over 4 decades of experience. I also know corporate gobbledygook when I see it.
 
One question I have is, who is pressuring Apple to implement this? Governments? Activist groups?

This doesn't sound like a hornet's nest Apple would want to kick willingly unless they were being pressured to. This whole CSAM thing is basically throwing years of their "privacy first" culture down the drain. There must be some entity pushing hard for this.

And if so, what other motivations do they have? It's odd, because who saves porn (illegal or otherwise) to their photo library on their phone? Your phone's photo library is meant for photos you took, not stuff downloaded from the Internet. Can you imagine swiping through photos to show someone something and having your porn show up?

It just all seems so suspicious.
 
Nope. The chances of a mistake in flagging your account are 1 in a trillion, and only visual derivatives of matching photos are seen. If Apple never sees the original photos, it's factually NOT a total invasion of your privacy, even if all the photos in your photo library are matched.
That one in a trillion figure is pretty interesting. If they were doing a straight cryptographic hash, you could derive that figure analytically and be pretty confident that it's correct. However, by their own admission, that's not what they're doing. They are instead using a neural hash: essentially doing an image analysis using a proprietary technology and deriving a hash which expresses feature similarity to known CSAM. This is how they catch images that have been cropped or slightly altered. By its very nature, this sort of hashing doesn't permit the same sort of error probability calculation as a cryptographic hash. The only way I can see to state a figure like this with confidence is if they tested against substantially more than a trillion actual images (and I'd restrict that to images of people, to be extra sure). This is, to put it mildly, highly unlikely. I think some PR guy pulled what he viewed as a really big number out of a convenient orifice and just went with it. I'd be delighted if Apple published the analysis that proves that wrong, but I suspect they would already have done so if they could.
 
Weird, because I haven’t seen a single coherent explanation of why it’s a problem if your own device scans your photos for child porn, and only does so if you are trying to upload onto apple’s servers, and only produces information to Apple if you have at least thirty child porn photos that you are trying to upload.
It’s a problem because it is going through your private information. Whether it’s a person or an algorithm they are still looking at your private information. They have no business going there ever.

It's no different than them having a camera on you in the shower or in bed that scans to see if you are breaking any laws, and then gets an Apple employee to check if something is found, 'just in case'.

It's also no different from them using microphones to listen to your private conversations to detect if you break the law.

There is a book called 1984 that describes exactly this. This is nothing more than policing thoughtspeak.
 
You'd need to have an exact photo (or 30 of them) of CCP opposition leaders (not just photos containing those CCP opposition leaders, but those particular photos). Unless you're the one who took those photos and then disseminated them widely enough that China saw them and then forced Apple to add the hashes to the database to match against, you wouldn't get arrested. But seriously, the what-ifs are getting more and more far-fetched.
We don't actually know that, because we don't have details of the proprietary neural hashing. We know that it involves determining if images are "similar", because it supposedly works even if images are cropped or individual pixels are changed. Were that not the case, it would be ridiculously easy for even the densest of CSAM aficionados to circumvent.

The question of what "similarity" means is a big issue in the AI/ML world, where facial recognition systems have proven to have significant biases (for example, they do very poorly with members of some ethnic groups). Images that some AI/ML system "thinks" are similar may not be similar at all to a human being (hence the human verification step before Apple risks considerable legal liability by reporting an innocent image to the authorities in the US).
 
It took a very long time for one of the thousands to stand up. Very long for a democratic free nation. That is my concern, and that makes others so calm.
A whistleblower in a government potentially faces the government denying their claims about material that is often deemed classified, along with the government putting them in jail and possibly executing them.

Company employees who are whistleblowers could face denial from the company, but the technology isn't classified by the government, and if the claims were true they could be exposed much more easily through the legal system. Plus, exposing a company's lies doesn't come with the penalty of treason, just the possibility of never getting to work in that industry again. Different types of de-motivators. Neither is great, but whistleblowing on your government is, in my opinion, much more dangerous.

Also, if a government gets caught doing illegal stuff, the people involved are usually the ones who pay; when a company does something illegal, the whole company is penalized. So if you're Apple, from a brand perspective you would probably not make these types of decisions.

You're right that it sadly takes a long time for these things to get exposed. However, when they are exposed, they affect governments and businesses differently. I just think that if they were doing something illegal, it would eventually get exposed, and that just isn't a risk I think they would take.
 
No, I'm saying all of your photos have been scanned since day 1 of iPhone. Granted, they were scanned for different reasons (applying effects, indexing, etc.), but they've been scanned since day 1.
There's a significant difference between "identifying pictures of people and collecting them in an album for your convenience" and "flagging material in order to report you to the authorities".
 
I just think some of this stuff is creepy. Imagine a 12-year-old gay boy seeing a nude on the internet and his parents getting notified about it. As for the CSAM scanning, even one person getting wrongfully flagged in 100 years would still be too much. I would sue Apple if I knew I had been flagged.
 
The only way I can see to state a figure like this with confidence is if they tested against substantially more than a trillion actual images (and I'd restrict that to images of people, to be extra sure).

Well no, it's 1 in a trillion for mistakenly *FLAGGING* an account. The chance of ONE unrelated image matching an image in the CSAM database is much higher than that, so they set the match-count threshold high enough that it effectively becomes a 1 in a trillion chance before an account is mis-flagged.
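As a back-of-the-envelope illustration of that threshold argument (the per-image false-match rate and library size below are invented for the example; Apple has not published the real numbers), the binomial tail shows how requiring 30 matches collapses the account-level probability:

```python
from math import comb, log10

p = 1e-6            # assumed per-image false-match probability (made up for illustration)
n_photos = 100_000  # hypothetical library size
threshold = 30      # Apple's stated match threshold

# P(X >= 30) for X ~ Binomial(n_photos, p). Because n_photos * p is tiny compared
# with the threshold, the tail is dominated by its first (k = 30) term, so that
# term is a good estimate of the whole account-level false-flag probability.
log10_tail = (log10(comb(n_photos, threshold))
              + threshold * log10(p)
              + (n_photos - threshold) * log10(1 - p))
print(f"P(account falsely flagged) is roughly 10^{log10_tail:.0f}")
```

With these invented inputs the result is around 10^-62, which shows it is the threshold, not the per-image hash accuracy alone, that the account-level figure rests on; whether Apple's real per-image rate actually supports "one in a trillion" is exactly what the post above questions.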
 