Yikes. Reading the Technical Summary yields masterpieces of double-talk such as:


"Nearly identical" doesn't mean "identical". If an image that is only nearly identical can generate the same number, then the number isn't "a unique number specific to that image". If you've encountered hashes as a way of verifying the authenticity of downloads or while reading about blockchain, that's not what is happening here. OK, they're talking about images that differ in size and quality so maybe you could call that "nearly identical" but that "nearly" makes a huge difference in the likelihood of a false match.


OK, so let's just trust that Apple have read about Sally Clark and understand the difference between independent events (tossing a fair coin) and possibly correlated events (e.g. if one of your photos triggers a false match, how likely is it that there will be other "nearly identical" photos in your collection?) and haven't just multiplied the probability of a hash collision by the number of matches (... which would work but for that pesky "nearly").
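To put a number on why that distinction matters, here is a toy binomial calculation in Python (my own illustration with made-up numbers, nothing from Apple's published math): if every photo really were an independent trial, the chance of hitting a reporting threshold of k false matches would be a simple binomial tail. The moment photos are correlated (burst shots, edits, "nearly identical" copies), that calculation stops being valid, which is exactly the worry.

Code:
from math import comb

def prob_at_least_k(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k false
    matches in a library of n photos, ASSUMING every photo is an
    independent trial with per-photo false-match rate p."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# Purely illustrative numbers -- not Apple's actual rates or threshold.
print(prob_at_least_k(n=20_000, p=1e-6, k=5))
# Burst shots and near-duplicate edits are NOT independent trials, so a
# real library can blow straight past this estimate.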


Which is not the same as "reviews each report to confirm that the match really has found CSAM and, if so, disables the account and sends a report to NCMEC". If that's what they mean, why not say it clearly?


and


...so ignore the technicalities (which aren't technical enough to recreate and critique the process) and focus on how terms like "number unique to the image" or "identical" have gradually morphed via "nearly identical" and "perceptually and semantically similar" into "visually similar"... and that we're suddenly talking about analysing the features of the image (which is precisely what some people here are saying isn't happening "because hash").

Then we follow up with the truly impressive and reassuring demonstration that a colour picture of a palm tree generates the same hash as exactly the same image converted to monochrome but a completely different cityscape (with nary a palm tree in sight) generates a different hash. Wow. Anybody reading this critically would be asking "what about a different picture containing palm trees, or maybe a similarly composed picture of a cypress tree? How about some examples of cropped/resized images which couldn't be spotted by simply turning the image to B&W before hashing?" Maybe the system can cope with that - if so, why not show it rather than a trivial Sesame Street "one of these three things is not the same" example?

I'm not questioning whether the technology makes a good effort at a very difficult task (matching images without being fooled by inconsequential changes) but the summary reeks of "positive spin" and avoiding the difficult questions: and for any technology like this the #1 question has to be "what are the dangers of a false match" and is the risk justified by the rate of successful matches?

...and will people please, please stop saying "it's not scanning your images, it's only checking hashes" - that's a distinction without a difference even before you replace "hashes" with Apple(R) NeuralHashes(TM).
It would be hilarious if someone sent a picture of the front page of the US Constitution to all members of the US Congress, and soon after all members were flagged as “paedo-suspects” due to an Apple programming error (QAnon would have a feast ;-)

Regards
 
Those of you saying this is a hash match are generally doing so incorrectly. I blame the fact that non-engineers and non-scientists only learned about hashing through E2EE and cryptocurrency. Thus, they only learned about cryptographic hashes. There’s a whole world of other hashes out there, and Apple is NOT using a cryptographic hash. Apple is using a neural network hash that’s generally called a perceptual hash.

What is a perceptual hash? Well first, it lacks two critical features of a cryptographic hash.

1) it does NOT require exact matches; it “perceives” the image’s salient features (in layman’s terms, the content) and then creates a hash of that. That hash is then run through an algorithm that computes the distance from a bad hash to your hash. The distance, i.e. how close two images must be to match, is entirely user selectable. However, to be useful, it’s generally set very wide and thus generates a high false positive rate. The point of this is that using a cryptographic hash would be pointless, because one image transform or pixel change and the match would fail…a perceptual hash still matches because the content is close enough according to the algorithm (see the sketch after this list).

2) cryptographic hashes are designed to be non-reversible, but perceptual hashes are not. They’re reversible, but only to a scaled-down, lower-quality version. This is an interesting point, as it’s why getting access to Apple’s docs, or to the other competing technology, PhotoDNA, and the hash database, is very hard and requires background checks and NDAs. Reversing an NCMEC hash would put you in possession of CSAM, and that’s an instant felony.
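
As a concrete illustration of that “distance” idea, here is a minimal sketch of the general technique in Python (my own illustration, not Apple’s actual NeuralHash code): perceptual hashes are typically compared bit by bit, and two images “match” when the Hamming distance falls below a chosen threshold. That threshold is exactly the user-selectable knob described above.

Code:
def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of bits that differ between two equal-length perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")

def is_match(hash_a: int, hash_b: int, threshold: int = 10) -> bool:
    """Declare a match when the hashes differ in at most `threshold` bits.
    Widening the threshold catches more re-encoded/resized copies of an
    image -- and more false positives."""
    return hamming_distance(hash_a, hash_b) <= threshold

# Two hypothetical 64-bit perceptual hashes that differ in only two bits.
original = 0xF0E1D2C3B4A59687
recompressed = 0xF0E1D2C3B4A59617
print(is_match(original, recompressed))  # True: "close enough" under the threshold

Real systems use longer hashes and tuned thresholds, but the trade-off is the same: the wider the distance you allow, the more variants you catch and the more innocent look-alikes you sweep up.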

The NCMEC database is 100% unaudited, because to view the images and know they are illegal content is, in and of itself, illegal. Apple can’t have a copy to compare against during the manual review because that would put them in possession of actual CSAM. That leaves two possibilities for reviewing the offending hashes. First, Apple could view your original image, which at that point isn’t known CSAM because the perceptual hash match isn’t exact, and then make a determination. Or they could use your hash, reverse it to get a scaled, lower-quality image, and then make a determination. In practice, all such systems degrade into just reporting and offer limited to no actual user protection; see Grand Jury.

Please educate yourself on the differences between the cryptographic hashes you are used to and these perceptual hashes. The false positive rates are very high. None of the experts in this AI field have put forth any plausible math that gets Apple anywhere near the claimed one-in-a-trillion rate.

FYI, there’s also a terrorism hash database which is used as well. Not widely known, but it catches up journalists and humanitarian workers documenting human rights abuses all the time. Not because their new images are exact matches for old attacks, but rather because the perceptual hash determines that they’re close enough.

It’s not that this technology could be expanded beyond CSAM; it already has. https://gifct.org/tech-innovation/
 
Do you really want me to explain why it is impossible for your personal photos (even if they were child pornography) to be part of the database of hashed images Apple is putting on your phone??

Okay...the hashed images, as clearly stated in every document Apple has shared but you have failed to either read or understand, only come from the CSAM database.

How does an image get into the CSAM database? Well, you could look that up for yourself and spend about 3 minutes reading...but okay...

It has been around for more than 2 decades and collects images from KNOWN child pornography sources, including those self-reported and investigated by the organization (imagine some poor teenager who has inappropriate images of them spread online...happens every day). Simply sharing your naked baby pics or personal sex vids/pics online is not enough for them to be purposefully or accidentally added to the database, much less, included in what Apple is checking against.

As I mention above, EVEN IF you are creating child pornography and sharing online to your friends, those images, just by being online, are not added to the database. They must be reported either by someone who has received them so the individual(s) can be investigated and prosecuted or if law enforcement happens to come across them through normal investigation.

But again, this is all clearly laid out in the 5 minute FAQ read or if you want more details on how the hashed images are compared, Apple has that online as well in a series of white papers on the technology.
We don’t believe the public facing record.
 
It would be hilarious if someone sent a picture of the front page of the US Constitution to all members of the US Congress, and soon after all members were flagged as “paedo-suspects” due to an Apple programming error (QAnon would have a feast ;-)

Regards
Q-boomers are a presumed secondary target. They ARE a threat according to the DOJ.
 
Apple doesn't care if anyone has illegal content stored in their devices. Apple cares if anyone intends to upload that content to their servers, because if they do not do anything about it, they get into trouble with the law.
Then they should scan it before saving it on their servers. NOT on the user's device.
 
So from what I understand, Apple takes the image hash and compares it to an image hash database of known ‘blacklisted’ images. Now I haven’t looked into the CSAM feature too deeply, but there are many ways of changing an image hash without ‘changing’ the image itself. For example, you could use steganography to make one simple change to the image and this would change the hash.
That's not how it works. Consider Google's reverse image search: you can flip the image, rotate it, resize it, and even slightly modify it, and it can still determine whether it's visually similar enough to be the same image. Given that the CSAM dataset is a subset of "All images on the internet", the dataset is smaller, and you can do even more computationally heavy processing to catch even more variance. Even more so considering the CPU cycles can be run overnight on your phone, and not at Apple's expense.

Ultimately this means that an image like the Tiananmen Square image, cropped, resized, embedded with a block of text, would still get flagged as wrong-think, and the CCP police would come knocking.
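
To make the contrast with ordinary hashing concrete, here is a toy difference hash ("dHash") next to SHA-256, written from scratch so the behaviour is visible (an illustration of the general idea only; it is not the algorithm Apple, Google or NCMEC actually uses). Flipping one pixel by one brightness level changes the cryptographic digest completely, while the perceptual hash, built from coarse brightness gradients, doesn't budge.

Code:
import hashlib

def dhash_bits(pixels):
    """Toy difference hash: for each row of grayscale values, emit 1 where a
    pixel is brighter than its right-hand neighbour, else 0."""
    return [1 if left > right else 0
            for row in pixels
            for left, right in zip(row, row[1:])]

# A tiny synthetic 4x5 "grayscale image" (brightness values 0-255).
image = [
    [10, 20, 30, 40, 50],
    [200, 180, 160, 140, 120],
    [5, 5, 5, 5, 5],
    [90, 100, 110, 100, 90],
]
tweaked = [row[:] for row in image]
tweaked[0][0] = 11  # change a single pixel by one brightness level

flat = lambda img: bytes(v for row in img for v in row)
print(hashlib.sha256(flat(image)).hexdigest() ==
      hashlib.sha256(flat(tweaked)).hexdigest())   # False: crypto hash flips entirely
print(dhash_bits(image) == dhash_bits(tweaked))    # True: gradients unchanged

Real perceptual hashers downscale and normalise first, and Apple's is built on a neural network, which is how flips, resizes and re-encodes can still land within the match threshold.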
 
I wonder what other spying Apple is capable of engaging in that they aren’t acknowledging because it isn’t repugnant child porn? I wonder what the NSA’s, the FBI’s, the CIA’s or any of the 17 government intelligence agencies’ involvement is? How would anyone even know? It’s not as if FISA courts don’t allow spying on Americans: all they need is a “good reason”, and the reason doesn’t even have to be true.
Theoretically, anything leaving the confines of your mind/body/soul…..

Only a will and a reason to go around archaic concepts like “Muh constitution” are required to achieve this.
 
"Human review"

A while back I found a way/vulnerability/bad UI design that allowed another user on a Mac to access files on an external drive that was formatted to mount only with a password... without knowing the other user's password. I was mindblown at how easy it was when I accidentally stumbled upon it. I'm no computer pro, but I know my way around a Mac and worked at the Apple Store around 2010.

I went to the security/bug bounty page and did all the contact stuff, made videos, was in talks with someone from Apple Security. The whole deal lasted several weeks, lots of emails and video uploads...

And then, when I had finished helping them with all that I knew to show, the replies stopped and I didn't get the first dime or dollar, even though the category fit their 'up to $250,000' level on the bounty page.

Hopefully these "human reviews" will go better than my interaction with Apple Security.
Human reviewers can be outsourced to ‘competent outfits‘ manned by locals situated in Bangalore India.

The odd thing is that it might even be less prone to racism than a similar selection of ‘modern’, similarly low-paid US citizens. Not much, but maybe significant, when push comes to shove ;-)

Smile.
 
Human reviewers can be outsourced to ‘competent outfits‘ manned by locals situated in Bangalore India.

The odd thing is that it might even be less prone to racism than a similar selection of ‘modern’, similarly low-paid US citizens. Not much, but maybe significant, when push comes to shove ;-)

Smile.
We have allies we can trust!
 
You had an Apple I?
No, I missed out on the Apple I, and it was October 1976 when I first got interested, but I had the Apple II and many of the other machines that followed. In fact I was originally involved in technology, especially media related: I worked on many phototypesetters, and in doing that was referred to the lads who were the ones to watch... Apple. There were clear limitations, especially with graphics, but it was interesting nonetheless, and so I had an Apple II to play with.

For me Steve was the catalyst that made Apple what it is today, or sadly perhaps what it was before this crazy announcement on CSAM.

It wasn't just Steve though, as Chuck Geschke also played a big role in the development of a basic computer into a very usable computer system with decent graphics. I spoke to Chuck too, a guy really passionate about what he was doing, working on a little page description language called PostScript. Funnily enough, the spark of thought for that came in 1976, and ironically from the same company whose work Apple used as a basis for its computers: Xerox PARC.

The real breakthrough for Apple came after Steve left, as his vision was for a more powerful and usable computer. In 1985 it was clear his vision was not shared by Apple, so he left; his life was not made easy, with rumours he was sacked, although the public record seems to demonstrate that Steve intended to leave.

He set up NeXT using $11.8m of his own money, and his first computer was considered a financial failure, yet it assisted the creation of the World Wide Web via Tim Berners-Lee (although it was his boss who picked the NeXT computer), and was to herald what we now know as the Apple Mac and all the other devices that followed right up until now.

Meanwhile Apple nearly went bust during his absence, and in 1997, after Steve had come back, it took an investment of $150m from none other than Microsoft!

Perhaps I'm just an old fool reminiscing, but it's horrendous for me to see Apple squandering its ethical stance on PRIVACY and engaging in SURVEILLANCE using machines they have sold to customers.

I've had a lot of tragedy too, and nothing can hurt me more than losing my son, which is a lesson to you all. Never take family for granted; I used to work 7 days a week, taking my kids with me on occasions... I WAS A FOOL.

What I would give for just a few minutes more with my son. The day he died he took half of me with him.

In 1997 Apple's financial situation was dire, but under Steve, by 1998, Apple were back on track.

Steve used DPS (hence the link with PostScript) and object-oriented programming, and whilst his computers were never a financial success at NeXT, his operating system was to save Apple, and to this day the operating systems at Apple reflect Steve's move from Apple to NeXT.

Eventually Apple got Steve back, in 1997 paying $426m for everything NeXT, including NeXTSTEP, plus 1.5 million shares of Apple stock, and retrospectively this transformed Apple, both in terms of usability and financially.

I note how many people compare Steve with Tim, but sadly, and I'm sure Tim would agree, Steve Jobs was in a totally different league; without him coming back to Apple, it would not exist today.

I hope Apple remember some of his quotes on privacy and surveillance as this latest idea is certainly not in keeping with his publicly stated views.

Apologies for being long-winded; it's been an interesting life. I was there at the outset of computing/home computing, a systems director, mainly Wintel... with my first association with computing being a newspaper computer in a 12ft x 8ft sealed-environment room with smoke cloak and only tiny computing power.

I moved on from publishing after my views on it were confirmed, hence the revolution in newspapers and publishing, though even that only really became possible when vector graphics entered the scene.

Apple give up on this awful idea.
 
Do you really want me to explain why it is impossible for your personal photos (even if they were child pornography) to be part of the database of hashed images Apple is putting on your phone??

Okay...the hashed images, as clearly stated in every document Apple has shared but you have failed to either read or understand, only come from the CSAM database.

How does an image get into the CSAM database? Well, you could look that up for yourself and spend about 3 minutes reading...but okay...

It has been around for more than 2 decades and collects images from KNOWN child pornography sources, including those self-reported and investigated by the organization (imagine some poor teenager who has inappropriate images of them spread online...happens every day). Simply sharing your naked baby pics or personal sex vids/pics online is not enough for them to be purposefully or accidentally added to the database, much less, included in what Apple is checking against.

As I mention above, EVEN IF you are creating child pornography and sharing online to your friends, those images, just by being online, are not added to the database. They must be reported either by someone who has received them so the individual(s) can be investigated and prosecuted or if law enforcement happens to come across them through normal investigation.

But again, this is all clearly laid out in the 5 minute FAQ read or if you want more details on how the hashed images are compared, Apple has that online as well in a series of white papers on the technology.
I fully understand how all of that works. But again, you're forgetting the OTHER part of these new "features": if a teen tries to send a nude over iMessage, it identifies it (some sort of image AI), asks them if they REALLY want to send it, and if they say "Yes", it sends it to their parents.

So yes, with that technology in place, there is a lot of opportunity for false positives, or for abuse.

Regardless, I should have an expectation of privacy on a piece of equipment that I purchase.
 
Then they should scan it before saving it on their servers. NOT on the user's device.
Not if they want to implement E2EE. When E2EE for iCloud Photos becomes a reality, not even Apple will be able to decrypt the photos stored on their servers. No amount of arm twisting from any government can make Apple decrypt those photos. Users' photos will then be truly private to those who hold the keys to decrypt them. Keep in mind that Apple has to answer to the authorities that their servers are not storing illegal content, and this is Apple's method of ensuring that.

If anyone is OK with Apple scanning for such content in iCloud, then what Apple is proposing, on-device hashing of photos to be uploaded to iCloud Photos, is no different.
 
Apple doesn't care if anyone has illegal content stored in their devices. Apple cares if anyone intends to upload that content to their servers, because if they do not do anything about it, they get into trouble with the law.
This 100%.
That's not how it works. Consider Google's reverse image search: you can flip the image, rotate it, resize it, and even slightly modify it, and it can still determine whether it's visually similar enough to be the same image. Given that the CSAM dataset is a subset of "All images on the internet", the dataset is smaller, and you can do even more computationally heavy processing to catch even more variance. Even more so considering the CPU cycles can be run overnight on your phone, and not at Apple's expense.

Ultimately this means that an image like the Tiananmen Square image, cropped, resized, embedded with a block of text, would still get flagged as wrong-think, and the CCP police would come knocking.
They are called "fuzzy hashes" and Apple is neither the first nor the only one to employ them for CSAM: https://blog.cloudflare.com/the-csam-scanning-tool/
 
Apple doesn't care if anyone has illegal content stored in their devices. Apple cares if anyone intends to upload that content to their servers, because if they do not do anything about it, they get into trouble with the law.
Yes, but they could do that already. They are probably doing that anyway!
Why move it to the device? (I'm actually intrigued by that question).
 
I fully understand how all of that works. But again, you're forgetting the OTHER part of these new "features": if a teen tries to send a nude over iMessage, it identifies it (some sort of image AI), asks them if they REALLY want to send it, and if they say "Yes", it sends it to their parents.

So yes, with that technology in place, there is a lot of opportunity for false positives, or for abuse.

Regardless, I should have an expectation of privacy on a piece of equipment that I purchase.
So don't have a phone owned by someone else (your parents) that has a child's profile configured on it.
 


Apple employees are now joining the choir of individuals raising concerns over Apple's plans to scan iPhone users' photo libraries for CSAM or child sexual abuse material, reportedly speaking out internally about how the technology could be used to scan users' photos for other types of content, according to a report from Reuters.


According to Reuters, an unspecified number of Apple employees have taken to internal Slack channels to raise concerns over CSAM detection. Specifically, employees are concerned that governments could force Apple to use the technology for censorship by finding content other than CSAM. Some employees are worried that Apple is damaging its industry-leading privacy reputation.
Apple employees in roles pertaining to user security are not thought to have been part of the internal protest, according to the report.

Ever since its announcement last week, Apple has been bombarded with criticism over its CSAM detection plans, which are still expected to roll out with iOS 15 and iPadOS 15 this fall. Concerns mainly revolve around how the technology could present a slippery slope for future implementations by oppressive governments and regimes.

Apple has firmly pushed back against the idea that the on-device technology used for detecting CSAM material could be used for any other purpose. In a published FAQ document, the company says it will vehemently refuse any such demand by governments.
An open letter criticizing Apple and calling upon the company to immediately halt its plan to deploy CSAM detection has gained more than 7,000 signatures at the time of writing. The head of WhatsApp has also weighed in on the debate.

Article Link: Apple Employees Internally Raising Concerns Over CSAM Detection Plans
What a horrific job it would be to be the human who has to manually review the suspect images all day to verify them. I imagine many people would need therapy after a short time on the job.
 
Yes, but they could do that already. They are probably doing that anyway!
Why move it to the device? (I'm actually intrigued by that question).
Fun fact: it is always done on device before the upload. How do you think their systems can check for corrupted images and report failed uploads?
Source: I worked for a secure file sharing company and this was de rigueur for all uploads, regardless of which tool you used. Check byte length and hash. This is mildly different in that the system uses "fuzzy hashes" to spot "close enough" images without needing to actually be able to reconstitute the file. This is similar to how your actual physical fingerprints need to only match 12-20 markers to be considered a match without being able to recreate you from them.
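
For what that pre-upload check looks like in the traditional case, here is a generic sketch of the byte-length-plus-digest step (not any particular vendor's client code): the client computes the size and a cryptographic digest locally, ships both with the upload, and the server rejects anything that doesn't match, which catches truncated or corrupted transfers.

Code:
import hashlib
import os

def upload_manifest(path: str) -> dict:
    """Metadata a client would send alongside an upload so the server can
    detect truncated or corrupted files."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return {
        "name": os.path.basename(path),
        "size_bytes": os.path.getsize(path),
        "sha256": digest.hexdigest(),
    }

# e.g. upload_manifest("IMG_0001.jpg") -> {"name": ..., "size_bytes": ..., "sha256": ...}

A perceptual or "fuzzy" hash is computed on-device at the same point in the pipeline; the difference is only in what the hash captures.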
 
I don't understand why anyone would.

If you don't have this material, then all it's doing is wasting your CPU and battery to determine you are innocent.

If you do have this material, then I assume you don't want to be caught (and thus don't want it turned on).

🤷‍♂️
So we are now guilty until proven innocent?
 

Attachment: big-brother-1984.jpg
At this point I am not concerned about CSAM, I am concerned about what else they are doing behind the scenes that no one has detected yet. They could be scanning for other things and not reporting it to the public. Once they get caught they will just say "Oops" and pay a $500K fine and go on their way continuously breaching privacy. This is what happened with Google and FB and others.

Stallman was right.
 
Apple give up on this awful idea.

It’s not something they wanted to do.

It’s a reaction to things that are going on with external forces putting pressure on them.

If third party app stores and side loading are forced onto their devices by the criminal cartel of a16z, drw, Thiel, Musk, etc., then we are going to see a chunk of users who download apps outside of Apple's control.

Those apps would be capable of downloading highly illicit content, the type of content that appears on the darkweb.

The criminal cartel above has profited from the people who purchase and share that illicit content for a decade. They have so far gotten away with it because of gray areas of the law and lack of regulation of virtual currencies.

They don’t want to stop there. They want to flood iPhones and Android phones with their schemes and secondary payment systems because if they can be successful at that then they would become so powerfully wealthy from their investments and holdings that they would become literal oligarchs controlling everything and have tremendous access to private and financial data on millions of people.

So with that in mind Apple has to add layers of security on their devices. If users of third party app stores download illicit porn on their phones and then share those images on iCloud and iMessage, it would make Apple legally complicit. Apple would be holding child abuse on their servers.

There’s no way they can allow themselves to be put in such an expensive and immoral legal situation. So that’s why they are implementing this system.

Opening up to third party stores is going to cause major societal pains in the coming years if the criminal cartel are successful at lobbying for this change.

Only the wisest and forward looking people understand the massive downsides that will come. From criminal finance to third party apps spying deep into every corner of your lives.

So the first step Apple should take is prevent the sharing of illicit and abusive images.

The second step Apple should take is enforcing a very solid sandbox so that any apps downloaded from third party stores can’t access your photos, file manager or banking apps. They will have to use their own built-in file managers.

We are in the middle of a war. Please note this well. It’s a war between extremists and the old school classicists.
 
Yes, but they could do that already. They are probably doing that anyway!
Why move it to the device? (I'm actually intrigued by that question).
They do do it on the device. Every cloud sharing company does it. All they are moving to your device is a private ledger of hashes of known CSAM content. They are just changing the hashing algorithm from a more traditional one to a "fuzzy hash" a la Cloudflare (I've shared the link enough times; look it up if you are genuinely curious).
 
They do do it on the device. Every cloud sharing company does it. All they are moving to your device is a private ledger of hashes of known CSAM content. They are just changing the hashing algorithm from a more traditional one to a "fuzzy hash" a la Cloudflare (I've shared the link enough times; look it up if you are genuinely curious).
The question that remains in the midst of all of this outrage is what counts as “known CSAM content”… who maintains that? What is the oversight?

Any other argument is irrelevant.
 