It's forever been a dream of some powerful folk that they could essentially 'kill' someone without actually ending their life, simply by disabling their digital presence. Imagine a future with no cash, where going anywhere requires a digital identity controlled by third parties; you simply wouldn't be able to live without it.
You mean like...

...China?
 
Apple cannot access encrypted stuff in iCloud without the encryption key. For anything you think it can access, either Apple holds the key or the processing happens on device.

Nobody says that stuff doesn't exist. It does. But the assumption here is wrong. This isn't a mystery (and never has been) and it's all documented (and has been for years)! This is not an "aha!" moment.
They can't access encrypted "stuff" on your phone (supposedly). But many things are easily accessible to Apple once uploaded to their servers, such as iMessages and, obviously, photos. Even if you turn off iCloud uploads for iMessage, if the person you are texting has iMessage set to upload to iCloud, then both sides of your conversation will be available for Apple to read.

Also, Apple has said a lot of things, but has gotten caught doing the opposite and had to "fix" their oopsies. So we're really just taking Apple at their word regarding privacy. There are things that seem private, and then there are things that are a blatant violation of privacy on Apple's part, and that makes it very hard to trust a company that flip-flops so hard. Apple does shady things and only addresses them when called out. That's not a good business practice for a "privacy" oriented company.
 
Statement by the EFF:

That sums up my exact concerns.

It is interesting that apparently only photos being uploaded to iCloud will be scanned. Does that mean the pervs can just keep the photos locally or use another cloud service to get around this? Seems like it will be pretty easy to defeat then.
 
We're already living in a dystopian era. We crossed the line a few years back. It's just gonna get worse, for all the good intentions.

“Dystopia: A futuristic, imagined universe offering the illusion of a perfect society maintained through corporate, bureaucratic, technological, moral, and control systems.”
It’s not the dystopia we wanted, but it is the dystopia that we deserve…

All I wanted was District 9, “They” wanted Demolition Man. Instead we pretty much have Running Man…😬😵‍💫
 
In Europe, it’s customary to take family pictures of 10-year-old girls topless on the beach. In the US, this would be considered child porn and you can go to prison for 20 years if they find a picture like that on your phone.
As someone who really liked Playboy magazine as a young guy I was extremely disturbed to find out fairly recently that they published topless images of Brooke Shields when she was only 10 years old.
 
That sums up my exact concerns.

It is interesting that apparently only photos being uploaded to iCloud will be scanned. Does that mean the pervs can just keep the photos locally or use another cloud service to get around this? Seems like it will be pretty easy to defeat then.
This is usually when some tech exec slips up under congressional testimony and admits that some weasel-worded clause and routing arrangement allows someone to look at the local copy on each handset…
 
As someone who really liked Playboy magazine as a young guy I was extremely disturbed to find out fairly recently that they published topless images of Brooke Shields when she was only 10 years old.
What’s worse is that it was her Mom’s idea.

Yet it’s still fashionable to denounce anyone who identifies high-level child trafficking as a conspiracy loon 😒
 
This literally takes a hash of the file and compares the hash to known abuse photo databases. It’s not looking at your picture nor evaluating its contents. If you change a pixel on the picture the hash will calculate to something completely different and “defeat” this technology.

People who have an issue with this have their hearts in the right place, but minds in the wrong place.
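For what it's worth, here's a minimal Python sketch of the exact-match behavior described above, using SHA-256 as a stand-in. This is not Apple's actual algorithm (a later reply notes that NeuralHash is perceptual rather than cryptographic), but it shows why an exact hash comparison would be trivial to defeat: flipping a single bit produces a completely different digest.

```python
import hashlib

# Toy stand-in for image bytes; a real photo would be megabytes.
image = bytearray(b"pretend these bytes are a JPEG file" * 100)

original_digest = hashlib.sha256(image).hexdigest()

# "Change a pixel": flip one bit in a single byte of the file.
image[42] ^= 0x01
modified_digest = hashlib.sha256(image).hexdigest()

print(original_digest)
print(modified_digest)
# The two digests share no structure: with a cryptographic hash, any
# single-bit change yields an unrelated value, so an exact-hash check
# really could be defeated by editing one pixel.
assert original_digest != modified_digest
```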
 
This literally takes a hash of the file and compares the hash to known abuse photo databases. It’s not looking at your picture nor evaluating its contents. If you change a pixel on the picture the hash will calculate to something completely different and “defeat” this technology.

People who have an issue with this have their hearts in the right place, but minds in the wrong place.


Maybe they should develop the technology for something useful and less invasive, then?

I’m sure a simple pixel change can defeat the technology entirely. The perverts have likely known about that for a while.

I’m wondering if there’s a further financial end to this, such as detecting crypto transfers?
 
That sums up my exact concerns.

It is interesting that apparently only photos being uploaded to iCloud will be scanned. Does that mean the pervs can just keep the photos locally or use another cloud service to get around this? Seems like it will be pretty easy to defeat then.
I think this is irrelevant. They are installing software on the user's device with the sole purpose of snooping on the user's data and reporting them to the authorities if they find something "suspicious". It's a corporation playing police without a warrant. This is the start of a new level of surveillance, far beyond scanning material that people have voluntarily uploaded to their own servers.
 
Oh, and in case anyone missed it:

"These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey."

That's right, your Mac too will soon scan your files without your consent and perhaps report you.
 
Oh, and in case anyone missed it:

"These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey."

That's right, your Mac too will soon scan your files without your consent and perhaps report you.
From the article, and something people should pay attention to:

"Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated."
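The threshold mechanism in that quote is the cryptographically interesting part: no single match reveals anything, and only once enough matches accumulate can the vouchers be opened. Apple's actual construction (threshold secret sharing layered on private set intersection) is more involved, but a toy Shamir secret-sharing sketch in Python shows the core property; the prime, threshold, and key below are illustrative values, not Apple's parameters.

```python
import random

PRIME = 2**61 - 1   # illustrative field modulus (a Mersenne prime)
THRESHOLD = 3       # shares needed to reconstruct; Apple's real threshold differs

def make_shares(secret, n_shares, threshold=THRESHOLD):
    """Split `secret` into points on a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0; only correct with >= THRESHOLD shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# In the analogy, each matching photo contributes one share of the
# account's decryption key.
key = 123456789
shares = make_shares(key, n_shares=10)

print(reconstruct(shares[:THRESHOLD]) == key)      # True: at the threshold
print(reconstruct(shares[:THRESHOLD - 1]) == key)  # False: below it, garbage
```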
 
From the article, and something people should pay attention to:

"Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated."
Here's a quote from the EFF statement that hits the nail on the head:

" Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor. "

And here's another one that expresses well how I feel about this:

"To say that we are disappointed by Apple’s plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security. "

To me it feels like a betrayal.
 
Here's a quote from the EFF statement that hits the nail on the head:

" Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor. "

And here's another one that expresses well how I feel about this:

"To say that we are disappointed by Apple’s plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security. "

To me it feels like a betrayal.
I agree with that sentiment from the EFF.
 
Maybe they should develop the technology for something useful and less invasive, then?

I’m sure a simple pixel change can defeat the technology entirely. The perverts have likely known about that for a while.

I’m wondering if there’s a further financial end to this, such as detecting crypto transfers?
The fact that you’re saying “less invasive” proves you either didn’t read anything I typed or didn’t understand it.
 
This literally takes a hash of the file and compares the hash to known abuse photo databases. It’s not looking at your picture nor evaluating its contents. If you change a pixel on the picture the hash will calculate to something completely different and “defeat” this technology.

People who have an issue with this have their hearts in the right place, but minds in the wrong place.
This is wrong. It's a perceptual hash that does not match only bit-identical files. From Apple's own summary:

"NeuralHash is a perceptual hashing function that maps images to numbers. Perceptual hashing bases this number on features of the image instead of the precise values of pixels in the image."

It gets worse: the hashing function uses a proprietary ML network trained by Apple. This means that it is very difficult or impossible to audit the algorithm's accuracy.
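To make the distinction concrete, here is a toy "average hash" in plain Python. It is far cruder than NeuralHash and is not Apple's algorithm, but it has the same perceptual property: each pixel is compared against the image's mean brightness, so nudging one pixel slightly leaves the hash unchanged, where a cryptographic hash would change completely.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the mean.

    Real perceptual hashes (pHash, NeuralHash) work on downscaled images
    or learned features, but share this robustness to small edits.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return int("".join("1" if p > mean else "0" for p in flat), 2)

# A 4x4 grayscale "image" (0-255 brightness values).
image = [
    [10, 20, 200, 210],
    [15, 25, 205, 215],
    [12, 22, 202, 212],
    [18, 28, 208, 218],
]

h1 = average_hash(image)

# Tweak a single pixel slightly -- the kind of edit that would
# completely change a cryptographic hash like SHA-256.
image[0][0] = 14
h2 = average_hash(image)

print(h1 == h2)  # True: the perceptual hash survives the small edit
```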
 
Somebody who is good with statistics, tell me if I’m right or wrong.

In Apple’s document here https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf they claim an account has only a one in one trillion chance of being falsely flagged. Because there are nowhere near one trillion iCloud accounts, we can probably assume there has never been, and probably never will be, an account incorrectly flagged. But by the same token, how would they know the one in one trillion figure is accurate, when the instance has never occurred?
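One way to arrive at such a figure without a trillion real accounts: measure the per-image false match rate of the hash empirically on large test sets, then compute the probability that a single account accumulates enough false matches to cross the reporting threshold. A back-of-envelope Python version, with made-up inputs since Apple hasn't published the per-image rate (or, at announcement time, the threshold):

```python
from math import exp, lgamma, log

# HYPOTHETICAL inputs, for illustration only.
p = 1e-6            # assumed chance one innocent photo falsely matches
threshold = 30      # assumed matches needed before vouchers can be opened
n_photos = 10_000   # photos in a hypothetical iCloud library

def log_binom_pmf(n, k, p):
    """log P(X = k) for X ~ Binomial(n, p), via log-gamma to avoid overflow."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

# P(account falsely flagged) = P(X >= threshold); terms far past the
# threshold are negligible at these rates.
flagged = sum(exp(log_binom_pmf(n_photos, k, p))
              for k in range(threshold, threshold + 200))
print(f"{flagged:.3e}")
# With these made-up inputs the result is far below one in a trillion.
# The point is that the account-level number can be derived from a
# measurable per-image rate plus the threshold; verifying it doesn't
# require ever observing a false flag in the wild.
```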
 