If it can detect that a nude photo has been taken and alert a parent, who else could it alert?
How heavy was that goal post you tried to move there? The point I was responding to is that the specific, separate feature of concern applies only to child profile accounts on phones they [likely] did not purchase. Treat it the same way an adult would treat company-issued property: company owned and subject to company monitoring, meaning no expectation of privacy.
Now, if you have any evidence that this feature applies outside that clearly defined scope, then we can continue to discuss this without resorting to logical fallacies.
 
No, I missed out on the Apple I; it was October 1976 when I first became interested, but I had the Apple II and many of the other machines that followed. In fact I was originally involved in technology, especially media related: I worked on many phototypesetters, and in doing so was referred to the lads who were the ones to watch... Apple. There were clear limitations, especially with graphics, but it was interesting nonetheless, and so I had an Apple II to play with.

For me Steve was the catalyst that made Apple what it is today, or sadly perhaps what it was before this crazy announcement on CSAM.

It wasn't just Steve though, as Chuck Geschke also played a big role in the evolution from a basic computer to a very usable computer system with decent graphics. I spoke to Chuck too, a guy really passionate about what he was doing, working on a little page description language called PostScript. Funnily enough, the spark of thought for that came in 1976, and ironically at the same place Apple used as a basis for its computers, Xerox PARC.

The real breakthrough for Apple came after Steve left, as his vision was for a more powerful and more usable computer. By 1985 it was clear that vision was not shared by Apple, and he left; his life was not made easy, and there were rumours he was sacked, although the public record seems to show that Steve intended to leave.

He set up NeXT with $11.8m of his own money. His first computer was considered a financial failure, and yet it helped Tim Berners-Lee create the World Wide Web (although it was his boss who picked the NeXT computer), and it was to herald what we now know as the Mac and all the other devices that have followed right up until now.

Meanwhile Apple nearly went bust during his absence, and in 1997, after Steve had come back, it took an investment of $150m from none other than Microsoft!

Perhaps I'm just an old fool reminiscing, but it's horrendous for me to see Apple squandering its ethical stance on PRIVACY and engaging in SURVEILLANCE using machines they have sold to customers.

I've had a lot of tragedy too, and nothing can hurt me more than losing my son, which is a lesson to you all. Never take family for granted. I used to work 7 days a week, taking my kids with me on occasion... I WAS A FOOL.

What I would give for just a few minutes more with my son. The day he died he took half of me with him.

In 1997 Apple's financial situation was dire, but under Steve, by 1998, Apple was back on track.

Steve used Display PostScript (hence the link with PostScript) and object-oriented programming, and whilst his computers were never a financial success at NeXT, his operating system was to save Apple; to this day Apple's operating systems reflect Steve's move from Apple to NeXT.

Eventually Apple got Steve back, in 1997 paying around $426m for everything NeXT, including NeXTSTEP, plus 1.5 million shares of Apple stock, and in retrospect this transformed Apple, both in terms of usability and financially.

I note how many people compare Steve with Tim, but sadly, and I'm sure Tim would agree, Steve Jobs was in a totally different league; without him coming back to Apple, it would not exist today.

I hope Apple remember some of his quotes on privacy and surveillance as this latest idea is certainly not in keeping with his publicly stated views.

Apologies for being long-winded; it's been an interesting life. I was there at the outset of computing/home computing, a systems director mainly on Wintel... with my first association with computing being a newspaper computer in a 12ft x 8ft sealed-environment room with a smoke cloak and a tiny amount of computing power.

I moved on from publishing after my views on it were confirmed, hence the revolution in newspapers and publishing, though even that only really became possible when vector graphics entered the scene.

Apple give up on this awful idea.
I hope my road ahead is longer than your memories. ;-)
 
The question that remains in the midst of all of this outrage is what counts as “known CSAM content”... who maintains that? What is the oversight?

Any other argument is irrelevant.
And that is clearly and openly available information.
"the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations."
Which is the same database Cloudflare makes available to ALL of their customers for free: https://blog.cloudflare.com/the-csam-scanning-tool/
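For anyone wondering what "matching against a database of known hashes" actually amounts to, here's a minimal sketch. It uses a plain SHA-256 digest and placeholder hash values purely for illustration; the real systems (Apple's NeuralHash, Cloudflare's fuzzy hashing) use perceptual hashes that tolerate resizing and re-encoding, so treat this as a sketch of the lookup step only, not the actual algorithm.

```swift
import Foundation
import CryptoKit

// Hypothetical database of known-bad image hashes (placeholder values, not real entries).
let knownHashes: Set<String> = [
    "placeholder-hash-1",
    "placeholder-hash-2"
]

// An exact SHA-256 digest stands in for the real perceptual hash here.
func hexDigest(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

// Matching is just a set-membership test against the supplied database.
func matchesKnownDatabase(_ imageData: Data) -> Bool {
    knownHashes.contains(hexDigest(of: imageData))
}
```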
 
How heavy was that goal post you tried to move there? The point I was responding to is that the specific, separate feature of concern applies only to child profile accounts on phones they [likely] did not purchase. Treat it the same way an adult would treat company-issued property: company owned and subject to company monitoring, meaning no expectation of privacy.
Now, if you have any evidence that this feature applies outside that clearly defined scope, then we can continue to discuss this without resorting to logical fallacies.
We don’t believe it will stay within that clearly defined scope.

The evidence is the historical record: cases of abuse of big tech systems, and lies by omission.

There is zero reason to think that Apple will do EXACTLY what they say. And there is no reason to think we are getting the full story.

I understand your appeal to reason. But you are using the lack of specific evidence of malintent in this specific instance as a crutch, when other real abuses of our rights are recent and not easily forgotten.
 
Yes, but they could do that already. They are probably doing that anyway!
Why move it to the device? (I'm actually intrigued by that question).
Yes, Apple is already doing CSAM scans of photos uploaded to iCloud Photos.

My take is that Apple is using the on-device implementation as a POC to show the authorities that they have a way to ensure no illegal content gets stored on their servers. Once that point is reached, they can then implement E2EE for iCloud Photos. When that happens, not even Apple will have the ability to decrypt their users' photos in iCloud Photos, no matter what governments around the world want, because Apple just won't have the keys to decrypt them. Isn't this what end users want?

What they are proposing to do now is no different compared to when they do it on their servers. The CSAM image hash and comparison will only be done for photos that the user uploads to iCloud Photos. A safety voucher will only be generated if a hash matches one in the CSAM hash database; otherwise the photo is uploaded without the voucher. If there's no upload to iCloud Photos, no hash is computed. According to Apple, they do not scan all photos on devices. Actually, "scan" would be the incorrect word here: the hash function is only invoked for photos uploaded to iCloud Photos, so there's no scanning in this case.
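To make that flow concrete, here's a rough sketch in Swift of how I read Apple's summary. Every name in it (SafetyVoucher, perceptualHash, uploadToICloudPhotos, send) is hypothetical, not Apple API, and the hash function is a placeholder rather than NeuralHash; the only point is that hashing happens on the upload path, and a voucher is attached only when the hash is in the known database.

```swift
import Foundation

// Stand-in for Apple's encrypted safety voucher (hypothetical type).
struct SafetyVoucher {
    let matchedHash: String
}

// Stand-in for the NCMEC-provided hash database shipped with the OS (placeholder values).
let knownCSAMHashes: Set<String> = ["placeholder-hash-1", "placeholder-hash-2"]

// Stand-in for Apple's perceptual hash; NOT the real algorithm.
func perceptualHash(of photo: Data) -> String {
    String(photo.hashValue)
}

func send(_ photo: Data, voucher: SafetyVoucher?) {
    // network upload would happen here
}

func uploadToICloudPhotos(_ photo: Data) {
    let hash = perceptualHash(of: photo)              // computed only because we are uploading
    let voucher: SafetyVoucher? = knownCSAMHashes.contains(hash)
        ? SafetyVoucher(matchedHash: hash)            // match: voucher attached to the upload
        : nil                                         // no match: uploaded without a voucher
    send(photo, voucher: voucher)
}

// A photo that is never uploaded never reaches uploadToICloudPhotos,
// so no hash is ever computed for it.
```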
 
And that is clearly and openly available information.
"the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations."
Which is the same database Cloudflare makes available to ALL of their customers for free: https://blog.cloudflare.com/the-csam-scanning-tool/
I don’t believe that the CSAM criteria are exhaustive or complete, no matter what they say.

fool me once…..
 
Stop thinking about this system as something aimed at child abuse. The simplest description: government(s) specify forbidden files via a hash, and Apple must then report users holding files with those hash values to the government(s).
You're exactly right. That is what the technology does.
When someone says "Think of the children!" they're appealing to your parental instinct. Look at the false news against vaping, all in the name of the children (but that's a different discussion).
But you're correct. This is Apple looking for "forbidden files". It just HAPPENS to be CSAM files because, you know, think of the children. BUT... it could be any files. Images of a rebel flag. Images of Trump in a tutu. Whatever. The technology is there, and all they would have to do is look for a different hash...
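To illustrate that point: a hash matcher like this is completely agnostic about what the hashes represent, so only the contents of the supplied database determine what gets flagged. This is a hypothetical sketch, not Apple's code; all names and hash values are made up.

```swift
// Pure set-membership: nothing in the matcher is specific to CSAM.
func flaggedHashes(photoHashes: [String], forbiddenDatabase: Set<String>) -> [String] {
    photoHashes.filter { forbiddenDatabase.contains($0) }
}

// Same function, different database, different "forbidden" material (placeholder values).
let csamDatabase: Set<String> = ["hashA", "hashB"]
let someOtherDatabase: Set<String> = ["hashC", "hashD"]
let userPhotoHashes = ["hashB", "hashZ"]

let flaggedForCSAM = flaggedHashes(photoHashes: userPhotoHashes, forbiddenDatabase: csamDatabase)
let flaggedForOther = flaggedHashes(photoHashes: userPhotoHashes, forbiddenDatabase: someOtherDatabase)
print(flaggedForCSAM, flaggedForOther)   // ["hashB"] []
```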

Oh, and it's not an illegal, warrantless search because you gave permission through the EULA when you installed iOS 15. And sure, you could refuse to install iOS 15... until all your apps say "This requires iOS 15 or later" (like my old iPad 2 which doesn't run anything any longer). So don't try to tell me that I can always refuse to update; I really don't have a choice about updating if I want to be protected from bugs / hackers / etc.
 
Do account holders get notified when Apple reviews their photos?

Also, why this and why now? Has Apple conducted some kind of review of people's iCloud accounts and determined there's enough bad content that they should implement the technology?

Apple should encrypt the content so NOBODY, including them, can view account holders' data. I'm not condoning bad or illegal behavior; I'm saying privacy comes first.

And the "We're doing it for the children" argument is a red flag. In the end it really just means somebody is violating rights and snooping where they don't belong.
 
iPhone just works; with Android it's a fragmented, bloated mess. You have to run around in circles through pointless settings, disabling a hundred things just to set your phone up, and hope it works properly, and that's just the beginning. 🤦‍♂️
Wow, seems I have been using Android devices the wrong way!
 
Yes, Apple is already doing CSAM scans of photos uploaded to iCloud Photos.

My take is that Apple is using the on-device implementation as a POC to show the authorities that they have a way to ensure no illegal content gets stored on their servers. Once that point is reached, they can then implement E2EE for iCloud Photos. When that happens, not even Apple will have the ability to decrypt their users' photos in iCloud Photos, no matter what governments around the world want, because Apple just won't have the keys to decrypt them. Isn't this what end users want?

What they are proposing to do now is no different compared to when they do it on their servers. The CSAM image hash and comparison will only be done for photos that the user uploads to iCloud Photos. A safety voucher will only be generated if a hash matches one in the CSAM hash database; otherwise the photo is uploaded without the voucher. If there's no upload to iCloud Photos, no hash is computed. According to Apple, they do not scan all photos on devices. Actually, "scan" would be the incorrect word here: the hash function is only invoked for photos uploaded to iCloud Photos, so there's no scanning in this case.
I like this answer! Actually makes sense. No one else has come up with this idea, well done!

I think in the long term this will make iCloud more secure than any other service and also mean governments worldwide will not be able to demand access to users' content in the cloud. Which, ironically, makes Apple once again far more secure than anyone else.

This is indeed a long game. Skate to where the puck will be as they say...
 
Good, but sorry it's too late for me to stay with Apple.

I'm done with big tech. Bought a Nokia 5310 (2020) for calls and texts. That'll do.

I also have a wifi-only degoogled android for some apps when I'm at the house.

We'll see how it goes. I may take the degoogled android as the main phone in future, but for now, I'm going low-tech.
Serious question, how do you degoogle android? Probably too open ended, but I guess my main question comes from not thinking that was even possible.
 
That is correct, but a physical Apple employee (that is, a human being) will be reviewing the pictures once they get flagged. That does not sound like an AI to me.

But this has no relation to your original argument:
Pretty sure you don't want Apple to be looking/scanning/analyzing/identifying your wife's pictures.
Because those pictures aren’t in the CSAM database, they’re never flagged and thus never sent to Apple. Thus, no, Apple never looks at or identifies those pictures.
 
I think Apple forgot their employees are humans with iPhones as well. This affects them and everyone else. It's easily the dumbest move they've ever made.
Bingo. Nobody wants their photos to be scanned, including Apple employees. Keep fighting back Apple employees! In the meantime, we as consumers have to make it clear to Apple that we find this unacceptable:
https://www.apple.com/feedback/iphone/thankyou/

And don’t forget about this next week.
 
Apple employees are now joining the chorus of individuals raising concerns over Apple's plans to scan iPhone users' photo libraries for the NSA, the Federal Security Service of the Russian Federation (FSB), the Ministry of State Security of the People's Republic of China, or so-called child sexual abuse material ;):cool:
 
The question that remains in the midst of all of this outrage is what counts as “known CSAM content”... who maintains that? What is the oversight?

Any other argument is irrelevant.
It's pretty clear how that works if you take five seconds to do a Google search... or as Craig clarified in the updated story added today. It's a broad overview, of course, but an organization that has been doing this successfully for two decades, with multiple toll gates in place, should be trusted on some level, I think.

Or you can choose not to trust anything or anyone and assume everyone is out to get you. Have fun with that...
 