Yup, all cloud providers have been doing this for a good 5 years or more now.

The big difference is that the servers in the cloud are owned by someone else, so they can scan your stuff for whatever they want. There is also a hard line: the cloud cannot scan anything you don't give to the cloud.

Scanning your personal property is a much trickier situation. The best analogy would be if Apple announced they planned to come into your home and search your house/garage/property once a week, all without a court order, probable cause, etc.

It has nothing to do with the servers being owned by someone else. It has to do with the USER AGREEMENT. All cloud services require users to agree to their terms of service before they can upload anything. Apple does the same, and Apple's user agreement has always included language about Apple reserving the right to scan files going to the cloud.

It doesn't actually matter if the scanning is done on device or in the cloud. The files that are going to the cloud are the same in both instances and the user can't use the cloud without agreeing to the terms of service that allow file scanning.
 
And then there is this from the 9to5Mac side….
FYI: hash scanning originated in the 1940s. It isn't anything new. Totalitarian governments have had 80+ years to devise ways to abuse hash scanning, so 9to5 Mac and MacRumors are juuuuust a bit late in terms of their worries about the technology.

 
FYI: hash scanning originated in the 1940s. It isn't anything new. Totalitarian governments have had 80+ years to devise ways to abuse hash scanning, so 9to5 Mac and MacRumors are juuuuust a bit late in terms of their worries about the technology.

The worry is not about the technology itself, it's about Apple's decision to introduce the technology into its ecosystem and onto its consumers' devices, and the consequences this decision might lead to.

For a government it would be a much lower hurdle to compel Apple to abuse such technology if the technology is already in place. If the technology is not in place, a government wanting to abuse it would need to compel Apple to implement it first.
 
Critics of the system have every right to be concerned. Remember, after 9/11, the London bombings and other attacks, governments around the world, especially in the US and UK, brought in new anti-terrorism laws to fight terrorism. Even back then, critics of the new laws came forward and said they would be open to abuse because the police would be able to carry out unrelated checks, searches and arrests while citing the new laws as justification. Both the US and UK governments said this would not happen, but guess what, it did, and for many years the tabloid press reported on incidents of police abusing their powers by using the new anti-terrorism laws to arrest people for things that were not actually terrorism related.

Based on the above, critics of CSAM scanning know that once released it will be open to abuse, just as the terrorism laws were. China is a very good example of why critics are worried. Look at what Google and Apple did when China said that to operate in the country they must comply with its laws. Did Google and Apple say no? No, they didn't; they just made excuses that they have to comply with the country's laws. Many of China's laws curtail the freedoms of its citizens, which makes it more than likely that CSAM scanning would be open to abuse.
 
Apple bungled this so badly. I’d love to hear the backstory on all of this.
 
The worry is not about the technology itself, it's about Apple's decision to introduce the technology into its ecosystem and onto its consumers' devices, and the consequences this decision might lead to.

For a government it would be a much lower hurdle to compel Apple to abuse such technology if the technology is already in place. If the technology is not in place, a government wanting to abuse it would need to compel Apple to implement it first.
Hash scanning was already in the ecosystem. Hash scanning is a core technology in computing and has been for at least 50 years. So worrying about the implications of hash scanning in 2021 is patently ridiculous. It's nothing new and governments/corporations have had the ability to use it for good/bad for half a century or more.
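For anyone unfamiliar with the term, traditional hash scanning just means computing a fixed digest of a file's bytes and looking it up in a list of digests of known material. A minimal sketch in Swift, with an illustrative file path and an empty placeholder list:

```swift
import CryptoKit
import Foundation

// Traditional hash scanning: digest the raw bytes and look the digest up
// in a set of known-bad digests. Identical bytes produce identical hashes;
// change one byte and the match is lost.
let fileURL = URL(fileURLWithPath: "photo.jpg")       // illustrative path
let knownBadDigests: Set<String> = []                 // would come from a provider's list

if let data = try? Data(contentsOf: fileURL) {
    let hex = SHA256.hash(data: data)
        .map { String(format: "%02x", $0) }
        .joined()
    let isKnownMatch = knownBadDigests.contains(hex)
    print(isKnownMatch)
}
```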

As for the "on device" part, the files that go to the cloud don't change so it doesn't actually matter from a legal or rights perspective. If you're an iCloud user, you first have to agree to Apple's terms of service. Those terms include specific language about Apple reserving the right to scan files that are intended for the cloud. Next, you would then choose which applications would be using iCloud for file storage. Once you make those choices, all files from that application will be scanned per the user agreement with Apple. It doesn't matter whether they're on device or on a server. The exact same files will be scanned.
 
It has nothing to do with the servers being owned by someone else. It has to do with the USER AGREEMENT. All cloud services require users to agree to their terms of service before they can upload anything. Apple does the same, and Apple's user agreement has always included language about Apple reserving the right to scan files going to the cloud.

What Apple can actually do depends not only on the EULA but also on the jurisdiction.

As an example, in the EU Apple would need to receive explicit opt-in consent to perform these scans. Since such scanning is not necessary for the functional aspects of iCloud, Apple would also be prevented from tying consent to access to iCloud itself, meaning that Apple would have to allow iCloud access to non-consenting users without detriment.
 
FYI: hash scanning originated in the 1940s. It isn't anything new. Totalitarian governments have had 80+ years to devise ways to abuse hash scanning, so 9to5 Mac and MacRumors are juuuuust a bit late in terms of their worries about the technology.
Except the Apple version isn't working with pixel/byte-based hash values. The hash value is generated from features extracted by a neural network. This can be a simple feature detector, but also an object detector. The features could also be based on semantic image content. Bit of a difference there compared to good old hashing.
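To illustrate the distinction being drawn here: a byte-level hash changes completely when a single pixel changes, while a feature-based (perceptual) hash is designed so that visually similar images produce similar hashes. A toy sketch, assuming hypothetical 64-bit feature hashes compared by Hamming distance; this is not Apple's actual NeuralHash pipeline:

```swift
// Compare two hypothetical 64-bit feature hashes. A small Hamming distance
// suggests "probably the same image" even after resizing or re-compression,
// a notion a cryptographic hash simply does not offer.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

let knownFeatureHash: UInt64 = 0xD1C3_A5F0_0B42_97E6  // hypothetical blocklist entry
let candidateHash: UInt64 = 0xD1C3_A5F0_0B42_97F6     // near-duplicate image
let looksLikeMatch = hammingDistance(knownFeatureHash, candidateHash) <= 4
```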
 
If it's good for government and business, the EU will jump all over this. They'll force Apple to install scanning software, use USB-C on iPhones, open up the NFC chip...
 
Hash scanning was already in the ecosystem. Hash scanning is a core technology in computing and has been for at least 50 years. So worrying about the implications of hash scanning in 2021 is patently ridiculous. It's nothing new and governments/corporations have had the ability to use it for good/bad for half a century or more.

Apple itself clearly stated they wanted to introduce a new technology. Arguing that it's not a "new technology" because the underlying concept already exists makes no sense. Of course hashing is nothing new, but hashing implemented in a framework designed to detect and report specific content to authorities is definitely new on Apple's devices.

As for the "on device" part, the files that go to the cloud don't change so it doesn't actually matter from a legal or rights perspective. If you're an iCloud user, you first have to agree to Apple's terms of service. Those terms include specific language about Apple reserving the right to scan files that are intended for the cloud. Next, you would then choose which applications would be using iCloud for file storage. Once you make those choices, all files from that application will be scanned per the user agreement with Apple. It doesn't matter whether they're on device or on a server. The exact same files will be scanned.

I have addressed the EULA and legality aspects above. My guess is that Apple can do what they want in the USA, but not so much in the EU.
 
What Apple can actually do depends not only on the EULA but also on the jurisdiction.

As an example, in the EU Apple would need to receive explicit opt-in consent to perform these scans. Since such scanning is not necessary for the functional aspects of iCloud, Apple would also be prevented from tying consent to access to iCloud itself, meaning that Apple would have to allow iCloud access to non-consenting users without detriment.
The explicit "opt-in" = user agrees to cloud terms of service + chooses which applications send files to cloud. That's why cloud services operate in Europe and elsewhere. It's not the fault of the cloud service if the user fails to actually read the terms of service.
 
Apple itself clearly stated they wanted to introduce a new technology. Arguing that it's not a "new technology" because the underlying concept already exists makes no sense. Of course hashing is nothing new, but hashing implemented in a framework designed to detect and report specific content to authorities is definitely new on Apple's devices.
No, it isn't new. Apple's terms of service for iCloud have always specified that it should not be used for illegal activity and that Apple reserves the right to scan files as a result. Major cloud services have the same terms. The user agrees to Apple's terms of service and then personally chooses which apps have files backed up in the cloud. There's no doubt about the user "opting in".
 
One of the arguments put forward by some members is that having CSAM detection on one's phone will not stop child pornography. What these people fail to understand, in my opinion, is that if CSAM detection catches some people, some of those caught will have a history of looking at such illegal images, which means the police will be able to investigate that history: things like where else the person went to look at images and the names of people who helped with the images. You catch the small players to get to the big players, and I feel that is what will happen with CSAM detection. Those who are caught will eventually lead the police to the big players who are creating the material.

The police use the same principle with the drugs trade: go after the dealers to get to the producers of the drugs. Remove the producers and you remove the problem. Same with child porn: go after the watchers to get to the creators. Remove the creators and you remove the problem.
 
Expanding the scope already, and Apple hasn’t even released it yet.
All this should tell people is that Apple was trying to get ahead of the mandates that all the major countries have been implementing to try to tamp down on the impending unrest coming soon. Globally it's been decided that we're just going to "manage" the population's response to economic and environmental collapse instead of changing anything.
 
Lot of confusion/scaremongering here: they are not scanning phones. They are looking at photos that are on Apple servers. How can anyone object to this unless you have something to hide? Why would you object when it helps children?
Not quite. They are doing local scanning of files that would be and are scanned on the server side by most other services (Cloudflare started offering it free to all customers a few years back) to generate a fuzzy hash and compare that to a known list of CSAM material. The difference, as another poster pointed out, is that Apple is end-to-end encrypted, so server-side scanning isn't really an option (while unconfirmed, this makes sense since it only applies when iCloud Photos is enabled). That isn't stopping people from clutching their pearls as if Apple is doing anything different or new that isn't pretty much standard practice - except they do it locally with a local ledger instead of server-side.
Reference to once again show that this is not new: https://blog.cloudflare.com/the-csam-scanning-tool/
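For what it's worth, the overall flow described in the Cloudflare post is the same wherever it runs: compute a fuzzy hash for each image headed to the service, check it against a list of hashes of known material, and flag only the matches. A rough sketch of that client-side flow with made-up type and function names; Apple's actual design additionally blinds the hash list and uses threshold secret sharing, which this omits:

```swift
import Foundation

// Hypothetical types for illustration only.
struct FuzzyHash: Hashable { let bits: UInt64 }

func fuzzyHash(of imageData: Data) -> FuzzyHash {
    // Placeholder: a real implementation would derive this from image features.
    FuzzyHash(bits: UInt64(truncatingIfNeeded: imageData.hashValue))
}

// Hashes of known material, supplied by a clearinghouse such as NCMEC.
let knownHashes: Set<FuzzyHash> = []

// Before upload, flag only the images whose fuzzy hash matches the known list.
func itemsToFlag(in pendingUploads: [Data]) -> [Data] {
    pendingUploads.filter { knownHashes.contains(fuzzyHash(of: $0)) }
}
```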
 
The explicit "opt-in" = user agrees to cloud terms of service + chooses which applications send files to cloud. That's why cloud services operate in Europe and elsewhere. It's not the fault of the cloud service if the user fails to actually read the terms of service.

Nope: that only works for data processing which is deemed necessary for the service being provided. If the data processing is not necessary for the service, it needs consent via explicit opt-in and said consent needs to be freely given. The EU does not consider said opt-in consent "freely given" if not providing the consent prevents the user from using the service.

Note that consent would need to be specific: if Apple has consent to, e.g., process the data to perform backups, that does not mean they can rely on said consent for a different purpose.
 
No, it isn't new.
Scanning isn't new. The way Apple's implementation of scanning/detection works is new. One can clearly see this when reading Apple's technical summary document, where they describe how the technology works. Not in full detail, but deep enough to understand it's new. If you don't agree that it's new, please let us know what other provider (cloud service or not) is using the same underlying algorithms/technology as Apple to scan for CSAM.
 
The difference, as another poster pointed out, is that Apple is end-to-end encrypted, so server-side scanning isn't really an option (while unconfirmed, this makes sense since it only applies when iCloud Photos is enabled).
Photos on iCloud are not E2EE! Other things are, but photos are not. They could scan these on servers only. See: https://support.apple.com/en-us/HT202303
 
One of the arguments put forward by some members is that having CSAM detection on one's phone will not stop child pornography. What these people fail to understand, in my opinion, is that if CSAM detection catches some people, some of those caught will have a history of looking at such illegal images, which means the police will be able to investigate that history: things like where else the person went to look at images and the names of people who helped with the images. You catch the small players to get to the big players, and I feel that is what will happen with CSAM detection. Those who are caught will eventually lead the police to the big players who are creating the material.

I have no doubt such a measure would catch some criminals: the question is at what price, and whether the same or better results could be achieved by other means. The appeal to emotion "think of the children" is often used to argue that it's worth it no matter the price, but that is actually not the case.

The police use the same principle with the drugs trade: go after the dealers to get to the producers of the drugs. Remove the producers and you remove the problem. Same with child porn: go after the watchers to get to the creators. Remove the creators and you remove the problem.

The war on drugs is a huge failure though: most western countries have long moved towards a different approach for combating drugs because the simplistic approach "make it illegal and arrest all dealers" doesn't actually work in practice.
 
So that is not how Apple's proposed system works. It does not look for nudity (the way AI looks for nudity on Facebook for example). Using AI to identify nudity is still seriously flawed and the list of things that get banned is hilarious (like a photo of a bowl of fruit for example).

This system will only look for matches of known child pornography. So if you make a film of child abuse, it will not be flagged by Apple, as this image/video has not yet been viewed and entered into the database of known child pornography. If you never share this video on the internet, it will probably never enter the database and never be flagged by Apple.

That is one of the main problems with this system. By the time a photo enters the database, those who abused/filmed the child are long gone. So in the end you don't actually catch anyone abusing children. You just catch those sickos who want to watch the stuff. Is that better than nothing? Perhaps. But I would rather have a system that actually catches the people abusing the kid so they can never do it again.
Understood. Thanks for the clarity on this! I agree, seems about 90% useless as it stands.
 
If the government asked you to permanently drive under 30 km/h because "it is safe for children", even on the highway, would you do it?

This is an opt-in feature that parents can enable or disable, so your analogy doesn't hold.

A correct analogy would be a speedometer app that tells you if you are driving over 30 km/h, runs on your phone, does not share your speed with the government or Apple, and can be disabled if you don't want it.

You might still think that’s a bad idea, but don’t misrepresent what it is.
 