So after all this, I find almost all posters fall into one of these categories:

1. Cool!!! This is a great thing!
2. Meh. When’s the new iPhone coming out?
3. I’m going to keep my eye on this…
4. I’m keeping my eye on this. No updates or purchases for me until Apple cancels this.
5. I have or I’m in the process of dumping Apple.

It’s a far cry from where most of us were likely a couple of months ago.
That's for sure! I was going to buy a new M-series 14" MBP as fully decked out as I could, and an iPhone 13, probably a Pro Max. Now there's no way I'd buy them. If they cancel this stuff, then I'll revisit possible purchases next year. If they keep it, no new purchases ever.

I actually like my Flip 3 more than my iPhone, so there's a silver lining for me...
 
Usually I can understand at least the point you are coming from even if I don’t agree with it.
This time you have lost me.

Not sure if you're just being rhetorical for effect or actually serious. Not sure how I could have possibly made my point any clearer. What's not to understand? All the examples I gave you were situations where checks are being made to ensure no illegal or otherwise dishonest/unethical activity is taking place, yet most people recognize that these checks are not tantamount to accusations of wrongdoing or personal distrust in individual people. Same goes for Apple checking to be sure no one is uploading illegal material to iCloud. Again, Apple (and every other large company) doesn't know each of their users on a deep, personal level, and therefore everyone is subject to the same "scrutiny". Heck, I even work with kids where we aren't allowed to release them to their parents without the parents presenting a pickup tag that matches the child's. 99% of parents understand and appreciate that level of security and concern for the safety of children. They don't say, "So you think I'm some kidnapper or molester?! You don't trust me?!"

The safest and wisest way of dealing with issues like this is to have one set of rules for everyone instead of making individual exceptions or removing all safeguards.
 
Simply reporting illegal activity is the duty of every good citizen or company. Law enforcement's realm is to investigate and prosecute if there are legal grounds. I manage a property that has around 50 security cameras installed. If we catch something illegal on these and report it to the police, are we acting as law enforcement? Of course not. We're not arresting or detaining anyone.

It is indeed a feature. I know in marketing jargon we think of features as you describe them ("what cool thing can it do for me?"), but in reality, a feature is simply something that the operating system does, even something we don't think is "exciting."

So first off, the property you manage is owned by the same folks who own the cameras, right? Or perhaps they contract them from a service? Regardless, your analogy is poor: of course you can monitor your own cameras, surveilling your property, for illegal activity and report it. You are not installing code onto cameras owned by others and analyzing their footage looking for illegal activity.

Code specifically meant to catch someone doing something illegal is not a feature.

I posted this response in another thread in response to someone who brought up Apple's "moral obligation". They had no response, maybe you have something to say.

Spying code is not a "feature" in my book.

These claims that Apple has a moral and legal duty to prevent CSAM from being stored on their servers are a sham!

Every business would claim the same if asked; no business or person wants CSAM on their property. But:
  • My bank doesn't search items in my safe deposit box for CSAM.
  • Parking garages do not search my car for CSAM before I park in them.
  • Hotels do not search my suitcase for CSAM when I check-in.
  • Towns/Cities/Apartments do not search my moving truck for CSAM when I move in.
  • U-Store-It businesses do not search boxes for CSAM before they are stored in their facilities.
How is my phone any different? This is a gateway to further invasions of privacy.

You mentioned that you manage a property. I would assume having CSAM on the property would be illegal and the owners would frown upon it... do you search every vehicle that enters the property for CSAM? Do you search every person's phone that enters the property for CSAM? What is the difference between Apple and your property?
 
Didn't EFF see the video? It was a bungle in the keynote... They're not looking at photos.

This seems more like "You only get one chance at this, and if you screw up and then try to correct it... it won't fly." We'll just keep hammering because it was "mentioned" in the first place, and ignore any future corrections afterwards.
 
Simply reporting illegal activity is the duty of every good citizen or company. Law enforcement's realm is to investigate and prosecute if there are legal grounds. I manage a property that has around 50 security cameras installed. If we catch something illegal on these and report it to the police, are we acting as law enforcement? Of course not. We're not arresting or detaining anyone.

Fair point, and good perspective.

Well, again, I disagree with your assessment there, but at this point we're going to be going around in circles, so I'll let it rest here. I do appreciate your restrained, balanced language, though! Some on this forum have consigned Apple to the depths of Hades, LOL!

Agreed!
 
This discussion is over.
Some of us, with knowledge from outside the Apple echo chamber, shared a logical and rational point of view based on facts and realities.

For me and my colleagues, this decision from Apple is the final straw. We will no longer be part of the Apple ecosystem, personally or professionally. We have moved to Linux and FOSS solutions.

I am glad that this happened.
As a result, I will have more privacy and more financial efficiency.

There will be no critical loss for Apple; the core user base is not capable of understanding the reach of this decision and will be happy to keep giving its trust "just because".

So be it.

So in other words, you and your "colleagues" are right and everyone who disagrees is dumb. Sounds mature. :rolleyes:
 
So first off, the property you manage is owned by the same folks who own the cameras, right? Or perhaps they contract them from a service? Regardless, your analogy is poor: of course you can monitor your own cameras, surveilling your property, for illegal activity and report it. You are not installing code onto cameras owned by others and analyzing their footage looking for illegal activity.

My point there was that simply reporting illegal activity you discovered (regardless of how you discovered it, assuming your method was legal) is not "acting in place of law enforcement".

Code specifically meant to catch someone doing something illegal is not a feature.

Technically, a feature is simply a function that a piece of software performs. Not all features make it to the box art or ad copy - only the ones that they think will impress the consumers will.

I posted this response in another thread in response to someone who brought up Apple's "moral obligation". They had no response, maybe you have something to say.

You mentioned that you manage a property, I would assume having CSAM on the property would be illegal and the owners would frown upon it.... do you search every vehicle that enters the property for CSAM? Do you search every persons phone that enters the property for CSAM? What is the difference between Apple and your property?

In this day and age, I would think it's far rarer for hard copies of CSAM to be printed, collected, and stored, and many of the places you mentioned aren't even places where things would be "stored" anyhow. So it wouldn't make much sense to dedicate time and resources to search for CSAM in those places. And of course it would be hard (and stupid) to set up distribution of CSAM in those places, even if the images were on digital storage media. Now, digital storage services are totally different, which is why other cloud services scan for CSAM and why Apple is now moving that direction. These are hot spots for the storage and distribution of CSAM. So it makes total sense to scan for it.

But also, physical searches (which is the only way possible in the examples you gave) are not at all what's happening with Apple's proposed method. That's the whole point of why they've designed it the way they did - to minimize its invasiveness to just about the bare minimum you can - so that they see nothing except illegal material.
 
Yep. This is it. You nailed it. Thank you for your insightful contribution.

Thank you for your snark, condescension, and maturity. You're really winning followers with that attitude. Way to go 👍
 
My point there was that simply reporting illegal activity you discovered (regardless of how you discovered it, assuming your method was legal) is not "acting in place of law enforcement".

My point was that using your own equipment to look for illegal activity is far more acceptable than using someone else's. If Apple wants to scan for CSAM, let them do it on their hardware, not mine. Using the analogy of the property you manage: you use the cameras you own to watch for illegal activity; you do not install "spyware" or "features" on the phones of people visiting your property and use their video feeds to watch the property.

Technically, a feature is simply a function that a piece of software performs. Not all features make it to the box art or ad copy - only the ones that they think will impress the consumers will.

Wordsmithing... if we were to ask any reasonable group of people whether Apple's CSAM solution is a "feature" of iOS 15, I'm fairly sure most would be on my side of this discussion, but it really isn't all that relevant.

In this day and age, I would think it's far rarer for hard copies of CSAM to be printed, collected, and stored, and many of the places you mentioned aren't even places where things would be "stored" anyhow. So it wouldn't make much sense to dedicate time and resources to search for CSAM in those places. And of course it would be hard (and stupid) to set up distribution of CSAM in those places, even if the images were on digital storage media. Now, digital storage services are totally different, which is why other cloud services scan for CSAM and why Apple is now moving that direction. These are hot spots for the storage and distribution of CSAM. So it makes total sense to scan for it.

But also, physical searches (which is the only way possible in the examples you gave) are not at all what's happening with Apple's proposed method. That's the whole point of why they've designed it the way they did - to minimize its invasiveness to just about the bare minimum you can - so that they see nothing except illegal material.

Perhaps, but many members in the CSAM threads have claimed that Apple either "has to" or "morally wants to" search for CSAM so that they are not "housing" said material on their servers, and my point is that no company or person wants that, so shouldn't they be doing all they can to prevent said materials from being housed on or in their property?

I will re-adjust a bit.

When someone checks into a hotel and logs in to their wifi, they are now using the hotel's infrastructure, and if they were dealing in CSAM, I'm fairly certain the hotel chain would be horrified that their property was used in this crime. Shouldn't they do all they can to prevent this? Are you suggesting that hotels, as part of the agreement to use their wifi, get to install the very same code on everyone's laptops/phones/iPads to scan them for CSAM before connecting to their wifi? What about coffee shops and restaurants that offer free wifi, shouldn't they perform these checks as well? What about the public library?

I hear you that cloud storage services are a more fertile ground for potentially finding these pictures but where does it stop?

Another point I have made in other threads is: what do Ring, Eufy, and Logitech do about video stored on their cloud services? They certainly don't want to host illegal materials, CSAM or otherwise, so how do they prevent that? Do they start having AI listen to the audio for certain sounds and phrases and report back that a child might be in danger in X household? Would you be ok with that? Remember, it's AI doing the listening, and the only time a human will hear your private recordings is if something objectionable has triggered a response.
 
Thank you for your snark, condescension, and maturity. You're really winning followers with that attitude. Way to go 👍
This is not my problem or responsibility. My responsibility as an individual is to share my consumer reaction; after all, I have been an Apple user for more than 20 years.

My responsibilities as a technologically educated professional and tech business owner are to mitigate risks for my business, employees, and customers. And I have done it. In record time, my production workflow no longer depends on any Apple software decision or "innovation for the children".

I don't operate in the realm of "winning followers".
I collect and analyze technical data and make decisions based on personal experience combined with the solid expertise of security and machine learning professionals.

Sharing here is motivated only by sentimental reasons, out of regard for the many professionals and privacy-conscious Apple users. But this is over now. You are free to proceed with the next Applelogism and rhetorical gymnastics,
or switch to another, more "exciting" topic like "advancements in AR" or "new hashtag for Apple Event".

After all, we are the "screeching minority" and our "uninformed opinion" is "overblown", so this will not affect Apple's expansion in any way, shape, or form.
 
My point was that using your own equipment to look for illegal activity is far more acceptable than using someone else's. If Apple wants to scan for CSAM, let them do it on their hardware, not mine. Using the analogy of the property you manage: you use the cameras you own to watch for illegal activity; you do not install "spyware" or "features" on the phones of people visiting your property and use their video feeds to watch the property.

We seem to be talking past each other on this point. You're arguing about the acceptability of one method of "surveillance" over another. I'm saying that regardless of what LEGAL method you use (i.e. not spyware), you're not acting in the place of law enforcement by simply looking for and reporting illegal activity. I'm making a very broad point.

Wordsmithing... if we were to ask any reasonable group of people whether Apple's CSAM solution is a "feature" of iOS 15, I'm fairly sure most would be on my side of this discussion, but it really isn't all that relevant.

Well, seeing as a "wordsmith" is simply someone skilled with words, I appreciate the compliment ;)

When someone checks into a hotel and logs in to their wifi, they are now using the hotel's infrastructure, and if they were dealing in CSAM, I'm fairly certain the hotel chain would be horrified that their property was used in this crime. Shouldn't they do all they can to prevent this? Are you suggesting that hotels, as part of the agreement to use their wifi, get to install the very same code on everyone's laptops/phones/iPads to scan them for CSAM before connecting to their wifi?

I would have no problem with that. Their service; their rules. They probably don't because there's not an easily deployable software solution for this (at least not that I'm aware of). Obviously hotels aren't software companies, so they would have to rely on an outside software company to provide software to their industry for this purpose that would be cross-platform. That's a totally different situation from Apple and their own very tightly-controlled environment of iOS. Obviously Apple IS a software company and is working with their own software installed on phones they designed.

What about coffee shops and restaurants that offer free wifi, shouldn't they perform these checks as well? What about the public library?

Same answer as above.

I hear you that cloud storage services are a more fertile ground for potentially finding these pictures but where does it stop?

Again, if it were feasible for hotels, coffee shops, etc. to scan in the same manner Apple is proposing, then I'd be all for that.

Another point I have made in other threads is: what do Ring, Eufy, and Logitech do about video stored on their cloud services? They certainly don't want to host illegal materials, CSAM or otherwise, so how do they prevent that? Do they start having AI listen to the audio for certain sounds and phrases and report back that a child might be in danger in X household? Would you be ok with that? Remember, it's AI doing the listening, and the only time a human will hear your private recordings is if something objectionable has triggered a response.

Ok, maybe I'm totally out of the loop here, but who is using doorbell and security cameras to produce CSAM? Also, audio monitoring is a no-go for one reason: lack of context. How would AI reliably determine if something was said in jest, sarcasm, or was simply someone quoting someone else (or a TV show or movie playing)? That simply doesn't even come close to comparing with what Apple is doing.
 
This is not my problem or responsibility. My responsibility as an individual is to share my consumer reaction; after all, I have been an Apple user for more than 20 years.

My responsibilities as a technologically educated professional and tech business owner are to mitigate risks for my business, employees, and customers. And I have done it. In record time, my production workflow no longer depends on any Apple software decision or "innovation for the children".

I don't operate in the realm of "winning followers".
I collect and analyze technical data and make decisions based on personal experience combined with the solid expertise of security and machine learning professionals.

Sharing here is motivated only by sentimental reasons, out of regard for the many professionals and privacy-conscious Apple users. But this is over now. You are free to proceed with the next Applelogism and rhetorical gymnastics,
or switch to another, more "exciting" topic like "advancements in AR" or "new hashtag for Apple Event".

After all, we are the "screeching minority" and our "uninformed opinion" is "overblown", so this will not affect Apple's expansion in any way, shape, or form.

Yay you! I have no problem with you doing what you think is best for you and your business. My problem is with you basically calling everyone else stupid for not agreeing with you ("not capable of understanding"). Obviously Apple employs many people who are also "technologically educated professionals" who don't share your opinion.
 
Not sure if you're just being rhetorical for effect or actually serious. Not sure how I could have possibly made my point any clearer. What's not to understand? All the examples I gave you were situations where checks are being made to ensure no illegal or otherwise dishonest/unethical activity is taking place, yet most people recognize that these checks are not tantamount to accusations of wrongdoing or personal distrust in individual people. Same goes for Apple checking to be sure no one is uploading illegal material to iCloud. Again, Apple (and every other large company) doesn't know each of their users on a deep, personal level, and therefore everyone is subject to the same "scrutiny". Heck, I even work with kids where we aren't allowed to release them to their parents without the parents presenting a pickup tag that matches the child's. 99% of parents understand and appreciate that level of security and concern for the safety of children. They don't say, "So you think I'm some kidnapper or molester?! You don't trust me?!"

The safest and wisest way of dealing with issues like this is to have one set of rules for everyone instead of making individual exceptions or removing all safeguards.

Serious.
I am seeing this as spyware/nannyware being force installed for reason “X” with no benefit to the user. I actually see it as a detriment.
I am looking at setting the term “CSAM” aside.

Reading your examples, these look like "legal" verifications, for the most part driven by law or some other "rule". You know when it is happening and you can make a decision to comply, deny, or challenge.

I do not see that here. This is in the dark, out of sight; you have no clue what it is doing, whether it is doing it correctly, or what the impact is unless you get a LEO visit. You don't even know if it is on or off. Apple says if you do "Y" it won't run. But you have no way to verify that. Heck, you can't even tell if it works.

Assuming I am understanding your viewpoint, the supplied examples are nothing like what Apple is proposing. They apply more to the Message check for children or the Siri check: an event happens, you are informed, you make a decision.
 
Serious.
I am seeing this as spyware/nannyware being force installed for reason “X” with no benefit to the user. I actually see it as a detriment.
I am looking at setting the term “CSAM” aside.

Reading your examples, these look like "legal" verifications, for the most part driven by law or some other "rule". You know when it is happening and you can make a decision to comply, deny, or challenge.

I do not see that here. This is in the dark, out of sight; you have no clue what it is doing, whether it is doing it correctly, or what the impact is unless you get a LEO visit.

Assuming I am understanding your viewpoint, the supplied examples are nothing like what Apple is proposing. They apply more to the Message check for children or the Siri check: an event happens, you are informed, you make a decision.

You're all over the place here. I was responding to a single assertion that someone made: that Apple scanning for CSAM indicates they don't trust their users. I was simply showing by way of multiple analogies that providing safeguards against illegal/dishonest/unethical activity that apply to everybody equally is not an indication of distrust of individual people. In other words, there's no reason to take it personally, just like you (hopefully) don't take it personally when your bank asks for ID (or even takes your thumbprint) when you cash a check (for example).
 
You're all over the place here. I was responding to a single assertion that someone made: that Apple scanning for CSAM indicates they don't trust their users. I was simply showing by way of multiple analogies that providing safeguards against illegal/dishonest/unethical activity that apply to everybody equally is not an indication of distrust of individual people. In other words, there's no reason to take it personally, just like you (hopefully) don't take it personally when your bank asks for ID (or even takes your thumbprint) when you cash a check (for example).

I apparently didn’t read back far enough.
I have never looked at it as “personal”. Matter of fact I still have no idea why Apple is even proposing this.
Thx.
 
I apparently didn’t read back far enough.
I have never looked at it as “personal”. Matter of fact I still have no idea why Apple is even proposing this.
Thx.

They are proposing this to prevent the storage and distribution of CSAM on their servers. Simple as that. I think the vast majority on this forum at least agree with that - they simply disagree about the chosen prevention method. There are some on the fringe (imo) that think Apple has some evil ulterior motive which "CSAM" is just a cover story for, but I think that's nonsense.
 
They are proposing this to prevent the storage and distribution of CSAM on their servers. Simple as that. I think the vast majority on this forum at least agree with that - they simply disagree about the chosen prevention method. There are some on the fringe (imo) that think Apple has some evil ulterior motive which "CSAM" is just a cover story for, but I think that's nonsense.

That makes little sense from my point of view. I've run a few major tech projects and cannot see the ROI based on what we know. This process does not seem risk averse.

There is no legal reason to do so.
If Apple is concerned about optics, scan on share in cloud like most others.

There is a significant piece of this puzzle that is missing.
 
There is no legal reason to do so.
If Apple is concerned about optics, scan on share in cloud like most others.

The whole point of the on-device scanning is to hide scan results from Apple so that they see nothing but scanning info on illegal images. This is their way of compromising between not scanning at all and scanning in a far more invasive way.
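
To make that concrete, here is a very rough toy sketch of the flow as I understand it from Apple's published summary. Everything in it (the hash function, the voucher struct, the way matches are counted) is invented for illustration; the real design uses NeuralHash, blinded hash databases, and threshold secret sharing so that individual match results stay unreadable to the server, which this toy does not attempt to reproduce.

```swift
import Foundation

// Toy illustration only, NOT Apple's actual NeuralHash/PSI implementation.
// The hash, the voucher format, and the matching logic are invented here.
struct SafetyVoucher {
    let imageID: String
    let matchedKnownImage: Bool
}

// Stand-in for a perceptual hash. A real system uses a hash that is robust
// to resizing and recompression, not a plain byte hash like this.
func toyImageHash(_ data: Data) -> Int {
    data.hashValue
}

// On-device step: each photo queued for iCloud upload is compared against a
// database of hashes of known images, and a voucher rides along with the upload.
func makeVouchers(uploads: [String: Data], knownHashes: Set<Int>) -> [SafetyVoucher] {
    uploads.map { id, data in
        SafetyVoucher(imageID: id,
                      matchedKnownImage: knownHashes.contains(toyImageHash(data)))
    }
}

// Server-side step: nothing is actionable for an account until the number of
// matches crosses a threshold (Apple has publicly mentioned roughly 30). In the
// real design the per-image results are cryptographically hidden until then;
// here they are simply counted in the clear to keep the sketch short.
func accountExceedsThreshold(_ vouchers: [SafetyVoucher], threshold: Int = 30) -> Bool {
    vouchers.filter { $0.matchedKnownImage }.count >= threshold
}
```

The only point of the sketch is the ordering: matching happens against a fixed database of known images before upload, and human review only becomes possible after the threshold, not per photo.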
 
So after all this, I find almost all posters fall into one of these categories:

1. Cool!!! This is a great thing!
2. Meh. When’s the new iPhone coming out?
3. I’m going to keep my eye on this…
4. I’m keeping my eye on this. No updates or purchases for me until Apple cancels this.
5. I have or I’m in the process of dumping Apple.

It’s a far cry from where most of us were likely a couple of months ago.
People say a lot of brave things on the internet. This release will follow the same upgrade numbers as the past ones... Most will update in the first few weeks.
 
But is it ok with you that you can never delete emails, posts, tweets, and more from Apple, Google, Twitter, and others?
No, which is why I don't use Google (as far as I can possibly avoid them) or Twitter, and use Apple's unencrypted cloud services only for things I want to make public or for communication with them.
Someone will be able to get into whatever you think you are protecting. Your location, everything... it's all tracked. You're fighting for an illusion.
Since privacy is an illusion, please send me photos of your credit card, birth certificate, and passport, film any hot members of your household engaged in sexual activities, and let me have a rummage through your papers and IT equipment in case there's anything interesting there.
Yes it is Apple's job. Apple isn't legally allowed to store child abuse images. They are responsible for ensuring this.
This means it's Apple's job to scan for abuse photos before they are uploaded to the cloud.
Private companies are already required BY LAW to monitor their users.
They are required to report images or videos of child abuse (and related images) if they discover it, but they have no obligation to go looking.
The law you are referring to here is the CyberTipline Modernization Act of 2018.
That expanded the previous law from (arguably) pictures to (explicitly) videos, and allowed them to report suspected imminent violations, but it didn't create a new mandate to go looking.
Google and Facebook already report 1000s of illegal images to authorities every day.

I don’t recall any huge issue with the topic until now.
I for one don't send them any of my pictures.
It will protect children by catching perpetrators who make illegal child images
Only once those pictures are widely enough circulated to come to law enforcement's attention, and can then be tracked back by upload date.
Apple, Google, MS and Facebook have been running server side scans for ages. But it appears that Apple doesn’t seem to catch anywhere near as many images as the others. So either pedophiles don’t use iCloud (unlikely) or Apple’s server side scan doesn’t work as well as the competition.
Google, Facebook, and MS all do content-analysis scans on images uploaded to their servers, and have much better training data because they incorporate all user data into the learning system. They've probably discovered that some of their categories represent child porn (and certainly some faces would raise red flags, eg Traci Lords). Apple only scans against the NCMEC database and foreign equivalents (albeit using a less sophisticated hashing algorithm than NeuralHash).
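For anyone skimming, here is a hedged sketch of the distinction being drawn above between the two scanning approaches. The function names, the classifier protocol, and the 0.9 threshold are all mine, purely illustrative, and not any vendor's real pipeline.

```swift
import Foundation

// Illustrative contrast only; not Google's, Facebook's, Microsoft's, or Apple's code.

// Approach 1: hash matching. Flags only copies of images already present in a
// known database (NCMEC-style). Previously unseen material is never flagged.
func matchesKnownDatabase(_ image: Data, knownHashes: Set<Int>) -> Bool {
    knownHashes.contains(image.hashValue)   // stand-in for a perceptual hash
}

// Approach 2: content analysis. A trained classifier scores any image,
// including material no one has seen before, which is why it catches more
// but also has to evaluate the actual content of everything it scans.
protocol AbuseImageClassifier {
    func abuseScore(for image: Data) -> Double   // 0.0 ... 1.0
}

func flaggedByClassifier(_ image: Data,
                         classifier: AbuseImageClassifier,
                         threshold: Double = 0.9) -> Bool {
    classifier.abuseScore(for: image) >= threshold
}
```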
If a government wanted to abuse the phone in your pocket then they would just do it. The existence of this system would have zero bearing on it.
In America (and a few other countries) the government can require companies use capabilities and technologies they have to help law enforcement, but they don't have to go out of their way to create new capabilities (eg insecure versions of iOS to use to try to unlock an iPhone).
Apple would have to sanction any "horrible" use of this technology, which--while obviously possible--I have absolutely no fears that they will.
Until they get a court order, National Security Letter, or similar.
How does that relate to the topic?
That's an example of a service that tried to protect user privacy first being required to collect user data they didn't want to, then being forced to hand it over (to prevent "terrorism").

What exactly do you think those countries could ask Apple to scan for?
How about photos that appear in the press showing war crimes that were secret, or photos taken on farms that are protected by Ag-Gag laws, or the photos of protesters that got put into FBI anti-terrorism datasets so they can identify who was behind the camera as well as in front?
This has to be the most ridiculous most customer hostile idea apple has ever come up with.
Allowing apps running on M1 Macs to individually encrypt files without the user being able to decrypt them except with that app might turn out to be worse. I assume the point is for DRM keys, but you can imagine how Adobe or someone worse might use it, and it could be a disaster for interoperability when used thoughtlessly.
Apple’s solution is a more privacy oriented solution IF CSAM SCANS MUST BE DONE
But there isn't any such obligation.
Apple is progressively moving towards fully end-to-end encrypted iCloud.
Then why didn't they announce that first, and say this was to prevent it becoming a haven for CP? For that matter, why aren't they concerned about people sharing CP in iCloud Drive?
Heck, I even work with kids where we aren't allowed to release them to their parents without the parents presenting a pickup tag that matches the child's. 99% of parents understand and appreciate that level of security and concern for the safety of children. They don't say, "So you think I'm some kidnapper or molester?! You don't trust me?!"
There's a difference between satisfying your obligations and playing at detectives.
I bet you don't download lists of stolen cars and do ANPR just to check them all, or whatever.
 
They are proposing this to prevent the storage and distribution of CSAM on their servers. Simple as that. I think the vast majority on this forum at least agree with that - they simply disagree about the chosen prevention method. There are some on the fringe (imo) that think Apple has some evil ulterior motive which "CSAM" is just a cover story for, but I think that's nonsense.
Then look for it on their servers. Not my phone. Well, former phone.
 
Not sure if you're just being rhetorical for effect or actually serious. Not sure how I could have possibly made my point any clearer. What's not to understand? All the examples I gave you were situations where checks are being made to ensure no illegal or otherwise dishonest/unethical activity is taking place, yet most people recognize that these checks are not tantamount to accusations of wrongdoing or personal distrust in individual people. Same goes for Apple checking to be sure no one is uploading illegal material to iCloud. Again, Apple (and every other large company) doesn't know each of their users on a deep, personal level, and therefore everyone is subject to the same "scrutiny". Heck, I even work with kids where we aren't allowed to release them to their parents without the parents presenting a pickup tag that matches the child's. 99% of parents understand and appreciate that level of security and concern for the safety of children. They don't say, "So you think I'm some kidnapper or molester?! You don't trust me?!"

The safest and wisest way of dealing with issues like this is to have one set of rules for everyone instead of making individual exceptions or removing all safeguards.
Yes, and that one set of rules should be: don't do it.
 