I've upgraded as Apple hasn't started doing their on-device surveillance and I do have some hope they won't. I'll likely sell my Apple stuff and move on if they actually start doing it. Unfortunately there may be no place to go if Apple starts this, the others will follow and may be forced to by law.
Hopefully they don't. If people can keep the topic alive, it'll put Apple on notice. Well, if that opportunity arises, what phone do you have your sights on?
 
  • Like
Reactions: dk001 and 1284814
Right now, I'd argue the best real world choices for privacy are probably Linux for computers, and deGoogled Android. But...these choices may or may not work for a given person.

I use Linux--but it helps that all my needs are met by the available software. One thing that I've considered if my needs change is having a secondary system just to handle certain tasks I can't do with Linux.

I currently use a feature phone--and it's good enough for what I actually need. But I can say I've heard of some people who switched to an iPhone from a deGoogled Android phone. They recognized that the deGoogled Android was better for privacy--but the iPhone worked better for their particular needs.
I actually never used Linux before and was always curious about checking it out. I heard it's somewhat difficult to navigate--what do you think? But a deGoogled phone sounds so odd. I feel the one thing that would be quite the adjustment is familiarizing yourself with a whole new ecosystem, i.e. Android. I suppose that's true with anything, but nonetheless, that'll take some getting used to. What phone would you get if you had to choose?
 
  • Like
Reactions: dk001
I actually never used Linux before and was always curious about checking it out. I heard it's somewhat difficult to navigate--what do you think?

I don't think Linux is as bad as the stories claim! I started using it about 2005. For me, it was (at worst) no harder to live with than Windows of that era. (Admittedly, that wasn't a terribly high bar...) Linux has continued to evolve since that time. Back then, I kept a Macintosh around for productivity (mainly word processing)--but software choices for Linux improved enough that by the early 2010s I was able to use Linux pretty much exclusively day to day.

One thing that helped me--I've pretty much stuck with Linux distributions that are aimed at "normal" people.

but a deGoogled phone sounds so odd. I feel the one thing that would be quite the adjustment is familiarizing yourself with a whole new ecosystem, i.e. Android. I suppose that's true with anything, but nonetheless, that'll take some getting used to.
Yes

What phone would you get if you had to choose?

To be honest, I have no idea... At some point, I'll probably have to replace the current phone. I've thought about this over the last few months, and I've flip-flopped among the various options... One day I think "iPhone." Another, I think deGoogled Android. Then there are times when I even think regular Android might be good enough, given that I wouldn't be using it as anything more than a secondary device (and that I'd take as much care as possible to maximize privacy).
 
  • Like
Reactions: dk001
I actually never used Linux before and was always curious about checking it out. I heard it's somewhat difficult to navigate--what do you think? But a deGoogled phone sounds so odd. I feel the one thing that would be quite the adjustment is familiarizing yourself with a whole new ecosystem, i.e. Android. I suppose that's true with anything, but nonetheless, that'll take some getting used to. What phone would you get if you had to choose?
I made the decision that I didn't want an ecosystem (honestly, how many people are still on iOS because they want their chat bubbles to be blue when they text their friends?) and picked the best tool for the job. For me, that's Linux for the laptop and GrapheneOS on a Pixel. No Apple/Google IDs or ecosystems. Bring your own email address, get your stuff done with no subscription model but with the most privacy and purity available.

Give it a shot. It’s not a sacrifice or a workaround. It’s liberating.
 
Seeing all the comments and the continued narrative around CSAM still makes me think: well, where is the alternative? If I decide to stop using Apple, what do I use? Android? Windows? Etc.
FOSS. Free Open Source Software. Code that is transparent and can be audited by anyone.

Start with Linux and a de-Googled Android
 
Okay, so the more reliable it is the more likely they’ll lower it. Seems fine unless I’m missing something.
In general, lowering thresholds leads to false positives, and here it depends on how one defines 'reliability' or 'accuracy'. The system could catch every instance of illegal CSAM by flagging all files, but of course the cost would be a huge increase in false positives. It could avoid all false positives by never flagging a file, but the cost would be false negatives. The system will sit between these two extremes, but where one places the 'optimum' threshold along that continuum depends on the value placed on correctly identifying CSAM versus avoiding false positives. There will always be an inescapable trade-off between the two unless Apple's system becomes perfect. And remember, Apple is still making an educated guess about the false positive rate if the system is ever released into the wild, and it has not indicated what the false positives are likely to look like (will they always be pictures of kids, exposed skin, etc.?).
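To make that trade-off concrete, here is a rough toy sketch (the detector scores, distributions, and thresholds are invented for illustration; this is not Apple's system or data) of how sweeping a flagging threshold trades false positives against false negatives:

```python
# Toy illustration of the threshold trade-off described above.
# All numbers here are made up -- NOT Apple's algorithm or data.
import random

random.seed(0)

# Pretend every file gets a "match score": benign files cluster low,
# true matches cluster high, and the two distributions overlap a bit.
benign_scores = [random.gauss(0.2, 0.1) for _ in range(10_000)]
match_scores = [random.gauss(0.8, 0.1) for _ in range(100)]

for threshold in (0.0, 0.3, 0.5, 0.7, 1.0):
    false_positives = sum(s >= threshold for s in benign_scores)
    false_negatives = sum(s < threshold for s in match_scores)
    print(f"threshold={threshold:.1f}  "
          f"false positives={false_positives:5d}  "
          f"false negatives={false_negatives:3d}")

# A very low threshold flags thousands of benign files but misses almost
# nothing; a very high one produces no false positives but misses nearly
# every real match. Any real system has to pick a point in between.
```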

I am glad Apple is reconsidering, but I wish they would just jettison the idea outright. It wasn't thought through sufficiently.
 
  • Like
Reactions: dk001 and BurgDog
Okay, so the more reliable it is the more likely they’ll lower it. Seems fine unless I’m missing something.

You're not. Apple is unsure as to the accuracy of their solution and chose 30 for the initial launch. Up or down, will Apple even let us know if they implement this?

I remind myself that it could as easily go up too.

One of the big challenges today is that NCMEC has indicated that the majority of the reports they get are false positives. Apple adding to that is not going to help things.
 
Last edited:
  • Like
Reactions: Pummers
Emphasis on your last point: it won't be easy by any stretch. It almost seems inevitable, and there aren't many places to turn to. Aren't most of these companies already running some sort of software like this? I feel like the further we move into the future with technology, the less privacy we have.

Many if not all of the cloud providers are running scans on shared data. Others like Apple and Google also scan email. I am not aware of any others doing it client side (on device). Google did look at it a few years back but dropped it as too risky and overly invasive.

That potential loss of privacy is going to be a challenge. Personally I feel we need to be open to change yet firmly draw a line between public and private. Cloud vs Device. Personal vs Company vs Government.
 
  • Like
Reactions: BurgDog
I made the decision that I didn't want an ecosystem (honestly, how many people are still on iOS because they want their chat bubbles to be blue when they text their friends?) and picked the best tool for the job. For me, that's Linux for the laptop and GrapheneOS on a Pixel. No Apple/Google IDs or ecosystems. Bring your own email address, get your stuff done with no subscription model but with the most privacy and purity available.

Give it a shot. It’s not a sacrifice or a workaround. It’s liberating.

That is the challenge though. There is a reason I have two phones.
For my main job we use Windows, Android, and iOS.
For my Contract job I need Windows and MacOS.
For private use I have Windows, Linux, and a fairly locked down Android.
I may look at something like GrapheneOS once the Pixel 6 comes out.

I've been looking at a Linux phone; however, they have a bit further to go. It is more of a beta device at the moment. I do like where it is going, though.

 
  • Like
Reactions: WriteNow
I made the decision that I didn’t want an ecosystem (honestly, how many people are still on iOS because they want their chat bubbles to be blue when they text their friends?)
I do. Because there are some nice functionalities that are part of the blue bubble universe.
and picked the best tool for the job.
For me this is ios for the phone.
For me, that's Linux for the laptop and GrapheneOS on a Pixel. No Apple/Google IDs or ecosystems.
Yes, you are "in control" of your phone and laptop, kind of. But you still leave breadcrumbs on the internet and can't hide. And you have to build your own offsite "cloud" to back things up.
Bring your own email address, get your stuff done with no subscription model but the most privacy and purity available.
It's not worth the hassle for me. But YMMV.
Give it a shot. It’s not a sacrifice or a workaround. It’s liberating.
It's not liberating when I can't get my job done, or play the games I want or share the content I want to share, easily.
 
As tech advances (and has BEEN advancing), there will be very little ability to opt out of privacy invasion/surveillance. There's a dark side to Face ID, AI, machine learning, fingerprinting, GPS, etc., y'know? Whatever conveniences they provide, they can also be co-opted for nefarious use.

Recently I learned something new. Did you know most printers add secret "tracking dots" onto your prints, so what you print can be tracked back to your printer's serial number, and ultimately back to you? The government compelled printer companies to add this tech to prevent money forgery -- but nothing has prevented the government from using it to arrest leakers & whistleblowers. The EFF reported on it.

The point is: never believe what they say. Apple can claim from the mountaintops it'll keep tight control over this, but when the cat's out of the bag & a precedent is set, NOTHING will prevent overreach into other categories.
 
In general, lowering thresholds leads to false positives, and here it depends on how one defines 'reliability' or 'accuracy'. The system could catch every instance of illegal CSAM by flagging all files, but of course the cost would be a huge increase in false positives. It could avoid all false positives by never flagging a file, but the cost would be false negatives. The system will sit between these two extremes, but where one places the 'optimum' threshold along that continuum depends on the value placed on correctly identifying CSAM versus avoiding false positives. There will always be an inescapable trade-off between the two unless Apple's system becomes perfect. And remember, Apple is still making an educated guess about the false positive rate if the system is ever released into the wild, and it has not indicated what the false positives are likely to look like (will they always be pictures of kids, exposed skin, etc.?).

I am glad Apple is reconsidering, but I wish they would just jettison the idea outright. It wasn't thought through sufficiently.
A trillion-dollar company, thousands of software engineers, and other people they bring in to analyze and measure the effectiveness. I know I come across as a bit rude, and I can tell you've put a good amount of thought and reading into this, but I can guarantee that Apple has put way more thought and energy into this than all of this forum's collective discussion combined. Apple is not perfect. However, they have always corrected errors in their code and policies. When they were slow to do so, it was often because the route forward required more thought.

Also, even if they were to lower the threshold, they would still have human review to catch false positives. They would collect the data and determine if they need to make improvements. This isn't a 'we made it once and we'll never update it again' kind of thing.
 
  • Sad
  • Haha
Reactions: KindJamz and dk001
You're not. Apple is unsure as to the accuracy of their solution and chose 30 for the initial launch. Up or down, will Apple even let us know if they implement this?

I remind myself that it could as easily go up too.

One of the big challenges today is that NCMEC has indicated that the majority of the reports they get are false positives. Apple adding to that is not going to help things.
I don’t seem to understand your argument. If there are false positives that is why they have human review. I doubt 30 was some arbitrary number they pulled out of a hat.

NCMEC can report as many false positives as they like, that doesn’t change the fact that Apple is still checking with human review.

Apple doesn't report any photos to the authorities unless they find CSAM. That means those images are verified illegal already. If their system flagged 30 false positives for review (which they stated is a one-in-one-trillion chance per year--just think of how astronomically high that number is), then a human would need to view those photos, and since they were all false positives, a.k.a. not CSAM, nothing would happen.
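For what it's worth, here is the kind of back-of-the-envelope calculation that could sit behind a per-account threshold. The per-image false-match rate and library size below are illustrative assumptions, not figures Apple has published, and this is not Apple's actual model:

```python
# Rough sketch of how a per-image false-match rate and a per-account
# threshold combine. The rate p and photo count n are assumptions for
# illustration only -- Apple has not published these numbers.
from math import comb

def prob_at_least_k(n: int, p: float, k: int, extra_terms: int = 20) -> float:
    """Binomial upper tail: P(>= k false matches among n independent images).

    The tail terms shrink extremely fast when n * p is small, so summing
    a couple of dozen terms past k is more than enough for this sketch.
    """
    upper = min(n, k + extra_terms)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, upper + 1))

p = 1e-6      # assumed per-image false-match rate (illustrative guess)
n = 20_000    # assumed number of photos in a library (illustrative guess)

for k in (1, 10, 30):
    print(f"threshold {k:2d}: P(account flagged) ~ {prob_at_least_k(n, p, k):.1e}")

# With these made-up inputs, a single false match is plausible (~2%),
# but 30 of them in one library is astronomically unlikely -- which is
# the usual argument for gating any human review behind a threshold.
```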
 
A trillion-dollar company, thousands of software engineers, and other people they bring in to analyze and measure the effectiveness. I know I come across as a bit rude, and I can tell you've put a good amount of thought and reading into this, but I can guarantee that Apple has put way more thought and energy into this than all of this forum's collective discussion combined. Apple is not perfect. However, they have always corrected errors in their code and policies. When they were slow to do so, it was often because the route forward required more thought.

Also, even if they were to lower the threshold, they would still have human review to catch false positives. They would collect the data and determine if they need to make improvements. This isn't a 'we made it once and we'll never update it again' kind of thing.

That is the part that makes little sense; why interject an Apple employee into the review process? Folks at NCMEC are far more qualified and do this kind of stuff as part of their "job".
 
I don’t seem to understand your argument. If there are false positives that is why they have human review. I doubt 30 was some arbitrary number they pulled out of a hat.

NCMEC can report as many false positives as they like, that doesn’t change the fact that Apple is still checking with human review.

Apple doesn't report any photos to the authorities unless they find CSAM. That means those images are verified illegal already. If their system flagged 30 false positives for review (which they stated is a one-in-one-trillion chance per year--just think of how astronomically high that number is), then a human would need to view those photos, and since they were all false positives, a.k.a. not CSAM, nothing would happen.

I have not found anywhere that this is specifically called out - why 30?
Is an Apple employee matching photo to photo specifically? Why interject an Apple employee? To keep the actual incidence of false positives a closely held secret?

So if the Apple employee looks at the "30" and finds 2 that "match" and 28 that are "no match", do all 30 get passed along? Just the 2? Or...? Now that Apple knows the 2 are there, they have to report them by law.

In an effort to limit the reporting of false positives, Apple has initially chosen 30 as their threshold. If implemented, this could be too low, just right, or too high. Right now Apple is mathematically claiming "1:1,000,000,000,000"; however, we have no access to the math or testing to support this claim. Based on the initial value of 30, this mathematically leans toward a high probability of numerous false positives.
 
  • Like
Reactions: VulchR and BurgDog
That is the part that makes little sense; why interject an Apple employee into the review process? Folks at NCMEC are far more qualified and do this kind of stuff as part of their "job".
Simple: Apple wants to verify that their system is working as they intend it to. They know that outside systems may or may not be effective. Since they can review the instances themselves, they will do so for now to monitor and update their system as needed. Much faster and way more efficient. NCMEC may in the long run turn out to be a better review process. If so, maybe they will hand off review to them. Apple doing their own independent checks doesn't invalidate the work NCMEC does and only educates Apple on how to do this better going forward.

Also, if NCMEC, as others have stated, finds so many false positives, then their system of review has issues. Maybe Apple wasn't happy with their standards, if they even considered them at all. Apple's internal review doesn't have a negative effect at this time, and adding it doesn't really affect NCMEC either way at the moment.
 
I have not found anywhere that this is specifically called out - why 30?
Is an Apple employee matching photo to photo specifically? Why interject an Apple employee? To keep the actual incidence of false positives a closely held secret?

So if the Apple employee looks at the "30" and finds 2 that "match" and 28 that are "no match", do all 30 get passed along? Just the 2? Or...? Now that Apple knows the 2 are there, they have to report them by law.

In an effort to limit the reporting of false positives, Apple has initially chosen 30 as their threshold. If implemented, this could be too low, just right, or too high. Right now Apple is mathematically claiming "1:1,000,000,000,000"; however, we have no access to the math or testing to support this claim. Based on the initial value of 30, this mathematically leans toward a high probability of numerous false positives.
I don't know why they chose 30, but it is safe to assume that moving the number too high would allow too many to fall through and make the system completely ineffective. It's safe to think that 30 was a number they came to that they felt would catch bad actors. However, I agree it could slide either way once they get even more data. The goal is to make the system effective.

I cannot verify this, but I feel comfortable assuming that if you have even one CSAM image it gets reported. Remember, they are illegal images. You are not allowed to have them. So in theory, by law Apple has to report them. At that point Apple locks the account and puts it to the authorities to do their investigating. Apple probably (speculation) doesn't do much after that.

As for the other 28 photos that ended up being false positives, that's exactly what they are. Apple is not submitting images to be put in the CSAM database, but it's possible their efforts may discover more images that get added to the CSAM database through investigation done by those investigative authorities, not Apple. Neither Apple, nor you or I, can control that, but I do hope those investigative authorities do a good job of identifying them properly. Apple has only tried to help catch those people more effectively.

You're right, we don't have access to the math. I doubt they would lie (my opinion), and something tells me that information could be requested by a court if needed for validation. Once again, I don't think they just made up a number. I'm willing to bet they ran many tests that gave them that number. Since Apple isn't perfect and can make mistakes, they have human review to verify their results in a real-world scenario and determine whether they meet or exceed lab testing. If they don't, Apple will likely either report the difference, if they're okay with the new risk of false positives, or actually fix the algorithm to meet the goal.
 
  • Like
Reactions: dk001
Simple: Apple wants to verify that their system is working as they intend it to. They know that outside systems may or may not be effective. Since they can review the instances themselves, they will do so for now to monitor and update their system as needed. Much faster and way more efficient. NCMEC may in the long run turn out to be a better review process. If so, maybe they will hand off review to them. Apple doing their own independent checks doesn't invalidate the work NCMEC does and only educates Apple on how to do this better going forward.

Also, if NCMEC, as others have stated, finds so many false positives, then their system of review has issues. Maybe Apple wasn't happy with their standards, if they even considered them at all. Apple's internal review doesn't have a negative effect at this time, and adding it doesn't really affect NCMEC either way at the moment.

But it does have an impact in two important ways:
1. The Apple employee(s) who are subjected to the reviews
2. The chain of custody impact - that aspect will likely require a legal challenge of some kind.

Then again this is all supposition until Apple implements it.
 
But it does have an impact in two important ways:
1. The Apple employee(s) who are subjected to the reviews
2. The chain of custody impact - that aspect will likely require a legal challenge of some kind.

Then again this is all supposition until Apple implements it.
It sucks to be the Apple employee who has to review this kind of material. I hope that Apple took proper steps to prepare potential employees for the psychological impact this could have on them and to provide them with appropriate resources to cope. My opinion, but I doubt they brushed over this detail lightly.

I think legal challenges are healthy. These results should be questioned and scrutinized. I want the system to be effective, and this is an important part of the process. I'm obviously for the system but am not ignorant of the fact that it could be flawed and may need work. We need more involvement in this large issue. Brushing it away is not going to address it. I invite this kind of debate.
 
  • Like
Reactions: dk001
I don't know why they chose 30, but it is safe to assume that moving the number too high would allow too many to fall through and make the system completely ineffective. It's safe to think that 30 was a number they came to that they felt would catch bad actors. However, I agree it could slide either way once they get even more data. The goal is to make the system effective.

I cannot verify this, but I feel comfortable assuming that if you have even one CSAM image it gets reported. Remember, they are illegal images. You are not allowed to have them. So in theory, by law Apple has to report them. At that point Apple locks the account and puts it to the authorities to do their investigating. Apple probably (speculation) doesn't do much after that.

As for the other 28 photos that ended up being false positives, that's exactly what they are. Apple is not submitting images to be put in the CSAM database, but it's possible their efforts may discover more images that get added to the CSAM database through investigation done by those investigative authorities, not Apple. Neither Apple, nor you or I, can control that, but I do hope those investigative authorities do a good job of identifying them properly. Apple has only tried to help catch those people more effectively.

You're right, we don't have access to the math. I doubt they would lie (my opinion), and something tells me that information could be requested by a court if needed for validation. Once again, I don't think they just made up a number. I'm willing to bet they ran many tests that gave them that number. Since Apple isn't perfect and can make mistakes, they have human review to verify their results in a real-world scenario and determine whether they meet or exceed lab testing. If they don't, Apple will likely either report the difference, if they're okay with the new risk of false positives, or actually fix the algorithm to meet the goal.

In the same boat.
I can see the 2 and not the 28... legally they need to disclose the 2.
I don't think Apple is lying about anything; however, it would not surprise me if they are deliberately making it look "better" or more "appealing" by minimizing the actual published risk.

Just wish there was a bit more transparency
 
A trillion-dollar company, thousands of software engineers, and other people they bring in to analyze and measure the effectiveness. I know I come across as a bit rude, and I can tell you've put a good amount of thought and reading into this, but I can guarantee that Apple has put way more thought and energy into this than all of this forum's collective discussion combined. Apple is not perfect. However, they have always corrected errors in their code and policies. When they were slow to do so, it was often because the route forward required more thought.

Also, even if they were to lower the threshold, they would still have human review to catch false positives. They would collect the data and determine if they need to make improvements. This isn't a 'we made it once and we'll never update it again' kind of thing.
I guess my point is that people other than software engineers and mathematicians needed to look at this: ethicists, civil rights experts, behavioural scientists, and political scientists. Most of the information coming from Apple has been technical, as though only engineers were involved in the proposal, and the public comments by Apple executives just look outright naive.

I understand your point about human review of the 'hits' from the system. However, most of us will never have CSAM on our phones or computers, but we might have pictures of our kids, nephews and nieces, etc. The moment the threshold is crossed and an Apple employee reviews a photo that turns out to be a false positive, privacy has been shattered. And remember, if somebody takes a series of pictures of the same subject and one is flagged as a false positive, others are likely to be as well. Apple hasn't told us what the false positives from the system look like. Are the false positives going to be mostly pictures of kids? Is Apple going to alert users that pictures were reviewed by a human, and which pictures were reviewed? If they don't, it will be creepy to think that somebody could look at your private photos without informing you.
 
  • Like
Reactions: BurgDog and dk001
I guess my point is that people other than software engineers and mathematicians needed to look at this: ethicists, civil rights experts, behavioural scientists, and political scientists. Most of the information coming from Apple has been technical, as though only engineers were involved in the proposal, and the public comments by Apple executives just look outright naive.

I understand your point about human review of the 'hits' from the system. However, most of us will never have CSAM on our phones or computers, but we might have pictures of our kids, nephews and nieces, etc. The moment the threshold is crossed and an Apple employee reviews a photo that turns out to be a false positive, privacy has been shattered. And remember, if somebody takes a series of pictures of the same subject and one is flagged as a false positive, others are likely to be as well. Apple hasn't told us what the false positives from the system look like. Are the false positives going to be mostly pictures of kids? Is Apple going to alert users that pictures were reviewed by a human, and which pictures were reviewed? If they don't, it will be creepy to think that somebody could look at your private photos without informing you.
Terribly sorry for my long response.

I wrote this last but moved it to the first paragraph because I think you make a good point and want to address it now. Will Apple inform you that your photos have been reviewed? I don't know, but my guess is probably not. However, I do think that is a very good point. I personally don't care (I don't speak for the majority, I think) if they do. This is for many reasons: I have no kids, pictures of my nieces and nephews are all clothed, and generally I think the risk of someone seeing these doesn't outweigh the benefit of catching the criminals. Also, if you were willing to take those photos of your kids, you probably were okay with showing them to at least some people--obviously not everyone, but people you trust and thought had mature minds (my opinion, for sure). All that being said, I think the answer to whether Apple should inform you should be debated, and maybe the answer should even be yes, they should tell you, if for no other reason than to be transparent with people who are using their iCloud Photos backup.

At some point, having too many hands in the pot means you don't get the product done and out the door, and it often ruins the product. While I agree that having the process scrutinized by everyone from ethicists to political scientists is good, I'd be willing to wager similar discussion did happen internally with their own experts and legal teams to determine whether they should move forward or halt. With those internal teams, they have put forward what we are seeing. I hope external experts punch holes in the process; I just hope they are based in facts and not political agendas. It's even a touchy subject internally, if we believe the reports about Apple's internal employees. I feel I've seen every Apple executive's comments since the initial proposal was put forward, and I personally didn't feel they were naive, but that might be how I view it from my stance on the issue. Someone who's already opposed to it may see their comments in a different light.

To address your question about whether, if someone gets a false positive on one photo of a series, the others would get flagged as well: this is where understanding of Apple's NeuralHash process comes into play. Given my understanding of the method, it seems pretty much impossible for this to happen, but let me get a bit into the technical side. What happens is that when a photo is taken, it's run through an algorithm that spits out a long string of characters (numbers). That long string of characters (which is really long) has a couple of advantages. First, it's much smaller in size than the original photo. This is how Apple can hold a list of the illegal CSAM hashes on your device for comparison without also loading the illegal photos onto your phone, which no one would want anyway. Second, the long string of characters, if given to someone, couldn't be reversed back into the original image, so you don't have to worry about someone taking your hash and running it backwards through the algorithm to see your photos. Third, and probably most important, two photos taken in a series (let's say burst mode on your phone) would result in two completely different character hashes. Since Apple is matching hashes against known CSAM hashes, they have to be an exact match. So by definition I can't see how a series of photos could all match one or even a few CSAM images. Now, that being said, it does try to account for the same image that might have been cropped or had a filter added, e.g. a black-and-white filter or cropping the image to cut some of the border out. The whole process is described here, specifically on page 5. So that is also a good reason to have a human review.
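As a rough illustration of that "hash the photo, compare against a list of known hashes, count matches" flow, here is a toy sketch. To be clear, this is not Apple's NeuralHash: NeuralHash is a perceptual hash designed so that a cropped or filtered copy of an image still produces a matching hash, whereas the SHA-256 stand-in below changes completely if a single byte changes. The directory path, threshold constant, and blocklist entry are all made-up placeholders.

```python
# Toy sketch of the "hash the photo, compare against a list of known
# hashes, count matches" flow described above. This is NOT Apple's
# NeuralHash -- SHA-256 is used only as a stand-in, and every value
# below (paths, threshold, blocklist entry) is a made-up placeholder.
import hashlib
from pathlib import Path

def photo_hash(path: Path) -> str:
    """Hex digest of the file's bytes (stand-in for a perceptual hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical on-device list of known hashes -- only hashes are stored,
# never the images themselves, as the post notes.
known_hashes = {
    "0" * 64,  # placeholder entry, not a real database hash
}

def count_matches(photo_dir: Path) -> int:
    """Count photos whose hash exactly matches an entry in the list."""
    return sum(photo_hash(p) in known_hashes for p in photo_dir.glob("*.jpg"))

THRESHOLD = 30  # per Apple's stated design, nothing is reviewed below this

matches = count_matches(Path("~/Pictures").expanduser())
if matches >= THRESHOLD:
    print(f"{matches} matches -- the account would be escalated for human review")
else:
    print(f"{matches} matches -- below the threshold, nothing happens")
```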

I apologize if you already knew all that, but if you consider the odds they have published of one in one trillion per year, then it would seem virtually impossible for it to make a mistake. However, even with that very strong set of checks, Apple will still have someone visually verify the results when triggered. That almost certainly guarantees that the whole system, including the human review part, will only catch illegal photos. This method doesn't catch new illegal images, since they have not been added to the database. However, if someone does have an illegal photo that is in the CSAM database, then hopefully, after the authorities do their investigation, the other new illegal images will get added to the database, and this will have a viral effect if they've been sharing the illegal images they have been creating with others.
 
Last edited:
  • Like
Reactions: VulchR
I'm curious to see how this will unfold. Personally, the implementation of this "security" system will have no impact on me, but it is something I will not accept. Scanning documents on iCloud for illegal images is fine by me, but implementing it on the device (no matter the limited scope) is beyond the pale in my opinion, no matter what the intended purpose is. However, Apple certainly has the right to implement the feature if it doesn't violate any laws. And consumers have the right to choose. The government shouldn't intervene; let consumers choose. Personally, I will opt out and sell any Apple devices that implement this "feature".
 
...

To address your question about whether, if someone gets a false positive on one photo of a series, the others would get flagged as well: this is where understanding of Apple's NeuralHash process comes into play. Given my understanding of the method, it seems pretty much impossible for this to happen, ...

This is the part that concerns me. If the NeuralHash matching operated as you have described, the incidence of false positives would be low. Having set the match threshold at 30 plays against this assumption. It lends credence to the concern that it is not as accurate as communicated. Whether this could apply to a series of photos or just a random shot, unless Apple is willing to disclose the details or let the user know about the flagged photos, we won't know.

That is a concern.
 