Yes, you are right... but that's not the heart of the argument. The argument that Tim Cook / Apple presented when writing that open letter about the FBI backdoor is... if the backdoor tool exists, it can be used and abused. If the tool doesn't exist, then it can't be used or abused.

There's a large gap between asking Apple to write you a tool that doesn't exist, vs asking Apple to use a fully developed and deployed tool.

In this case, the tool now exists.
Maybe the tool has always existed. Absolute privacy has never been a reality. I just trust Apple more than others, including our own government, although not absolutely. If there were a better alternative, I would consider it. I am a huge proponent of personal encryption that is beyond the scope of anyone to easily penetrate.
 
The issue will be when someone's life is destroyed by an AI recognition system that is pursued by the powers that be and proves to be false, yet they are irreversibly damaged by the actions taken against them. The costs can be reclaimed; the damage to one's life cannot... once you are caught in the cycle of defending yourself to prove your innocence, you will truly understand how this exceeds any monetary compensation to right the wrong.
 
The AI does the scan, but a human will review any positive results before forwarding them to law enforcement. If you're truly concerned, maybe wait for a few point releases before upgrading to iOS 15.
 
I would hope that it works out for the best for everyone. However, I refuse to worry about the possibility or use it as a basis of support for Apple. As long as the United States remains a constitutional republic, there will be no negative effects from Apple on me. If not, I have more to worry about than Apple.
Okay. Well, I appreciate you making that clear for me. Now I know not to waste my breath with you on this topic.
 
One last thought. I believe Apple is committed to the privacy of the data contained on your physical device only. Do whatever you want, but don’t make it Apple’s problem. The moment you use iCloud and store your data on their servers, you potentially make it their problem to fix. The truth is there are people out there that Apple doesn’t need as customers; no company needs these people.
 
I’m definitely holding out for a few releases. I’m in no rush to even jump on the beta bandwagon.
That’s an interesting point. Have all of the iOS 15 dev and public beta testers had this “child safety feature” enabled on their devices (or at minimum had this “feature” made available to their device in such a manner that it will be enabled later)?

If yes, and this was not notified to them, that’s an absolute debacle of a move by Apple.

I’m still catching up on this story so apologies if asked and answered but am very curious now.
 
I wouldn’t be surprised if there’s a lawsuit coming for the lack of approval from the developers or the public beforehand.

It all sounds like Apple failed to be TRANSPARENT, showed a lack of COMMUNICATION and failed to INFORM after iOS 15 got dropped.

This privacy feature didn’t happen overnight; it was well planned. Apple has been working on this behind the scenes since last year. Thus, Apple decided to inform the developers now, since we’re only a month away from iOS 15 being officially released and weekly betas will be starting soon this month.
 
Yes I’m replying to myself but I just scanned the most recent article and comments and don’t see an answer to this question nor how Apple is limiting this to the “U.S.”

I’m assuming they’re using a combination of restricting via SIM, geolocation and Apple ID, but I can’t tell. Of all the extremely valid concerns being raised so far, to me this “geo creep” is both the most concerning and where I suspect the first abuses of this (both by “bad guys” and by “bad govs”) will occur.

Especially given the way most of the reciprocity and intel-sharing agreements currently in place between the US and other countries work (it goes well beyond just the Five Eyes).

Finally, stupid question, but is this just for iOS? Or iOS and iPadOS? What about tvOS and watchOS? And of course macOS? I’ll check again as I’m sure this one is answered somewhere, but I was wondering if anyone knew off-hand.
 
That’s insane! I’d have my lawyer on the phone ASAP.

How can they have been working on this since last year and not tell developers or beta testers, or even mention it at WWDC!

I seriously wonder if they weren’t going to announce this, and because they got caught up in Pegasus they decided to self-disclose after all (or it was decided for them and they were “allowed” to mention it to us plebeians).
 
That’s a very good point too. It’ll be interesting what Apple decides to do. Trust me, they are on the radar now.
 
‘give the finger’ and continue to operate in the country?
that doesn’t happen anywhere
Good for Apple. Tim Cook still caved to the Chinese government, despite Erik Neuenschwander pretending otherwise.

You kid yourself if you think Google didn't give up huge profits by exiting China, instead of going whole hog after greasing the gears of the CCP. If Apple had followed Google's lead out of China, maybe we would have seen competitive Apple web services for once.
 
I hope so. I’m actually really disappointed with MacRumors’ coverage of this. I just read the Apple overview and their linked white papers.

They are fairly well done, but I’m seriously convinced that only two of the three Child Safety features were originally going to be announced (the first is the on-device warning that your child is sending / receiving materials of a potentially sensitive nature, and the third is the apparent hard block on using Siri and Search for materials involving child sexual abuse).

I have some nitpicks on both (especially the restrictions on Siri and Search) but nothing earth-shattering.

The on-device / client vs iCloud / server CSAM techniques, though definitely elegant in design, are an absolute joke in terms of application to this problem. And the focus on cryptography ignores both the very real risk of scope creep (e.g. geo creep) and the potential for abuse of the process (not the individual techniques).

I didn’t know until reading the CSAM white paper that they’re not even directly comparing to the NCMEC database; it’s instead a database created by Apple using images provided by NCMEC (and apparently other organizations, but I haven’t found those names yet).

In addition, that database, once loaded onto my phone, will be updated as a black box during unspecified iOS updates. Neither Apple nor we as end users will have any way of knowing what is in this “NeuralHash” database.

And while I’m sure Apple will have a process to ensure the integrity of the process (i.e. that [set] can be used to correctly and accurately reproduce pdata), this is a very easy spot where a bad guy OR a bad gov can force Apple (or, even easier, just bribe a few key employees) to introduce non-CSAM materials or otherwise abuse the process (not the protocols or techniques) with neither NCMEC / etc. nor us as end users being any wiser.

At minimum, this can be used to justify immediately and very likely permanently locking someone out of their account. Bribe or co-opt just one or two people in the process, be it at law enforcement orgs or at NCMEC / “other orgs”, and you can easily destroy someone’s life.

I also did not then, do not now, and never will consent to Apple assigning a persistent “safety voucher” to my images on-device. I could understand a hash being persistently attached, but not a safety voucher. Putting aside the NeuralHash database, these vouchers are even more problematic to me, especially because of (NOT despite) the TSS technique. Frankly, this entire process seems explicitly designed not for privacy but to allow / enable entrapment or blackmailing. I do applaud the steps forward in applying these techniques in the interest of user safety (I really do, and with modifications I can see this process working; it could, for instance, be used against revenge porn!), but the process “as is” currently is ripe for abuse.

More to the point, the only non-malicious reason I can ascribe to Apple’s decision to do the cryptography on-device is that, while this approach really is elegant in design, all steps of the PSI, TSS, “safety vouchers” and NeuralHash should be server side, and Apple isn’t doing that due to performance and resource constraints. I.e. Apple is deliberately crippling its own commitment to security and privacy because the iCloud upload would take longer if this were all server side. That the synthetic vouchers are server side makes me a bit skeptical of this argument, though, but I’m definitely not an expert.
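To make the TSS idea concrete, here is a minimal toy sketch of threshold secret sharing in Python (my own illustration; not Apple's actual construction, parameters, or threshold): a key is split so that any t of the n shares reconstruct it, while fewer reveal nothing. In the scheme as the white papers describe it, each positive match is supposed to release one share, so the vouchers only become decryptable once the match count crosses the threshold.

```python
# Toy Shamir-style threshold sharing -- illustrative only, NOT Apple's actual
# TSS construction or parameters.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret


def split_secret(secret: int, threshold: int, num_shares: int):
    """Split `secret` so that any `threshold` shares reconstruct it and
    fewer than `threshold` reveal nothing about it."""
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def eval_poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, eval_poly(x)) for x in range(1, num_shares + 1)]


def reconstruct(shares) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    key = random.randrange(PRIME)                 # stand-in for a decryption key
    shares = split_secret(key, threshold=10, num_shares=30)
    # One share per positive match: below the threshold the key stays hidden,
    # at the threshold it becomes recoverable.
    assert reconstruct(shares[:10]) == key
    assert reconstruct(shares[:9]) != key         # 9 shares look like random noise
```

The sketch only shows the threshold behaviour; the described protocol additionally wraps the matching in PSI so that, at least on paper, the device itself never learns which images matched.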

But being a bit more cynical here: this second “CSAM detection” technique, and notably the lack of transparency on its rollout, the obvious potential for it to be exploited by bad govs (or even just bad actors in good govs… FISA abuse happened here in the USA, after all), as well as the fact that very, very few people use iCloud to swap their CSAM materials in the clear, make me think this process has been deliberately designed to be exploited, but with maximum plausible deniability by all parties (Apple employee or contractor? <—> or NCMEC [or other org]? <—> or local law enforcement? <—> or other?), and that someone(s) completely blindsided Tim Cook and senior management. There is no way that Tim Cook would’ve signed off on this if the potential for abuse had been made clear to him. And I say this as someone who is not really a “fan” of his.

So this is my long-winded way of saying that the obfuscation of the fact that NeuralHash is a proprietary Apple database, especially the total lack of discussion around the black-box updating of said database and the equally black-box-like “safety vouchers”, and what I am assuming is the complete and total unacceptability of this process to Apple’s senior management, are why I believe this wasn’t going to be announced to end users and that someone forced Apple to disclose all three child safety “features”.

I may also be reading WAY too much into this, but there is a very obvious typo in the “CSAM Detection” white paper (s/b a 9 instead of a 10 in the TSS section), and certain aspects of the risk assessments don’t conform to my general expectations (though again, I’m not an expert).

In fact, the risk assessments explicitly mention that we as end users will not see any impact, will have no insight into the process or the techniques and protocols, and will be unaware of all of it, and these are cited as benefits!

Sorry for brain-vomiting on you but I’m just absolutely floored at what happened. I may write my “baby’s first scathing letter to Tim Apple” tomorrow. Thank you for responding btw, it was very helpful!

 

No problem. Wonderfully written and good job elaborating on your points. Good luck writing a letter to Tim Apple.

Make sure you write a letter and physically mail it and put an AirTag inside the envelope/letter so you can track the progress. Trust me, Apple will appreciate it. :)

Mailing Address: Tim Cook, One Apple Park Way, Cupertino, CA 95014.

This might also help :) Good Luck and keep us posted, please.

 
I doubt the company is going anywhere so I don't see it as the end, but it invites a lot of frightening questions.

1. Do Android and Samsung already do this without letting anyone know? Or do they do it openly and Apple is just catching up? I honestly don't know.

2. In the U.S. it's "for the children", but what about people in China and other dictatorships where Apple feels compelled to "comply with local law"? Will Apple be turning over the results of their phone scans to those governments? Will anyone in China with a Winnie the Pooh photo, or a meme about the Uighur genocide on their phone be flagged and reported to the Chinese government?

3. How about in places like Russia where homosexuality is virtually criminalized? Will Apple be reporting who is sexting their same-sex partner to the Russian authorities? You might laugh, but the minute Apple rolls this out you better believe these dictatorships will start requiring Apple to use this technology to "comply with local law" if they want to remain in the market.

4. If I took a topless photo of a woman I dated 5 years ago, and it's still on the cloud, and it gets flagged, who looks at it? And if she's wrongly judged to be underaged by whoever on Apple's staff does the looking at photos, what recourse do I have when my phone is suddenly locked and the FBI is breaking down my door? Even if the charges are eventually dropped once the ex comes forward to prove she was an adult when the photo was taken, at this point the suspect has almost certainly lost their job and been shunned by the people in their life.

5. Will Apple pay damages for lost wages, pain and suffering, and attorney fees when a false positive leads to law enforcement action? Will Apple's Terms of Service include a provision where you agree not to sue them if they wrongly get you locked up?

6. How are they judging message content? Are they reading and judging fantasy sexts between adults? Are they only looking at messages that go to Apple customers who are under 18? How can I be sure that if I called my adult girlfriend a "bad little girl" in a joke text message, that the FBI won't be reading it the next day and deciding whether it's actionable?

7. Regarding reexamining tech options - not a bad idea. I've been in the Apple ecosystem so long, even to the point of enthusiastically buying Apple stock when I started investing, that I have no idea what's going on with Samsung or Pixel phones or how a Windows start page even looks these days. It's past time I caught up on the outside world. But again, are those companies just doing the same thing too?

A lot of these points remind me of what people were saying about Airtags - let me attempt to explain what I mean:

Firstly, people do not properly understand how the new thing works - they see what it does, then start making false assumptions and making up a lot of seemingly plausible but false scenarios. It is actually important to understand how they are scanning photos and what they are really doing.

Yes, we could argue that they shouldn’t be doing any scanning of our photos at all. I’d agree with that, but on the other hand, the method used to do this scanning is supposedly completely done blindly and is only comparing to a known database list of images already known to have been circulated - it’s almost the same thing as looking for a digital copyright watermark. It’s not doing facial recognition or booby recognition at all - it’s trying to find a match from a known database of harmful illegal images and that’s it. At least for now.

I somewhat do agree that it could be a slippery slope - but maybe it’s worthwhile to allow Apple to do this. I think it’s good for society to fight child pornography - and they apparently have a method to do it without invading our privacy.

I could be wrong here - but I’m saying it’s worth it.

They are not “looking” at your photos. They are only matching up “digital watermarks”.

I wonder what the EFF (Electronic Frontier Foundation) opinion is about this? Did Apple solve a complicated problem and preserve our rights to privacy with this? Or is it going to be a slippery slope?

I’m not claiming to know absolutely - but I don’t want to read obviously incorrect comments about old girlfriend booby pictures being somehow mistaken for a database of basically digital watermarked images.

My main point is that many comments will be made here by people who don’t know or understand how this works - and they will make some incorrect extrapolations.
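For what it's worth, here is a rough sketch in Python of the kind of fingerprint-vs-database matching described above, using a simple average hash as a stand-in (NeuralHash itself is proprietary, and in the real pipeline the comparison is done blindly via PSI rather than in the clear like this):

```python
# Rough sketch of fingerprint-vs-database matching. Uses a simple "average
# hash" as a stand-in, since NeuralHash is proprietary; the real pipeline also
# does this comparison blindly (PSI), not in the clear like here.
from PIL import Image  # pip install Pillow

KNOWN_BAD_HASHES = {0x81C3E7FF7EE3C381}  # placeholder fingerprints only


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale and set one bit per pixel brighter
    than the mean: a crude 64-bit perceptual fingerprint (for size=8)."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def is_flagged(path: str) -> bool:
    # Pure set membership against known fingerprints of already-identified
    # images: no face detection, no skin detection, no understanding of what
    # a new photo actually depicts.
    return average_hash(path) in KNOWN_BAD_HASHES
```

The point of the sketch is only that the check is set membership against known fingerprints, not content analysis of the photo itself.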
 
They are not a fan

 
oh ****. :)

hah. I see that now. That’s funny; the EFF even used the “slippery slope” wording.

OK - well, I do respect the EFF. Tim Cook does too, I would think.

I guess I’m back on the side of privacy as the most important thing - even if it means terrorists and illegal pornographers end up preferring iOS.

It’s a really tough thing to figure out.

Privacy is #1, and maybe until just now Apple was the only company in the world that had a chance to fight for privacy for all people.

So maybe my previous post was not right. I mean - I said what I feel - but just now I am stepping back on that and leaning towards protection of privacy.

Dang - it’s a real dilemma.
 
So far I have now disabled iCloud Photos - but I think the next step is to find a good ol' flip-phone and resort to bringing my Nikon camera for outings.

And about hashed image comparison - it is only as good as the source + the source code. It WILL generate a lot of false positives. It will also spawn a lot of hackers AND "jokesters" putting known "bad" images on people's phones. The photo equivalent of "Swatting".

I do think people posting pictures of any form of abuse should face harsh punishment. But no search of a private person's property should take place without a proper PRIOR issued warrant. I have worked with police forces catching people trading "not very nice" pictures. But in every case a proper warrant was in place before even starting to listen to any conversation, in both analogue and digital formats.

Just because you "CAN" do something does not mean you should (just like surveys after each purchase or website visit...)
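On the false-positive point above, here is a quick back-of-the-envelope calculation in Python, with toy uniform 64-bit fingerprints and made-up library and database sizes, showing how fast chance matches grow once "near" matches within some Hamming distance are accepted instead of exact ones. Real perceptual hashes are not uniform random bits (visually similar images cluster), so real-world false-positive behaviour can be worse than this baseline suggests:

```python
# Back-of-the-envelope: chance that two *uniform random* 64-bit fingerprints
# fall within Hamming distance d, and the expected number of chance matches
# when a photo library is checked against a hash database. Real perceptual
# hashes are not uniform, so treat these numbers as illustrative only.
from math import comb

BITS = 64


def p_match_within(d: int) -> float:
    """P(two uniform random 64-bit hashes differ in at most d bits)."""
    return sum(comb(BITS, k) for k in range(d + 1)) / 2**BITS


def expected_chance_matches(n_photos: int, db_size: int, d: int) -> float:
    return n_photos * db_size * p_match_within(d)


if __name__ == "__main__":
    # Made-up sizes: 10,000 photos in a library, 1,000,000 hashes in the DB.
    for d in (0, 4, 8, 16):
        print(f"d={d:2d}  per-pair={p_match_within(d):.3e}  "
              f"expected chance matches={expected_chance_matches(10_000, 1_000_000, d):.4f}")
```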
 
Same here. I explained the situation to my other family members and they immediately told me that they are gonna disable iCloud Photos. This will save us €2.99 per month now \o/ Thanks Apple for being evil!

They knew that most users “depend” on iCloud Photos; that’s why they tied it to this knob, and not to some extra setting.

But yeah, that’s Apple as we know it, the digital Chupacabra.
 
So basically if one wanted to set somebody up for a world of hurt the recipe is pretty simple:

1. download a few known kiddie porn images on your computer (a PC is probably safer as the Suede Denim Secret Police will monitor your macOS device soon enough as well)
2. quietly acquire their phone for a few mins (don't even need to unlock it, just use the quick camera shortcut on unlock screen)
3. take a few pics of your screen with different kiddie porn images on it
4. nonchalantly slip it back to the owner
5. ??????????
6. PROFIT!!!!!

They get tagged, lose their account, get dragged off by the cops while you enjoy the fruits of your labor sipping a latte. Go Apple!
 

In short - yes. The hash calculation might leak - making it possible to "construct" images that will trigger a match. Those would then need manual review at Apple - stealing resources from Apple.
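As a toy illustration of that "leaked hash calculation" risk, the Python sketch below brute-forces a collision against a 16-bit stand-in fingerprint on random bytes (a deliberately tiny hash so the search is trivial; published attacks on real perceptual hashes perturb images instead, but the principle is the same): once the hash function is known, an attacker can keep tweaking an innocuous input until its fingerprint matches a targeted value.

```python
# Toy illustration of constructing an input that collides with a targeted
# fingerprint once the hash function is known. A 16-bit stand-in hash keeps
# the brute force trivial; this is NOT a perceptual hash or a real attack.
import hashlib
import os


def toy_hash(data: bytes) -> int:
    """Stand-in 16-bit fingerprint derived from SHA-256."""
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")


def forge_collision(target: int, max_tries: int = 1_000_000):
    """Append random suffixes to a harmless payload until its fingerprint
    equals the target. Roughly 65,536 tries expected for a 16-bit hash."""
    base = b"perfectly innocuous bytes "
    for _ in range(max_tries):
        candidate = base + os.urandom(8)
        if toy_hash(candidate) == target:
            return candidate
    return None


if __name__ == "__main__":
    target = toy_hash(b"pretend this fingerprint is in the database")
    forged = forge_collision(target)
    print("collision found:", forged is not None)
```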
 
You read my mind. I think also they've just told criminals and perverts not to use Apple products.
 
Didn’t Apple get sued for checking employees’ bags outside their work time, assuming all their employees are criminals?!
Well, they are doing the same with their user base now.

It’s time for “I’m a Criminal and I’m a PC” adverts.
 

My big worry is children - who are naturally curious about their private parts - they DO take pictures.

I had to erase about 200 "backside" pictures my son had taken on his Nikon camera. He said he could not see his backside and wanted to know what it looked like. 190 out-of-focus images, but a few "in focus" and revealing.

As our children sometimes "borrow" their parents' phones to take about 1000 pictures before you can say "Where is my phone?", it could get messy really fast.

I know Apple claims they only tag previously "known" images. I do not trust that statement at all - it does not make sense, as you would prefer to capture the images at the "source" (aka the "dealer/maker"), not at the "user" level.
 