Craig has to go. He’s so tone deaf and out of touch, it’s unbelievable.

First, he saw no problem with implementing a spyware-style functional architecture. This debate has been going on for decades, and even Google and Facebook know that such an architecture goes too far, is too easy to abuse, and is incompatible with a free society. Being ignorant, willfully or otherwise, is unacceptable for someone in his position.

Second, he tried to silence and bully us with the “screeching minority” leak. Yes, the zealots at NCMEC wrote that, but there’s no way that comment was circulated without the consent and concurrence of Apple’s senior leadership. Silencing and bullying your customers is not only bad business, it’s unethical and unacceptable.

Third, he’s gaslit us multiple times during his clarification interviews. He’s claimed we’re “confused,” despite most of his opponents being long-time privacy and security experts. E.g., the guy who started this, Matthew Green, is a computer scientist and encryption researcher at Johns Hopkins. If you’re a techie, you’ve been reading the original papers and analysis from experts like him, and you are not confused.

There are plenty of people who are indeed confusing the three features (mostly news orgs trying to beat each other to the scoop), and others who are confused about perceptual hashes and how this works at a deep technical level (mostly proponents who immediately jumped in to defend Apple without having read any of the technical docs). Those generally are NOT the screeching minority, Craig.

#FireCraig
User joined August 13 with the #FireCraig hashtag. Smells like a troll.
 
Again, I ask:

If the only data available for the iOS device to scan is a 1:1 duplicate of the iCloud data, what is the slippery slope? Governments have been demanding cloud data from tech companies, and tech companies have been providing it, since 2005, INCLUDING Apple.

If the scan for CSAM occurs on device, and the data available to scan is a 1:1 duplicate of the iCloud data, what is the difference between the scan occurring on servers versus on the iOS device? What NEW information is on the device that is not already available to governments, for the people saying governments will want more? Why has this extra information not been demanded since 2005, when cloud drives, INCLUDING iCloud, have been handed over to governments since then?

The outrage is: 1. Slippery slope. A slippery slope toward what, exactly, if the data is a duplicate of cloud data? 2. Governments will ask to scan for more than CSAM. OK, why wasn't this raised before August 5 with similar fervor, when Apple HAS been giving all iCloud data of users to governments all over the world when they demand it? And I mean ALL data.

Then there are the users saying this will be exploited. OK, if the data available to scan is a duplicate of iCloud data, why has this exploitation been absent from cloud scanning since 2005?

I mean... think logically. The outrage is illogical.

If you are concerned that NOTHING should be scanned, cloud or device, that's a legitimate and valid opinion, but these slippery slope arguments and concerns seem wholly manufactured when you realize the only data available to scan on device is a duplicate of cloud data. Nothing more and nothing less.
Here are some "screeching voices" that are considerably more eminent than mine... for your answer: https://appleprivacyletter.com/
 
If you care about your privacy, and about Apple not installing an all-seeing eye for you and your family in the future, I suggest saving your next iPhone purchase money and donating something to FOSS projects like CalyxOS and Linux Mint... you know, to people who actually DO care about your privacy, so much so that they work for free.
 
Get your facts straight. Apple has already been scanning photos for CSAM on iCloud since 2019. No whine-fest here. There is a massive difference between scanning data on their own servers (iCloud) and on a device owned by a client.

OK, thanks, I will. Please post a link from a credible source that states that's been happening since 2019.
 
I'm shocked that 99% of them don't actually bother to learn how it works. They even talk about backdoors without knowing how that would even be possible, lol. But sure, hop on the trend and say you don't like this feature.

You check my files?
That's enough, I don't like it. I do not need to know how it works. I refuse to let you check my files. Do not TOUCH my property.
 
As I mentioned before, your whole argument is built on a conspiracy-theory narrative rather than actual proof of privacy being breached.


For all the new folks reading, let's just reiterate.

1. Apple says it will not expand the scanning categories beyond CSAM. If you don't believe that, you don't trust them. Move on to a new ecosystem.

2. Apple says it will not allow governments to expand what can be done on the phone. If you don't believe that, you don't trust them. Move on to a new ecosystem.

3. On-device AI scans the hashes of your images in transit to iCloud. Your privacy remains intact unless you are a pedophile, because a match is only reported to Apple once a user meets the 30-positive-image-hash threshold. Apple does not receive ANY communication about what the device scanned unless there are 30 positive image hashes.

4. If Apple scanned image hashes only on iCloud, it would have user information for every hash that was scanned, unlike on device, where user information is only sent to Apple if you are a pedophile (see the sketch below).
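To make point 3 concrete, here is a minimal toy sketch of that flow. It is a simplification and not Apple's code: the real system uses NeuralHash, private set intersection, and threshold secret sharing, with the counting done server-side over encrypted safety vouchers; the names below (perceptual_hash, upload_to_icloud, THRESHOLD) are hypothetical.

```python
# Toy sketch of threshold-gated, on-device hash matching (NOT Apple's code).
THRESHOLD = 30  # matches required before anything about the user is revealed

known_csam_hashes = {"hashA", "hashB", "hashC"}  # hypothetical on-device database

def perceptual_hash(photo_bytes: bytes) -> str:
    """Placeholder for a perceptual hash such as NeuralHash."""
    return f"hash_{hash(photo_bytes) % 1000}"

def upload_to_icloud(photos: list) -> None:
    vouchers = []
    for photo in photos:
        # In the real design the match result is sealed inside an encrypted
        # "safety voucher" attached to the upload; here it is just a boolean.
        vouchers.append({"matched": perceptual_hash(photo) in known_csam_hashes})

    matches = sum(v["matched"] for v in vouchers)
    if matches >= THRESHOLD:
        # Only past the threshold could the matching vouchers be decrypted
        # and the flagged images sent for human review.
        print(f"{matches} matches: account flagged for human review")
    else:
        # Below the threshold, nothing about any photo is revealed.
        print("nothing revealed about this user")
```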


That's it.

All the slippery slope arguments don't fly: they are speculation, and they are illogical, because the same folks making this argument didn't show the same fervor when cloud data from all tech companies has been openly shared with authorities since the early 2000s. The same argument could have been made about categories beyond CSAM.
Dive deeper and remember E. Snowden's warnings (maybe he is an expert).
Let's simply speculate about how politics works: Apple is faced with an ultimatum from the highest level: 'Either we strengthen the resistance against your "walled App Store" (with propaganda that can easily be fanned, also at MacRumors), or we annoy you with child porn prevention to see our ideas implemented.' What can Apple do about it?

And you can see for yourself that we would rather give up the benefits of the image analysis Apple has already implemented (faces, cereals, etc.) than accept that an interface is being prepared that carries a lot of danger and definitely discredits Apple, thereby severely damaging some of Apple's future business models. Hence the outcry from many Apple employees.

 
They might not (be able to) tell you.

If Apple expands these capabilities, they get caught. Again, go back and read my post about Apple's Security Research Device Program. The only way they do not get caught is if all these processes happen away from the device, like on a server (where they can run any code they want that's completely invisible to you)... which is EXACTLY what Apple did not do.

I trust this system more because we can know for sure which root hashes are contained in the local database, whether the match calculations are accurate, and whether they apply only to photos being uploaded to iCloud. If Apple implemented this on the server side, we wouldn't know any of this for sure.

Let's simply speculate about how politics works: Apple is faced with an ultimatum from the highest level: either we strengthen the resistance against a closed App Store (with propaganda that can easily be fanned, also at MacRumors), or we annoy you with child porn prevention to see our ideas implemented. What can Apple do about it?

Read my response to the other user. If Apple wanted to collude with some government, this is literally the worst way to do it. The best way would be to do everything on the server, without any possible audit.
 
1. If image hash matching occurs on device, the only thing other than you that knows about the image hashes is the algorithm.
2. If image hash matching occurs in the cloud, every single image hash scan would be tied to user data, and Apple would know about each scan.
3. If image hash matching occurs on device, Apple is notified only if 30 images positively match the database provided by multiple child protection groups. This is the ONLY time Apple is given USER information from the scan. Thus only information about highly likely pedophiles is sent to Apple, not about innocent users.
4. If the scan is done only by the device without notifying Apple, this is neither a backdoor, nor spying, nor a privacy breach.
5. If you are a pedophile with CSAM on your device, this is a backdoor for you, spying on you, and a privacy breach for you, because ONLY your information is shared with an entity, not that of innocent users.
6. In summary, this means that for most users, having the device scan hashes is more secure and more privacy-preserving than having iCloud scan them (see the sketch after this list).
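To put those points side by side, here is a rough sketch under the same caveats as the sketch above (hypothetical names, no real cryptography): server-side matching ties every scan to an account, while device-side matching reports nothing unless the threshold is met.

```python
# Simplified contrast between cloud-side and device-side matching (not Apple's code).
def server_side_scan(user_id, photo_hashes, csam_hashes):
    # The operator ends up with a (user, photo, result) row for EVERY photo.
    return [(user_id, photo_id, h in csam_hashes) for photo_id, h in photo_hashes.items()]

def device_side_scan(photo_hashes, csam_hashes, threshold=30):
    # Nothing tied to the user leaves the device unless the threshold is met.
    matched = [photo_id for photo_id, h in photo_hashes.items() if h in csam_hashes]
    return matched if len(matched) >= threshold else None  # None: the operator learns nothing
```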
 
You mean on the iCloud servers? If so, then they're already preventing storage and distribution, no? Why did they report only 265 pictures to the NCMEC last year? Did they detect CSAM without reporting it to the NCMEC?

I'm not a spokesperson for Apple. If you want specific details, I suggest you contact Apple.
 
Dive deeper and remember E. Snowden's warnings (maybe he is an expert).
Let's simply speculate about how politics works: Apple is faced with an ultimatum from the highest level: 'Either we strengthen the resistance against your "walled App Store" (with propaganda that can easily be fanned, also at MacRumors), or we annoy you with child porn prevention to see our ideas implemented.' What can Apple do about it?
This can be applied to EVERY tech company sharing cloud data, yet the fervor is not the same for them. You could argue that only Apple has talked up privacy, but the government-holding-a-tech-company-hostage scenario is only being applied to Apple for some reason, and given past precedent it hasn't occurred and will not occur, unless you don't trust Apple. And if you don't trust Apple, you are in the wrong ecosystem.
 
If Apple expands these capabilities, they get caught. Again, go back and read my post about Apple's Security Research Device Program. The only way they do not get caught is if all these processes happen away from the device, like on a server (where they can run any code they want that's completely invisible to you)... which is EXACTLY what Apple did not do.

I trust this system more because we can know for sure which root hashes are contained in the local database, whether the match calculations are accurate, and whether they apply only to photos being uploaded to iCloud. If Apple implemented this on the server side, we wouldn't know any of this for sure.



Read my response to the other user. If Apple wanted to collude with some government, this is literally the worst way to do it. The best way would be to do everything on the server, without any possible audit.
Yes, it's not the best way, but it seems to be a compromise…
 
Apple had Photo Stream activated by default for quite a while! Even for users who were not using iCloud Photos. You had to disable it manually. That was without the user's permission. Many medical photos, for example (or other sensitive material), were uploaded to their servers. You are more than welcome to doubt anything you want.

You haven't answered my questions, but that was expected anyway.

Again, link? And you just confirmed it WASN'T forced (you said it could be manually disabled), so my doubts were correct.

I answered your questions. You just didn't comprehend the answers. Let me try again:

1. "Apple" is not the one protecting children from indecent images. "Parents" are, by using technology Apple has made available.

2. Your second question was based on "if yes" to the first. Since the first was based on your misunderstanding of the safety features in Messages, the second question is irrelevant. But I can make it a better question and answer THAT for you: "Should Apple have an optional parental control feature that protects children if they search for potentially harmful topics?" And my answer is YES. I mean, who on earth would object to that, especially since it would be an optional parental control feature?
 
This can be applied to EVERY tech company sharing cloud data, yet the fervor is not the same for them. You could argue that only Apple has talked up privacy, but the government-holding-a-tech-company-hostage scenario is only being applied to Apple for some reason, and given past precedent it hasn't occurred and will not occur, unless you don't trust Apple.
I believe Apple is just making the best compromise, and has been ordered to fool its customers. As I love Apple, these are dark days for me.


In addition, I speculate that the other cloud providers have become equally compliant in exchange for payment.

I'm looking forward to governments putting in their own hashes, only they won't be of naked children but of politically persecuted people.
 
You know a much easier solution which Apple could have used for a decade?

Turn on iCloud backup secretly and you get access to almost everything.

Why have you not worried about iCloud backup?

Because if Apple wants to start doing needle-in-a-haystack searches on iCloud, they pay the price of having no probable cause, and that's fair.

Instead, they made a system where every user pays the price of constantly searching themselves and then reporting themselves to the authorities.

I’m not guilty. But I don’t want to bear the cost of constantly proving to Apple Inc. that I’m not guilty. They have neither the moral nor legal authority.

How about Apple gets to scan my photos and I get to scan their tax returns?
 
I believe Apple is just making the best compromise, and has been ordered to fool its customers. As I love Apple, these are dark days for me.

The arguments about future scenarios are entirely illogical, because they weren't applied to Apple when it came to iCloud scanning, so it's illogical to think they will magically apply to device scanning when iCloud scanning is more intrusive than device scanning.

Also, these scenarios are worst case. They were worst case even before Apple announced CSAM scanning, but for some reason scanning for pedophiles starts these conspiracy theories, not anything prior.
 
Again, link? And you just confirmed it WASN'T forced (you said it could be manually disabled), so my doubts were correct.

I answered your questions. You just didn't comprehend the answers. Let me try again:

1. "Apple" is not the one protecting children from indecent images. "Parents" are, by using technology Apple has made available.

2. Your second question was based on "if yes" to the first. Since the first was based on your misunderstanding of the safety features in Messages, the second question is irrelevant. But I can make it a better question and answer THAT for you: "Should Apple have an optional parental control feature that protects children if they search for potentially harmful topics?" And my answer is YES. I mean, who on earth would object to that, especially since it would be an optional parental control feature?
You are just impossible!!!!! You have a serious tendency to support Apple and easily put the blame on users... don't you...

Yes, you could disable it after someone had informed you about it! Many users didn't even know about it because they were NOT using iCloud Photos. Is it so hard to understand? What kind of link do you want me to give you?? If you have ever used an iPhone like most of us, you would already know that Photo Stream was on by default.

So where do we stop after children? Checking messages for domestic violence, political messages, etc.?
Please discuss your justice warrior opinions with someone else. Let's agree that we totally disagree.
 
I was. Not now. This should teach us about Apple. Anytime you put a serious Democrat accountant in charge of the 2nd-largest company in the world, you are going to have problems.

He's not an accountant; he holds both engineering and MBA degrees.

And he's propelled the company to being one of the most successful in the world, creating products and services that customers (many of them repeat customers) want to buy at premium prices, year after year after year.
 
If Apple expands these capabilities, they get caught. Again, go back and read my post about Apple's Security Research Device Program. The only way they do not get caught is if all these processes happen away from the device, like on a server (where they can run any code they want that's completely invisible to you)... which is EXACTLY what Apple did not do.

I trust this system more because we can know for sure which root hashes are contained in the local database, whether the match calculations are accurate, and whether they apply only to photos being uploaded to iCloud. If Apple implemented this on the server side, we wouldn't know any of this for sure.



Read my response to the other user. If Apple wanted to collude with some government, this is literally the worst way to do it. The best way would be to do everything on the server, without any possible audit.
Seems I missed the iPhone OS open source announcement. Mea culpa. ;-)
I expect them not to sell my data or use it for advertising, but these are the only privacy expectations I (and I bet most people) have for iCloud-stored data.
 
Because if Apple wants to start doing needle-in-a-haystack searches on iCloud, they pay the price of having no probable cause, and that's fair.

Instead, they made a system where every user pays the price of constantly searching themselves and then reporting themselves to the authorities.
God forbid pedophiles are reported to authorities 🙄
 
Seems I missed the iPhone OS open source announcement. Mea culpa. ;-)

It is not open source, but anyone enrolled in Apple's Security Research Device Program can audit all these processes. This is the third time I've said this and somehow you're ignoring it. Apple's documentation on CSAM even states this multiple times:

> The perceptual CSAM hash database is included, in an encrypted form, as part of the signed operating system. It is never downloaded or updated separately over the Internet or through any other mechanism. This claim is subject to code inspection by security researchers like all other iOS device-side security claims.

> That the calculation of the root hash shown to the user in Settings is accurate is subject to code inspection by security researchers like all other iOS device-side security claims.
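
To illustrate what a claim like that lets a researcher check, here is a minimal sketch of recomputing a root hash over the on-device database and comparing it with the value shown in Settings. The entry format, ordering, and use of SHA-256 below are assumptions for illustration only; the quoted documentation does not specify the exact construction.

```python
# Toy audit of a shipped hash database against a published root hash.
# The construction below (SHA-256 over sorted entries) is an assumption.
import hashlib

def root_hash(entries):
    digest = hashlib.sha256()
    for entry in sorted(entries):      # fixed ordering so the root is reproducible
        digest.update(entry)
    return digest.hexdigest()

on_device_db = [b"entry-1", b"entry-2"]      # hypothetical database entries
settings_root = root_hash(on_device_db)      # stand-in for the value shown in Settings
print("database matches published root:", root_hash(on_device_db) == settings_root)
```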
 
Google security researchers would find out in 10 seconds if iOS were scanning private user data beyond CSAM. They often report system flaws to Apple and to the public at the same time, so it's really silly to freak out about Apple doing something behind the scenes without users knowing.
 
The problem is, all a tyrannical government needs to do after this is insert certain hashes as CP and push them out to tech companies to detect... it's not like Apple is going to verify by looking at that filth.
You obviously did not read the material. First, they only take CSAM that has been provided by multiple agencies from different countries, to reduce that kind of attack. Then the hashes on the device and the hashes of the individual images, because of how hashes work, have to be EXACT matches. Not close or similar, but exact. So if a hash was made from a Confederate flag image, and you are wearing a Confederate flag t-shirt in some of your photos, they won't match and you won't get flagged. Then there need to be 30 of those matches before the account is flagged for review. At that point an Apple employee has to actually look at the images; they don't look at any other images in the account, just the flagged ones. So if they are all political images and not CP, nothing gets reported. They covered all of this in their documentation.

I'm not saying there shouldn't be discussion; however, before putting your opinion on the screen, please try to be educated about what you're talking about. If your issue is that you don't trust that they are being honest, at least say that. What you said is not how it works and could not function like that. You don't add value to either side of the argument when you aren't educated.
 