Number 2 is intentional to protect Apple themselves. They basically have plausible deniability when things go south as all they have are hashes. Really smart of Apple.
I'm not sure how they're going to claim plausible deniability. They implemented the system for it to happen. It's sort of like a getaway driver's excuse that (s)he didn't know a bank was going to be robbed. Apple in this case are the ones bringing the "transport".
 
Vote with your wallet and find both happiness and a phone that meets your requirements.
I have. An iPhone is not my primary phone. Don't get me wrong, I don't trust the "other guys" either which is why my data isn't held by anyone but me in my own cloud.
 
Craig does not lie well. Like a deer caught in the headlights. So deplorable.
You didn't see the right interview, did you?
No… they're not. They're going to retain ***ALL*** hashes, as new hashes can be added to the other side at any time and they'll need to be able to compare those. This won't be use-once-and-delete.

And to ask… where did they write that they will not retain hash data?
Read it again

APPLE
Is
Not
Informed
of
any
user
data
unless
the
30
image
hash
threshold
is
met.

The only time user information is sent to Apple is when you are a pedophile.
 
According to Apple the probability of a false positive in 30 matches is 1 in a trillion.
Yes, with macOS I also always notice: Apple's facial recognition is really very good.

The following organizations are pleased…
Федеральная служба безопасности Российской Федерации (Federal Security Service of the Russian Federation)
中華人民共和國國家安全部 / 中华人民共和国国家安全部 (Ministry of State Security of the People's Republic of China)
(…)
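To put the quoted 1-in-a-trillion figure in context: with some per-image false-match probability p, the chance that an account crosses a 30-match threshold by accident is a binomial tail. A minimal sketch, where p and the library size are purely hypothetical (Apple has not published the per-image rate):

```python
import math

def binom_tail(n: int, p: float, t: int) -> float:
    """P(X >= t) for X ~ Binomial(n, p), summed in log space to avoid underflow."""
    total = 0.0
    for k in range(t, min(n, t + 200) + 1):
        # log of C(n, k) via log-gamma, then add the log-probability terms
        log_comb = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        log_term = log_comb + k * math.log(p) + (n - k) * math.log1p(-p)
        total += math.exp(log_term)
    return total

# Hypothetical numbers: a 1-in-a-million per-image false-match rate
# and a library of 100,000 photos.
per_account = binom_tail(100_000, 1e-6, 30)
```

Even with these made-up inputs, the tail probability is astronomically small, which is the point of a threshold: single-image false matches are expected, but 30 independent ones on one account are not.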

 
I'm not sure how they're going to claim plausible deniability. They implemented the system for it to happen. It's sort of like a getaway driver's excuse that (s)he didn't know a bank was going to be robbed. Apple in this case are the ones bringing the "transport".
Because all Apple see are hashes. Apple themselves don't know what's in the database. All they do is match hashes.

The getaway driver in your analogy still physically saw the people inside his/her getaway car. Apple is like a blind and deaf driver who drives a constant route from A to B. Whether the passengers are bank robbers or not, Apple wouldn't know.
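The "all they do is match hashes" step can be sketched as a set-membership check plus a threshold. This is a toy illustration with made-up values: it uses an exact hash (SHA-256) where the real system uses a perceptual NeuralHash plus cryptographic blinding, so treat every name and number here as an assumption.

```python
import hashlib

# Hypothetical stand-in database: in reality Apple receives opaque (blinded)
# hash values from child-safety organizations and never sees the images behind them.
KNOWN_HASHES = {
    hashlib.sha256(img).hexdigest()
    for img in (b"example-1", b"example-2", b"example-3")
}

THRESHOLD = 30  # Apple's stated review threshold

def count_matches(photos):
    """Count photos whose digest collides with the database.
    The matcher learns nothing about photos that don't match."""
    return sum(hashlib.sha256(p).hexdigest() in KNOWN_HASHES for p in photos)

def exceeds_threshold(photos):
    """Only above the threshold would anything be surfaced for review."""
    return count_matches(photos) >= THRESHOLD
```

Note the design trade-off being argued about in this thread: the matcher only ever sees digest collisions, but it also has no way to know what the database hashes actually represent.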
 
APPLE
Is
Not
Informed
of
any
user
data
unless
the
30
image
hash
threshold
is
met.

The only time user information is sent to Apple is when you are a pedophile.
The postal service doesn't inform me of what's in my letter box until I go looking at it. Just because they're not informed of the data and/or data matches doesn't mean it isn't there.

EDIT: And by the way, you didn't answer my question of where it is said/written that the hash data from the user's side is only ever compared once against the database?
 
I did see the article, but many of the replies are pretty naive, as if humans were robots that follow the rules. Snowden covered this stuff multiple times: his talks about the NSA, the CIA's involvement in vaccination programs that were also run through international organizations, the list goes on. Many of those so-called agencies still function under the same governments Snowden talked about, or have no means to defend against high-level cyber interference.
I try to respect what Snowden and others like him say. However, he is a perfect example of how not all people are robots. Companies are businesses, which often get associated with robot mentalities. Not unfair, but also not universally true. Companies are made up of people, which makes a culture that differs from company to company. Apple is not a small company either. To make malicious code you would need to convince all the software developers at Apple to write it. That would already be a pretty hard task.

People can make mistakes in code. I’m certain there are bugs they caught and some they have not. I can bet that if the system is flagging stuff incorrectly they will address them as they come to make it even more accurate.

Also, let's assume the execs at Apple convinced all the devs to write malicious code. Since the system has to be maintained and reviewed indefinitely into the future, it will eventually weigh heavily on the morals of those individuals. On a long enough time frame those employees will either leak, quit and eventually talk, or the system will just get exposed. It's just not really sustainable to maintain a lie like that long term. I recognize that is just my opinion, but Snowden, and other whistleblowers like him, are proof that it's not sustainable.
 
Because all Apple see are hashes. Apple themselves don't know what's in the database. All they do is match hashes.

The getaway driver in your analogy still physically saw the people inside his/her getaway car. Apple is like a blind and deaf driver who drives a constant route from A to B. Whether the passengers are bank robbers or not, Apple wouldn't know.
This to me is no different to Facebook saying "we're not responsible for our users' posts".
 
Number 2 is intentional to protect Apple themselves. They basically have plausible deniability when things go south as all they have are hashes. Really smart of Apple.

I suspect Apple doesn't want a database of child porn jpegs on their premises.
 
You didn't see the right interview, did you?

Read it again

APPLE
Is
Not
Informed
of
any
user
data
unless
the
30
image
hash
threshold
is
met.

The only time user information is sent to Apple is when you are a pedophile.

Would you consent to putting a device in your home that sends info to the government when it detects something illegal?
 
The fact that you stated this is the exact danger. You already formed an opinion and judgement of a person simply based on matching hashes. Do you really want a private system with the capability of making such heavy judgement without due process?
How do you think it works on Flickr, Dropbox, Gmail, Amazon Photos, Google Photos, Yahoo Mail?

Where was this level of "danger" when the scanning was server-side? The on-device scan is the same as the iCloud one.
 
Would you consent to putting a device in your home that sends info to the government when it detects something illegal?
Yes. It's why I still use Gmail. Google scans all private emails and reports illegal activity to the authorities, and that's not just at home but on every device I use: Windows, Mac, iPhone, Android.

Every other service or device you use reports illegal activity to the government. It's already happening, so why are you speaking in hypotheticals?
 
No big deal; I'm going to move away from iCloud and save myself the $9.99 a month. There's nothing even remotely bad on my phone, but I don't care enough to read their explanation of how it works and why, blah blah blah. It's their choice to do whatever they want, and if they want to scan photos before they're uploaded to the cloud then fine. I just need an excuse to rid myself of one more pointless subscription anyway.
But it will not remove the software capability from the OS you will be using, which is why I now ask myself whether the intent was to extend surveillance. If not, Apple could easily have emulated others by running hash checks on their servers. Instead they CHOSE a method that puts it into operating systems affecting billions. That makes no sense, as it's far less efficient than doing it in one place, doing one thing, and reassuring customers that Apple has no intent to extend it, which is no longer the case if they intend putting it into the various OSs.

Putting the code in bypasses Apple's System Integrity Protection, as Apple have unfettered access allowing them to update or modify even system files. So whilst we might be told it was only designed to do one thing, that's not to say it cannot be modified by Apple whenever, or if, they chose to. This is why it's so dangerous, and why it's much better for Apple to step back and announce they are initiating hash checks on their servers, not within operating systems.

I've asked them to consider the option of removing it from the OS and, if they are really intent, putting the interrogation of hashes on iCloud, leaving customers' machines free of coding that would then be unnecessary.

For me even that will not achieve what they suggest is their aim, which is why I no longer believe their reasons. If it were just that, they would emulate others who use their own servers, not customers' machines, customers' electricity, and customers' processing power on computers that may need all of that, rather than Apple using its own resources.

For me it is crucial that Apple do not place coding on the operating systems, as I do not believe they have considered all the consequences.

Can you imagine social services or their equivalent in the UK or US, many of whom use iPads, where sadly they may come across child abuse and the pictures that go with it? Hashes would then be flagged up but apparently perused by Apple, who do not have the clearance, which could prejudice any court case or national security, as this material and even the hashes should be absolutely confidential and quite possibly sub judice.

Likewise many government departments, and even emergency services use iPads and iPhones and some procurement requirements could mean they could no longer buy Apple products, even if they didn't use iCloud, because the software may then be outside the procurement requirements for any sensitive work.

I bet paedophiles celebrated when Apple announced their intentions, as it was like they were receiving information on how to escape scrutiny, leaving innocent Apple users stuck with coding on their hardware.

Apple were recently fined $113,000,000 for slowing down older iPhones, so why put themselves in the firing line for another fine? Theoretically such cross-checking prior to iCloud upload will still take processing power and do something similar, though whether it's noticeable is debatable and I wouldn't wish to predict that.

At the present time, with the EU and countries assessing Apple and its modus operandi, with Apple's public stance on privacy now questionable, and with its spats with Facebook, Elon Musk and Epic, it probably could not have come at a worse time for Apple.

You should always leave an escape route, and Apple has one: agreeing not to incorporate the coding into the operating systems, but instead running it via its own servers, like others do. That might not be acceptable to everyone, but it would provide a way of defusing the very serious red line crossed by having customers download software that puts the onus on the customer's hardware. Whether someone is on iCloud or not, the mere fact that the coding exists within their system will rightly cause justifiable concern.
 
This to me is no different to Facebook saying "we're not responsible for our users' posts".
Unfortunately, that's what Facebook keeps saying while hiding behind Section 230. Meanwhile they're editorializing what you can see.

Anyway, that's a different rant. Apple's case is different, which is why it's so frightening. Apple has basically set up a mass scanning system worldwide in one shot that also frees them of accountability. It's smart and sickening at the same time. Now whenever I see Apple talking about human rights, it feels like a very sick joke.
 
You are just impossible!!!!! You have some serious tendency to support Apple and to easily put the blame on users, don't you?

Yes, you could disable it after someone had informed you about it! Many users didn't even know because they were NOT using iCloud Photos. Is it so hard to understand? What kind of link do you want me to give you? If you have ever used an iPhone like most of us, you would already know that the iCloud photo stream was on by default.

So where do we stop after children? Checking messages for domestic violence, political messages etc?
Please discuss your justice warrior opinions with someone else. Let's agree that we totally disagree.

No, I just work rationally with what you're dishing out. I can't help it if you use words incorrectly. There's a difference between "forced" and "on by default." Sounds like you were using the word "forced" to over-dramatize it and make it sound worse than what it was. You made it sound like some sort of scandal, which is why I was asking for a link to a news story about it or something. I guess it wasn't.

It's not often that I set up a new iPhone, so I honestly don't remember what's on/off by default in iCloud, but I always go through the different iCloud-compatible apps (Settings > Apple ID > iCloud) and set everything how I want it when I DO set up a new iPhone. I noticed Apple's instructions to set up and use iCloud Photos instruct you to go in and turn it on, which seems like an odd thing to say if they're already on by default.

And based on your last paragraph, it still appears you don't understand that the new parental controls for Messages are not the same thing as on-device CSAM detection for iCloud photos. Two different topics. Nothing is being reported to Apple from Messages. It's only between the parent(s) and the child. And as for "what's next" beyond CSAM detection, you're committing the slippery slope fallacy, as are so many on this forum. Just because something theoretically could be abused, doesn't mean it's a bad thing or shouldn't be allowed. This applies to MANY things in life, not just technology.

You're already trusting Apple isn't doing anything nefarious with software functionality already on your phone (e.g. facial recognition), so why all of a sudden the distrust? If you're that paranoid, then why continue to use iPhones?
 
I try to respect what Snowden and others like him say. However, he is a perfect example of how not all people are robots. Companies are businesses, which often get associated with robot mentalities. Not unfair, but also not universally true. Companies are made up of people, which makes a culture that differs from company to company. Apple is not a small company either. To make malicious code you would need to convince all the software developers at Apple to write it. That would already be a pretty hard task.

People can make mistakes in code. I’m certain there are bugs they caught and some they have not. I can bet that if the system is flagging stuff incorrectly they will address them as they come to make it even more accurate.

Also, let's assume the execs at Apple convinced all the devs to write malicious code. Since the system has to be maintained and reviewed indefinitely into the future, it will eventually weigh heavily on the morals of those individuals. On a long enough time frame those employees will either leak, quit and eventually talk, or the system will just get exposed. It's just not really sustainable to maintain a lie like that long term. I recognize that is just my opinion, but Snowden, and other whistleblowers like him, are proof that it's not sustainable.
It took a very long time for one of the thousands to stand up. Very long for a democratic free nation. That is my concern, and that makes others so calm.
 
I have no idea where you're going regarding his vision and that he wears glasses. Smells like obfuscation to me.

With respect to Dr. Fauci, he's pretty much an open book. Start with wiki if you have to. Then go deeper. Much deeper. He has a 40 year track record of dealing with various infectious diseases and pandemics. Same with Larry Brilliant.

What are your qualifications to judge Fauci's, or Brilliant's, background?

Are you an epidemiologist or virologist who has engaged in similar research?

As I wrote before, it's not about Dr. Fauci or Brilliant (they are your tangent*), it's about the actual capabilities of microbiology research and observation.

It’s mostly about the hardware. “How good are the scopes?”

Just like Dr. Fauci's corrective glasses, the specs of the specialized hardware needed to see at the microbiotic scale in question are the crux of all viral research and discussions.

If the methods used are reliable with a high degree of accuracy, then I’m sure Dr. Fauci deserves his reputation.

But if the methods are not precise, and there are no other methods to verify, then that reputation should be tempered.

*Before cheerleading scientists by reputation alone (which is unscientific), you might consider establishing a foundation: is Dr. Fauci competent at TEM or another direct observational research method at the viral scale?

Was that skill even in his wheelhouse, or are you making an incorrect assumption(s) about epidemiologists/virologists and their field? Are there different types of epidemiologists?

You might also want to check out Dr. D.A. Henderson for setting your virology expert benchmark, the man credited with leading the efforts to eradicate smallpox (Brilliant was just one of many who worked under him). When Brilliant largely left the field to chase Silicon Valley VC money and then start working for Google on investment grants, Henderson continued to do research and publish on viruses and outbreaks (omitted from mention on his Wikipedia, go figure?).
 
That can be done by simply deploying e2e encryption on iCloud yesterday. Done. Apple wouldn't know what's on their server.

That doesn't address what I was responding to:

"2. Apple do not have access to the original database only the hashes. If the hash for a photo of BLM was added then Apple wouldn’t have a clue and would just push it to users."
 