Apple just created the greatest spy network since the NSA infiltrated telecom in the '60s. The Bluetooth mesh network can't be turned off, and now they're working on ultra-wideband as a new spy net with better bandwidth.

Yeah it’s awesome and brilliant. And can’t be used to “spy” on you any more than cell phones currently can.
 
Welp, I guess there goes the era of not verifying emergency requests. Hopefully they can thread the needle and still respond expeditiously to immediate emergencies.
I would think they would just have designated contacts at every law enforcement agency who are the only people they would deal with.
 
So I've never read all the Terms and Conditions etc., but could somebody please set me straight. I thought literally E-V-E-R-Y-T-H-I-N-G I stored in Apple's world was protected from being handed over to authorities. I always cite that example of Apple not handing over the keys to some suspected terrorist's iPhone in CA all those years ago.
Apple just claims they protect your privacy, they don’t guarantee it. It isn’t like they are bonded.
 
They can be right about some things and wrong about others. Even major mistakes. Last time I checked, Dan Rather never apologized for the forged National Guard documents in a 2004 story, but people still watch CBS News, and I imagine they are right about a good deal more than they are wrong about.

Also--where was Apple's lawsuit against Bloomberg? You'd have to think Apple was boiling mad about a story that literally defamed them and they did.....nothing. This is a company that zealously protects itself.

You sure there was nothing to that story? Hmmmmm....
Dan Rather might not have apologized, but he was removed from the anchor desk and eventually fired by CBS. CBS retracted the story.

 
Humans are the weak link in security, it's unfortunate it happened. I'm sure their in-house training is going to be pure hell for a while.

This has zero to do with encryption, people bringing that and the cloud into this topic are ignorantly reaching.
 
This has zero to do with encryption, people bringing that and the cloud into this topic are ignorantly reaching.

Please read fully before calling other respondents ignorant. It will keep your comprehension skills from looking lacklustre.
 
Fanboy brigade out in full force defending Apple.
I'm not defending anyone. **** happens. Sounds like, the way the rules work now, they were presented with well-forged documents and did what they were supposed to do. The fact that this had absolutely squat to do with encryption or iCloud security or pretty much anything else being discussed in this thread is what pisses me off.
 
This is true, but Apple talks a lot of sh** about privacy and boy does it look bad when they royally screw up a privacy issue. Something about Icarus, the sun, flying too high....
What "privacy issue" did they screw up? They thought they were responding to valid law enforcement requests.
 
There is no excuse. We don't pay them to be fallible. They should have had a better system in place to mitigate their excessive fallibility.
Hmmm... I don't know. This response makes me go 'shrug'.

The grand mistake was that Apple employees were phished. That's just a human failing, and we're all fallible. You could say the technology mistake was that Apple had the key to decrypt, but that's a slightly different issue, no?
 
Hmmm... I don't know. This response makes me go 'shrug'.

The grand mistake was that Apple employees were phished. That's just a human failing, and we're all fallible. You could say the technology mistake was that Apple had the key to decrypt, but that's a slightly different issue, no?
Decrypt what? They didn't "decrypt" anything per the story. It isn't mentioned at all. They gave out basic customer data they'd have on file anyway, nothing to do with iCloud, etc.
 
What "privacy issue" did they screw up? They thought they were responding to valid law enforcement requests.
Lol. They provided a third party hacker with user data. You don't think that's a privacy issue? You don't think the most valuable TECH company in the world should be better at sniffing out fraud? I love these people apologizing for Apple. Take the L---this is a big screw up on their part, and I promise you jobs were lost at Apple over this. Don't soft pedal it.
 
Well said and spot on.
"Other people fell for it too!" is not a good excuse when you have the resources Apple does. This should never happen. I can think of a simple way to fix this---how about you CALL LAW ENFORCEMENT, who you have a relationship with, on every request just to confirm it's legitimate? Do they have enough money to do that? Are their users worth it?

This is an unacceptable mistake. We don't yet know the extent of the damage, and we likely never will, but jobs will be lost (at Apple), and users will be embarrassed, compromised, etc.

I can't believe people are downplaying this---all from the company with ads on the sides of buildings at CES with a padlock on them to symbolize how Apple respects and secures a user's privacy.

Lol. Amazing.
 
The hackers masqueraded as law enforcement officials and were able to convince Apple's staff to provide them with data that included customer addresses, phone numbers, and IP addresses after sending forged "emergency data requests."

Typically, Apple provides this information with a search warrant or subpoena from a judge, but that does not apply with emergency requests because they are used in cases of imminent danger. Apple did not confirm that data had been shared, and directed Bloomberg to its law enforcement guidelines when asked for comment.

Facebook parent company Meta also provided data to the same hacker group, and in a statement, Meta said that it is working with law enforcement on the suspected fraudulent requests. Information obtained from Apple, Facebook, and others has been used in harassment campaigns and could be used in financial fraud schemes.
I don't see what people are getting all worked up about. Apple followed standard procedures that pretty much any company would. They obeyed the law. They got tricked by an elaborate scheme.
The requests were sent from hacked email domains belonging to law enforcement officials from multiple countries, and were crafted to look legitimate with forged signatures of real or fictional law enforcement officers.
Very elaborate... Email from official law enforcement domains. If you get an email from an official domain and the header shows it's legit Gub'ment, you can't prove it's fake. Just like how if you get a letter on legit government letterhead, you'll be hard pressed to call it a fake.
 
And they want people to trust them with things like CSAM… lol

Hopefully the final nail in the coffin for Apple thinking anyone will trust their competence and execution for the proposed CSAM child pornography reporting tool.
Precisely.

...

And to be clear, there is no such thing as "forged hashes". Either the hash value is in the database or it isn't. No one but NCMEC (the National Center for Missing & Exploited Children) can add, remove, or modify values. Once something gets reported, it's easily confirmed to be in the database or not, in like 2 microseconds.

False positives, now that is the real thing experts are worried about: an image can be similar enough to be linked to a hash in the database and still not be CSAM (Child Sexual Abuse Material). Usually they're generated intentionally to trigger these systems. But a human is supposed to check whether the image is in fact CSAM or not, so false positives presumably can be checked, found not to be CSAM, and no further action taken. It'd be a bigger issue for someone with access to your iCloud to upload actual CSAM and then trigger a law enforcement action on you, but that is a possibility today as it is, and is a possibility with most cloud services as well, since they're all scanning your images for CSAM.
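To put it in rough code terms (a minimal sketch with made-up hash values, not Apple's actual NeuralHash pipeline or its threshold/secret-sharing machinery), the database side really is just a membership check; the false-positive risk lives in the perceptual-matching step before it:

```swift
// Hypothetical sketch only: a real system uses a perceptual hash plus
// threshold matching, not a plain string lookup. Hash values are made up.
let knownCSAMHashes: Set<String> = ["hashA", "hashB"]   // maintained by NCMEC, not by Apple or users

func isKnownHash(_ imageHash: String) -> Bool {
    // Membership check: either the hash is in the database or it isn't.
    // No one outside NCMEC can "forge" an entry into this set.
    return knownCSAMHashes.contains(imageHash)
}

// The false-positive risk lives one step earlier: a perceptual hash maps
// similar-looking images to the same value, so an innocent image that
// happens to (or is crafted to) collide would pass this check and then
// depend on human review to be thrown out.
print(isKnownHash("hashA"))   // true  — flagged, goes to human review
print(isKnownHash("hashZ"))   // false — never surfaced
```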
I believe you are missing the point. Apple shouldn't be actively scanning anybody's mobile device - their private property - without their consent. Let Apple wait for a proper subpoena if there is evidence of criminal activity and then scan their own property - their servers. I have no issue with that. And the issue is more complex than you make it out to be. Algorithms for modifying images to avoid CSAM detection have already been made public and the only way to catch these altered images is to lower the threshold for detection, which will result in increased false positives. And the fact that some human being would be reviewing my family's private pictures in the event of false positives does not fill me with confidence. In fact, quite the opposite. Apple's CSAM scanning idea was based on laudable motivations, but only engineers myopically focused on technology rather than the real-world outcomes of their work could think so superficially about the potential issues with what they proposed.

Systems for verification should be in place, as they are for virtually every other class of information exchange that should be secure. However, this is a legislative issue as much as it is Apple's internal procedural issue. There should be a regulatory system that ensures only legitimate law enforcement requests are successful. Of course this won't happen unless voters pressure their legislators.
 
Now we need 2 factor for law enforcement. Someone at the station gets an iPhone (or could be Android), and for emergency requests they generate a code and include the code with the request. Bam, fixed the problem.
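For what it's worth, that could just be a bog-standard time-based one-time code. A rough sketch (RFC 6238-style TOTP; the shared secret and the provisioning step are assumptions, and provisioning is exactly the catch raised below):

```swift
import Foundation
import CryptoKit

// Rough sketch of an RFC 6238-style time-based one-time code. The shared
// secret is hypothetical and would have to be provisioned to the agency
// when it registers with Apple — that's the pre-registration catch.
func totpCode(secret: SymmetricKey, at date: Date = Date(),
              period: TimeInterval = 30, digits: Int = 6) -> String {
    // Counter = number of 30-second steps since the Unix epoch, big-endian.
    var counter = UInt64(date.timeIntervalSince1970 / period).bigEndian
    let counterData = Data(bytes: &counter, count: MemoryLayout<UInt64>.size)

    // HMAC-SHA1 of the counter, then dynamic truncation to a short code.
    let mac = Array(HMAC<Insecure.SHA1>.authenticationCode(for: counterData, using: secret))
    let offset = Int(mac[mac.count - 1] & 0x0f)
    let truncated = (UInt32(mac[offset] & 0x7f) << 24)
        | (UInt32(mac[offset + 1]) << 16)
        | (UInt32(mac[offset + 2]) << 8)
        | UInt32(mac[offset + 3])
    let code = truncated % UInt32(pow(10.0, Double(digits)))
    return String(format: "%0\(digits)d", Int(code))
}

// Apple's side would recompute the code from the same registered secret and
// compare it against the one included in the emergency request.
let sharedSecret = SymmetricKey(size: .bits256)   // provisioned at registration
print(totpCode(secret: sharedSecret))             // e.g. "492039"
```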

Two factor requires pre-registration. So what are the logistics behind creating, validating, and maintaining this massive database of pre-registered agents who are authorized to make emergency requests?
 
I believe you are missing the point. Apple shouldn't be actively scanning anybody's mobile device - their private property - without their consent.

You technically give your consent when you buy or use Apple devices. You agree to it when you agree to the terms. And I'm sure Apple argues there's lots of scanning, scanning to build indexes for instance. Are they supposed to shut down Spotlight for everyone because you don't like scanning?

I'm not saying it's right though. But there's plenty of legalese behind what Apple does, that's for sure.

Let Apple wait for a proper subpoena if there is evidence of criminal activity

I think you misunderstand: you don't technically wait for a subpoena. According to the law, if you're aware of CSAM content you have to make it available to investigators and section it off from other data to protect it while the investigation happens. It doesn't take a subpoena for CSAM content. This is how the law works.

On the other side of it, though, Apple technically doesn't need to scan anything, the law doesn't require anyone to undertake their own investigations. What it does require is if you're aware of such content to report it.

And the issue is more complex than you make it out to be.

I wasn't saying this is the full extent of the issue. I took issue with someone claiming that forged hashes are a thing. They are not. No one but NCMEC can add, remove, or change hashes in the database. They can literally confirm in microseconds whether something is in the database or not. CSAM of course has more to it than that, but I took issue with a person spreading actual misinformation and corrected them.

Algorithms for modifying images to avoid CSAM detection have already been made public and the only way to catch these altered images is to lower the threshold for detection, which will result in increased false positives. And the fact that some human being would be reviewing my family's private pictures in the event of false positives does not fill me with confidence.

And hence why it's being evaluated again. However, someone right now can review your private pictures too.

In fact, quite the opposite. Apple's CSAM scanning idea was based on laudable motivations, but only engineers myopically focused on technology rather than the real-world outcomes of their work could think so superficially about the potential issues with what they proposed.
And that's why it's good for Apple to take a second look. But I am so sick of hearing nonsense spewed about the issue, like the person I replied to did. I've read time and time again that people will upload their family photos which might have their nude kids in it and get arrested. Those people misunderstand that it's a database of known CSAM images, not something newly created.
 
Those of you defending this sharing of customer data by simply characterizing it as human error don't get it. If a proper process is implemented with checks and balances, then you can prevent human error. What you can't prevent is malicious behavior, but even that can be mitigated. There is no good excuse, period, for Apple to allow this to happen.
 
Two factor requires pre-registration. So what are the logistics behind creating, validating, and maintaining this massive database of pre-registered agents who are authorized to make emergency requests?
Police departments call Apple and say "hey, this is us, our email address is xxx", Apple builds an app with a registration system, the police are given an access code via email, and the app then registers them for two-factor. Someone at Apple tries to confirm the details of the police department, i.e. did they call from a VoIP number or a traditional phone line, does the email make sense, etc. If they can't confirm it, they can send people over and talk in person.
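Just to make that flow concrete, here's a hypothetical sketch of the registration record and the gate on emergency requests (all field names and checks invented for illustration, not anything Apple actually publishes):

```swift
// Hypothetical registration record for an agency contact; field names are
// invented for illustration, not from any real Apple system.
struct AgencyRegistration {
    let agencyName: String
    let contactEmail: String
    let callbackNumber: String
    var verifiedByCallback: Bool = false   // Apple called the published station number back
    var verifiedInPerson: Bool = false     // fallback if the phone/email checks don't add up
}

// An emergency request would only be processed for a contact that cleared
// at least one out-of-band verification step.
func canAcceptEmergencyRequest(from registration: AgencyRegistration) -> Bool {
    return registration.verifiedByCallback || registration.verifiedInPerson
}

var precinct = AgencyRegistration(agencyName: "Example PD",
                                  contactEmail: "records@examplepd.gov",
                                  callbackNumber: "+1-555-0100")
precinct.verifiedByCallback = true
print(canAcceptEmergencyRequest(from: precinct))   // true
```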
 