"There is no mechanism to report spam, death threats, hate speech, NCII, or any other kinds of abuse on iMessage.

On iMessage? He confused it with the Messenger garbage can. iMessage doesn't have problems with spam, etc.
 
  • Like
Reactions: centauratlas
Now here are my thoughts on this. Of course child safety is important, but where does this stop? Let's say I am a member of a new political party, special interest, or religious group, and the establishment is interested in all such things even without cause. So I go to a meeting and take a picture of a fellow believer or two. The background of the photo has also captured the images of other believers. So that's the scene. What is to stop this new Apple system (this isn't limited to Apple) from capturing the other faces and asking whether they are on Twitter, Facebook, Apple Photos, etc.? Who are they, and who is in the photos on those third-party services, and so on? Carry this process on and the Apple system, brought in under the banner of a VERY noble cause, has now established relationship maps for tens of millions of people. It learns not only who is in my hypothetical religious group but who is in every gun club in America, in every campaign for or against every political movement. We have nothing to fear if we have nothing to hide, they say, but those people would be the first to call the police if someone stole their curtains - as happened when Edward Snowden lifted the government kilt a little too far.

Child Safety is a noble cause but it could also be a Trojan Horse for the latest iteration of mass surveillance.
 
  • Love
Reactions: Vege-Taco
Apple is planning to implement a back door with on-device scanning; it doesn't matter what it is used for at the moment. It introduces a tremendous weakness that wasn't there before.
 
Precisely this.

Full disclosure: while I have not worked at Facebook, I have worked with Alex Stamos (I am a former IT admin for iSEC Partners/NCC Group, where Alex Stamos was a founding partner). His batting average isn't perfect (whose is?), but it's decidedly better than average in the field of computer security. While we worked together, Facebook was a client of iSEC Partners, and we helped FB implement TLS and performed IPv6 scans, among other things. I departed iSEC Partners/NCC Group at the end of 2011, and Stamos went on to found Artemis Security and later became CSO of Yahoo! He left Yahoo!, if I understand correctly, after determining that there was a severe breach compromising most user accounts; when he wanted to implement a mandatory passphrase change, he received pushback from other executives and the board of directors. Given that CSOs are legally liable for security decisions, he decided that if his hands were tied to the point where he couldn't do his job at such a basic level as mandating passphrase changes on known-compromised accounts, then he didn't want to get paid to be Yahoo!'s fall guy. Seems like a wise move, and one with the users' best interests at heart.

His departure from Facebook circa 2018, I seem to recall, was over not entirely dissimilar grounds, after information regarding the Cambridge Analytica breach came out. It appeared as if he did his best at damage control for a while, even going so far as to suggest that Mark Zuckerberg step down as CEO. After all, being implicated as the leader of an organization facing involvement with war crimes and election manipulation is a bad look, and just as Bill Gates stepping down as CEO of Microsoft while it was losing antitrust lawsuits didn't stop Gates from being a multibillionaire, there is little wrong with letting someone else take over the reins. Zuckerberg didn't acquiesce to such recommendations, and as near as I can discern, Alex Stamos thought it best to jump from a sinking, flaming ship after he realized that no amount of damage control and bailing could save it.

Since then I guess he's been a staff researcher at Stanford? I took some issue with his apparent endorsement of Zoom a year or two ago (which has since settled an $85 million lawsuit over its end-to-end encryption implementation being bollocks, as I could discern back when I looked at their "technical" documents), but as he sheepishly admitted in a recent YouTube live stream with Matthew Green, David Thiel (another former iSEC Partners/NCC Group coworker of mine), et al. over the Apple CSAM scanning decision, paraphrasing: "even some of the recent end-to-end encryption claims of vendors are difficult for experts to discern correctly." That eating of crow also applies to things such as Apple's iCloud not being end-to-end encrypted, and to Apple having repeatedly cooperated with law enforcement, of particular concern in places such as China, where it has handed over user data to governmental agencies (e.g. https://www.cpomagazine.com/data-privacy/icloud-data-turned-over-to-chinese-government-conflicts-with-apples-privacy-first-focus/#:~:text=Apple does not have the,be stored within the country.).

I'm more of a nerdy ops guy who fixates on bit-level optimizations and Kolmogorov complexity reduction in code, and I don't profess to even want to wear the sorts of hats that Alex Stamos has. Nonetheless, while I can state that I did help him unlock accounts back when we worked together, I've never testified in front of Congress as he has, nor would I want to. I think he and I still "fight for the users" more often than not, but we may do so in different ways.

Suffice it to say, Alex Stamos hasn't spoken on Facebook's behalf for a few years now, but even before he worked as CSO at Facebook, he was deeply involved in the field of computer security, and I consider him to be among the more seasoned and ethical practitioners in it. Discounting someone's viewpoint because of one of their past employers is a pretty narrow perspective, and given that I have worked as a janitor and in far humbler positions at some pretty heinous employers in my past, I would sincerely question how anyone's past career history reflects on them as a person, particularly if most of their actions as an employee seem to have been admirable. I know people with a lot of skeletons in their closets, myself included, but Alex Stamos never read that way to me.

The "screeching minority" categorizations of those who have raised concerns about Apple's local CSAM scanning implementation are more than concerning, they are dismissive. I don't think I've read nor heard a single perspective, even from trolls, advocating for child abuse. Meanwhile, hash collisions are a field of study within computer science which are widely known, with even cryptographic hashes such as MD5 and SHA-1 having fallen by the wayside in more recent years due to chosen prefix attacks. Another colleague and computer forensics expert, Cory Altheide, some years ago described to me malware which would drop files which had CSAM hash collisions so as to shift investigative onus, and I found that prospect chilling. The likelihood that similar techniques can and will be used against Apple's implementation, seems nonzero, and from my perhaps paranoid vantage, extremely likely as a means to facilitate unwarranted governmental eavesdropping based over spurious levels of "probable cause". For those who think that no one is that nefarious, perhaps they need some reminders of the threat levels of nation state level adversaries using known malware such as Pegasus sold by Israeli NSO Group.

At the very least, Apple's multimillion-dollar ad campaign "Privacy. That's iPhone." now reads like Orwellian doublespeak with this most recent decision. As another colleague posited: it seems likely that this may have been a move Apple was forced to take lest Tim Cook be faced with even worse choices from governmental pressure for cryptographic backdoors.

Regardless, the canary in the coal mine looks awfully dead to me.
Very interesting. Thanks for taking the time to offer your somewhat inside perspective. Interesting and worrying times ahead.
 
Now here are my thoughts on this. Of course child safety is important, but where does this stop? Let's say I am a member of a new political party, special interest, or religious group, and the establishment is interested in all such things even without cause. So I go to a meeting and take a picture of a fellow believer or two. The background of the photo has also captured the images of other believers. So that's the scene. What is to stop this new Apple system (this isn't limited to Apple) from capturing the other faces and asking whether they are on Twitter, Facebook, Apple Photos, etc.? Who are they, and who is in the photos on those third-party services, and so on? Carry this process on and the Apple system, brought in under the banner of a VERY noble cause, has now established relationship maps for tens of millions of people. It learns not only who is in my hypothetical religious group but who is in every gun club in America, in every campaign for or against every political movement. We have nothing to fear if we have nothing to hide, they say, but those people would be the first to call the police if someone stole their curtains - as happened when Edward Snowden lifted the government kilt a little too far.

Child Safety is a noble cause but it could also be a Trojan Horse for the latest iteration of mass surveillance.
I 100% agree with your concerns and the risk of abuse. However, as hashes are compared against a known database, this kind of processing can't detect original content unless someone puts the exact same photo into the database. The pictures you take in your example will probably never match a hash in that database and will never be reported anywhere.

The risk I see is putting political content in that database (or anything else not related to CSAM) and reporting people of a particular ideology who have a specific picture saved on their phones.

Unless Apple enables E2EE for iCloud data without backdooring it (which is less than likely), you can't know for sure what Apple will do with your data in the future, and we can only rely on trust when they say that they will not look at anything other than CSAM and that they will not concede to abusive demands from governments. I like Apple, but I can't put that kind of trust in a private company without bold moves regarding encryption.
 
What is to stop this new Apple system (this isn't limited to Apple) from capturing the other faces and asking whether they are on Twitter, Facebook, Apple Photos, etc.? Who are they, and who is in the photos on those third-party services, and so on?

iOS and macOS have already been identifying faces in photos for years now...

I feel like there is a huge misunderstanding of the technology Apple says they are using (or that I've misunderstood). When it comes to finding CSAM, your phone needs to find an exact match with an existing photo in a database of known CSAM photos. This is not a feature that returns info on photos about general content (faces, objects,...) for Apple to then evaluate for CSAM. So if you take a photo that has never existed before, it cannot be a CSAM match. Now I suppose there is some argument about who controls the database of CSAM photos...
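To put what I mean in code: the core of it is just a set-membership check on hashes. Here's a rough sketch (illustrative only; the real system uses a perceptual hash and a blinded, on-device matching protocol rather than a plain SHA-256 of the file bytes):

Code:
import hashlib

# Rough sketch of "match against a database of known hashes" -- illustrative
# only; the real system uses perceptual hashes and blinded matching, not a
# plain SHA-256 of the file bytes.

known_bad_hashes = {
    # Hex digests supplied by the database maintainer (hypothetical entries;
    # this one is just sha256(b"test") so the demo below matches something).
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def photo_matches_database(photo_bytes: bytes) -> bool:
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return digest in known_bad_hashes

print(photo_matches_database(b"test"))         # True: its hash is in the toy set
print(photo_matches_database(b"a new photo"))  # False: never-seen content can't match

Which is also why the question at the end matters: a system like this can only ever "find" whatever someone has decided to put in that database.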
 
  • Like
Reactions: fwmireault
I'm always amazed when Facebook and its former staff want to talk about privacy in public. Do they know what reputation they have regarding security and privacy?
I think that is alright in this context. Apple's reputation is careening towards Facebook levels these days. To me Apple is now well below Google, and even with a complete reversal it would take years to rebuild the former trust.
 
This guy already sold his soul to the devil by being in charge (or lack thereof) at FB. Ironic that he is taken seriously on the topic.
 
Facebook caught 4.5 million users posting child abuse images
Surely this is a typo?

Facebook has fewer than 3B users, so that would mean that 1 in 700 people have been caught sharing this material? I assume that this is an undercount for how many people are actually sharing it - it seems safe to assume that Facebook is only catching the dumber half of people, so closer to 1 in 300 people are sharing it?

Either way, this sounds like a vastly bigger problem than I had imagined...

There's 843K registered sex offenders in the US, vs a population of 330M, so that works out to around 1 in 400 people...

Huh. I would have been off by a factor of 100 if I'd been asked to guess how common of an issue this is.
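For anyone who wants to sanity-check the back-of-envelope math above (rough figures, obviously):

Code:
# Quick sanity check of the ratios above, using the rough figures quoted.
fb_flagged, fb_users = 4_500_000, 3_000_000_000    # ~4.5M caught, "fewer than 3B" users
print(round(fb_users / fb_flagged))                # ~667, i.e. roughly 1 in 700

offenders, us_population = 843_000, 330_000_000    # registered offenders vs US population
print(round(us_population / offenders))            # ~391, i.e. roughly 1 in 400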
 
  • Like
Reactions: ersan191
Photos aren't E2EE. I followed the link you posted, and while Photos are encrypted in iCloud, they are not E2E. Apple can decrypt Photos on the server side if they want.

E2E Encrypted stuff with Apple:
  • Apple Card transactions (requires iOS 12.4 or later)
  • Home data
  • Health data (requires iOS 12 or later)
  • iCloud Keychain (includes all of your saved accounts and passwords)
  • Maps Favorites, Collections and search history (requires iOS 13 or later)
  • Memoji (requires iOS 12.1 or later)
  • Payment information
  • QuickType Keyboard learned vocabulary (requires iOS 11 or later)
  • Safari History and iCloud Tabs (requires iOS 13 or later)
  • Screen Time
  • Siri information
  • Wi-Fi passwords
  • W1 and H1 Bluetooth keys (requires iOS 13 or later)
  • iMessages (but with iCloud backups enabled, Apple can restore and access your messages)
My mistake. They are encrypted on the phone, in transit, and on the server, but they are not listed as end-to-end because Apple can retrieve the key for the photos.
 
  • Like
Reactions: fwmireault
I'm always amazed when Facebook and its former staff want to talk about privacy in public. Do they know what reputation they have regarding security and privacy?
If both the wolves (Facebook) and the shepherds (EFF) of privacy say this is ominous, perhaps the flock (users) should be concerned.
 
  • Like
Reactions: maj71303
iOS and macOS have already been identifying faces in photos for years now...

I feel like there is a huge misunderstanding of the technology Apple says they are using (or that I've misunderstood). When it comes to finding CSAM, your phone needs to find an exact match with an existing photo in a database of known CSAM photos. This is not a feature that returns info on photos about general content (faces, objects,...) for Apple to then evaluate for CSAM. So if you take a photo that has never existed before, it cannot be a CSAM match. Now I suppose there is some argument about who controls the database of CSAM photos...
It doesn't require a 100% match. The likelihood percentage is probably closer to a 90% match, and can be tweaked at will by Apple. In addition, once a suspected photo is flagged, a real human, Apple employee, will be looking at your photo(s). Using Google's image matching service as an example, I can only imagine how many false positive flags will be thrown. I'm opposed to giving up privacy in the name of catching a few pervs. Afterall, it's a parent's job to protect their own children, and it's law enforcement's job to catch criminals. Apple is neither.
 
Huh. I would have been off by a factor of 100 if I'd been asked to guess how common of an issue this is.
Alarming numbers. I have a feeling that if Apple is choosing to implement something like this, they must know something about the scale of this problem that we as individual users can't see. Say what you want about slippery slopes and not trusting big Apple, but what if Apple has information showing its technology is contributing to this problem and is genuinely trying to find a way to stop it?
 
My issue is that this sets the stage to go from iOS to macOS.
They will scan your whole Photos library and report back. Then they will need to scan your whole hard drive just to make sure you aren't doing anything wrong. And because they can.

I don't feel the need to have someone else checking over my shoulder, nor do I want a system process running for something as invasive as this wasting computing cycles.

This is why I don't use a Windows machine for my personal life. Microsoft is going over the edge with this stuff. I don't agree with how they are managing machines.
 
It doesn't require a 100% match. The likelihood percentage is probably closer to a 90% match, and can be tweaked at will by Apple. In addition, once a suspected photo is flagged, a real human, Apple employee, will be looking at your photo(s). Using Google's image matching service as an example, I can only imagine how many false positive flags will be thrown. I'm opposed to giving up privacy in the name of catching a few pervs. Afterall, it's a parent's job to protect their own children, and it's law enforcement's job to catch criminals. Apple is neither.
What you've posted is 100% opinion - and that's ok but it's not true. For anyone interested, here's Apple's text on that:

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
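For anyone curious what "threshold secret sharing" refers to, here is a minimal Shamir-style sketch. It is purely illustrative and not Apple's actual protocol, but it shows the property the quote leans on: the protected material only becomes reconstructable once a threshold number of shares (matching vouchers) exist, and below that threshold the shares reveal essentially nothing.

Code:
import random

# Minimal Shamir-style threshold secret sharing sketch (illustrative only,
# not Apple's protocol): the secret is recoverable only once `threshold`
# shares are combined; fewer shares reveal essentially nothing about it.

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def make_shares(secret, threshold, count):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=3, count=5)
print(recover(shares[:3]))               # 123456789 -- any 3 of the 5 shares suffice
print(recover(shares[:2]) == 123456789)  # almost certainly False with only 2 shares

Whether the threshold and the "one in one trillion" figure are set honestly is a separate question that can't be verified from the outside, but the secret-sharing primitive itself is standard cryptography.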
 
What you've posted is 100% opinion - and that's ok but it's not true. For anyone interested, here's Apple's text on that:

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
OK then... it must be true if Apple says so. Remember, only a very, very, very, very small number of users had a butterfly keyboard problem...
 
  • Like
Reactions: PC_tech
"Facebook caught 4.5 million users posting child abuse images. "

What did Facebook do with these users? Ban them? Report them to the authorities? Manual validation?
 
I think that is alright in this context. Apple's reputation is careening towards Facebook levels these days. To me Apple is now well below Google, and even with a complete reversal it would take years to rebuild the former trust.

At least release the source code for these CSAM-scanning programs under the GPL, along with every iterative update, so people can review it and be sure it's only capable of doing what Apple claims.

After all, they're just doing this for child safety, right? Why would they care if somebody copied them?
 