Well, Apple incorrectly thought that people would rather Apple NOT go through ALL of their photos and I think it’s an easy mistake to make.

Apple: Hey, we need to do this thing and we can either
Pore over every single photo you’ve ever taken, which means we have to store your photos in a way that we can decrypt them… of COURSE meaning that if the government ever asked us to decrypt them, we can’t say we don’t have the key, so that’s a thing.
OR
We can have your device flag any potential CP (like if you have a good number of matches) ON YOUR DEVICE. That way, if you’re using iCloud, we don’t look at ANYTHING of yours unless you have CP AND we can tell anyone asking us to provide access to your photos that they’ll have to ask for your password because we don’t have any way of accessing any non-flagged images.

Which would you prefer?

Public: ENSURE THAT YOU CAN PROVIDE TO THE GOVERNMENT ANY IMAGES THAT THE GOVERNMENT REQUESTS OF YOU.
Apple: Huh… wouldn’t have figured that but… well, back to the drawing board.
The idea was that the hash matching is done on your phone, with nothing communicated to the server until some threshold of certainty is crossed. It’s the same logic behind so much of Apple’s AI approach (do it on the device to the extent possible instead of on remote servers, in part for privacy reasons*, in part to exploit Apple’s expertise in silicon versus its relative weakness in cloud hardware).

* An example of the privacy implications of cloud-based CSAM scanning vs on-device: an image isn’t CSAM but may be sensitive; it gets uploaded to the cloud, where who knows what happens to it in addition to being hashed and compared to the CSAM database. In the Apple approach, the image doesn’t get examined by server-side CSAM detection until it matches the hashes past a certain threshold, so the non-CSAM but sensitive content isn’t exposed to that server-side process unless it crosses that threshold. There are some other theoretical aspects of on-device vs cloud-based analysis. Theoretically, on-device analysis is more transparent than cloud-based analysis, since researchers can analyze the device doing the analysis far more easily than a cloud-based server. Also theoretically, it could be harder to change the hash list to match non-CSAM sensitive content unnoticed. No CSAM matching at all is of course more protective of privacy than any CSAM matching, but theoretically on-device CSAM matching could be more protective of privacy than cloud-based CSAM matching.
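To make that threshold idea concrete, here’s a minimal Swift sketch of the flow described above. It’s deliberately simplified and is not Apple’s actual NeuralHash / safety-voucher protocol; names like OnDeviceMatcher, knownHashes, matchThreshold and flagForServerReview are made up for illustration.

```swift
// Simplified sketch of the on-device threshold idea described above.
// NOT Apple's actual design; all names here are hypothetical.
struct OnDeviceMatcher {
    let knownHashes: Set<String>   // hashes of known material, shipped to the device
    let matchThreshold: Int        // "a good number of matches" before anything is flagged
    var matchCount = 0

    mutating func process(imageHash: String, flagForServerReview: () -> Void) {
        // The comparison happens locally; a single match reveals nothing off-device.
        guard knownHashes.contains(imageHash) else { return }
        matchCount += 1
        // Only once the threshold is crossed does anything get surfaced for review.
        if matchCount >= matchThreshold {
            flagForServerReview()
        }
    }
}
```

The point of the design is that nothing below the threshold ever has to leave the device in a readable form, which is what would let Apple say it has no way of handing over non-flagged images.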
I'd definitely not prefer this to be done on my device. If you upload sensitive information to a server outside your control, it's understood that you no longer have full control over that information and are therefore taking risks with it. That risk unfortunately does not go away whether the surveillance scanning is done on the device or on the server. And in actuality, when you do this on-device, you are unnecessarily opening up additional attack vectors.

It seems the biggest culprit in all this is Lindsey Graham, who tried to introduce and pass the EARN IT Act, trying to make providers 'earn' Section 230 immunity and to affect CALEA without having to amend it (by trying to remove E2E encryption). The winds of this most likely forced tech companies to try to pivot, causing Apple to make an unfortunate and unnecessary misstep.
 
as the Pixel 6 Pro? Google disagrees with you as they’re saying that there are feature differences between the 6 and 6 Pro. But, ok!
Yes, the plain Pixel 6 has the same software features and capabilities as the 6 Pro; at this point this is fairly well documented, so it's a plain fact.
Google doesn't disagree with me at all: the 6 Pro is more expensive, "the best Pixel you can get", so it has more RAM, a higher-megapixel front-facing camera, a higher storage option, a 4x optical zoom camera, a bigger battery, a better 1440p screen, etc. So all the differences are hardware, and intentional.

I know I'm wasting my time, but it's becoming clear you don't remember what you said, things like: "the OS and hardware of the iPhone is designed for efficient and fast garbage collection."

This is false: iOS doesn't use garbage collection, it uses ARC (Automatic Reference Counting).
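For anyone unfamiliar with the distinction, here's a tiny Swift illustration of the ARC point (just a sketch of the deterministic-release behavior, not a claim about how much RAM either platform needs):

```swift
// ARC frees an object the moment its last strong reference goes away,
// so there's no separate collector pass to wait for.
class ImageBuffer {
    let bytes: [UInt8]
    init(size: Int) { bytes = [UInt8](repeating: 0, count: size) }
    deinit { print("buffer freed") }
}

var buffer: ImageBuffer? = ImageBuffer(size: 1_000_000)
buffer = nil   // reference count hits zero; "buffer freed" prints immediately
```

A tracing garbage collector, by contrast, reclaims that memory whenever it next runs a collection, which is where the argument about extra headroom comes from.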

Understanding how Android’s garbage collection works, it makes sense that it would need 12GB as compared to an iPhone’s 6.

No, it doesn't make any sense at all. If you really understood how garbage collection works, you wouldn't have written such a thing. I mean, at no point have you provided any logical or technical explanation as to why Android would need 12GB compared to an iPhone's 6GB, which is not surprising to me, and I'm not expecting that to change.
 
I'd definitely not prefer this to be done on my device. If you upload sensitive information to a server outside your control, it's understood that you no longer have full control over that information and are therefore taking risks with it. That risk unfortunately does not go away whether the surveillance scanning is done on the device or on the server. And in actuality, when you do this on-device, you are unnecessarily opening up additional attack vectors.

It seems the biggest culprit in all this is Lindsey Graham, who tried to introduce and pass the EARN IT Act, trying to make providers 'earn' Section 230 immunity and to affect CALEA without having to amend it (by trying to remove E2E encryption). The winds of this most likely forced tech companies to try to pivot, causing Apple to make an unfortunate and unnecessary misstep.
Incidentally, the CSAM “scan” isn’t really a “scan”; there’s no AI processing going on. There’s a list of hashes of known CSAM material, and an image with a strong hash similarity is very likely to be a copy of that same image. It doesn’t really do anything about new, unknown CSAM material, for instance, because it’s a pretty “dumb” process (instead of being AI dependent). It would be incredibly difficult (though perhaps not impossible) to extend it to political uses. Political signs at a rally would actually be very difficult to detect using this technique, assuming that people have unique signs and slogans instead of using the same sign printed off the internet. Anti-government memes would be easier to detect, yes, assuming most users of the meme share the same version of it or that there aren’t “legitimate” memes sharing a meme template with the anti-government ones.

It may even be less capable than that, depending on how the hashes are formed. Are they merely the cryptographic hash of a binary file of an image? Or are they calculated from the visual content of the image? The former would be very easy to defeat by adding garbage data and metadata to pad the file, and would be useless at detecting images of political speech (unless everyone shares the exact same file). The latter means that creating new hashes is more time intensive and that it would be harder to separate “legitimate” content from political content the more similarities they share (kinda like a chameleon). You’d have to be doing some AI processing to address the chameleon exploit (where illegal speech masquerades as legal speech). And it would incidentally be far easier to add those AI processing steps surreptitiously server-side than client-side, considering that security researchers can time, debug, and otherwise analyze on-device functions far more easily than a remote server.
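To make the file-hash vs visual-hash distinction concrete, here’s a toy Swift sketch. It’s purely illustrative: SHA-256 stands in for the “hash of the binary file” case and a classic average hash stands in for the perceptual case; neither is NeuralHash or any real CSAM hashing scheme, and fileHash / averageHash are made-up names.

```swift
import CryptoKit
import Foundation

// 1) Cryptographic hash of the raw file bytes: padding the file with even one
//    extra byte of garbage produces a completely different digest, which is why
//    this style would be trivial to defeat.
func fileHash(_ data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// 2) A perceptual-style "average hash" computed from already-decoded, downscaled
//    grayscale pixels (assume an 8x8 thumbnail): file padding doesn't touch it,
//    and small visual edits only flip a few bits.
func averageHash(grayPixels: [UInt8]) -> UInt64 {
    let avg = grayPixels.reduce(0) { $0 + Int($1) } / max(grayPixels.count, 1)
    var bits: UInt64 = 0
    for (i, p) in grayPixels.prefix(64).enumerated() where Int(p) > avg {
        bits |= UInt64(1) << UInt64(i)
    }
    return bits
}
```

Padding the file changes every bit of the SHA-256 digest but leaves the average hash alone, while visually similar images produce nearby average hashes; that gap is exactly where the chameleon problem lives.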

On-device is probably less easy to abuse for political purposes than on-server. It’s okay, though probably not objectively justifiable, to claim that on-device analysis crosses a line that devices shouldn’t cross, but it’s demonstrably wrong to claim that on-device is somehow more capable of violating privacy than on-server. I’m not persuaded that on-device analysis (and I’m not sure how much analysis of the content is actually going on) adds more potential to be exploited (by malicious third parties), any more than any automatic AI scanning process does. As for the potential for exploitation by governments or Apple, I think the increased ability for researchers to scrutinize on-device analysis provides a level of oversight that we don’t get with an on-server approach, which helps protect users. Is doing nothing less intrusive than on-device scanning? Sure, but on-device is probably far less intrusive than on-server scanning. I think people really conflated the iMessage nudity blocking feature on children’s accounts with the CSAM steps, and a lot of the arguments on MacRumors against them seem to be heavily based on emotional appeals rather than an actual understanding of the technical aspects.
 
Incidentally, the CSAM “scan” isn’t really a “scan”; there’s no AI processing going on. There’s a list of hashes of known CSAM material, and an image with a strong hash similarity is very likely to be a copy of that same image. It doesn’t really do anything about new, unknown CSAM material, for instance, because it’s a pretty “dumb” process (instead of being AI dependent). It would be incredibly difficult (though perhaps not impossible) to extend it to political uses. Political signs at a rally would actually be very difficult to detect using this technique, assuming that people have unique signs and slogans instead of using the same sign printed off the internet. Anti-government memes would be easier to detect, yes, assuming most users of the meme share the same version of it or that there aren’t “legitimate” memes sharing a meme template with the anti-government ones.

It may even be less capable than that, depending on how the hashes are formed. Are they merely the cryptographic hash of a binary file of an image? Or are they calculated from the visual content of the image? The former would be very easy to defeat by adding garbage data and metadata to pad the file, and would be useless at detecting images of political speech (unless everyone shares the exact same file). The latter means that creating new hashes is more time intensive and that it would be harder to separate “legitimate” content from political content the more similarities they share (kinda like a chameleon). You’d have to be doing some AI processing to address the chameleon exploit (where illegal speech masquerades as legal speech). And it would incidentally be far easier to add those AI processing steps surreptitiously server-side than client-side, considering that security researchers can time, debug, and otherwise analyze on-device functions far more easily than a remote server.

On-device is probably less easy to abuse for political purposes than on-server. It’s okay, though probably not objectively justifiable, to claim that on-device analysis crosses a line that devices shouldn’t cross, but it’s demonstrably wrong to claim that on-device is somehow more capable of violating privacy than on-server. I’m not persuaded that on-device analysis (and I’m not sure how much analysis of the content is actually going on) adds more potential to be exploited (by malicious third parties), any more than any automatic AI scanning process does. As for the potential for exploitation by governments or Apple, I think the increased ability for researchers to scrutinize on-device analysis provides a level of oversight that we don’t get with an on-server approach, which helps protect users. Is doing nothing less intrusive than on-device scanning? Sure, but on-device is probably far less intrusive than on-server scanning. I think people really conflated the iMessage nudity blocking feature on children’s accounts with the CSAM steps, and a lot of the arguments on MacRumors against them seem to be heavily based on emotional appeals rather than an actual understanding of the technical aspects.
I think we inadvertently derailed the topic of this post, but I had to respond to this last comment. We can discuss more elsewhere if needed.

To put it simply, my point was not about extending the scanning to political uses. It is a matter of ownership of the device. The political (and financial) aspect is probably the main reason why Apple made this unfortunate and unnecessary misstep.

Apple was proposing to use consumers’ personal property for Apple’s own benefit, not the consumer’s.

Would you be okay with freely housing TSA agents at home? They will only check your bags at home if you choose to fly. It will also be personal and private (less intrusive per your post) since the detection is at home!

Further, if you are still defending this, are you saying you support things like cryptojacking?
 
Might as well leave this here :)
For what? It's not relevant to the claims you made.
I quote from the article:

Regarding "Garbage collection"
This method is optimal when there is a lot of RAM available in the device which is usually the case with most of the premium android devices.

So, pretty much what I've been saying: if you have enough RAM, "garbage collection" inefficiencies become irrelevant. This is achievable on Android with 6GB and decent optimisation, the Pixel 4a and 5a for example.

if you’re planning on buying an Android device then make sure that it has a RAM of at least 4GB, for smooth performance.

The blog recommends 4GB of RAM for Android phones for a smooth experience, and the article is obviously outdated (written in 2019 and talking about even older phones) when looking at the latest Android phones. For example, my A52s has 6GB of RAM and 4GB of dedicated virtual RAM, and honestly it is close to Android phones with a dedicated 8GB in terms of general behavior, and there's no need for more RAM on this phone.
I will say it again: there's no technical reason for Android to need 12GB of RAM in order to function properly. The recommended amount is 6GB, 8GB is optimal, and 4GB is the minimum right now.
The 6 Pro doesn't have 12GB out of necessity.
 