
I7guy

macrumors Nehalem
Nov 30, 2013
34,228
23,971
Gotta be in it to win it
A lot of you guys didn't read the whole article or misunderstood something. This only flags images that match a database of known child abuse images or "visually similar" copies (cropped, resized, color-shifted, etc.). There's basically no chance that your little grandson's first bath will be flagged.
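For anyone wondering how "visually similar" matching can work without anyone looking at what's actually in your photos: it's typically done with a perceptual hash rather than an exact file checksum. Here's a toy Python sketch of the general idea, using a trivial difference hash purely as a stand-in, not Apple's actual NeuralHash:

```python
# Toy "perceptual hash" matching, only to illustrate the idea above.
# This is a trivial difference hash (dHash) over a tiny grayscale grid,
# NOT Apple's NeuralHash. The point: an edited copy of a known image
# still matches, while an unrelated photo does not.

def dhash(pixels):
    """pixels: list of rows of grayscale values.
    Each hash bit records whether a pixel is brighter than its right-hand neighbour."""
    return [1 if left > right else 0
            for row in pixels
            for left, right in zip(row, row[1:])]

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

known      = [[10, 20, 30, 40], [40, 30, 20, 10], [10, 20, 30, 40]]   # a "known" image
brightened = [[value + 50 for value in row] for row in known]          # edited copy of it
unrelated  = [[90, 10, 80, 20], [15, 95, 25, 85], [70, 5, 60, 10]]     # some other photo

database  = [dhash(known)]   # stand-in for the database of known-image hashes
THRESHOLD = 2                # a small Hamming distance counts as "visually similar"

for name, img in [("edited copy", brightened), ("unrelated photo", unrelated)]:
    distance = min(hamming(dhash(img), h) for h in database)
    print(name, "->", "match" if distance <= THRESHOLD else "no match", f"(distance {distance})")
```

Apple's own description adds blinding of the database and a threshold of multiple matches before anything gets reviewed, but the matching step is broadly this kind of hash comparison rather than a classifier guessing what's in the picture.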

This really isn't the concern here. I think the concern is that this is a slippery slope.

If you read how the process will work, this kind of picture would never be flagged. That doesn’t make the move less controversial or more acceptable, but Apple wants one of your saved pictures to match a database of child abuse photos before it is flagged to anyone. Nudes of you or photos of your kids taking a bath would not match that database in the first place.
I read the article and still have a concern. It's a slippery slope from identifying hashes in a database to scanning other photos on your iPhone, to detecting speeding and drunk-driving patterns (not that that would be a bad thing).
 

ipedro

macrumors 603
Nov 30, 2004
6,232
8,493
Toronto, ON
This is really a powerful use of AI to fight child exploitation. If all the devices that kids have access to are running this kind of protection, then it’ll substantially decrease their risk of exposure to malicious actors. Not every child will have an iPhone or iPad, so I hope that Google will follow suit so that this kind of protection is universal. It would make a serious dent in child exploitation.
 
The CSAM thing doesn't detect/determine the content of images. It checks photos against a database of specific (actively circulating) child abuse images.

Not to say there aren't legitimate concerns, but worrying that it is going to somehow flag your own kid's photos is not one of them.

(The child safety feature does detect content, but it seems the worst it does is throw up a warning/blur if you have it on.)
The CSAM database is of known abuse images, so regular nudes shouldn't trigger a match.

And the blurring of explicit photos seems to be a parental control thing.
Awesome! Thank you for clarifying. I appreciate it.
 

zakarhino

Contributor
Sep 13, 2014
2,480
6,711
If you read the article carefully, you’ll see these kinds of photos will not raise an alert.

The article claims that, but the author of the article is not the author of the code base. Just because a press release says something doesn’t make it true or accurate.

One of the most basic ML exercises is tricking an image recognition tool into identifying false positives. False positives from telling Siri to set a timer for an hour have no consequence except inconvenience. False positives on this system can potentially lock you out of your iCloud account and get you in trouble with the law.
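To make that concrete: any hash that is deliberately tolerant of crops, resizes and filters throws information away, so by construction different images can land on the same hash. A toy sketch (again a trivial difference hash, not Apple's NeuralHash) where two images that look nothing alike hash identically:

```python
# Two very different-looking "images" that collide under a toy perceptual hash.
# The hash only records whether each pixel is brighter than its right-hand
# neighbour, so absolute brightness is discarded entirely.

def dhash(pixels):
    return [1 if left > right else 0
            for row in pixels
            for left, right in zip(row, row[1:])]

high_contrast = [[100, 0, 100, 0]] * 4      # harsh black-and-white stripes
nearly_white  = [[255, 254, 255, 254]] * 4  # almost uniform white

print(dhash(high_contrast) == dhash(nearly_white))  # True: same hash, very different images
```

NeuralHash is obviously far more sophisticated than this toy, but the trade-off is the same: robustness to edits comes at the cost of exactness, and that gap is exactly where accidental false positives and deliberately crafted collisions live.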
 

nitramluap

Cancelled
Apr 26, 2015
440
994
As long as their false positive rate is zero, I have no problem with this. I’d hate for my childhood photos or medical photos to be flagged. I use iCloud for its security here.

Perhaps they’ll only divulge to the authorities whether any images have been flagged IF they ask specifically and have a warrant. Fine by me. I don’t see Apple referring people to authorities blindly.
 

DevNull0

macrumors 68030
Jan 6, 2015
2,703
5,390
No, too far, Apple.

What is going to keep you from scanning my library for NeuralHash matches against politics you don’t like? Or criticism of the dictatorship in mainland China?

If that doesn’t happen in the US, what will keep other countries (see above) from doing just that to their citizens?

This is a very scary slippery slope. It's hard to say you're against it without people accusing you of supporting child exploitation, but it makes it all too easy to take away people's freedom. How long until the picture of the Chinese president looking like Winnie the Pooh makes it into that database, at least in China? We already know how eager Timmy is to bend the knee to Xi.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,471
California
As long as their false positive rate is zero, I have no problem with this. I’d hate for my childhood photos or medical photos to be flagged. I use iCloud for its security here.

Perhaps they’ll only divulge to the authorities whether any images have been flagged IF they ask specifically and have a warrant. Fine by me. I don’t see Apple referring people to authorities blindly.

Are your childhood photos or medical photos in a database of known illegal photos?
 

dukebound85

macrumors Core
Jul 17, 2005
19,131
4,110
5045 feet above sea level
Read the whole article carefully. While there are plenty of legitimate concerns about this, I don't think false flags are one of them. They are scanning for known child abuse imagery or visually similar edits of those known images (e.g. cropped, filtered, etc.). So your personal family photos aren't going to get flagged.

In other words, they're not scanning for something vague like "a picture of a naked child".
calling it now......

cyber hackers will plant these images on unsuspecting people's machines and demand bitcoin to unlock files and remove the imagery

but seriously, what the hell, Apple. Stay out of my phone
 

zakarhino

Contributor
Sep 13, 2014
2,480
6,711
The only people who should be opposed to these specific features are people who are committing crimes, and people who don’t care about the safety of their children.

Did you also think the only people who should be opposed to the Patriot Act are terrorists?

Did you also think the only people who should be opposed to anti-misinformation technology are fascist racists and conspiracy theorists?
 

confirmed

macrumors regular
Dec 30, 2001
173
265
New York, NY
This is the beginning of the end of any hope that Apple would continue to differentiate themselves as an entity interested in protecting their users’ privacy. No, they don’t monitor our devices for the purpose of selling us stuff like FB, Google, etc., but they do monitor for the purpose of identifying “good” or “bad” people or actions? Of course, they start with a use case like child abuse, since you’d have to be a monster to argue against it. But this sets a precedent and would begin to normalize this kind of monitoring for further use cases.

Also, I noticed that MacRumors didn’t run anything on this until Apple themselves “previewed” the feature. They’ll repost some rumor about the next round of emojis in a matter of minutes, but a topic like this, which has been covered by the FT, The Verge, CNN, and many others, they stay quiet on? I can only assume that Apple has worked with publishers like MacRumors to control the messaging on this. Nothing reminds me more of Big Brother than steps to erode personal privacy while making sure they control all communication around that erosion.

This needs to be stopped.
 