That too!! Why not give parents the option to entirely prevent a child from opening a picture in which NudeKit detects a hit?
Moving forward, kids won't use Messages; they'll migrate to WhatsApp or whatever to get out from under Apple's control.
With that group, Apple will have lost one of the key features that makes iOS "sticky": Messages!
I'm not saying they will migrate away from Apple products, but Apple will have lost some control over them.
Some analysts think Messages is the #1 reason Apple has a sticky ecosystem.
I don't agree; I think it's the lack of viable and credible competition.
Huh? Location privacy is a whole other battle, and one that has been largely lost, as you say. This feature implements on-device image scanning, just like the previous one about CSAM. I'm not sure why, but it seems that sentiment about on-device scanning around here has done a 180 since then. I'm still here feeling like personal devices should be, you know, personal.
Your wireless carrier already knows exactly where you are at any given moment, and will give this information to your government at a moment's notice. Any website you visit will happily turn over this information as well, when requested to by law. And your VPN is probably lying to you.
What OS you run on your phone matters absolutely nothing in this regard.
Sorry man, you did not read/understand the article, and I guess you never developed an app. It seems this scans for nudity to tell the recipient that it might contain nudity. No informing takes place.
This isn't doing anything that existing infrastructure in iOS hasn't already been capable of doing for over a decade. I guess you just don't know what software is and how it works.
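For what it's worth, here is roughly what a purely on-device check of that kind looks like in practice. This is a minimal sketch with made-up names (`NudityClassifier`, `likelyContainsNudity`); it is not Apple's actual implementation, just an illustration that both the classification and the warning decision can happen locally with nothing sent anywhere:

```swift
import UIKit

// Hypothetical stand-in for whatever bundled model Messages would use.
// Everything runs locally; there are no network calls anywhere below.
struct NudityClassifier {
    /// Returns true if the image likely contains nudity.
    /// Placeholder logic; a real version would run an on-device
    /// Core ML model over the pixel data.
    func likelyContainsNudity(_ image: UIImage) -> Bool {
        // ... on-device model inference would happen here ...
        return false
    }
}

/// Decides whether an incoming photo should be blurred and a warning
/// shown. The boolean result is used only by the local UI; nobody is
/// informed of it.
func shouldWarn(about image: UIImage, featureEnabledByParent: Bool) -> Bool {
    guard featureEnabledByParent else { return false }
    return NudityClassifier().likelyContainsNudity(image)
}
```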
This is assuming the option is only enabled on child devices by the parent and not by default by Apple.
The children are not under Apple's control, they are under their PARENTS' control, rightfully so... read the article!
This is straight up false information. Please read the article before making that kind of statement. This is not CSAM.
So, well, explain it to me...
I always thought that the framework scans for nudity and informs a third party. So you're saying that this is not correct?
This framework is the foundation. At the moment it may only contact a family member. But a third party is a third party. The framework could very easily be extended to contact the authorities as well; it is just a tiny step away.
I guess you just don't know what software is and how it works.
CSAM would already be working in the background of iOS devices
1) you don't have any proof of that
CSAM would already be working in the background of iOS devices
No, he understood it fine. It's you that can't comprehend this simple concept. All your comments on this have been completely nonsensical.
Sorry man, you did not read/understand the article, …
Taken in the context of the article, "parents are more often than not the offenders" means that they send inappropriate pictures to their kids? I don't think this is what you meant.
The parents are more often than not, the offenders.
Apple is in the business of communication. Privacy is a big part of communication, as is nefarious activity. Whether it's spam or something else nefarious, communication companies have a responsibility.
Anyone else here let their ISP or email provider kick emails into their spam or junk folder?
Yep.
So the extent of this is the parents can enable a feature on their <13 yo's device that will scan pictures sent to them in a text message. If that picture contains nudity, the child's device will display a warning message. The child can then choose to view or not view. And that is it?
I mean, all this is doing is using the same technology that the people tab in Photos is using.
Taken in the context of the article "parents are more often than not the offenders" means that they send inappropriate pictures to their kids? I don't think this is what you meant.
On the major point, though, your statement is too general. These are two different methods of communication.
This is specifically about messages, which are private communications. Just like mail. Just like phone calls. And they should not be snooped on by anyone without a warrant or explicit permission. E-mail is the same. And the spam filters for iOS and macOS run locally as well. You give permission to Gmail and Hotmail to scan your mail when you use their service. And what do they do with the data that they get from it? It isn't for your benefit.
I agree with you on public and posted communications like Twitter, FB, etc. Those networks are different. Rather than direct communication to another entity, these messages are posted for all to see. Owners of these services do have a responsibility to monitor and report on them, and possibly more. In this type of communication companies do need to scan messages (and they are also often the owner of those messages, so it's easier for them to make the case to do so).
Yep.
I don't know, I can see where it could be useful.
Then it seems to have no substance.
It’s like broth with no meat or vegetables.
With you 100%. And while you're right that they can be, it is technically illegal, for all that's worth. It was a legal/moral comparison I made, less so a technological one. And even that's funny: while one can't read the contents of a message, the sender and receiver are fair game, which is often enough for trouble.
I mean, all this is doing is using the same technology that the people tab in Photos is using.
It’s not sending this information to anyone, it’s not alerting any authorities, it’s just using AI to put up a warning if you have this feature enabled, which most won’t.
Also, as a side note: most phone calls can easily be tracked, along with SMS text messages and, depending on the provider, emails as well. Not exactly the most secure forms of communication.
I would trust iMessage with encryption enabled before I would trust Gmail or T-Mobile.
I don’t know, I can see where it could be useful.
There have been several stories of people getting extremely explicit photos from random phone numbers out of the blue; I could completely understand why putting up a warning for children could be useful.
Obviously it’s not a 100% perfect solution, but that’s really not possible without actually invading privacy.
You see, when one reads a sentence, words within the sentence matter. For the sake of reading comprehension, you do not simply take atomistic segments out of complete sentences in order to make them fit your view. Take the complete sentence that you broke apart, for example: "Apple also presents children with ways to get help by messaging a trusted grown-up in their life." This does not say that Apple messages a trusted grown-up, but that it presents a child with ways to message a trusted grown-up. Furthermore, reading comprehension requires context not just at the sentence level, but at the paragraph level and, for multi-paragraph trains of thought, at the full-article level. If you continue reading down a bit further, you'll come across this:
Sorry man, you did not read/understand the article, and I guess you never developed an app.
"By messaging a trusted grown-up in their life." - So the basic functionality is there. And I only said that Apple is moving in this direction, and it does.
When Apple first described Communication Safety in August, there was a feature designed to notify parents if children opted to view a nude photo after being warned against it. This has been removed.
If a child is warned about a nude photo and views it anyway, parents will not be notified, and full autonomy is in the hands of the child. Apple removed the feature after criticism from advocacy groups that worried it could be a problem in situations of parental abuse.
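To make the shipped behavior concrete: the flow described above ends at the child's own choice, with no branch that notifies anyone. A rough sketch of that decision logic, using illustrative type and function names only (they are not Apple's):

```swift
// Illustrative only; names are made up, but the behavior mirrors the
// description above: blur, warn, offer help, then let the child decide.
enum ChildDecision {
    case view
    case skip
}

struct CommunicationSafetyFlow {
    let enabledByParent: Bool

    /// Returns the steps the device takes for a photo the on-device
    /// check has flagged. Note there is deliberately no step that
    /// contacts a parent, Apple, or any authority.
    func steps(for decision: ChildDecision) -> [String] {
        guard enabledByParent else { return ["Show photo normally"] }
        var steps = [
            "Blur the photo",
            "Show a warning",
            "Offer ways to message a trusted adult",
        ]
        switch decision {
        case .view:
            steps.append("Unblur the photo; no one is notified")
        case .skip:
            steps.append("Keep the photo blurred; no one is notified")
        }
        return steps
    }
}
```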
This is assuming the option is only enabled on child devices by the parent and not by default by Apple.
Good child protection always starts and ends with Apple having the tools available but the parents having 100% of the say over whether their child's devices have the features enabled or not. Also, it should only be on child devices (via parental consent), not added spyware that every Apple device is infected with.
CSAM detection fails this because the parents do not have a say, and everyone else suffers too. Also, CSAM detection literally searches your information without a police warrant. Another reason that it, in its current form, should NEVER exist on any Apple device.
Look, let's not quibble about words here; all kinds of software "scans" files. This is different because of the intent: to monitor and limit the behavior of the user. The same slippery slope that exists for CSAM detection exists here too.
The Photos app has done on-device scanning for over a decade.
Anti-virus software has done on-device scanning since the '90s.
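To illustrate what this kind of "on-device scanning" amounts to: any app can already run image analysis locally through Apple's Vision framework, and the results never leave the device unless the app explicitly sends them somewhere. A small sketch using face detection (a real Vision request) as a stand-in for the Photos-style People analysis mentioned earlier; the file path in the usage note is a placeholder:

```swift
import Foundation
import Vision

/// Counts faces in an image entirely on device, the same kind of local
/// analysis the Photos "People" feature is built on. Nothing here
/// performs any networking; the count is simply returned to the caller.
func countFaces(in imageURL: URL) throws -> Int {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])
    return request.results?.count ?? 0
}

// Usage (placeholder path):
// let count = try countFaces(in: URL(fileURLWithPath: "/path/to/photo.jpg"))
```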