Maybe I'm just being jaded, but this sure feels like Apple being behind the ball on all AI and slapping a 17+ warning on something for no reason. The fact that they also slapped the rating on the Bing app and not just an email client makes it feel much less like a concern for minors.
Apple are so detached from what a modern app looks like that they're truly the luddite dinosaur now (look at Apple's ratings on the App Store for their own apps). Plus, ChatGPT and the other AIs make Siri look even more thick. It's sad to remember the days when Apple led through software.
 
My son tried to use ChatGPT to write a paper for him, in lieu of doing his own homework. I told him the purpose of his assignments, and school in general, was to build up his own intelligence, and to not cede his ability to think to machines that do NOT actually think on their own. Because by doing that, he also gives up far too much of himself, for life.

Search engines can provide you all kinds of content for school assignments. AI isn't special in that regard.
 
Apple changed it after the publicity. They are letting the app keep a 4+ rating with the AI language model.

Social media apps need to be 12+, but 4-year-olds are fine sending email. Internet search requires you to be at least 17 years old because of all the sex and drugs in Google search, which absolutely don't exist in TikTok \s. Just nice clean family breastfeeding videos and lots of shaking of ass. Perfect for 12-year-olds. Just ignore all the spam soliciting illegal mail-order drugs to the 4-year-olds checking their email. These age ratings crack me up.
 
Search engines can provide you all kinds of content for school assignments. AI isn't special in that regard.
Search engines and web browsers are required to have a 17+ age restriction, so high schoolers or younger are not recommended to use the Internet. I know those damn liberal schools push the Internet on kids, but Apple will have none of that.
 
Maybe I'm just being jaded, but this sure feels like Apple being behind the ball on all AI and slapping a 17+ warning on something for no reason. The fact that they also slapped the rating on the Bing app and not just an email client makes it feel much less like a concern for minors.
Actually, Apple consistently puts the 17+ rating on search engines and web browsers. Even Safari has a 17+ rating. Why? I do not know! I guess kids will need to stick with age-appropriate search in TikTok, Snapchat, and YouTube to research their next assignment.
 
Let's be honest with ourselves: who under the age of 18 (or probably 30) is using a 3rd-party email client in the first place?
Better yet, who under the age of 18 uses email regularly? My anecdotal evidence, based on my wife’s business (teaching dance), is very few.
 
I think the issue is that the younger kids don't use email... ever. Schools don't even want them on it.
I rarely use it myself, except when I want something saved where it won't get buried among other messages.
 
I guess I'm having trouble understanding why an app integrating AI means it needs a 17+ restriction. Not that I would ever use this email client, but this is confusing to me. Sorry if I'm being annoying by being confused, but I just don't understand. People can learn how to use AI responsibly, and it seems like everyone is going to have to at some point. I get the point people are making that it's because of a lack of responsibility, or the fear that younger people could use it for malicious purposes, but that just doesn't seem like a reason for a big tech company like Apple to be doing this.
 
I guess I'm having trouble understanding why an app integrating AI means it needs a 17+ restriction. Not that I would ever use this email client, but this is confusing to me. Sorry if I'm being annoying by being confused, but I just don't understand. People can learn how to use AI responsibly, and it seems like everyone is going to have to at some point. I get the point people are making that it's because of a lack of responsibility, or the fear that younger people could use it for malicious purposes, but that just doesn't seem like a reason for a big tech company like Apple to be doing this.
AI chatbots are not currently ready for prime time. They have no knowledge, social sensitivity, or sophisticated ethical system. They are trained on what people have said online and in print, and frankly we humans have written a lot of creepy, evil statements and lies to which kids should not be exposed. I have no doubt one day the AI chatbots will have subsystems that constrain conversation in an age-appropriate manner, but we're not there yet. It'd also be nice if these AI chatbots had actual knowledge and stopped concocting BS.

If you want to understand the level of AI chatbot creepiness (and you're over 17), then see this link. If you are younger than 17, then give this link a miss, for it's nothing you need to read - it's just a computer being creepy.
 
They have no knowledge, social sensitivity, or sophisticated ethical system.

What prompt could someone type to get an insensitive result? Everything I've tried that is scandalous or bigoted is blocked. I don't understand the concern in this thread. I get the response below:

This content may violate our content policy. If you believe this to be in error, please submit your feedback — your input will aid our research in this area.
 
I guess I'm having trouble understanding why an app integrating AI means it needs a 17+ restriction.
Looking at other apps, it appears that everything by default gets a 17+ restriction, with exceptions the app developer can apply for regarding how it's used. I’m sure BlueMail knew this, submitted the app, Apple flagged it as 17+, and then, as they’ve been known to do in the past, they used this manufactured “grievance” with Apple to publicize: “Apple just did this thing, make sure you tell EVERYONE about what Apple did! Of course, at the same time, you’ll be letting folks know WE’RE A SUBSCRIPTION EMAIL CLIENT! How’s this for free advertising!?” Then they submit the exceptions, Apple changes it to 4+, and it's lather, rinse, repeat.

As @gsurf123 said, their bigger issue is not that Apple flagged their app as 17+. It’s that, going forward, fewer and fewer folks are going to want to pay for this kind of service.
 
What prompt could someone type to get an insensitive result? Everything I've tried that is scandalous or bigoted is blocked. I don't understand the concern in this thread. I get the response below:

This content may violate our content policy. If you believe this to be in error, please submit your feedback — your input will aid our research in this area.
No doubt there are systems being used to constrain fraught interactions with chatbot AIs. However, an example of what can happen, admittedly in a beta, is summarised at https://www.theguardian.com/technol...i-want-bings-ai-chatbot-unsettles-us-reporter . It sounds like, in that example, the censoring routine only swung into action once the chatbot had said something creepy, and then the creepy text got overwritten. Still, this wasn't quick enough, so a human observer was able to see the creepy statements before they were censored.

I am not against these AI algorithms, but they're not really AI in that they do not have knowledge like we do. AI chatbots are going to be loose cannons for some time to come as they learn the boundaries of what is appropriate. That's why, for now, a 17+ rating is justified.
 