The real problem lies with the social media companies failing to filter out harmful and inappropriate content.
Are you referring to this morning's story about how Instagram Reels showed/recommended graphic and violent content to its users, even those who opted out?


Meta fixes error that flooded Instagram Reels with violent videos

Feb 27 (Reuters) - Meta Platforms said on Thursday it had resolved an error that flooded the personal Reels feeds of Instagram users with violent and graphic videos worldwide.


It was not immediately clear how many people were affected by the glitch. Meta's comments followed a wave of complaints on social media about violent and "not safe for work" content in Reels feeds, despite some users having enabled the "sensitive content control" setting meant to filter such material.

"We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake," a spokesperson for Meta said.

It did not disclose the reason behind the error.

Meta's moderation policies have come under scrutiny after it decided last month to scrap its U.S. fact-checking program on Facebook, Instagram and Threads, three of the world's biggest social media platforms with more than 3 billion users globally.


Violent and graphic videos are prohibited under Meta's policy and the company usually removes such content to protect users, barring exceptions given for videos that raise awareness on topics including human rights abuse and conflict.
 
I just wish they’d revisit their rules on Apple Pay. My kids are old enough to have their own payment cards, and the providers support Apple Pay, but because they aren’t yet 13, Apple has decided they can’t add these cards to their Apple Wallet. I really don’t understand why Apple made this ruling, or at least didn't make it an option to enable or disable. I’d much rather my kids’ iPhones have their cards on them than send them out with their physical cards, which are less secure and much more likely to be lost!
 
The real problem lies with the social media companies failing to filter out harmful and inappropriate content.
In the form of ads.

Online streamers do the same thing, as do platforms like Roku and Google TV.

PG is set in system preferences, but there are ads for R-rated slasher films, splash posters for TV-MA series, and search results full of inappropriate content you can see but not click on…

And yet they filter OUT access to many PG movies due to “content,” despite the PG filter being set.

Why? Because despite a pledge to “protect” the kids, ad revenue is earned by placements and eyeballs, and they have to get that revenue. But at the same time they don’t actually want to show you any content, because each view costs them money in royalties.
 
My 5-year-old has a child account but sees recommendations for the new "The Gorge" movie in the Apple TV app, which is a movie in the horror category, imho. I find that pretty strange. Why is there no filter?
 
Apple continues to go the wrong way on this, especially on the Mac. On a Mac I set up my teen with a standard account, which means they cannot remove or add wireless networks. There is no option to allow just this for when they go to a library, etc. The only option would be to make them an admin, which would unlock everything AND allow them to remove all adult restrictions, including Downtime, from both their Mac and iPhone. There used to be a way to override this through Terminal commands, so they could at least get more network-related privileges without becoming an admin, but that has been crushed by the non-super-user death grip of each macOS iteration becoming less and less friendly to tinkerers.
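For context, the Terminal workaround alluded to above was usually done through the macOS authorization database, loosening the right that guards network settings. A minimal sketch of the old approach, assuming a macOS version that still honors a rewritten `system.preferences.network` right (recent releases reportedly ignore or reset it, which matches the experience described above):

```shell
# Show the current rule guarding changes to network settings
security authorizationdb read system.preferences.network

# Let any logged-in user (not just admins) change network settings;
# revert later with a stricter rule such as "authenticate-admin-nonshared"
sudo security authorizationdb write system.preferences.network allow
```

Because this edits a system-wide policy, it applies to every standard account on the Mac, not just the one child account.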

Parental controls on iOS are so bad that for a few years I tried to implement Qustodio at $60 per year. Even that is somewhat flawed, but it is still leaps and bounds better than the baked-in Screen Time/parental controls from Apple.
 
If anyone has tried to set up an iPad for a young child like my 5-year-old there are currently a lot of pain points. If you want to change a parental control setting on the child's device you have to log out and then log in as yourself and it asks you to authenticate multiple times. It takes forever and is not fun.

If your child wants to buy an app, the consent flow is ridiculously bad. My wife and I both have iPhones, but when my son wants to buy an app, do those notifications go to either of those iPhones — you know, the devices we actually have with us and use? No, no they don't. They go to an ancient iPad that no one uses. One would think it would select an iPhone first; one would be wrong.

Hopefully this will be fixed in this update. It's like they never tested it with an account with more than 5 devices.
 
It’s hard to imagine though how the “new API that will let developers confirm age range to deliver age appropriate experiences to kids” won’t be leaking age information to the app.
Apple will probably share an "age approved" or "age unapproved" signal. It doesn't need to share your kid's actual age, just as Apple Pay doesn't need to share your credit card number with merchants.
 
Apple should go back to the drawing board and redesign Screen Time in Settings.
It's an awful UI from Apple and not intuitive to use.
 
Isn't it already possible to share AirTags with any other account (except sharing to a child account)?

Unfortunately we couldn’t get it to work. We set up an iPhone with a child account for a relative with dementia. And we couldn’t share an AirTag from the adult’s to the child’s phone or vice versa.
 
"9+ - The app may contain instances of content not suitable for users under 9, including infrequent or mild cartoon or fantasy violence, profanity or crude humor, or mature, suggestive, or horror- or fear-themed content."

One of my kids is this age right now, and reading that list, I couldn't imagine him taking in some of those things... it's crazy that the age of 9 is considered old enough for profanity, crude humor, and mature/suggestive/fear themed content :oops:😩
Kids vary in their maturity based on more than just age. I'm guessing Apple is setting a sort of lower bar. If I understand correctly, you can still have it set so that every purchase requires parental approval.
 
13+ - The app may contain instances of content not suitable for users under 13, including infrequent or mild medical or treatment-focused content, references to alcohol, tobacco, or drug use, sexual content or nudity, realistic violence, or simulated gambling; or frequent or intense contests, profanity or crude humor, horror or fear-themed content, or cartoon or fantasy violence.
So, does "simulated gambling" include gacha mechanics in every "free" game that targets users with predatory microtransaction-based gambling?

Which I suppose you could argue isn't simulated gambling, it's real gambling where you don't actually win money back... which is not much of an argument for letting 13+ year olds do it.
 
On topic: Excellent and overdue on all services. It's past time to realize that minors need to use these services and that we need proper controls.

Maybe a wild thought but as every account moves online, is a properly managed child account the new sudo?

Limited permissions, fewer terrible ads, not deliberately designed to be engaging at the cost of all else, etc.

When you do need to do something "adult" approve via the "adult" account. Set the child permissions as high as they go.

Maybe they could block this by basing it on "birthdate" and automatically making it an "adult" account at "18."

I'm always looking for the last refuges against the enshittification.
 
Both me and my wife have iPhones but when my son wants to buy an app do those notifications go to either of those iPhones — you know, the devices we actually have with us and use? No, no they don't. They go to an ancient iPad that no one uses. One would think it would select an iPhone first; one would be wrong.
My kids' app requests come through as text messages, which I happen to find very annoying. I would rather have a notification. The text messages clutter up my Messages, and since I'm busy, I tend to forget to delete them after approving or disapproving.
 
Is Apple still requiring iCloud family for parental controls, and if so are they still limiting family size to 6? Those of us with larger families are locked out from parental controls for some of our kids. I'd love to be able to add all of my kids, even if limited only to parental controls and not all of the other iCloud features.
At least they have finally allowed merging of iCloud accounts (admittedly not fully functional yet). I can finally merge my two accounts and still have one free for my youngest daughter. The only solution I'm aware of for a larger family, though, would be to buy an extra phone and create a second family (really annoying). I'd like to see them add some sort of extended-family feature, because I'm presently using two of my family slots for my parents now that they are older.
 
My 5-year-old has a child account but sees recommendations for the new "The Gorge" movie in the Apple TV app, which is a movie in the horror category, imho. I find that pretty strange. Why is there no filter?
Same with music. There are some types of music that are absolute filth even without profanity.
 
Good to know about all the changes. Hopefully it makes setting up the child account easier and simpler.
 
I think the current system is okay at present, and the update is good. However, when a child asks for permission to install an app, it's pot luck whether my phone/Mac or my wife's phone/Mac gets the notification to allow it.
Also, they can still browse apps installed by us and install them without a confirmation prompt. I'd like to know where that can be prevented.
 
In the form of ads.
It’s not just ads. Social media companies are allowing kids to view violent and sexually explicit content.
 
Are you referring to this morning's story about how Instagram Reels showed/recommended graphic and violent content to its users, even those who opted out?
I wasn’t, but that is just one example of what’s going on. Their algorithms are actively pushing violent and inappropriate content to kids every day with little or no moderation.
 