Will they do something about the blatant sexualization of minors they have allowed on their TikTok clone, Instagram Reels? They should just nuke it. It’s cancer.
 
Nobody else is selling almost every single piece of personal info they can capture about these children, Instagram.
 
Why hasn't the phone company limited who can call people under the age of 18? Why can people send emails to people under 18? Why are ISPs allowing people under the age of 18 to access adult information? Why? Why? Why?

As much as I'm for a new feature like this, I find it quite ridiculous that people, suddenly faced with an idea that's new to them, use it as a reason to blame the company for not having created it before; it makes about as much sense as yelling at the local food delivery company for not having delivered my food before I ordered it.
But the worst case of an adult contacting a minor unsolicited — that is, child sexual abuse — is known to Instagram/Facebook and has been for years. Regardless of the frequency of these cases, they have compliance and legal staff that deals with subpoenas and search warrants from law enforcement, and you can bet that these cases didn’t just crop up in 2021.
 
Same with phone companies, and email providers, and blog hosts, and chatrooms, and web forums, and in-game chat features, and any other form of communications.

There wasn't an outcry about Instagram allowing DMs before this, and now they're suddenly attacked for not having added this new feature earlier. Absolute hypocrisy.

They've absolutely done a lot of **** they rightfully should get attacked about, but this just isn't it.

It's not even like new messages just get shoved in the face of a user; there's a whole pending-requests system that makes it easy to ignore DMs from unknowns. Not to mention that an age limit is more of a voluntary guideline as long as there's no age verification. So it's just ridiculous to now start going at them for not having added this earlier.
 
Instagram has a unique opportunity to prevent this on their platform because not only do they have users’ self-supplied DoBs, they have technology (ethical or otherwise, since it’s based on their tracking) that can estimate a user’s age with a reasonable degree of accuracy. And as the article states, Instagram does occasionally ask users to verify their age.

I’m speculating, to be clear, but I figure that their tracking plays a role here by asking for verification when they believe there’s a substantial mismatch between a user’s behavior and the age they entered. Instagram also has other (imperfect, but reasonably good) ways of catching fake accounts, which helps to resolve at least some of the cases that would slip through age verification.
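To make that speculation concrete, here's a minimal sketch of what a mismatch-triggered verification check might look like. Every signal name, threshold, and function here is hypothetical and purely illustrative; it is not Instagram's actual system.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account signals; the real inputs are unknown."""
    declared_age: int          # age implied by the self-supplied date of birth
    estimated_age: float       # age inferred from behavioral tracking
    estimate_confidence: float # 0.0-1.0, how much the model trusts its estimate

def needs_age_verification(acct: AccountSignals,
                           max_gap_years: float = 5.0,
                           min_confidence: float = 0.8) -> bool:
    """Flag the account for age verification when the behavioral estimate
    confidently disagrees with the self-supplied date of birth."""
    gap = abs(acct.declared_age - acct.estimated_age)
    return gap > max_gap_years and acct.estimate_confidence >= min_confidence

# Example: an account claiming to be 25 that behaves like a 13-year-old
print(needs_age_verification(
    AccountSignals(declared_age=25, estimated_age=13.0,
                   estimate_confidence=0.9)))  # True
```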

It is good that they are doing this now. They could and should have done this a long time ago, even without the occasional age verification.
 
It is not just young people who need protection, but also the elderly and those drawn to superstition.

Troll farms and hackers in Russia and elsewhere use conspiracy and new age profiles to attract exactly this demographic, luring older women into conversations and into clicking links that lead to malware and ransomware.

Here is one of many millions of such accounts that Facebook ALLOWS to harm users. They will have either a stolen photo or an AI-generated photo as a profile pic. Then they post nothing but memes all day: new age memes, conspiracy memes, financial scam memes, political attack memes.

[attached screenshot of one such account]
 
"estimate a user’s age with a reasonable degree of accuracy"
The last numbers I saw put it at around a billion users active at least once a month; so even if they get just 0.1% wrong, that's a million users who will find their accounts either limited, or kids able to receive "adult messages".

And that 0.1% is a looooooow estimate in a case like this; so we're probably looking at far more people being misclassified, and that's before we even get to shared accounts, sold accounts, old accounts getting new admins, and so on.
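A quick back-of-the-envelope calculation, assuming the roughly one billion monthly active users mentioned above and treating the error rates as pure illustrations:

```python
# Back-of-the-envelope: misclassified accounts at various error rates,
# assuming roughly one billion monthly active users (figures illustrative).
monthly_active_users = 1_000_000_000

for error_rate in (0.001, 0.005, 0.01):  # 0.1%, 0.5%, 1%
    misclassified = int(monthly_active_users * error_rate)
    print(f"{error_rate:.1%} error rate -> {misclassified:,} misclassified accounts")

# 0.1% error rate -> 1,000,000 misclassified accounts
# 0.5% error rate -> 5,000,000 misclassified accounts
# 1.0% error rate -> 10,000,000 misclassified accounts
```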

Not only is this an absolute nightmare as far as PR goes, since a semi-functioning "pedophile protection" will just get the whole thing labelled as some sort of pedophile playground; it would also do a great deal of harm by giving some perpetrators the excuse that they assumed a "legal-looking" teen was older because her account was open to contact. And in many jurisdictions that is enough to have them either found not guilty outright or given a more lenient sentence.

It will also give some kids a false sense of protection, as someone young-looking (whose real-life appearance matches their online pics) might be assumed to be closer to an appropriate age than they actually are. Not as bad as a middle-aged man outright preying on kids, but bad enough to do damage.

And parents might, understandably, assume that the protection actually works, and relax their vigilance to the point of not being as wary as they would have been with no protection at all.

Simply put: a non-working form of protection can in many cases cause much more harm than everyone clearly knowing there is no protection at all; so when a protection is added to a service like this, it must be clearly defined and actually functioning. It just isn't enough for a billion-plus-user service to guesstimate based on numbers the public assumes are magically accurate.

Maybe Instagram will be able to bootstrap functionality like that from these new, more limited tools; but they can't go straight from nothing to flipping some magic switch and suddenly protecting all the children, because doing it the wrong way could cause more harm than good.
 

Without all these terrible ransomware crimes, drugs, and child trafficking, how will they get more money to prop up their bitcoin racket? Criminals will have to use real money again, and that won't pump up the virtual coins that the social media bosses and VCs own.

They have an incentive to keep a certain amount of crime active because they are benefiting from it.
 
Uuuuuuuuuuhm… I'm sorry, but… your post absolutely reminds me of a schizophrenic off their meds and mid-episode. (Not meant as an attack on you; it's just my honest reaction here.)

Did you honestly mean to say that the people running Instagram are intentionally using it to give child kidnappers and the like easy access, because they profit when child traffickers use bitcoin to sell children?

That is the whole context here, so please correct me if I got any of that wrong…
 
Social media does more harm than good for many teenagers. I think they should have an age limit of 16 or 18 years old.
 
The evil child molesters (and other various scum individuals) will lie about their name and age and still be able to make contact.
Unfortunately, yes. They are very resourceful and determined people, and things like this will ultimately do little to prevent such crimes. It looks better for them to do something rather than nothing, though, so this is their response.
 
And just like kids who smoke will easily get someone to buy the cigarettes for them, they will find a way to get what they want regardless of age, especially with apps not requiring ID but just having you confirm your age on screen.

That's OK, so long as the privacy breachers get the stigma of being evil. Remember that the Joe Camel ads were legally banned (AFAIK) so cigarettes wouldn't get that happy-life look. Now Camel cigarettes carry photos of cancerous lungs, and it's illegal to display tobacco ads almost everywhere.

I am really digging your profile picture.

I am humbled
 