Meta on Thursday finally enacted restrictions to stop unknown adults from directly contacting underage users, a move critics said is long overdue as concerns mount about child safety on Facebook and Instagram.
The new rules were revealed as Meta faces a series of explosive lawsuits, including a sweeping challenge from 33 states accusing the company of fueling a youth mental health crisis and an alarming complaint by New Mexico alleging the company has exposed underage users to sex predators.
A new default setting will block teenage Instagram users from receiving direct messages or being added to group chats by accounts they aren’t already connected to or don’t follow, Meta said in a blog post.
The change applies to all US users under age 16, or under age 18 in other countries.
Meta is enacting a similar feature for teens on its Messenger app, which blocks them from receiving messages unless they’re already connected to the sender as Facebook friends or through phone contacts.
“We want teens to have safe, age-appropriate experiences on our apps,” Meta said in the blog post.
Meta is adding more direct message restrictions for teens. Getty Images
Additionally, parents will now have the ability to approve or deny attempts to change account safety settings for children under the age of 16. Under the previous version of Instagram’s parental supervision feature, parents were only notified if a teen changed the safety settings.
Meta said it’s also preparing another feature that will “help protect teens from seeing unwanted and potentially inappropriate images in their messages from people they’re already connected to, and to discourage them from sending these types of images themselves.” Meta said it would have “more to share” on that feature later this year.
The renewed safety push comes after reports and lawsuits accused top Meta executives, including CEO Mark Zuckerberg and Instagram chief Adam Mosseri, of vetoing or watering down proposed features aimed at protecting teens in recent years.
In one case, Zuckerberg reportedly rejected an effort to ban filters that simulated the effects of cosmetic surgery, despite concerns that they were fueling teen body dysmorphia.
The safety update is the latest of several changes by Meta. Meta
Josh Golin, the executive director of the child safety advocacy group Fairplay, said Meta’s announcement was long overdue.
“Today’s announcement demonstrates that Meta is, in fact, capable of making changes to the design of Instagram to make it safer by default,” Golin said. “But it shouldn’t have taken over a decade of predation on Instagram, whistleblower revelations, lawsuits, furious parents, and Mark Zuckerberg getting hauled before Congress for Meta to make this change.”
New Mexico Attorney General Raúl Torrez said “parents have every reason to remain skeptical that Meta’s latest policy changes are meaningful or that Meta is likely to implement those changes faithfully.”
“With respect to child safety on Facebook and Instagram, the evidence shows Meta has consistently done the minimum or worse,” Torrez said in a statement. “While every step forward is welcome, it shouldn’t have required litigation to finally prompt action to protect children on Meta’s platform.”
Meta said it wants its apps to offer “age-appropriate” experiences. Getty Images
Critics of Mark Zuckerberg’s social media sites allege they use addictive features, such as rampant notifications and the “like” button, to keep young users hooked, even as disturbing content on the platforms fuels harmful outcomes like anxiety, depression, body image issues and even self-harm.
New Mexico’s lawsuit revealed that an unnamed Apple executive once complained to Meta that his 12-year-old had been “solicited” on Instagram.
The complaint, which cited various company documents and communications, also detailed an internal 2021 presentation that showed “100,000 children per day received online sexual harassment, such as pictures of adult genitalia.”
Elsewhere, the lawsuit filed by the 33 state attorneys general included a sweeping look at Meta’s response to its mounting child safety crisis.
Meta faces multiple lawsuits over the safety of its platforms. REUTERS
The states alleged that Meta drastically downplayed the prevalence of content depicting self-harm shown to teens on Instagram, contradicting its own internal research.
Earlier this month, Meta announced that it would place more restrictions on content settings for young users.
That included controls on search terms designed to prevent teens from being exposed to sensitive subjects such as eating disorders or suicide.
Zuckerberg, X CEO Linda Yaccarino, TikTok CEO Shou Chew and other Big Tech leaders are set to testify before a Senate panel next Wednesday as part of a hearing on the “online child sexual exploitation crisis.”