After years of criticism, Instagram has tightened its rules for accounts belonging to minors. But these measures do not seem to have had the desired effect.
According to an investigation by Design It For Us, a youth-led nonprofit organization, and Accountable Tech, Instagram continues to serve teen accounts sexually explicit and otherwise harmful content, despite safeguards intended to control what minors see.
Design It For Us representatives created five test accounts configured with teen-account settings and monitored them over a two-week period. All five accounts were recommended sensitive and sexual content. Four of the five were recommended content related to poor body image and eating disorders, and only one was shown educational content.
The recommendation algorithms also surfaced posts describing the use of illegal substances. In addition, sexually explicit posts that relied on euphemisms slipped past the filters.
“These findings suggest that Meta has not independently promoted a safe online environment, as it claims to parents and lawmakers. Lawmakers should require Meta to provide data on teen accounts so that regulators and nonprofits can understand over time whether teens are truly protected when using Instagram,” the report said.
Meta’s Response and Ongoing Debate
Meta spokeswoman Lisa Crenshaw criticized the report, saying that the testing had limitations and did not reflect how the app’s safety features actually work.
“A fabricated report does not change the fact that tens of millions of teens now have a safer experience thanks to teen accounts on Instagram. The report is not perfect, but even if you take it at face value, it only found 61 pieces of content that were labeled ‘sensitive,’ which is less than 0.3 percent of all the content these researchers likely saw during their testing,” she said in a comment to The Washington Post.
NIXsolutions notes that Meta applies the strictest privacy settings on Instagram to accounts of users under 16. For example, such accounts are automatically set to private, they cannot send messages to strangers, and other restrictions apply. The company also uses artificial intelligence to identify younger users who may have lied about their age.
The debate over online safety for teens remains ongoing, and we’ll keep you updated as more oversight and potential regulations are introduced.