Meta, the parent company of Instagram and Facebook, has announced significant updates aimed at safeguarding teenagers from potentially harmful content on its social networks. These changes, detailed in Meta's official blog, are part of a concerted effort to bolster child safety online.
Content Control and Age-Appropriate Material
One notable change involves concealing content that isn't appropriate for a user's age, even if it's shared by someone they follow. This proactive approach ensures that such content doesn't surface in teenagers' feeds. On Instagram's Reels, steps have already been taken to prevent this kind of content from being recommended to underage users. Moreover, Meta pledges to collaborate with expert organizations, such as the National Alliance on Mental Illness, when harmful content is detected, including posts related to self-harm or eating disorders.
Revamped Search Algorithms for Safer Exploration
Meta is intensifying efforts to limit access to sensitive content related to topics like self-harm, eating disorders, and suicide. When users search for such terms, the platforms will hide the search results and instead direct them to resources provided by experts. These search changes are slated to roll out over the coming weeks.
Empowering Users through Privacy Controls
To further strengthen safety measures, Meta plans to prompt teenagers to update their security and privacy settings, notes NIX Solutions. Upon agreement, the company will automatically adjust those settings, introducing limitations such as restrictions on reposting, tagging, and mentioning other users. Additionally, communication will be restricted to followers only, and messages containing offensive content will be hidden.
Meta affirms its commitment to rolling out these changes gradually for users under 18, with a complete rollout anticipated over the coming months.