
Meta has announced new updates for teen Instagram accounts, introducing stricter measures on the type of content young users can view—particularly posts linked to harmful trends.
The company also revealed plans to regulate conversations between teens and its digital assistants, beginning next year.
The update comes just over a year after Instagram launched teen accounts in September 2024, which are automatically assigned to users aged 13 to 18.
Initially, these accounts restricted exposure to sexual, explicit, or disturbing content. The new version expands these safeguards by filtering out or removing recommendations for posts that feature harsh language, risky online challenges, or material that may encourage dangerous behavior, Meta said in a statement, Al-Rai daily reports.
Capucine Touvier, Meta’s Director of Public Affairs for Child Protection, said the company is “adding an additional virtual barrier for teens, especially when it comes to sensitive and inappropriate content.” The revised framework will first take effect in the United States, Canada, the United Kingdom, and Australia, with plans to extend it to more countries in the coming months.
To decide what content to hide from teens, Meta has adopted standards similar to the PG-13 movie rating used in the United States. This rating signals that some content may be unsuitable for children under 13, and its application on Instagram will be guided by a specialized committee composed of independent parents.
The approach aims to ensure the “most restrictive and protective standards for teenagers,” Touvier explained, citing examples such as posts promoting extreme dieting or glorifying alcohol and tobacco use.
Artificial intelligence plays a central role in identifying and classifying such content, working alongside human reviewers.
Meta stated that all users under 18 will automatically be placed in “13+ mode,” which they cannot deactivate without parental approval. The company will also use age-verification tools—such as ID checks or selfie videos—to confirm users’ ages when necessary.
Parents will gain new powers under the update, including the option to activate a “limited content” mode that prevents teens from seeing, posting, or commenting on certain types of material. Starting next year, this setting will also limit the conversations teens can have with Meta’s AI assistants.
These developments come amid growing global concern about the effects of social media and AI-based chatbots on youth mental health. In the United States, regulators have begun investigating the potential risks of AI assistants after several tragic incidents involving teenagers.
California recently passed a law requiring AI operators to verify users' ages and regularly remind minors that they are interacting with a machine, reinforcing efforts to ensure digital safety for young people.