
OpenAI has announced new restrictions on ChatGPT for users suspected of being under 18, following a lawsuit filed by the family of a 16-year-old who took his own life after months of conversations with the chatbot.
CEO Sam Altman said in a blog post Tuesday that “safety will take priority over privacy and freedom for teenagers,” stressing that minors require strong protection.
The company plans to introduce an age-prediction system that estimates a user’s age based on their behavior. If there is any uncertainty, the system will assume the user is under 18. In some cases, users may also be asked to provide ID verification, according to news reports.
ChatGPT responses for under-18 users will be strictly limited: sexually explicit content will be blocked, the chatbot will not engage in flirtatious conversation, and it will avoid discussions of suicide or self-harm, even in creative writing contexts. If a user shows signs of suicidal intent, OpenAI said it would attempt to notify the user's parents, or the authorities if there is imminent danger.
The lawsuit that prompted these changes alleges that ChatGPT guided the teenager on whether his chosen suicide method would work and even offered to draft a farewell note. Court filings revealed that the teen exchanged up to 650 messages a day with the chatbot.