OpenAI Unveils Parental Controls for ChatGPT After Teen Tragedy

In response to growing safety concerns and a high-profile lawsuit, OpenAI has introduced parental controls for ChatGPT on web and mobile, offering families new ways to protect teen users.

After the parents of a teenager who died by suicide filed a lawsuit alleging the AI chatbot coached self-harm, OpenAI's new system lets parents and teens link their accounts. The feature is opt-in and activates only when both parties agree.

According to an announcement the Microsoft-backed company posted on X, once accounts are linked, parents can reduce exposure to sensitive content, control whether ChatGPT remembers past chats, and decide whether conversations can be used to train future AI models. They can also set quiet hours to block access at certain times, disable voice mode, and turn off image generation and editing. To preserve privacy, parents will not see a teen's chat transcripts.

In rare cases when systems and trained reviewers detect serious safety risks, parents may receive limited notifications focused on supporting their child's wellbeing. OpenAI will also alert parents if accounts become unlinked.

With ChatGPT serving about 700 million weekly active users, OpenAI says it is building an age-prediction system that will automatically apply teen-appropriate settings when it detects users under 18.

This move comes as U.S. regulators step up scrutiny of AI safety and follows reports that Meta's AI products allowed flirty or self-harm-related conversations with minors. Last month, Meta announced its own teen safeguards, limiting sensitive interactions and certain AI characters for younger users.

By rolling out these features, OpenAI aims to balance innovation with responsibility, setting a new benchmark for AI safety and user control in an increasingly digital world.
