OpenAI is preparing to roll out parental controls in ChatGPT, a move that highlights how much the chatbot has already become part of teenagers’ daily routines. The company says the update will arrive within a month. Parents will be able to link their accounts with those of their children aged 13 and older. After that, they can turn off features like chat history or memory, and they will receive alerts if the chatbot detects signs that a teenager may be in distress.
The alerts are not constant monitoring. They are designed to appear only when the system detects a risk of genuine emotional harm. That might mean signs of depression, language pointing to self-harm, or other moments when a check-in from a parent could matter. For most everyday chats, parents will not see what their child is typing.
A Different Model for Sensitive Cases
OpenAI has also said it will direct some conversations into a safer version of its model. That switch will happen automatically if the chatbot detects a crisis. The version it moves to has been trained to follow rules more strictly and to resist prompts that might push it toward unsafe answers. Even if a user started in another mode, the system will force the change when a risk is detected.
Expert Advice Behind the Design
The new controls are being shaped with outside input. A council on well-being and a physician network with specialists in mental health, substance use, and adolescent care are part of the process. Their advice is helping to define what counts as a warning sign, how the chatbot should respond, and what escalation should look like when the risk is judged to be serious.
Broader Push on Safety
The changes fit into a larger plan to make ChatGPT safer. OpenAI has promised more updates over the next four months, some aimed at sensitive areas like eating disorders and substance use. The timing also follows a lawsuit in the United States in which a family alleged that ChatGPT gave harmful responses to their son before his death. That case has increased scrutiny of how AI systems behave in difficult moments.
The new parental tools are arriving at a point when chatbots are no longer seen as simple novelties. For many young users, they are part of private life. What the AI says in fragile moments could have real consequences, which is why OpenAI is moving now to add these controls.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
by Irfan Ahmad via Digital Information World
