
“AI Chatbot Creators Enhance Safety Measures for Teens”

OpenAI and Meta, creators of artificial intelligence chatbots, have announced adjustments to how their technology interacts with teenagers and users displaying signs of mental distress. OpenAI, the developer of ChatGPT, will soon introduce new controls allowing parents to connect their accounts to their teens’ accounts. This feature will enable parents to customize settings, disable certain features, and receive alerts when their teen is in significant distress.

The company states that regardless of the user’s age, its chatbots will redirect distressing conversations to more advanced AI models capable of providing appropriate responses. This update follows a lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide earlier this year; the suit alleges that ChatGPT played a role in his death.

Meta, the parent company of Instagram, Facebook, and WhatsApp, has also implemented measures to prevent chatbots from discussing self-harm, suicide, disordered eating, and inappropriate romantic topics with teenagers. Instead, these conversations are directed to expert resources. Meta already offers parental controls for teen accounts.

A recent study published in Psychiatric Services highlighted inconsistencies in how three popular AI chatbots — ChatGPT, Google’s Gemini, and Anthropic’s Claude — responded to suicide-related queries, and called for further refinement of these technologies. Lead author Ryan McBain emphasized the importance of independent safety benchmarks, clinical testing, and enforceable standards in ensuring the well-being of teenagers in online spaces.

McBain described the new features from OpenAI and Meta, such as parental controls and improved conversation routing, as positive steps. However, he stressed the need for ongoing vigilance and regulation in a field where the risks for teenagers are notably high.
