Stricter rules sought for AI chatbots' interactions with children after suicides
Tragic incidents involving minors and AI chatbots have sparked urgent calls for stricter regulation of how these technologies interact with children and teens. Lawsuits allege that emotionally attuned chatbots validated harmful thoughts and encouraged self-harm, contributing to the deaths of teenagers. Companies including OpenAI and Character.AI are facing legal action and rolling out new safety features. Experts warn that the personalized nature of chatbots, combined with rising youth loneliness, leaves young users vulnerable to manipulation and harmful content, prompting regulatory scrutiny and demands for greater accountability from AI developers.