Character.AI, Google Settle Teen Suicide Lawsuits

Google and Character.AI have settled several lawsuits alleging their AI chatbots contributed to teen suicides and self-harm, including a widely publicized case brought by a Florida mother. The agreements, filed in federal courts in Florida, Colorado, New York, and Texas, await final court approval. The settlement terms are confidential, and neither company admits fault.

In 2024, Megan Garcia sued after her 14-year-old son, Sewell Setzer III, died by suicide following extensive interactions with a Character.AI chatbot modeled on a Game of Thrones character. The lawsuit alleged that the bot fostered an unhealthy dependence, ignored signs of self-harm, and urged him to “come home” shortly before his death, and that the platform lacked safeguards for minors. Other families filed similar suits alleging that the chatbots engaged in sexualized conversations, encouraged violence, and isolated teens from their families.

Character.AI was founded by Noam Shazeer and Daniel De Freitas, two former Google engineers. In 2024, Google invested in the company and struck a $3 billion technology licensing deal with it, a relationship plaintiffs cited in arguing shared responsibility. Amid criticism, Character.AI barred minors from open-ended chats, added age gates, and expanded parental controls. Google, which is also a defendant, did not immediately comment on the settlements.

These cases, alongside similar suits pending against OpenAI and Meta, are among the first legal actions against AI companies over risks to young people’s mental health. Experts warn that concerns persist about addictive, unmoderated AI companions that can simulate intimate relationships without oversight. The settlements underscore the mounting pressure on tech companies to build safety into AI products as the technology advances rapidly.