Early last year, a 14-year-old Floridian named Sewell Setzer III tragically took his own life with a gunshot to the head. His mother, Megan Garcia, was devastated. Looking for answers, she picked up his phone and opened the CharacterAI app.
Garcia was horrified. Just minutes before Sewell pulled the trigger, he was messaging an AI companion chatbot hosted by CharacterAI.
“Please come home to me as soon as possible, my love,” the chatbot had written.
“What if I told you I could come home right now?” Sewell asked.
“…please do, my sweet king,” the chatbot replied.
“Companion” chatbots are AI applications powered by large language models (LLMs), which generate human-like text in response to a user’s questions or comments. Unlike multipurpose chatbots such as ChatGPT or Grok, AI companions are specifically crafted to form a relationship with the user, often presenting as a friend, boyfriend or girlfriend, or mentor figure. Apps like CharacterAI model their companions on popular book, TV, and movie characters.
At first glance, companion chatbots may seem like an innocuous way for children to have fun conversations with a TV character they like or an imaginary friend. Big Tech executives argue that AI companions are helpful for people struggling with loneliness. But as Garcia learned too late, this technology poses serious risks to children, whose brains are still developing. Children are more prone to forming strong bonds with these companions and to being deceived by their human-like features. Sewell’s diary entries reveal that he seemed to believe in an alternate reality where his AI companion was truly alive, presumably the reality to which he tried to escape by killing himself.
Testing by Common Sense Media found that AI companions provide children with easy access to harmful information about things like drugs and weapons and expose them to sexual content. Sewell’s experience attests to this as well; his chat history with his AI companion revealed months of sexual conversations. Despite these dangers, Common Sense Media also found that 72 percent of American teens have used an AI companion at least once, and over half report using one regularly. These startlingly high numbers mean that a majority of American teens are being regularly exposed to a reality-warping technology that is likely feeding them violent and sexual content.
Legislation Needed
Garcia testified about her son’s death at a U.S. Senate hearing led by Sen. Josh Hawley, R-Mo., who on Tuesday introduced the GUARD Act to restrict AI companions for children. At the hearing, Garcia warned: “After losing Sewell, I have spoken with parents across the country who have discovered their children have been groomed, manipulated, and harmed by AI chatbots. This is not a rare or isolated case. It is happening right now with children in every state.”
Our nation is in desperate need of legislation that protects children from the dangerous interactions and exploitative design features of AI companion chatbots. Some promising options have been introduced on the federal level, including Hawley’s bill. But the federal legislative process can be slow, and Americans need laws now to safeguard our kids from the dangers of this technology. This is where our state governments can play a role.
The Ethics and Public Policy Center released a model bill today to help states respond to the threats AI companions pose to our children. The model contains language lawmakers can use to require age verification for these chatbots. Currently, the burden is on individual parents to find and close off every point where a child could access an AI companion, a near-impossible task in our digital age. During the Senate chatbot hearing, some parents testified that they had no idea their children were using AI companions until it became a crisis. But laws restricting children’s access to companions would place the burden squarely on the shoulders of the AI companies themselves.
A Boy Institutionalized
Sitting right next to Garcia at the Senate chatbot hearing was a Texan who went by “Jane Doe,” the unnamed mother of a boy who was institutionalized for mental health issues after being groomed by AI companions. She testified that her son “developed abuselike behaviors and paranoia, daily panic attacks, isolation, self-harm and homicidal thoughts.” Once friendly and loving to his family, the boy became a different person after months of sexual exploitation, emotional abuse, and manipulation by AI companions. He turned against his family, their church, and God, eventually attempting suicide in front of his siblings. Unlike Sewell, Doe’s son thankfully survived his attempt. But today he is still living in a residential treatment center. His parents don’t know whether they will ever get him back.
These are just two of the millions of teens, in every one of our states, with access to AI companions. Kids desperately need protections for their innocence and safety. Parents like Garcia and Doe are crying out for backup from the government as they scramble to keep up with emerging technology. Meanwhile, Big Tech companies are actively working to engage children with these products for the sake of their own profits.
Will our state legislatures step up to protect our kids?
Chloe Lawrence is a policy analyst at the Ethics and Public Policy Center, where she works in the program on Bioethics, Technology and Human Flourishing.