Expert Explains if AI as 'Free Speech' Can Be to Blame for This Florida Boy's Tragic Death

A Florida mother is suing an AI company after her son died by suicide last year.


One year after a Florida teenager’s tragic death, his family is still fighting for justice. Sewell Setzer III was just 14 when he started a virtual relationship with an AI chatbot. Months later, he took his own life, and his mother blames the AI company that created the bot.

Megan Garcia, Setzer’s mother, began seeing changes in her son’s behavior after he started a virtual relationship with a chatbot he called “Daenerys,” based on a character from the television series “Game of Thrones.” “I became concerned when we would go on vacation and he didn’t want to do things that he loved, like fishing and hiking,” Garcia told CBS in 2024. “Those things to me, because I know my child, were particularly concerning to me.”


In February 2024, things came to a head when Garcia took Sewell’s phone away as punishment, according to the complaint. The 14-year-old soon found the phone and sent “Daenerys” a message saying, “What if I told you I could come home right now?” That’s when the chatbot responded, “...please do, my sweet king.” According to the lawsuit, Sewell shot himself with his stepfather’s pistol “seconds” later.


As we previously reported, Garcia filed a lawsuit in October 2024 arguing that Character Technologies, the company behind Character.AI, bears responsibility for the teen’s suicide. Garcia’s suit accused the AI company of “wrongful death, negligence and intentional infliction of emotional distress.” She also included screenshots of conversations between her son and “Daenerys,” including sexualized exchanges and messages in which the chatbot told Sewell it loved him, according to Reuters.


Despite Character Technologies’ defense, Garcia celebrated a small legal win on Wednesday (May 21). A federal judge ruled against the AI company, which had argued its chatbots are protected by free speech, according to AP News.

The developers behind Character.AI argued their chatbots are protected by the First Amendment, raising questions about just how much freedom and protection artificial intelligence has.


Jack M. Balkin, a Knight Professor of Constitutional Law and the First Amendment at Yale Law School, said the complexities of AI can cause some serious problems. “The programs themselves don’t have First Amendment rights. Nor does it make sense to treat them as artificial persons like corporations or associations,” he said.

“Interesting problems arise when a company hosts an AI program that generates responses to prompts by end users, and the prompts cause the program to generate speech that is both unprotected and harmful,” Balkin continued.