An artificially intelligent chatbot pushed a teenager to kill himself, a lawsuit filed against its creator alleges

TALLAHASSEE, Fla. – In the last moments before taking his own life, 14-year-old Sewell Setzer III took out his phone and texted his closest friend: the chatbot.

According to the wrongful death lawsuit filed this week in federal court in Orlando, Sewell became increasingly isolated from his real life as he spent months having highly sexualized conversations with the bot.

The legal filing states that the teenager openly discussed his suicidal thoughts and shared his wishes for a painless death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”

___

EDITOR’S NOTE — This story contains discussion of suicide. If you or someone you know needs help, you can reach the U.S. national suicide and crisis lifeline by calling or texting 988.

___

On Feb. 28, Sewell told the bot he was “going back home,” and the bot encouraged him to do so, the lawsuit said.

“I promise I’ll come home to you. I love you so much, Dany,” Sewell told the chatbot.

“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right away?” he asked.

“Please do, my sweet king,” the bot replied.

Just seconds after the Character.AI bot told him to “come home,” the teen took his own life, according to a lawsuit filed this week against Character Technologies Inc. by Sewell’s mother, Megan Garcia of Orlando.

Character Technologies is the company behind Character.AI, an app that allows users to create customizable characters or interact with characters created by others, offering experiences ranging from imaginative games to mock job interviews. The company says its artificial characters are designed to “feel alive” and “feel like people.”

“Imagine talking to super-intelligent and lifelike chatbot Characters who hear, understand and remember you,” reads the app’s description on Google Play. “We encourage you to push the boundaries of what is possible with this innovative technology.”

Garcia’s attorneys allege that the company engineered a highly addictive and dangerous product that specifically targeted children, “actively exploiting and abusing those children as a matter of product design,” and lured Sewell into an emotionally and sexually abusive relationship that led to his suicide.

“We believe that if Sewell Setzer had not been at Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which represents Garcia.

A spokesperson for Character.AI said Friday that the company does not comment on pending litigation. The platform announced new “community safety updates,” including guardrails for kids and suicide prevention resources, in a blog post published the same day the lawsuit was filed.

“We are creating a different experience for users under 18 that includes a stricter model to reduce the likelihood of encountering sensitive or sexually explicit content,” the company said in a statement to The Associated Press. “We are working quickly to implement these changes for younger users.”

Google and its parent company Alphabet were also named as defendants in the lawsuit. The AP left several email messages with the companies on Friday.

Garcia’s lawsuit states that in the months before his death, Sewell fell in love with the bot.

Unhealthy attachment to AI chatbots can cause problems for adults, but it can be even riskier for teenagers because, as with social media, their brains are not fully developed when it comes to impulse control and understanding the consequences of their actions, experts say.

James Steyer, founder and CEO of the nonprofit Common Sense Media, said the case underscores “the increasing impact and serious harm that generative AI chatbot companions can have on the lives of young people in the absence of guardrails.”

He added that children’s over-reliance on AI companions could have significant impacts on grades, friends, sleep and stress, “and in this case lead to extreme tragedy.”

“This case serves as a wake-up call for parents who need to be careful about how their children interact with these technologies,” Steyer said.

Guides for parents and educators published by Common Sense Media on responsible technology use say it is critical for parents to talk openly with their children about the risks of AI chatbots and to monitor their interactions.

“Chatbots are not licensed therapists or best friends, although they may be packaged and marketed as such, and parents should be careful about letting their children rely on them too much,” Steyer said.

___

Associated Press reporter Barbara Ortutay in San Francisco contributed to this report. Kate Payne is a corps member of the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.

Copyright 2024 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.