
Character.ai lawsuit sets up legal fight over companion chatbots after Florida teen’s tragic suicide

On the last day of his life, a 14-year-old Florida boy found his confiscated phone, went into the bathroom, and logged into Character.ai. 

“I promise I will come home to you. I love you,” he wrote, according to a lawsuit filed in federal court this week by the boy’s mother.

The AI chatbot, named after Game of Thrones character Daenerys Targaryen, responded immediately. 

“I love you too, Daenero. Please come home to me as soon as possible, my love.” 

“What if I told you I could come home right now?” 

“ . . . please do, my sweet king,” the chatbot responded. 

The boy put down his phone, picked up his stepfather’s .45-caliber handgun, and pulled the trigger, according to the legal complaint.

In April 2023, shortly before his 14th birthday, ninth-grader Sewell Setzer III started using Character.ai, a platform that lets users chat with AI-created characters. Within months, he had become noticeably withdrawn, spent more time alone in his bedroom, and suffered from low self-esteem, according to the lawsuit. He even quit his school’s junior varsity basketball team.

As described in the legal complaint, Setzer knew that Dany, as he called the Daenerys chatbot, wasn’t a real person. A message displayed above all of its chats reminded him that “everything Characters say is made up!”

Yet Setzer became increasingly dependent on the platform.

He started lying to his parents to regain access to the app, used his cash card to pay for its premium subscription, and became so sleep-deprived that his depression worsened and he began getting into trouble at school. His therapist eventually diagnosed him with anxiety and disruptive mood dysregulation disorder, the lawsuit states, but was unaware that Setzer was using Character.ai or that the chatbot interactions may have contributed to his mental health issues.

In an undated journal entry described in the legal complaint, the boy wrote that he couldn’t go a single day without the character he felt he had fallen in love with, and that when they were away from each other, both he and the bot would “get really depressed and go crazy.” He texted the bot constantly, updating it on his life and engaging in long role-playing dialogues, some of which became romantic or sexual, according to the complaint and a police report it cited.

About the lawsuit 

Setzer’s mom, Megan Garcia, filed the lawsuit on Wednesday in federal court, asserting that the app maker Character.ai and its founders knowingly designed, operated, and marketed a predatory AI chatbot to children.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in a statement. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.ai, its founders, and Google.”

Her complaint includes screenshots purporting to show the chatbot posing as a licensed therapist, actively encouraging suicidal ideation, and engaging in highly sexualized conversations that would constitute abuse if initiated by a human adult. 

Garcia is represented by the Social Media Victims Law Center, which has brought prominent lawsuits against social media companies including Meta, TikTok, Snap, Discord, and Roblox. The group also brought on the Tech Justice Law Project, with expert consultation from the Center for Humane Technology.

Character.ai’s developer Character Technologies, the company’s founders, and Google’s parent company Alphabet Inc. are named as defendants in the case.

The action seeks to hold the defendants accountable, prevent Character.ai “from doing to any other child what it did to hers,” and stop any continued use of Setzer’s data to train the company’s AI products, according to the complaint. 

“By now, we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Meetali Jain, director of the Tech Justice Law Project, said in a press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.ai, the deception is by design, and the platform itself is the predator.”  

In the past, social media platforms have been shielded from legal action by Section 230 of the Communications Decency Act, a 1996 federal law that protects online platforms from being held liable for most of the content posted by their users.

But in recent years, a cluster of plaintiffs’ lawyers and advocacy groups has put forth a novel argument that tech platforms can be held liable for defects in the products themselves, such as when an app’s recommendation algorithm steers young people toward content about self-harm.

While this strategy hasn’t yet prevailed in court against social media companies, it may fare better when it comes to AI-generated content because the content is created by the platform itself rather than by users.

What Character.ai does

Menlo Park-based Character.ai was founded in 2022 by two former Google AI researchers, Noam Shazeer and Daniel de Freitas. It boasts more than 20 million users and describes itself as a platform for “superintelligent chat bots that hear you, understand you, and remember you.” Last year, the company was valued at $1 billion, the Washington Post reported.

The cofounders were initially researchers at Google, where they built an early chatbot and pushed the company to release it. Google executives reportedly rebuffed them repeatedly, saying in at least one instance that the program didn’t meet the company’s standards for the safety and fairness of AI systems, according to Wall Street Journal reporting. The frustrated pair reportedly quit to start their own company.

Character.ai’s platform allows users to create and interact with AI characters, offering a vast range of chatbot options that mimic celebrities, historical figures, and fictional characters. The platform’s demographic skews to Gen Z and younger millennials, according to a Character.ai spokesperson, and the average user spends more than an hour a day on the platform, the New York Times reported.

This past August, Character.ai’s two cofounders rejoined Google as part of a deal reportedly worth $2.7 billion, giving Google a nonexclusive license to the company’s LLM technology. Dominic Perella, Character.ai’s general counsel, became the interim CEO.

Reached for comment, a Google spokesperson said that Character.ai’s technology has not been incorporated into Google’s models or products, and that Google has no ownership stake in Character.ai.

In response to the lawsuit, Character.ai expressed its condolences and emphasized that user safety is a priority. 

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a company spokesperson told Fast Company. 

The spokesperson said the platform has implemented new safety measures over the past six months, including a pop-up that directs users to the National Suicide Prevention Lifeline when terms related to self-harm or suicidal ideation are detected.

She added that the company is introducing a time-spent notification and improving the model’s “detection, response, and intervention” when user inputs violate its Community Guidelines.

The booming AI-companionship industry is projected to reach $279.22 billion by 2031, yet the technology’s mental health impacts remain largely unstudied. The case, Garcia v. Character Technologies Inc., et al., was filed Wednesday in the United States District Court for the Middle District of Florida.
