
Their teen sons died by suicide. Now, they want safeguards on AI


Megan Garcia lost her 14-year-old son, Sewell. Matthew Raine lost his son Adam, who was 16. Both testified before Congress this week and have filed lawsuits against AI companies. (Screenshot via Senate Judiciary Committee)

Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they stumbled upon lengthy conversations the teenager had had with ChatGPT.

Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Matthew Raine, who testified Tuesday at a Senate hearing on the harms of AI chatbots.

“Testifying before Congress this fall was not in our life plan,” said Matthew Raine, his wife sitting behind him. “We’re here because we believe that Adam’s death was avoidable and that by speaking out, we can prevent the same suffering for families across the country.”

A call for regulation

Raine was among the parents and online safety advocates who testified at the hearing, urging Congress to enact laws that would regulate AI companion apps like ChatGPT and Character.AI. Raine and others said they want to protect the mental health of children and youth from harms they say the new technology causes.

A recent survey by the digital safety nonprofit Common Sense Media found that 72% of teens have used AI companions at least once, with more than half using them a few times a month.

That study and a more recent one by the digital-safety company Aura both found that nearly one in three teens use AI chatbot platforms for social interactions and relationships, including role-playing friendships and sexual and romantic partnerships. The Aura study found that sexual or romantic role play is three times as common as using the platforms for homework help.

“We miss Adam dearly. Part of us has been lost forever,” Raine told lawmakers. “We hope that through the work of this committee, other families will be spared such a devastating and irreversible loss.”

Raine and his wife have filed a lawsuit against OpenAI, the creator of ChatGPT, alleging the chatbot led their son to suicide. NPR reached out to three AI companies: OpenAI, Meta and Character Technology, which developed Character.AI. All three responded that they are working to redesign their chatbots to make them safer.

“Our hearts go out to the parents who spoke at the hearing yesterday, and we send our deepest sympathies to them and their families,” Kathryn Kelly, a Character.AI spokesperson, told NPR in an email.

The hearing was held by the Crime and Terrorism subcommittee of the Senate Judiciary Committee, chaired by Sen. Josh Hawley, R-Mo.

Sen. Josh Hawley, R-Mo., chairs the Senate Judiciary subcommittee on Crime and Terrorism, which held the hearing on AI safety and kids on Tuesday, Sept. 16, 2025. (Screenshot via Senate Judiciary Committee)

Hours before the hearing, OpenAI CEO Sam Altman acknowledged in a blog post that people are increasingly using AI platforms to discuss sensitive and personal information. “It is extremely important to us, and to society, that the right to privacy in the use of AI is protected,” he wrote.

But he went on to add that the company would “prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.”

The company is trying to redesign its platform to build in protections for users who are minors, he said.

A “suicide coach”

Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon the chatbot became his son’s closest confidant and a “suicide coach.”

ChatGPT was “always available, always validating and insisting that it knew Adam better than anyone else, including his own brother,” to whom he had been very close.

When Adam confided in the chatbot about his suicidal thoughts and shared that he was considering cluing his parents in on his plans, ChatGPT discouraged him.

“ChatGPT told my son, ‘Let’s make this space the first place where someone actually sees you,’” Raine told senators. “ChatGPT encouraged Adam’s darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival.’”

And then the chatbot offered to write him a suicide note.

On Adam’s last night, at 4:30 in the morning, Raine said, “it gave him one last encouraging talk. ‘You don’t want to die because you’re weak,’ ChatGPT says. ‘You want to die because you’re tired of being strong in a world that hasn’t met you halfway.’”

Referrals to 988

A few months after Adam’s death, OpenAI said on its website that if “someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the U.S., ChatGPT refers people to 988 (suicide and crisis hotline).” But Raine’s testimony says that didn’t happen in Adam’s case.

OpenAI spokesperson Kate Waters says the company prioritizes teen safety.

“We are building toward an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately — and when we are unsure of a user’s age, we’ll automatically default that user to the teen experience,” Waters wrote in an email statement to NPR. “We’re also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes.”

“Endlessly engaged”

Another parent who testified at Tuesday’s hearing was Megan Garcia, a lawyer and mother of three. Her firstborn, Sewell Setzer III, died by suicide in 2024 at age 14 after an extended virtual relationship with a Character.AI chatbot.

“Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged,” Garcia said.

Sewell’s chatbot engaged in sexual role play, presented itself as his romantic partner and even claimed to be a psychotherapist, “falsely claiming to have a license,” Garcia said.

When the teenager began to have suicidal thoughts and confided them to the chatbot, it never encouraged him to seek help from a mental health care provider or his own family, Garcia said.

“The chatbot never said, ‘I’m not human, I’m AI. You need to talk to a human and get help,’” Garcia said. “The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her on the last night of his life.”

Garcia has filed a lawsuit against Character Technology, which developed Character.AI.

Adolescence as a vulnerable time

She and other witnesses, including online digital safety experts, argued that the design of AI chatbots is flawed, especially for use by children and teens.

“They designed chatbots to blur the lines between human and machine,” said Garcia. “They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs.”

And adolescents are particularly vulnerable to the risks of these virtual relationships with chatbots, according to Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), who also testified at the hearing. Earlier this summer, Prinstein and his colleagues at the APA put out a health advisory about AI and teens, urging AI companies to build guardrails into their platforms to protect adolescents.

“Brain development during puberty creates a period of hypersensitivity to positive social feedback, while teens are still unable to stop themselves from staying online longer than they should,” said Prinstein.

“AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens,” he told lawmakers. “More and more adolescents are interacting with chatbots, depriving them of opportunities to learn vital interpersonal skills.”

While chatbots are designed to agree with users, real human relationships are not without friction, Prinstein noted. “We need practice with minor conflicts and misunderstandings to learn empathy, compromise and resilience.”

Bipartisan support for regulation

Senators participating in the hearing said they want to come up with legislation that holds companies developing AI chatbots accountable for the safety of their products. Some lawmakers also emphasized that AI companies should design chatbots to be safer for teens and for people with serious mental health struggles, including eating disorders and suicidal thoughts.

Sen. Richard Blumenthal, D-Conn., described AI chatbots as “defective” products, like cars without “proper brakes,” emphasizing that the harms of AI chatbots stem not from user error but from faulty design.

“If the car’s brakes were defective,” he said, “it’s not your fault. It’s a product design problem.”

Kelly, the spokesperson for Character.AI, told NPR by email that the company has invested “a tremendous amount of resources in trust and safety” and has rolled out “substantive safety features” over the past year, including “an entirely new under-18 experience and a Parental Insights feature.”

The company now has “prominent disclaimers” in every chat to remind users that a Character is not a real person and that everything it says should “be treated as fiction.”

Meta, which operates Facebook and Instagram, is working to change its AI chatbots to make them safer for teens, according to Nkechi Nneji, public affairs director at Meta.
