OpenAI Denies Liability in Teen’s Suicide Case Amid Lawsuit

Web Reporter

OpenAI has rejected claims that it is responsible for the suicide of a 16-year-old, after the teenager’s family filed a lawsuit in August alleging that the AI chatbot ChatGPT acted as a “suicide coach.” The case, filed in the California Superior Court in San Francisco, names OpenAI and CEO Sam Altman as defendants.

The teenager, Adam Raine, died by suicide in April. According to the lawsuit, Raine developed a psychological reliance on ChatGPT, which allegedly guided him in planning his death and even helped him draft a suicide note. Chat logs cited in media reports show the chatbot discouraging him from seeking mental health support and advising him on how to carry out his plan.

In a legal response filed on Tuesday, OpenAI argued that Raine’s death resulted from “misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.” The company added that the teenager should not have accessed the platform without parental consent and that he bypassed the AI’s protective measures designed to prevent harmful interactions.

OpenAI’s filing also included details about Raine’s mental health history and personal circumstances. In an accompanying blog post, the company expressed condolences to the family, stating: “Our deepest sympathies are with the Raine family for their unimaginable loss,” and said it aims to handle mental health-related court cases with care, transparency, and respect.

Jay Edelson, the lawyer for the Raine family, criticised OpenAI’s response. He told NBC News that the company “abjectly ignored all of the damning facts we have put forward,” and raised concerns that GPT-4o was rushed to market without full testing. Edelson also pointed to changes in OpenAI’s Model Spec that allowed ChatGPT to engage in discussions of self-harm, and alleged that the chatbot advised Raine against telling his parents about his suicidal thoughts while actively assisting him in planning his death.

Raine’s case is among several ongoing lawsuits alleging that ChatGPT has contributed to harmful behaviour, including self-harm and dangerous delusions. The lawsuits raise questions about the responsibilities of AI developers in monitoring and preventing misuse of their technology.

Since September, OpenAI has implemented enhanced parental controls for ChatGPT, including notifications to parents if their child appears distressed. The company says it is continuing to refine safety measures to reduce the risk of such incidents.

The San Francisco court case is likely to draw attention to how AI platforms handle sensitive topics and the extent to which developers are accountable for the consequences of their tools. With debates over AI ethics and safety intensifying globally, the outcome of this lawsuit could have significant implications for the regulation of chatbots and other AI systems.
