OpenAI CEO Sam Altman has issued a public apology to the community of Tumbler Ridge, British Columbia, after the company failed to notify law enforcement about a ChatGPT account linked to the suspect in a February mass shooting that left eight people dead and 25 injured.

In a letter released Friday and reported by Tumbler Ridgelines, Altman said OpenAI should have reported the account, tied to 18-year-old Jesse Van Rootselaar, after banning it in June 2025 for activity related to the “furtherance of violent activities.”

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

The February shooting began at a home, where local police say Van Rootselaar killed her 39-year-old mother, Jennifer Jacobs, and her 11-year-old stepbrother, Emmett Jacobs, before opening fire at Tumbler Ridge Secondary School. Five children and one educator were killed at the school, and 25 others were injured; Van Rootselaar later died by suicide.

OpenAI previously disclosed that its abuse-detection systems had flagged the account months before the attack. Company staff considered notifying the Royal Canadian Mounted Police but ultimately determined the activity did not meet the threshold for a “credible or imminent” threat of serious physical harm, and instead banned the account for violating usage policies.

Altman said he had spoken with Tumbler Ridge Mayor Darryl Krakowka and British Columbia Premier David Eby, who “conveyed the anger, sadness, and concern” felt across the community. The mayor and premier asked for a public apology, Altman said, adding that it was delayed to give residents time to grieve.

“I reaffirm the commitment I made to the mayor and the premier to find ways to prevent tragedies like this in the future,” Altman wrote.
“Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again.”

The episode arrives amid intensifying scrutiny of how AI companies handle signs of real-world violence and mental-health crises. Regulators and investigators are already probing whether conversational AIs influenced other violent incidents, including a Florida inquiry into whether ChatGPT factored into a 2025 mass shooting suspect’s actions, and a lawsuit alleging that Google’s Gemini exacerbated a man’s delusions before his suicide. Independent research has also warned that some models can reinforce paranoia and dangerous beliefs.

Eby said the apology was “necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge,” and pledged continued support for the community and the mayor’s efforts.

Altman’s letter comes as he prepares for a civil trial against Elon Musk later this week.

OpenAI had not responded to Decrypt’s request for comment at the time of publication.

For crypto readers, the episode underscores growing regulatory and public pressure on platform governance and content moderation, issues that closely intersect with debates in the crypto and web3 space over moderation, liability, and the role of decentralized versus centralized control in preventing real-world harm.