How OpenAI Flagged a Potential School Shooting Threat in Canada Early

  • Published February 21, 2026

In the aftermath of one of Canada’s deadliest school shootings, a troubling detail has emerged: the gunman’s online activity had been flagged by OpenAI months before the attack, raising complex questions about the role of technology companies in preventing violence.

Jesse Van Rootselaar, 18, killed eight people in the remote British Columbia community of Tumbler Ridge last week before dying from a self-inflicted gunshot wound. The victims included the shooter’s mother, stepbrother, a teaching assistant, and five students aged 12 to 13.

A Warning Unheeded

OpenAI revealed Friday that it had identified Van Rootselaar’s account in June 2025 through abuse detection efforts, specifically for “furtherance of violent activities.” The company banned the account for violating its usage policy but determined that the activity did not meet its threshold for referral to law enforcement.

The threshold for contacting authorities, according to OpenAI, is whether a case involves “an imminent and credible risk of serious physical harm to others.” The company said it did not identify credible or imminent planning in Van Rootselaar’s account activity.

After the Tragedy

Following the shooting, OpenAI employees reached out to the Royal Canadian Mounted Police with information about the individual and their use of ChatGPT. “Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

The Wall Street Journal first reported OpenAI’s revelation about the flagged account.

The Challenge of Prediction

The case highlights the difficult position technology companies occupy. They monitor vast amounts of user activity for signs of dangerous behavior but must balance the desire to prevent harm against privacy concerns and the risk of false positives.

OpenAI’s threshold—requiring imminent and credible risk—reflects a cautious approach. But in Van Rootselaar’s case, that caution meant that warning signs visible to the company did not reach authorities who might have intervened.

The shooter had a history of mental health-related contacts with police, according to the RCMP. Whether earlier awareness of the online activity would have changed the outcome is impossible to know.

A Community in Mourning

Tumbler Ridge, a town of 2,700 people in the Canadian Rockies, is now a community in mourning. The attack was Canada’s deadliest since 2020, when a gunman in Nova Scotia killed 22 people.

For the families of the victims, the revelation that warning signs existed but were not acted upon adds a layer of anguish to an already devastating loss. For technology companies, it underscores the immense responsibility they bear and the imperfect tools they have to meet it.

The motive for the shooting remains unclear. What is clear is that a young man with access to firearms, a history of mental health struggles, and online activity that raised alarms at a major technology company was able to carry out an attack that took eight lives. The question—what could have been done differently—will haunt both the company and the community for years to come.


Written By
thetycoontimes
