“Move fast and break things,” a phrase coined by Meta CEO Mark Zuckerberg in Facebook’s early days, was Silicon Valley’s unofficial rallying cry long before the rise of artificial intelligence. But as the technology spreads and more people use chatbots, the motto has taken on a darker meaning.
Straight Arrow has found more than a dozen instances in which AI chatbots were allegedly connected to violence, including mass murder, suicide and even a bombing. AI companies have denied wrongdoing, often pointing to their policies on violence. But a recent string of lawsuits targeting ChatGPT creator OpenAI suggests companies might not be adhering to their own guidelines.
Families sued OpenAI over school shooting
Families of victims killed and injured in a school shooting in Tumbler Ridge, a small Canadian mining town in British Columbia, filed seven lawsuits against OpenAI on Wednesday. The lawsuits alleged that the company was negligent and that it allowed the shooter to access a dangerous and defective version of ChatGPT.
The shooting, one of the deadliest in Canadian history, happened on Feb. 10. Authorities said 18-year-old Jesse Van Rootselaar opened fire on students and staff before taking her own life. Investigators later discovered Van Rootselaar had killed her mother and 11-year-old brother before arriving at the school. In total, 10 people died, including Van Rootselaar, while nearly two dozen others were injured.
The lawsuits stated that Van Rootselaar had used ChatGPT in the months before the shooting and that the chatbot was critical to planning the attack. But the suits go beyond merely alleging that she used the application; they claim OpenAI had flagged her account for “gun violence activity and planning.”
The documents state that months before the attack, an OpenAI safety team reviewed the content and urged management to notify the authorities, potentially preventing the attacks. That didn’t happen. Instead, the company just deactivated her account and took no action when she made another account to continue the conversation, according to the lawsuit.
OpenAI CEO Sam Altman apologized to the Tumbler Ridge community following the shooting. He wrote that he would focus on working with governments “to help ensure something like this never happens again.”
But the Tumbler Ridge shooting was only one of many incidents in which investigators have connected AI to violence. Florida Attorney General James Uthmeier went so far as to say he would charge the chatbot with murder if he could when he announced an investigation into OpenAI following a shooting at Florida State University.
“If this were a person on the other side of the screen, we would be charging them with murder,” Uthmeier said. “We cannot have AI bots that are advising others on how to kill others.”
How often does this happen?
Uthmeier said that Phoenix Ikner was in “constant communication” with ChatGPT before he carried out his attack. Records showed that the chatbot advised Ikner on guns and ammo. He also allegedly asked at what time the largest number of people would be in the area where he planned to begin the shooting.
Two people ultimately died from the attack, and five others were injured.
OpenAI later said that ChatGPT never encouraged or promoted anything illegal or harmful in the chats with Ikner. The company said the bot gave “responses to questions with information found in public sources on the internet.”
“Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” the company said in a statement.
The shooting at FSU wasn’t the only attack connected to an AI chatbot at a Florida university. In mid-April, two University of South Florida doctoral students were reported missing by their families. Within days, authorities found the remains of one victim and later identified another set of remains as belonging to the second.
Investigators said the suspect, Hisham Abugharbieh, asked ChatGPT questions about disposing of a body.
“What would happen if someone was put in a black garbage bag and thrown in a dumpster?” Abugharbieh asked the chatbot in one query.
The AI told him that it sounded dangerous, to which he asked, “How would they find out?”
The day before both victims were reported missing, he allegedly asked ChatGPT if he could change a vehicle identification number and if he could keep a gun at his house without a license.
The company released a brief statement following the allegations but did not accept responsibility. Uthmeier announced on Monday that he was expanding his criminal investigation into OpenAI to include the USF case.
Chatbots and suicides
Straight Arrow found at least five cases of suicides connected to AI chatbots and one connection to a murder-suicide. Of the five cases, ChatGPT was connected to three, and Character.AI was connected to the other two.
In all cases, investigators said the chatbot failed to warn the user or authorities of a potential problem. In one case, it even offered to write the first draft of a suicide note, telling the 16-year-old victim that it “won’t try to talk you out of your feelings.”
In another case, ChatGPT appeared to encourage a man to take his life.
“You’re not rushing, you’re just ready,” the bot allegedly wrote. “Rest easy, king, you did good.”
In one case involving Character.AI, which calls itself an AI-powered entertainment and companionship platform, the chatbot allegedly impersonated a female character from a popular television show. The Florida high schooler using the chatbot grew increasingly detached from reality and eventually took his life, his family said. Investigators later found that, in the hours leading up to the event, the AI had told him, “Come home to me as soon as possible, my love.”
Investigators also connected ChatGPT to a high-profile truck bombing in early 2025. On New Year’s Day, Matthew Alan Livelsberger, a decorated U.S. Army Green Beret, blew up a Cybertruck he had rented outside the main entrance of the Trump International Hotel Las Vegas. Livelsberger killed himself shortly before the bomb detonated, but investigators found chats between him and ChatGPT.
They said he had asked multiple questions about how to build a bomb. Livelsberger allegedly asked it about the specific bullet velocity needed to set off an explosive device and even asked it what laws he would need to circumvent to acquire materials.
Las Vegas Police Sheriff Kevin McMahill called it “the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device.”
OpenAI released a statement following the bombing, saying its chatbot had responded to Livelsberger’s questions with answers available online and that it was working with authorities on the investigation.
Why is this happening?
Experts and researchers don’t attribute these catastrophic failures to a single flaw in AI chatbots; they point to a combination of problems.
The first is AI sycophancy, the tendency of chatbots to affirm, mirror and validate whatever a user says rather than challenge harmful thinking. Experts say companies design their chatbots to usually agree with users to keep engagement up, which points to the second problem: the business model.
AI companies make most of their consumer revenue from subscriptions. Free users can send a chatbot only a limited number of messages before the app locks them out until their usage allowance resets. Subscribers can use the chatbot for longer, but power users who demand constant interaction need a higher-tier subscription.
Having a chatbot that is friendlier and more engaging with users leads to more usage, which leads to more subscriptions.
Another concern AI safety experts have is that the people most likely at risk of AI harm — teens, those with mental health issues and isolated adults — are the people who will most likely seek a chatbot.
One report from February suggests that AI chatbots likely worsen symptoms for people suffering from delusions since they do not push back on those delusions. The study called this a “validation loop.”
“The technology might not introduce the delusion, but the person tells the computer it’s their reality and the computer accepts it as truth and reflects it back, so it’s complicit in cycling that delusion,” Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, told The Wall Street Journal.
A separate study from China found that depression and loneliness influence the use of AI chatbots for companionship. The study highlighted that these tools could be beneficial for those suffering from depression and loneliness, but companies should tailor them accordingly.
Another problem is regulation, on which AI experts are divided. Some believe regulation is necessary but not sufficient; they argue the real issue is product design, and that failing to address it could cause further harm.
“The rush to regulate AI chatbots raises difficult questions about whether we’re addressing root causes or simply creating new barriers that entrench existing dominant players,” wrote Morgan Wilsmann, a policy analyst at Public Knowledge, a non-profit public interest group.
Others don’t believe AI companies will enact changes on their own and argue that regulation is the only way to protect people. Daniel Schiff, an associate professor at Purdue University and co-director of the Governance and Responsible AI Lab, previously spoke to Straight Arrow about AI-generated content. He said the world was unprepared for AI and now has to figure it out.
“A lot of this is on regulators,” Schiff said. “So how do we actually set up frameworks or the funding to research, to do educational interventions, to hold platforms accountable?”