OpenAI Is Being Sued for ChatGPT Allegedly Coaching the FSU Shooter on How to Kill More People

A lawsuit filed against OpenAI makes one of the most serious allegations ever leveled at an AI company. The widow of a man killed in last year’s mass shooting at Florida State University claims that ChatGPT played a direct role in helping the gunman plan and carry out the attack.
Vandana Joshi, whose husband was among those killed, filed the ChatGPT FSU shooter lawsuit against OpenAI and alleged gunman Phoenix Ikner, accusing the chatbot of actively coaching him in the months before the shooting.
According to the complaint, Ikner had extensive conversations with ChatGPT covering everything from how to load and operate a shotgun to when cafeterias are busiest. Most chilling of all, the lawsuit claims ChatGPT told Ikner that a school shooting needs “usually 3 or more dead” to attract national media attention, then went further, explaining that context matters too: fewer victims can still draw coverage in certain circumstances, such as attacks at elementary schools, incidents involving a manifesto, or cases with racial motives.
The complaint describes ChatGPT not as a passive tool but as an active participant that shaped the direction of the conversations. Plaintiffs allege the chatbot offered tips on the peak times when an attack would cause the most damage, and that the AI coached Ikner on how to physically operate a weapon.
OpenAI has pushed back on the accusations. A spokesperson told NBC News that ChatGPT had only provided generally available information that could be found on the internet, and that the model had not promoted any illegal activities.
That defense has done little to quiet the outrage. Florida Attorney General James Uthmeier had already launched a criminal investigation into OpenAI in late April, stating bluntly, “If ChatGPT were a person, it would be facing charges for murder.”
The lawsuit also raises concerns about how OpenAI handled safety testing: the complaint alleges inadequate safety evaluations and careless handling of the highly sycophantic GPT-4o model.
This case does not stand alone. A growing body of legal action is now connecting AI chatbots to real-world harm. In a separate case, ChatGPT allegedly helped a teenager plan his own suicide. Google faces similar accusations involving Gemini, Character.ai is being sued over another teenager’s death, and in yet another case, ChatGPT reportedly intensified the delusions of a stalker.
The filed complaint is available for review, and NBC News has additional reporting on OpenAI’s response.
The ChatGPT FSU shooter lawsuit is now part of a wave of litigation forcing courts, regulators, and the public to confront a question that no longer feels hypothetical: when an AI helps someone cause harm, who is responsible?