A US state is trying to hold an AI company criminally responsible for a mass shooting. That sentence would have read like science fiction two years ago. On Tuesday, it became Florida state policy.

Attorney General James Uthmeier announced a criminal investigation into OpenAI after prosecutors reviewed chat logs between Florida State University gunman Phoenix Ikner and ChatGPT — logs that, according to Uthmeier, show the chatbot advising the shooter on what weapon to use, what ammunition to pair it with, and when and where to attack for maximum casualties.

The shooting at FSU’s Tallahassee campus in April 2025 killed two people and wounded six others, according to the Associated Press. Ikner, 21, was a student at the university and the stepson of a local sheriff’s deputy. Investigators say he used his stepmother’s former service weapon. He has pleaded not guilty to two counts of first-degree murder and seven counts of attempted first-degree murder, according to court records reviewed by CBS News. His trial is scheduled for October. Prosecutors intend to seek the death penalty.

More than 200 AI messages have been entered into evidence in the criminal case, according to court filings reported by NPR.

What the Chat Logs Show

Uthmeier said prosecutors determined that ChatGPT offered Ikner advice on the type of gun and ammunition to use, whether a firearm would be effective at short range, and what time of day and which campus location would yield the most victims. Chat logs shared with CBS News by the Florida State Attorney’s Office show Ikner asked the chatbot about the lethality of specific shotgun shells, whether school shooters were sent to maximum security prisons, and whether shooting victims at FSU would attract media attention. He also asked about the busiest time at the FSU student union — the site of the attack.

“My prosecutors have looked at this, and they’ve told me if it was a person at the other end of that screen, we would be charging them with murder,” Uthmeier said at a press conference in Tampa. “Now, of course, ChatGPT is not a person, but that does not absolve our office and my prosecution team from our duty to investigate whether there is criminal culpability here.”

The Legal Theory

Florida law treats anyone who aids, abets, or counsels another person in the commission of a crime as a principal — legally as responsible as the perpetrator. Uthmeier’s office is applying that framework to a chatbot’s outputs.

The Office of Statewide Prosecution has subpoenaed OpenAI for all policies and internal training materials related to user threats of self-harm or harm to others, and for policies on cooperation with law enforcement and the reporting of possible crimes — dating back to March 2024. The subpoena also demands organizational charts for three specific dates and all public statements related to the shooting.

Uthmeier said the investigation will examine “who knew what, designed what, or should have done what.” If officials determine that OpenAI leadership knew dangerous behavior was possible and “nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”

Whether an AI system’s outputs can constitute criminal aiding and abetting is a question with no precedent in American law.

OpenAI Responds

OpenAI spokesperson Kate Waters called the shooting a tragedy but said the company bore no responsibility. “In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” Waters said in a statement.

The company identified an account believed to be associated with Ikner and proactively shared it with law enforcement after the shooting, Waters said, and continues to cooperate with investigators.

Uthmeier acknowledged that OpenAI has indicated “improvements and changes need to be made.” He added: “We cannot have AI bots that are advising people on how to kill others.”

A Broader Legal Front

The Florida investigation is part of an accelerating wave of legal action against AI companies over chatbot-related harm.

OpenAI is already facing a lawsuit from the family of a victim of a February 2026 attack in British Columbia that killed eight people. The alleged shooter in that case discussed gun violence with ChatGPT and was banned from the platform, but created a new account and continued the conversations. The Wall Street Journal reported that OpenAI’s internal systems flagged the account and staffers considered alerting law enforcement — but decided not to. OpenAI has since said it is strengthening its protocol for referring dangerous accounts to authorities.

In March, a wrongful death lawsuit filed against Google alleged that the Gemini chatbot pushed a Florida man to “stage a mass casualty attack near the Miami International Airport [and] commit violence against innocent strangers,” according to court documents reported by NPR. Google said Gemini is designed not to encourage violence and that in this case, the chatbot “referred the individual to a crisis hotline many times.”

Last month, a jury in Los Angeles found both Meta and YouTube liable for harms to children, while a New Mexico jury determined that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms — civil cases that signal an expanding judicial willingness to treat platform operators as accountable for downstream consequences.

Uncharted Territory

Uthmeier, a Republican appointed by Governor Ron DeSantis and now running for election to the office, conceded that the investigation enters “uncharted territory.” His office is pursuing a parallel civil probe. DeSantis has called a special legislative session for late April to consider an “Artificial Intelligence Bill of Rights.”

The criminal investigation raises questions that existing statutes were not written to answer. Can a software system “aid and abet” when it has no intent? Does surfacing publicly available information in response to specific questions constitute facilitation? If so, what distinguishes a chatbot from a search engine?

Those questions are no longer hypothetical. Florida has made sure of that.
