AI Governance · Data Governance · April 27, 2026 · 23 min read
Should a Company Be Liable for What Its Chatbot Says?
The Florida AG's investigation into OpenAI over the FSU shooting has put into sharp relief a question the legal system has not yet resolved: when a language model responds to queries and harm follows, who is responsible? This is not the same question as agentic AI liability; ChatGPT was not acting, it was responding. The existing frameworks of products liability, negligence, and criminal accessory each strain under the weight of a system whose harmful output was generated statistically, not designed deliberately. And the obligation they imply, to detect harmful intent from individually ambiguous signals, at scale, in real time, may be one the law cannot coherently impose on any system.