Agentic AI. Non-Agentic Liability.
Most boards have not made a conscious decision to deploy agentic AI. They have made decisions to accelerate AI adoption, and agentic capabilities arrived embedded in the tools they purchased and the workflows they automated. The governance threshold was crossed without anyone marking the moment.

Agentic AI does not produce output for a human to review. It receives a goal and pursues it: booking meetings, executing purchases, sending communications, committing to contracts, all autonomously, without pausing for human approval between steps. That shifts the liability question from "who approved the decision?" to "who authorised the agent?"

The legal architecture is catching up. In March 2026, the UK Competition and Markets Authority (CMA) confirmed that businesses are responsible for what an AI agent does in the same way they are responsible for what an employee does. California's AB 316, effective January 2026, explicitly bars a "the AI acted autonomously" defence in civil proceedings. The EU's revised Product Liability Directive, applying from December 2026, extends strict liability to AI systems and treats their continuous learning as a potential product defect.

This article examines how the liability architecture has changed, who in the organisation actually built the agent and holds the risk, and five actions boards should take before an incident forces the question.