AI Governance · March 18, 2026 · 12 min read

AI Governance Is Not a Future Compliance Project

Boards are setting AI adoption targets. CEOs are celebrating deployment milestones. And somewhere downstream, a compliance officer is waiting for an AI-specific law to arrive before building a governance framework. That wait is producing liability right now. A review of major AI enforcement actions across multiple jurisdictions reveals a consistent pattern: not one required an AI-specific statute. Air Canada was held liable for chatbot misinformation under basic tort law. UnitedHealth and Cigna face class action claims under insurance contract and Medicare law for AI-driven claim denials. Workday faces a national class action under 1967 employment discrimination law for its AI hiring tools. A Berlin bank was fined €300,000 under GDPR's 2018 automated decision-making provisions. The legal infrastructure to hold organisations accountable for what their AI does was already in place. It is being actively used. This article examines why AI governance is not a future compliance obligation but a present-day legal risk — and why the decisions that close the governance gap can only be made at the board and CEO level.