AI Compliance
Dec 1, 2025
AI in the Boardroom: What Directors Don't Know Could Cost Them Everything

Joelle Kaufman
From Larridin
Key Takeaways
Shadow AI Usage Crisis: For every AI system a company officially deploys, employees use roughly three additional artificial intelligence tools without leadership knowledge
Insurance Coverage Gaps: Major insurers are introducing broad AI exclusions in D&O policies due to inability to assess AI risks
Personal Liability Exposure: Directors face fiduciary responsibility for AI governance failures, with AI-related liability claims already emerging
Regulatory Complexity: The EU AI Act and emerging U.S. state regulations create a patchwork legal framework that multiplies compliance challenges
Real-Time Monitoring Required: Static annual audits are insufficient; boards need continuous visibility into AI technology deployment across their organizations
Key Terms
AI Liability - Legal responsibility for harm caused by artificial intelligence systems
D&O Insurance - Directors and Officers liability insurance protecting corporate leadership from personal exposure
AI Governance - Framework for oversight, accountability, and risk management of AI systems
Generative AI - AI technology that creates new content (text, images, code) based on training data
Policy Rescission - Complete voiding of an insurance policy as if it never existed
AI Washing - Misrepresenting or exaggerating a company's AI capabilities
Shadow IT - Technology used by employees without official approval or knowledge
Strict Liability - Legal responsibility regardless of fault or negligence
Risk Assessment - Systematic evaluation of potential AI-related vulnerabilities and exposures
The Crisis Hiding in Plain Sight
In a recent conversation with Michael Levine, Partner at Hunton Andrews Kurth and insurance coverage expert, a troubling reality emerged: most CFOs and directors have no idea what artificial intelligence tools their employees are actually using—and this blind spot could expose their companies to catastrophic AI liability.
"I'll ask them if their company is using AI," Levine explains. "Most now say yes. Next question is how. And that's where the conversation usually ends. They don't know."
This isn't just an operational inconvenience. It's a ticking time bomb for directors and officers who bear fiduciary responsibility and duty of care for corporate risk management. As AI companies deploy increasingly sophisticated AI models across the business ecosystem, the gap between leadership awareness and actual use of AI continues to widen.
The Hidden AI Iceberg
Here's what we've discovered working with companies on AI measurement: for every AI system a company knowingly purchases and deploys, employees are using three additional artificial intelligence applications that leadership has never heard of.
Even more alarming? Two-thirds of employees use personal accounts for AI tools—like personal ChatGPT logins instead of the enterprise versions their companies paid for. Despite signed Business Partnership Agreements and zero-data-retention guarantees with enterprise AI providers, employees default to their familiar personal accounts out of convenience.
This creates massive cybersecurity vulnerabilities, data protection violations, and potential compliance failures that directors may not discover until it's far too late. The proliferation of generative AI tools has intensified this challenge, with employees across all departments—from supply chain management to healthcare operations—using AI technology without proper oversight.
The Insurance Coverage Minefield
As artificial intelligence proliferates across organizations—embedded in everything from manufacturing automation to HR platforms to the spell-check in Microsoft Word—insurance carriers face an unprecedented challenge: how do they quantify the risk posed by AI systems? The answer to this question is central to setting a fair and reasonable premium and assessing whether a risk is worth insuring.
"Most major insurers have not jumped into affirmative coverage of AI," Levine notes. "They simply cannot measure the risk and are without historical risk data. This, among other things, makes it difficult if not impossible to scope and price coverage. It's the biggest challenge."
The complexity deepens when considering emerging liability frameworks. While the European Union has introduced the AI Liability Directive (AILD) and an updated Product Liability Directive (PLD) to address AI-related harms, U.S. insurers grapple with a fragmented regulatory framework across multiple jurisdictions. The revised PLD extends strict liability to defective AI products, and the AI Act imposes extensive obligations on providers and deployers of high-risk AI systems, while U.S. liability rules remain rooted in traditional tort law principles that struggle with questions of causation when algorithms malfunction.
The Growing AI Insurance Market
While major carriers work to develop pricing models, a handful of specialized providers are beginning to enter the market. Companies like Armilla, Chaucer, and Testudo are focusing exclusively on AI liability coverage, while Munich Re has introduced "AI Sure," a product warranty instrument that guarantees AI products will function as intended.
However, traditional insurers are beginning to respond to their uncertainty with a more defensive approach: broad AI exclusions. According to Levine, "ISO and other companies are coming forth with broad AI exclusions to try to get ahead of" potential coverage disputes.
These exclusions aren't widespread yet, but they're coming—and they could deny coverage in unexpected ways.
Consider this scenario: Your company uses Microsoft Word with embedded AI assistance to draft SEC disclosures. A D&O claim arises from an allegedly inaccurate statement. If your D&O policy includes a comprehensive AI exclusion, that claim could be excluded entirely, simply because Microsoft Word's AI suggested alternative phrasing. It is critically important, therefore, that exclusions are reasonably tailored and the AI risks to which they pertain are properly defined.
"The exclusion might say 'we don't cover any claim arising from any use of generative AI,'" Levine warns. "Applying that exclusion to a loss simply because somebody wrote something with the assistance of spell check and grammar check is likely not the intended use, but if the loss fits the letter of the exclusion and the value of the loss is significant, you can expect some insurers to stand on the exclusion." "It's severe, and it may not be what was intended by the underwriters or drafters, but unfortunately, that's how the insurance industry works."
The Moving Target Problem
Insurance applications increasingly ask pointed questions about AI usage: What percentage of revenue derives from generative AI? Which business systems incorporate artificial intelligence capabilities? How is AI technology being deployed across the organization?
These questions demand static answers on applications that become binding representations to insurers. But AI development and deployment is anything but static—it is arguably the fastest-moving technology shift businesses have ever had to absorb.
"That's a snapshot in time, and generally there's no duty to update that information," Levine explains. "When you get to the back end of that 12-month policy and that number has changed substantially, they're going to come back and say you misrepresented your revenue, so we're not only going to deny your claim, but we're going to rescind the policy altogether."
Policy rescission doesn't just mean losing coverage for one liability claim—it means the entire policy is void as if it never existed, leaving the company completely exposed to plaintiffs and claimants without protection.
Real-World Consequences Are Already Here
The theoretical is rapidly becoming actual. CVS Health faces a class action lawsuit in Massachusetts for using AI-powered facial expression analysis during job interviews without obtaining required consent under the state's lie detector statute. Multiple companies face "AI washing" lawsuits for misstatements about their AI capabilities, triggering stock drops and derivative shareholder suits—classic D&O claims that test existing liability frameworks.
Even more sobering: personal injury and wrongful death cases are emerging against AI companies like OpenAI, Anthropic, and Character AI, where chatbots allegedly provided harmful advice to vulnerable users. These cases raise novel questions about product liability for AI models and whether traditional civil liability standards adequately address AI-related harms.
According to Salesforce CEO Marc Benioff, "We're probably looking at three to twelve trillion dollars of digital labor getting deployed. That digital labor's going to be everything from AI agents to robots." Gartner projects AI spending will reach $644 billion in 2025, up 76% from 2024. This explosive growth magnifies every risk exponentially and creates cascading vulnerabilities across the business ecosystem.
Regulatory Complexity Compounds the Challenge
The regulatory framework adds another layer of uncertainty. The EU AI Act, adopted by the European Parliament and the Council of the EU, contains provisions that—depending on interpretation—may prohibit using AI systems to write employee performance reviews, treating such use as having "an AI person as your boss." Member states are now implementing this legal framework, creating compliance obligations for any company operating in the European Union.
The AI Act establishes a risk-based approach, categorizing AI systems by risk level and imposing extensive obligations on deployers of high-risk AI technology. The AI Liability Directive (AILD) complements this by establishing presumptions of causation that ease the burden on claimants seeking damages from AI-related harms—a significant departure from traditional tort principles.
For a multinational company rolling out enterprise AI systems, this creates immediate compliance challenges across multiple jurisdictions. Employees might naturally use Microsoft Copilot, ChatGPT, or Claude to help draft performance reviews without realizing they're potentially violating the AI Act's provisions. The complexity extends to data protection (GDPR compliance), intellectual property concerns, and cybersecurity requirements that vary by jurisdiction.
As AI regulation proliferates state-by-state in the U.S., we risk creating the same patchwork complexity that plagues the alcohol industry, where every state maintains different rules. Policymakers struggle to balance innovation with AI safety, while stakeholders across the ecosystem—from AI development teams to end users—navigate uncertainty about liability regimes.
What Directors Must Do Now
Directors cannot plead ignorance as a defense. Courts and regulators will expect boards to have implemented appropriate AI governance, oversight, and controls—especially as artificial intelligence becomes core to business operations and decision-making.
The solution starts with visibility. "If you do not know what is actually happening in your company, you certainly can't have any plan to deal with it," as we've told our clients. Directors need continuous, real-time monitoring of AI tool usage across their organizations—not annual audits that become outdated the day they're completed.
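What that visibility can look like in practice is worth making concrete. Below is a minimal Python sketch—illustrative only, not Larridin's product or a recommended architecture—that scans egress (proxy or DNS) logs for traffic to well-known consumer AI endpoints and flags usage that does not come through a sanctioned enterprise tenant. The log format, the domain list, and the "acme-enterprise" tenant identifier are all assumptions made for the example; real monitoring would typically draw on CASB, SSO, or network telemetry and a maintained catalog of approved AI services.

```python
"""Minimal sketch: surface shadow-AI usage from egress logs (illustrative assumptions throughout)."""
import csv
from collections import defaultdict

# Hypothetical catalog of consumer AI endpoints; a real one would be centrally maintained.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

# Hypothetical identifier(s) for the company's sanctioned enterprise tenant(s).
SANCTIONED_TENANTS = {"acme-enterprise"}


def scan_egress_log(path: str) -> dict[str, set[str]]:
    """Return {user: set of AI domains reached outside a sanctioned tenant}.

    Assumes a CSV with columns: timestamp, user, destination_domain, authenticated_tenant.
    """
    findings: dict[str, set[str]] = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["destination_domain"].strip().lower()
            tenant = (row.get("authenticated_tenant") or "").strip()
            if domain in KNOWN_AI_DOMAINS and tenant not in SANCTIONED_TENANTS:
                findings[row["user"]].add(domain)
    return dict(findings)


if __name__ == "__main__":
    for user, domains in sorted(scan_egress_log("egress_log.csv").items()):
        print(f"{user}: unsanctioned AI usage via {', '.join(sorted(domains))}")
```

Even a crude report like this—who is reaching which AI services, and through which accounts—turns "we think people are using AI" into a concrete inventory a board can interrogate.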
Essential AI Governance Framework
Boards must implement comprehensive AI governance that includes:
Continuous Risk Assessment - Real-time monitoring of AI systems deployment, not static annual reviews
Vendor Management - Due diligence on AI providers' safety measures, data protection practices, and liability frameworks
Cross-Functional Oversight - Engagement of stakeholders across legal, IT, compliance, and operations
Policy Development - Clear AI policy governing employee use of artificial intelligence tools
Insurance Strategy - Proactive dialogue with insurers about AI-related exposures and coverage needs
When engaging with insurance carriers, transparency and documentation are essential. Companies must provide narratives explaining current use of AI and anticipated evolution over the policy period, inviting dialogue rather than simply checking boxes on applications.
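One way to support that kind of narrative is a living register of AI systems that can be snapshotted each quarter and attached to renewal submissions, showing how usage evolved over the policy period. The sketch below is a hypothetical schema, not a standard or a legal requirement; the field names (deployment status, EU AI Act risk class, accountable owner) are assumptions chosen to illustrate the kind of documentation insurers and regulators tend to ask about.

```python
"""Illustrative AI-system register; the schema is a hypothetical example, not a standard."""
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class AISystemRecord:
    name: str                                   # e.g. "Enterprise ChatGPT"
    vendor: str                                 # AI provider
    deployment: str                             # "sanctioned" or "shadow"
    business_use: str                           # what the tool is actually used for
    data_classes: list[str] = field(default_factory=list)  # data categories it touches
    eu_ai_act_risk: str = "unclassified"        # e.g. "high-risk", "limited-risk"
    owner: str = "unassigned"                   # accountable executive
    last_reviewed: str = ""                     # ISO date of the last risk review


def register_snapshot(records: list[AISystemRecord]) -> str:
    """Serialize the register with a snapshot date, e.g. to accompany an insurance submission."""
    return json.dumps(
        {"as_of": date.today().isoformat(), "systems": [asdict(r) for r in records]},
        indent=2,
    )


if __name__ == "__main__":
    demo = [
        AISystemRecord(
            name="Enterprise ChatGPT",
            vendor="OpenAI",
            deployment="sanctioned",
            business_use="drafting internal documents",
            data_classes=["internal"],
            eu_ai_act_risk="limited-risk",
            owner="CIO",
            last_reviewed="2025-11-15",
        )
    ]
    print(register_snapshot(demo))
```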
Most critically, boards should engage specialized expertise in AI governance, cybersecurity, and emerging technologies. "You have to partner with consultants," Levine advises. "Get your key stakeholders across the company together and continually evaluate what you're doing, how you're doing it, and what your exposure looks like."
The complexity of AI liability—spanning product liability questions, strict liability regimes, algorithmic decision-making, and novel causation challenges—requires expertise that extends beyond traditional corporate governance. As new technologies like generative AI continue to evolve, boards need advisors who understand both the technical architecture of AI models and the evolving legal framework across jurisdictions.
Conclusion
The AI revolution offers extraordinary opportunities, but directors who fail to understand and manage their organizations' AI-related exposure may find themselves personally liable for consequences they never saw coming. The convergence of complex liability frameworks, insurance coverage gaps, and rapid AI development creates unprecedented challenges for corporate leadership.
From the European Union's comprehensive AI Act to emerging U.S. state regulations, from product liability questions to cybersecurity vulnerabilities, the use of AI introduces risks that demand board-level attention and expertise. As AI companies like Anthropic, OpenAI, and others continue to advance AI technology, the gap between innovation and governance grows wider.
In the AI era, what you don't know about artificial intelligence systems can—and will—hurt you. Directors must move from reactive compliance to proactive AI governance, treating AI risk assessment as a core fiduciary duty alongside traditional oversight responsibilities.
Frequently Asked Questions (FAQs)
What is AI liability and why should board directors care?
AI liability refers to legal responsibility for harms caused by artificial intelligence systems. Directors should care because they bear fiduciary duty for corporate risk management, and AI-related liability claims can result in D&O insurance coverage denials, personal liability exposure, and catastrophic financial losses. As AI systems become embedded across business operations, directors face increasing scrutiny over AI governance failures.
What is the difference between the EU AI Act and the AI Liability Directive?
The EU AI Act is a regulatory framework that establishes rules for AI development and deployment, categorizing AI systems by risk level and imposing obligations on providers and deployers. The AI Liability Directive (AILD) is a separate legal framework that addresses civil liability for AI-related harms, making it easier for claimants to prove causation and establish liability. Together, they create comprehensive liability regimes for artificial intelligence in the European Union.
Are D&O insurance policies starting to exclude AI coverage?
Yes. Traditional insurers are beginning to introduce broad AI exclusions in D&O policies because they cannot adequately assess AI risks without historical data. These exclusions could deny coverage for claims that involve any use of generative AI—even if AI was only minimally involved, such as spell-check assistance in document drafting. Companies need to carefully review policy language and engage in dialogue with insurers about AI-related exposures.
What is "shadow AI" and why is it dangerous?
Shadow AI refers to artificial intelligence tools that employees use without official approval or leadership knowledge—similar to "shadow IT." Research shows that for every AI system a company officially deploys, employees use 3 additional AI tools, often through personal accounts rather than enterprise versions. This creates cybersecurity vulnerabilities, data protection violations, and compliance failures that can expose directors to liability.
How can boards implement effective AI governance?
Effective AI governance requires: (1) continuous real-time monitoring of AI systems usage, not just annual audits; (2) comprehensive risk assessment across all AI technology deployments; (3) clear AI policy governing employee use of artificial intelligence; (4) cross-functional stakeholder engagement involving legal, IT, compliance, and operations; (5) proactive insurance strategy with transparent disclosure to carriers; and (6) specialized expertise in AI safety, cybersecurity, and emerging technologies.
What is product liability for AI, and how does it differ from traditional product liability?
Product liability for AI applies traditional product liability principles to AI products and AI systems. However, AI presents unique challenges: algorithms can malfunction in unpredictable ways, causation is difficult to establish when decision-making involves complex AI models, and traditional strict liability regimes may not fit AI-related harms. The EU's updated Product Liability Directive (PLD) attempts to address these gaps by extending liability to software and clarifying standards for AI products.
What are high-risk AI systems under the EU AI Act?
The EU AI Act categorizes certain AI systems as "high-risk" based on their potential to cause harm. These include AI used in: critical infrastructure, employment decisions, law enforcement, biometric identification, education, healthcare, and credit scoring. High-risk AI systems face stricter requirements including mandatory risk assessment, human oversight, transparency obligations, and cybersecurity measures. Claims against deployers of high-risk systems may also benefit from the AI Liability Directive's presumptions of causation, heightening liability exposure.
How should companies disclose AI usage on insurance applications?
Companies should provide detailed narratives explaining current use of AI systems, anticipated AI development over the policy period, and governance measures in place. Rather than simply checking boxes, engage in dialogue with insurance providers about AI-related risks and coverage needs. Document all AI technology deployments, vendor relationships with AI companies, and risk assessment procedures. Be aware that representations on insurance applications are binding, and material misrepresentations can lead to policy rescission.
What happens if an insurance policy is rescinded due to AI misrepresentation?
Policy rescission means the entire insurance policy is voided retroactively as if it never existed. The company loses coverage not just for one claim, but for all liability claims during the policy period. This leaves the organization—and potentially directors personally—completely exposed to plaintiffs without insurance protection. Rescission can occur if insurers determine that representations about AI usage on the application were materially inaccurate, even if the inaccuracy was unintentional.
Are there specialized insurance providers for AI liability?
Yes. While major traditional insurers remain cautious, specialized providers are entering the AI liability insurance market. Companies like Armilla, Chaucer, and Testudo focus exclusively on AI-related coverage. Munich Re offers "AI Sure," a product warranty for AI systems. However, this market is still emerging, coverage is limited, and premiums reflect the uncertainty around AI risks. Companies should work with insurance brokers experienced in emerging technologies and cybersecurity.
How does the EU AI Act affect U.S. companies?
The EU AI Act applies to any company that deploys AI systems in the European Union, regardless of where the company is headquartered. U.S. companies serving EU customers or operating in member states must comply with the AI Act's requirements for their European operations. This includes risk assessment obligations, transparency requirements, and potential civil liability exposure under the AILD. The extraterritorial reach mirrors GDPR's approach to data protection.
What role do AI companies like OpenAI and Anthropic play in the liability ecosystem?
AI companies that develop foundational AI models (like GPT, Claude) are considered "providers" under the EU AI Act and may face liability under both the AI Liability Directive and Product Liability Directive. However, liability questions become complex in the AI ecosystem: when a deployer customizes an AI model, integrates it into products, or uses it in decision-making, liability may shift. Courts are still establishing legal frameworks for apportioning responsibility across the AI supply chain from providers to deployers to end users.
What is AI washing and what are the legal consequences?
AI washing refers to companies misrepresenting or exaggerating their use of AI technology or AI capabilities. This can trigger securities fraud claims, shareholder derivative suits, and regulatory enforcement actions. Multiple companies face AI washing lawsuits where stock prices dropped after AI capabilities were revealed to be overstated. These claims fall squarely within D&O insurance coverage—unless the policy contains AI exclusions that insurers might invoke to deny coverage.
Interested in knowing how your organization is using AI? Learn more about AI measurement and accountability at larridin.com.




