

Companies are delegating more and more work to generative AI systems, but contractual liability for those systems' outputs remains firmly with the deployer (the enterprise that uses a third-party AI system). Vendors who supply such systems are not, by and large, willing to bear the consequences when something goes wrong. The problem has two parts: contractual gaps and insurance exclusions.
So, how do things go wrong for enterprise deployers of generative AI?
Companies are generally responsible when the outputs of the generative AI systems they have deployed harm third parties: AI deployers are on the hook. Examples of harms include financial loss arising from negligent misrepresentation, reputational harm such as defamation, physical harm, and, of course, infringement of intellectual property rights.
These harms often lead the injured party to send written demands or file lawsuits. In fact, US generative AI lawsuits increased over 150% from 2025 to 2026, according to our lawsuit data (source: ‘Generative AI Litigation Overview’, 1 January 2026, by Testudo Global Inc.). In addition, there are emerging regulatory frameworks (EU AI Act, US state-level laws) that affect deployer liability.
What do we mean by ‘Vendors are not willing to bear the consequences’?
Vendors often contractually limit their liability for all outputs generated by the systems they have created and supplied. Not ideal, given that outputs are the most likely way a deployed generative AI system will harm others.
For any general counsel, risk manager, or broker reading this whose business uses generative AI: pull up the services agreement between you and your vendor and read the limitation-of-liability and waiver sections. You will almost certainly find some of the following:
Unless the company is a Fortune 1000 enterprise with significant negotiating power, it is very likely to be on its own when the generative AI system it has deployed produces outputs that harm others.
Insurance exclusions make the problem more challenging.
If your vendor contract gives you almost no protection, the next question to ask is, ‘Will my insurance program cover me?’ To date, the answer has been ‘maybe’. Most policies, aside from select cyber insurance policies, contain no affirmative language addressing AI risks, so silent coverage (where coverage is neither excluded nor expressly provided) is the best you can hope for. Such coverage is unlikely to be comprehensive, and even it is eroding.
We are moving into a world where exclusions are starting to penetrate programs across coverages. The table below shows that 18 top insurance companies have together made 2042 requests for AI exclusions:
Source: Wolfe Research and https://www.theinformation.com/articles/berkshire-hathaway-chubb-win-approval-drop-ai-insurance-coverage
So, if there is no protection in the vendor contract and your insurance program introduces exclusions, you are likely, albeit unknowingly, self-insuring AI liability risk on your company's balance sheet.
This is a major concern for small and medium-sized companies, which will feel the pinch of litigation far more than larger enterprises. We know from our analysis that litigation attorneys are on the lookout and, in some cases, are even advertising to build class actions against generative AI use cases that have caused harm.
Of course, beyond contractual liability, a deployer will try to shift the legal burden onto the vendor by proving that it followed the vendor's usage instructions, including any monitoring procedures, and did not modify or misuse the tool. The vendor, in turn, will attempt to pass the liability on to the AI developer. Neither attempt may succeed.
Immediate Actions for Risk Managers, Brokers, and General Counsels:
During these times of great change and uncertainty, it is worth undertaking the following three-step exercise. It will only take a few minutes and will help companies achieve some clarity:
Joshua Motta, the CEO of the cyber insurer Coalition Inc., put it well: ‘AI is its own peril’.