There has been a great deal of mainstream media attention on AI insurance in the past week, with Bloomberg and the FT among the outlets releasing articles. But where are we up to with this category of risk forming? That’s something I wanted to share my thoughts on, having spent over a year and a half working on AI insurance.
When we started Testudo in early 2024, there were few dedicated AI insurance products. The only coverages available were performance warranties, pioneered by Michael and his team at Munich Re in 2018, which enabled policyholders to indemnify their clients for financial losses arising from AI performance errors. Performance errors under this contract were clearly defined and tied to KPIs.
Aside from the Munich Re performance warranty, and a few other startups attempting a similar product, it was clear that most other AI risks, such as those arising from technology failures or cyber attacks, were silently covered in traditional lines like cyber and technology errors & omissions insurance.
I’m sure you were expecting me, as the founder of an AI insurance company, to say that everything was excluded and that the insurance market doesn’t know what it’s doing, but that is not the way we operate at Testudo!
A new category of insurance
That raises the question: what actually drives new categories of insurance? Over my 14 years operating in the insurance market, new categories of insurance have, in part, been created by a combination of the following three things, which I like to call the trifecta:
Regulations and/or new technologies create new exposures, such as cyber risk and the associated privacy laws and regulations, which;
Create actual (often unexpected) or perceived portfolio losses exceeding those forecasted and modelled by actuarial teams, which;
Lead insurers to exclude risks from traditional lines of coverage for the emerging exposure because they i) do not have data to price the exposure accurately, ii) have suffered significant unexpected losses, or iii) it was not the original intention of the policy to cover the emerging exposure.
Note - Sometimes laws and regulations can mandate insurance; think of good old car insurance. But for the purposes of this piece, we are focusing on where laws do not mandate insurance. However, some are admittedly calling for mandated AI insurance.
When the trifecta comes together, it often creates a need for insurance rather than a want, which is critical for selling a new type of insurance product and building a portfolio.
From ideation to execution
When we founded Testudo, we were betting on an AI risk trifecta emerging, which would lead to demand for a new category of insurance. Our thesis was that this new category would form around US liabilities arising from generative AI systems, specifically from the deployers, not the vendors. So, where do we currently stand with the trifecta:
There are numerous state bills, laws, and regulations, such as the Colorado AI Act, Texas Responsible Artificial Intelligence Governance Act, and California Transparency in Frontier Artificial Intelligence Act. Many of these create specific risk management, disclosure, and reporting requirements. Orrick maintains a comprehensive list here.
Whilst we do not benefit from any information asymmetry regarding what is happening behind the scenes at insurance carriers, we have built technology that has allowed us to create one of the largest granular databases of real-world AI risk, in the form of US AI lawsuits. We can share that there are already thousands of AI lawsuits, and a number of these must be working through to insurers' balance sheets as claims. We will share more insights on this data at another time, but for now, we are pleased to be helping inform insurance markets and regulators with our findings.
Exclusions are emerging, particularly concerning generative AI, with known exclusions in some E&O and media liability forms, and a significant, broad-based application of generative AI exclusions in 1/1 renewals for a critical line of insurance purchased by most US insureds. More on this another time …
In addition to the above, the Lloyd’s Market Association (LMA) has issued guidance on how artificial intelligence can impact the international errors and omissions (E&O) market. The Geneva Association has produced an excellent report, which finds that nine in ten businesses show interest in insurance cover for Gen AI risks. Certain cyber insurers, such as Coalition, have also provided endorsements for specific malicious Gen AI cyber exposures.
The shift
It is clear that even in the year and a half since we started Testudo, there has been a significant shift towards creating a new category of AI insurance, with a few caveats I want to mention:
Insurers still accommodate vendor risk, i.e., companies that develop and sell technology to others, within traditional cyber and technology errors and omissions (E&O) insurance. The market remains soft: insurers maintain broad forms, offer cheap rates, and reinstate limits with few exclusions. Any insurer that adds exclusions under these conditions risks becoming very unpopular.
The point above remains true even though headlines suggest insurers worry about significant losses among foundational model providers. I’m not saying they shouldn’t worry, because foundational models are subject to a great deal of litigation, and I cannot see how the premiums on those policies plus investment returns would ever cover the claims! But luckily for most insurers, these foundational model exposures form a small part of an extensive portfolio, and each insurer will minimise exposures with appropriate limit sizing.
I do not foresee any over-generalised AI exclusions, as it would risk excluding most technology use cases on Earth. The exclusions will be more specific, such as those relating to defined damages arising from the use or deployment of generative AI.
To those who think we can insure catastrophic, end-of-the-world generative AI risk, I offer a reality check: the insurance market will not be able to cover damages amounting to a trillion dollars from one event! This is why you will likely see the need for foundational model developers to build captives. However, many foundational model developers are rolling all their revenue into buying GPUs and frontier research. As they are loss-making entities, I remain sceptical of a per-company captive approach, especially if they need to cover billions of dollars in damages; the liquidity and balance sheet to sustain such losses won't be there. Structures will need to get creative with group captives.
Looking ahead, next year promises the first real wave of dedicated products designed to address the significant coverage gaps formed through exclusions, rather than merely offering ‘affirmative coverage’ for risks that remain silently covered anyway (which is not compelling). This is an exciting development and will help protect companies building the AI economy!
Closing thoughts
Until I write again, I will leave you with some additional insights to keep you thinking:
In our data, we are seeing extremely low correlation between model performance and liability/litigation risk, so please be mindful not to be fooled by randomness. I do not need to remind you that when a measure becomes a target, it ceases to be a good measure.
AI systems that wrongfully collect biometric identifiers are exacerbating biometric class action lawsuits, creating significant exposure. Watch out, cyber insurers…
Companies are taking each other to court over trade secrets when models are jailbroken and system prompts are stolen.
Also, feel free to reach out whenever for a chat about AI insurance. I'm always happy to talk!
Mark (mark.titmarsh@testudo.co)
Key takeaways
The insurance industry is on the cusp of forming a new category: AI insurance, driven by regulatory change, real-world losses, and emerging exclusions.
Early AI coverage existed mainly through performance warranties, but the market is rapidly evolving and new products will emerge as generative AI exclusions create policy gaps.
The “AI risk trifecta” - regulation, unexpected losses, and exclusions - is creating demand for purpose-built insurance products.
The next year will see dedicated Generative AI liability insurance products address new coverage gaps created by exclusions, which will enable companies to continue to confidently build in the AI economy.
Frequently asked questions
How will AI affect the insurance industry? AI could create an entirely new category of insurance, focused on the unique risks of deploying and developing AI systems.
What is a business risk associated with AI? Generative AI systems can cause significant financial or reputational losses if outputs infringe on copyright, expose protected information, or are inaccurate. These incidents can lead to lawsuits for negligence and violations of statute, regulatory fines, or loss of customer trust.
What are AI exclusions in insurance? Generative AI exclusions are policy clauses that limit or remove coverage for damages caused by or related to a policyholder's use of generative AI systems. They’re becoming more common as insurers look to remove coverage for risks they do not know how to assess or price.
Next steps:
Read our research and insights articles to get the latest on AI insurance
Contact our team to find out more about insuring GenAI and how to monitor the litigation risks of the systems you use