Lydia Savill, partner, and Matt Steven, associate, at Hogan Lovells provide practical insights for risk professionals seeking to assess AI-related exposure and secure comprehensive coverage. They examine how insurers are currently approaching AI in underwriting and the areas where coverage may be limited or ambiguous.
In the rapidly evolving landscape of artificial intelligence (AI), risk managers are increasingly tasked with navigating both the opportunities and the emerging exposures that AI presents. As businesses integrate AI into core operations, the risk profile shifts, introducing novel liabilities and governance challenges that traditional insurance may not fully address.
The nascent nature of AI technology, the swift pace at which it is being developed and implemented, and the particular nature of generative AI mean that care must be taken to mitigate AI-related risk.
Risk teams must have a clear understanding of how AI solutions deployed within their organisation operate, the potential failure points, and the specific exposures they introduce. Only with this insight can they effectively evaluate whether the organisation’s existing insurance arrangements provide adequate protection against those risks.
How do AI risks relate to insurance lines?
Different companies face different risk profiles depending on how they are using AI, and that feeds through into what sort of insurance protections will be required. To give a few examples:
Silent vs Affirmative cover
Whether AI-related risks are in fact covered under the relevant insurance policy depends very much on the applicable terms. It is therefore important for policyholders, when purchasing or renewing insurance, to consider whether their particular AI liability risks would be covered under the terms and conditions of the relevant policy.
Broadly, there are two ways in which AI risks may be covered by insurance:
Silent (or implicit) AI cover is where coverage of AI risks is provided via existing policies, such as in the examples given above. Existing policies such as professional indemnity, business interruption, D&O, crime, product liability and employers' liability may provide coverage for AI liability. Particular attention should be given to the language of these policies, policy limits and any AI exclusions. However, such exclusions are not currently market standard.
Insurers often look to limit their potential exposure to new or developing risks, so, as the AI risk profile develops and becomes better understood, it is entirely possible that AI-specific exclusions will be introduced into standard business insurance policies. And if that happens, what then?
Standalone or bespoke 'AI cover' is not (yet) market standard, so as it stands, AI-related risks will primarily fall to be dealt with under existing insurance policies. However, should AI exclusions begin to be implemented, the need for affirmative coverage may become more pressing.
The market is showing signs of evolution in this space. Recently, Armilla Insurance Services launched an AI liability insurance product underwritten by Lloyd's underwriters. The product is one of the first to offer affirmative coverage for unique AI-related losses rather than relying on protections in existing policies. However, companies would be well advised to ensure they are not paying for cover they already have.
Whilst AI risks may be covered by existing policies now, clarity of cover may become a priority in the coming years for insurers and policyholders alike. Businesses adopting AI to a significant degree may need to seek affirmative cover from insurers, expressly covering AI-related risks, rather than simply relying on the fact that AI risks are not expressly excluded.
Appetite for affirmative cover may be particularly acute at the intersection of AI and cyber risk, given obvious concerns around the security of data and the integrity of the AI systems themselves. Existing cyber cover might encompass certain AI-related liabilities, but the cyber market is unlikely to be the answer on its own, for reasons of both risk appetite and underwriting capacity.
But what risks might not be covered?
It is also important to consider the AI-related areas which might not be covered by any policy, whether existing or affirmative. This is an evolving area, but examples might include where AI is intentionally or maliciously misused (which might be excluded under a 'deliberate acts' exclusion, or one excluding criminal conduct), or where its usage results in regulatory fines or penalties (which are often excluded under professional indemnity and similar covers).
Alternatively, systemic AI-related risks may arise, involving the widespread failure or malfunction of foundational AI models affecting a broad class of organisations and sectors. By way of example, a widely deployed AI model used in autonomous vehicles could develop a critical flaw that causes widespread operational failures, triggering multiple claims from insureds across a wide range of product lines, all at the same time.
A loose analogy can be drawn with the cyber market and its use of mandatory cyber war exclusions, aimed at mitigating the risk arising from, amongst other things, disruption caused by state-sponsored or war-related cyber-attacks on critical digital infrastructure.
As AI becomes ever more ubiquitous in daily and commercial life, it is not difficult to conceive of existential AI-related risks arising which have, or could have, an impact on a global scale. Insurers may take the position that such risks are so unquantifiable or unmanageable as to be incapable of coverage.
Practical steps to consider
For risk and compliance professionals considering AI-related risk, it is essential to take a proactive and structured approach. We have included below some key considerations to help ensure your organisation is adequately protected: