AI is changing how risk is created, measured, and insured, and companies can’t assume they’re covered. Experts offer their recommendations in this article.
General counsel, already burdened with the daunting if not impossible task of assessing the risks stemming from their company's uses of AI, increasingly are advising on a related matter that's similarly vexing and high stakes: securing insurance to mitigate those risks.
It's a big challenge as artificial intelligence grows ever more sophisticated and migrates into everything from software and phones to logistics and manufacturing—even thermostats.
With such rapid proliferation, the deployment of AI has quickly outpaced the insurance market's ability to scope and value the attendant risk, said Michael Levine, a partner at Hunton Andrews Kurth in Washington, D.C.
"The technology has become so pervasive that it is relatively ubiquitous now," said Levine, who leads the firm’s property/casualty and emerging issues insurance practices. "What does that mean in practical sense? That's the challenge that people are wrestling with."
For example, does a company's existing insurance cover AI underperformance or hallucinations? wonders Thomas Faulconer, a clinical professor of risk management and insurance at Butler University and former executive at Indiana Farm Bureau Insurance.
"If you're a corporate general counsel, is there something you might do during this uncertain time? I suppose you need a broker or someone who can look at your needs and hopefully 'guess' at what may come up," Faulconer added, in a nod to the difficulty of gauging AI risk.
While a number of insurers offer AI insurance, the coverage is relatively narrow, and global premiums in 2024 were expected to reach only $40 million, according to research by Deloitte.
However, the firm estimates that insurers may write about $4.8 billion in AI insurance globally by 2032, which works out to a compounded annual growth rate of about 80%.
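(As a rough check on those figures: growth from $40 million to $4.8 billion is a 120-fold increase over eight years, and the eighth root of 120 is about 1.82, consistent with an annual growth rate of roughly 80%.)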
In April, insurer Chaucer Group teamed up with Armilla AI to cover AI "mechanical underperformance" liabilities such as hallucinations or "model drift," which is when performance degrades over time.
"AI is reshaping the risk landscape—and that requires fresh thinking from the insurance market," Chaucer executive Tom Graham proclaimed during the product rollout.
Apart from AI-specific insurance as a "first line of defense," those facing AI-related liability should look to their existing lines of insurance for cover, Levine and Hunton associate Alex Pappas said in a client advisory in January.
These policies are broad and typically cover loss from all causes not specifically excluded, they said.
"Thus, even for those with AI-specific policies in hand, a good role of thumb is to look first to traditional or 'legacy' coverage lines," such as general and excess liability, directors and officers liability, professional liability, employment practices liability and cyber insurance, Levine said.
General and excess liability policies typically broadly cover the cost of defending and settling lawsuits involving property damage or bodily injury, for example, "with defense cost coverage being due if the allegations against the insured merely raise a potential for coverage," he said.
As such, businesses should first carefully review their existing policies "to understand the extent of their coverage in the context of AI and consider whether traditional endorsements or specialized coverage may be necessary to fill any coverage gaps," the Hunton lawyers noted.
Other traditional lines, such as directors and officers liability insurance, may blunt the impact of litigation stemming from leadership's use and implementation of AI as well as alleged overstatements or misstatements in public filings.
Perhaps because of the rising frequency of claims that companies overstated or misstated their use of AI technology, D&O has been one of the coverage lines where Levine has seen insurers roll out AI exclusions.
To what extent insurers will add AI exclusions to other traditional policies remains to be seen. One insurance executive Levine spoke with said that "nobody wants to be the first to do it," lest insureds run to competitors.
Faulconer, the Butler University professor, said that AI raises some of the same liability-apportionment questions as autonomous vehicles. Some would argue that, unlike with a human-piloted vehicle, the owner of an autonomous vehicle cannot be held responsible because the owner was not in control.
"If my company uses AI to design a product or code a website and the design or program is faulty, causing injury or damage, the question becomes, 'Who is liable?'"
"Ostensibly, (it's) the organization that produced the AI. But one could also argue that the business using it is liable for not choosing and vetting the AI used. All AIs are not the same," said Faulconer, who also is an attorney.
Another unsettled matter, at least for insurers, is how to price AI-specific products. Insurers simply don't have a comprehensive loss history at this point. So pricing is likely to seesaw in the coming years, as it did with cyber insurance after it came on the market around the turn of the century.
Faulconer noted that around the time cyber-insurance carriers were finally becoming confident in their pricing, hackers became more sophisticated and unleashed ransomware attacks and other schemes that caused huge losses that insurers didn't fully anticipate.
"That instituted the time period where it (cyber insurance) was really hard to get because insurance companies were afraid to write it because they didn't want to lose their rear ends,"Faulconer said.
When they resumed writing, "they priced it really high because they'd been burned. So I see that similar path with AI insurance because it's been the Wild, Wild West."
Faulconer and Levine agreed that general counsel not only need to conduct thorough AI risk assessments but also pore over insurance policies so that they understand what's covered and what is not.
Levine said companies may need to involve more than just the legal chief and risk manager—particularly given the interplay of AI throughout large organizations.
"This is quickly evolving to a C-suite level function. There really needs to be a 'chief AI officer,'" especially in larger enterprises that are using different types of AI throughout their operations, Levine added.
The extent of a company's exposure to AI risks should guide whether purchasing coverage makes sense, Faulconer noted.
For example, he said, "If a company has its own brand of AI and encourages employees to use it, as many large companies are doing, the risk is significant," and the business might be wise to purchase coverage.
...
Original article on Law.com