Evolving Generative AI Usage Policies Create New Uncertainties for Insurers

June 12, 2024
5 min read

Anthropic updated its AI usage policy, categorizing insurance applications as "High Risk" and requiring human oversight and disclosure of AI use, in line with the upcoming EU AI Act. The change adds complexity for insurtech companies and insurers, as it introduces requirements beyond existing US regulations. Other AI providers might follow suit, creating further uncertainty and highlighting the need for careful consideration of model licensing and regulatory compliance.

By Andrew Marble and Philip Dawson

AI language model provider Anthropic updated its usage policy this week [1], likely in preparation for the entry into force of Europe's AI Act [2]. The update adds a new "High Risk" category of use. This category covers uses of Anthropic's models, including its flagship Claude series, in insurance applications, specifically "Integrations related to health, life, property, disability, or other types of insurance underwriting, claims processing, or coverage decisions", along with Legal, Finance, Employment, Academic Testing, Accreditation and Admission, and Media and professional journalistic content.

As part of bringing its policy in line with the AI Act's requirements for developers of foundation models deployed in "high-risk" applications, Anthropic now requires users to keep a "human in the loop", defined as "a qualified professional in that field must review the content or decision prior to dissemination or finalization." It also requires disclosure to customers and end users that Anthropic AI is being used.

Human oversight is an important part of any AI system. In an automated task, for example, outlying inputs or uncertain outputs can be flagged for human review. Anthropic's policy, however, appears to cover all decisions, without regard for uncertainty or other standard triaging methods that allow for partial automation. The insurance use cases covered are also very broad: they appear to include not just automated decision making but also claims processing that could involve data extraction, document processing, or other lower-risk tasks.
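To make the triaging idea concrete, the sketch below shows one common pattern in Python: route low-confidence or incomplete outputs to a human reviewer and auto-process the rest. This is a minimal illustration only; the threshold, field names, and route_for_review helper are assumptions for the example, not anything prescribed by Anthropic's policy or by any regulation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative threshold only; a real deployment would calibrate and validate it.
CONFIDENCE_FLOOR = 0.85

@dataclass
class ClaimExtraction:
    claim_id: str
    extracted_amount: Optional[float]  # amount pulled from a claim document, if found
    confidence: float                  # calibrated confidence score in [0, 1]

def route_for_review(result: ClaimExtraction) -> str:
    """Send uncertain or incomplete model outputs to a human; auto-process the rest."""
    if result.extracted_amount is None or result.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto_process"

if __name__ == "__main__":
    print(route_for_review(ClaimExtraction("C-102", 1250.00, 0.97)))  # auto_process
    print(route_for_review(ClaimExtraction("C-103", None, 0.41)))     # human_review
```

Under a scheme like this, only the flagged cases reach a qualified reviewer; a blanket human-review requirement removes that option for partial automation.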

Anthropic's policy update reflects the EU AI Act's global influence, as AI labs and enterprises worldwide work to align their policies with its requirements for all users, including in the US. With respect to insurance, the use of automated decision tools, including AI, in underwriting and pricing, or in determining health and life insurance coverage, is already subject to regulation in the United States. Notable examples include the New York DFS circular on insurers' use of AI, the Colorado DOI's regulation on quantitative testing, and bulletins adopted by the NAIC and other state insurance regulators. The restrictions added by Anthropic may have the effect of requiring insurtech companies and insurers operating in the US to meet an additional layer of EU-inspired requirements.

The change in policy also adds uncertainty for insurers using other third-party generative AI providers, such as OpenAI or Cohere, which will likely follow suit. For now, OpenAI does not appear to restrict insurance uses [3]. Cohere [4] does have restrictions on classifying or profiling people by demographic characteristics, which, ironically, may conflict with fairness audit requirements in some states that require the application of demographic data imputation methods.

More broadly, the changes to Anthropic's policy highlight some of the challenges insurtech companies and insurers should expect as they integrate generative models like Claude into insurance applications. Essentially, users are at the whim of model providers' unilateral policy changes and can become subject to rules that diverge from, or even exceed, regulatory requirements in the jurisdictions where they operate. The same concerns apply to "open weights" models like Meta's Llama, which, although they can be run locally, are not open source [5] and carry usage restrictions [6] (currently not covering insurance) that can be changed unilaterally.

With insurance apparently now considered "high-risk", it will be increasingly important for insurtech companies and insurers to carefully consider how generative AI models are licensed and restricted before building tools or products around them. Many new AI models are released under "open" licences that in fact carry use restrictions. Using a model under a genuinely open source licence such as Apache 2.0 removes use restrictions and lets providers focus on complying with regulations rather than requirements set by tech companies, at the cost of possible added complexity in finding a hosting provider. Whether using an open source model or one of the big names, ongoing model evaluation that considers not only performance but also the policy and regulatory landscape is of increasing importance.
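One lightweight way to keep that evaluation ongoing is to maintain a per-model record of licensing and policy considerations alongside performance metrics, and to revisit it whenever a provider updates its terms. The sketch below is purely illustrative; the field names and example entries are assumptions, not a standard checklist.

```python
# Hypothetical per-model record combining performance and policy considerations.
# Field names and entries are illustrative assumptions, not a prescribed checklist.
model_record = {
    "model": "example-model",
    "licence": "Apache-2.0",             # open source: no usage restrictions to track
    "provider_usage_policy": None,       # URL of the provider AUP, if any, to re-check
    "high_risk_clauses": [],             # e.g. insurance-specific restrictions
    "hosting": "self-hosted",            # self-hosted vs. third-party API
    "regulatory_notes": [
        "NY DFS circular on insurers' use of AI",
        "Colorado DOI quantitative testing regulation",
    ],
    "eval_accuracy": 0.93,               # task performance measured on internal data
    "last_policy_review": "2024-06-12",  # revisit on a schedule or when terms change
}

# A simple gate before building on a model: no unresolved high-risk clauses.
fit_for_insurance_use = not model_record["high_risk_clauses"]
print(fit_for_insurance_use)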

[1] https://www.bloomberg.com/news/articles/2024-05-13/openai-rival-anthropic-brings-claude-chatbot-to-europe

[2] https://www.anthropic.com/legal/aup

[3] https://openai.com/policies/terms-of-use/

[4] https://docs.cohere.com/docs/usage-guidelines

[5] https://www.marble.onl/posts/software-licenses-masquerading-as-open-source.html

[6] https://llama.meta.com/llama3/use-policy/

