ThinkAdvisor


Regulators, Groups Fight Over AI Rule Fit


Some players want new state insurance guidelines on artificial intelligence to fit like a billowy wool kaftan, and others want them to fit like a tight steel belt.

The wool-versus-steel fight is showing up in comments on a new model bulletin being drafted by the National Association of Insurance Commissioners' Innovation, Cybersecurity and Technology Committee.

The committee posted a second draft of the model bulletin last week on its section of the NAIC website. Comments on the new draft are due Nov. 6.

Scott Kosnoff, an insurance law specialist at Faegre Drinker, said in an email interview that the NAIC’s AI regulatory effort covers much of the same ground as Colorado’s new regulation banning uses of “external consumer data and information sources” that lead to race-based discrimination.

But “Colorado’s statute takes a prescriptive approach,” Kosnoff said. “The NAIC bulletin lays out regulatory expectations, rather than requirements.”

What it means: Many of the fights over how life and annuity issuers’ AIs and robots behave may start out looking like battles over how loose and flexible the rules should be, rather than what the goals of the rules should be.

All players seem to agree that, in principle, life and annuity issuers should not use AI or other new technologies to discriminate in an unfair way.

The nuts and bolts: Federal law leaves regulation of the business of insurance to the states. The NAIC, a group for state insurance regulators, can set voluntary guidelines but cannot normally impose rules on its own.

The new model bulletin draft is a revision of an earlier version the Innovation Committee posted in July and included in a meeting packet circulated in August.

The bulletin is part of a long-running conversation among regulators, insurers, insurance groups and consumer groups about insurers’ efforts to use new kinds of data and data analysis in the marketing, underwriting, pricing and administration of life and annuity products.

In 2019, for example, New York sent out a letter warning insurers to be prepared to show that any analytical strategies they use in new accelerated life insurance underwriting programs are reasonable, fair and transparent.

Colorado regulators approved the life anti-discrimination regulation in September.

Birny Birnbaum, a consumer advocate, has been talking about the need for AI anti-discrimination rules at NAIC events for years.

The new NAIC draft bulletin reflects AI principles the NAIC adopted in 2020.

The arguments: The Innovation Committee has posted a batch of letters commenting on the first bulletin draft that reflect many of the questions shaping the drafting process.

Sarah Wood of the Insured Retirement Institute was one of the commenters talking about the reality that insurers may have to make do with what tech companies are willing and able to provide. She urged the committee “to continue approaching this issue in a thoughtful manner so as not to create an environment where only one or two vendors are available, while others that may otherwise be compliant are shut out from use by the industry.”

Scott Harrison, co-founder of the American InsurTech Council, welcomed the flexible, principles-based approach evident in the first bulletin draft, but he suggested that the committee find ways to encourage states to get on the same page and adopt the same standards. “Specifically, we have a concern that a particular AI process or business use case may be deemed appropriate in one state, and an unfair trade practice in another,” Harrison said.

Michael Conway, Colorado's insurance commissioner, suggested that the Innovation Committee might be able to get life insurers themselves to support many types of strong, specific rules. "Generally speaking, we believe we have reached a large amount of consensus with the life insurance industry on our governance regulation," he said. "In particular, an increased emphasis on insurer transparency regarding the decisions made using AI systems that impact consumers could be an area of focus."

Birnbaum's Center for Economic Justice asserted that the first bulletin draft was too loose. "We believe the process-oriented guidance presented in the bulletin will do nothing to enhance regulators' oversight of insurers' use of AI systems or the ability to identify and stop unfair discrimination resulting from these AI systems," the center said.

John Finston and Kaitlin Asrow, executive deputy superintendents with the New York State Department of Financial Services, backed the idea of adding strict, specific, data-driven fairness testing strategies, such as looking at “adverse impact ratios,” or comparisons of the rates of favorable outcomes between protected groups of consumers and members of control groups, to identify any disparities.

Credit: peshkov/Adobe Stock



© 2024 ALM Global, LLC, All Rights Reserved. Request academic re-use from www.copyright.com. All other uses, submit a request to [email protected]. For more information visit Asset & Logo Licensing.