Algorithm Regulation: Four Things Every Insurer Should Know When Using AI
AI is reinventing the way insurance carriers run their business: 75 percent of them plan to use AI to automate tasks to a large or very large extent within the next three years.
And it’s worth it! Accenture estimates that the insurance sector could reap a 10 to 20 percent improvement in annual profitability by investing in intelligent solutions. In North America alone, this would be worth $10.4 billion to $20.8 billion a year.
When using machine learning algorithms in insurance processes (claim settlement, sales management, risk scoring, …), one of the main European regulations that applies is the GDPR (General Data Protection Regulation), the EU law on data protection and privacy. Even though the GDPR mainly governs the use of data, it also indirectly concerns algorithms.
Here are four things AI-first insurers should pay attention to when using machine learning algorithms for decision making.
#1 — Be cautious when using special categories of personal data in machine learning algorithms
The GDPR considers all personal data sensitive. But certain types of personal data are even more sensitive: the so-called special categories of personal data, listed explicitly in Article 9. It is data revealing:
- racial or ethnic origin,
- political opinions,
- religious or philosophical beliefs,
- trade union membership
And also: genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation.
Processing of these special categories of data is prohibited by default, except in particular cases mentioned in Article 9, such as when the data subject has given explicit consent to the processing for one or more specified purposes.
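In practice, this caution can be built directly into the training pipeline. Below is a minimal sketch in Python, assuming a hypothetical claims dataset with made-up special-category column names (`ethnic_origin`, `health_status`) and a simple consent flag; a real pipeline would rely on the insurer’s own data model and consent records.

```python
import pandas as pd

# Hypothetical special-category columns (GDPR Article 9) in our dataset
SPECIAL_CATEGORY_COLUMNS = ["ethnic_origin", "health_status"]

def prepare_training_data(claims: pd.DataFrame, explicit_consent: bool = False) -> pd.DataFrame:
    """Return a training set with special-category columns removed,
    unless explicit consent for a specified purpose has been recorded."""
    if explicit_consent:
        # Consent given for this specific purpose: the columns may be kept,
        # but the processing must still be documented and purpose-limited.
        return claims.copy()
    present = [c for c in SPECIAL_CATEGORY_COLUMNS if c in claims.columns]
    if present:
        print(f"Dropping special-category columns: {present}")
    return claims.drop(columns=present)

# Usage on a toy dataset
df = pd.DataFrame({
    "vehicle_age": [2, 7],
    "ethnic_origin": ["x", "y"],
    "claim_amount": [800, 1500],
})
print(prepare_training_data(df).columns.tolist())  # ['vehicle_age', 'claim_amount']
```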
#2 — Be ready to provide “meaningful information about the logic involved” by your automated decision-making
GDPR Articles 13, 14 and 15 require the insurer to provide meaningful information about the logic involved in automated decisions using personal data. This does not necessarily mean a complex explanation of the algorithms used, or disclosure of the full algorithm, but the information provided should be sufficiently comprehensive for the client to understand the reasons for the decision.
In practice, when automated decisions come from machine learning models, different techniques exist to explain them. This is currently a hot research area. Find out more about it in our previous post.
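As a very simple illustration, here is a sketch of how per-decision explanations can be produced from a linear model with scikit-learn. The rating factors and labels below are invented for the example, and the approach (coefficient times feature value) is only one of the simplest options; dedicated techniques such as SHAP or LIME are typically used for more complex models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: each row is a client, columns are hypothetical rating factors
feature_names = ["vehicle_age", "annual_mileage_k", "past_claims"]
X = np.array([[2, 12, 0], [10, 30, 3], [5, 8, 1], [1, 15, 0]])
y = np.array([0, 1, 1, 0])  # 1 = claim flagged for manual review

model = LogisticRegression().fit(X, y)

def explain_decision(x):
    """Return each feature's contribution to the score for one client,
    a simple, human-readable view of 'the logic involved'."""
    contributions = model.coef_[0] * x
    return sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))

for name, value in explain_decision(X[1]):
    print(f"{name}: {value:+.2f}")
```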
#3 — In some cases, expect to keep human employees in the loop
In some cases, insurance clients have the right not to be subject to a decision based solely on automated processing (GDPR Article 22).
An example is provided in official guidance on applying the regulation:
Let’s say an automated process produces what is, in effect, a recommendation concerning a client. If a human being reviews it and takes account of other factors in making the final decision, that decision would not be ‘based solely’ on automated processing.
That’s why we at Zelros are fervent supporters of “Augmented Employees”, keeping human experts in control of AI. We deeply believe that
human expertise + AI is stronger than human expertise alone, or AI alone
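One way to picture this kind of human-in-the-loop design is to have the model emit a recommendation object that a human expert must confirm or escalate. The sketch below is purely illustrative: the `Recommendation` structure, field names and review workflow are hypothetical, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    claim_id: str
    model_score: float       # e.g. probability that the claim can be settled automatically
    suggested_action: str    # what the model recommends

def final_decision(rec: Recommendation, reviewer_notes: str, reviewer_agrees: bool) -> str:
    """The model only produces a recommendation; a human expert reviews it
    together with other factors, so the decision is not 'based solely'
    on automated processing (GDPR Article 22)."""
    if reviewer_agrees:
        return f"{rec.suggested_action} (confirmed by reviewer: {reviewer_notes})"
    return f"escalated for full manual handling ({reviewer_notes})"

rec = Recommendation(claim_id="C-1024", model_score=0.93, suggested_action="settle claim")
print(final_decision(rec, reviewer_notes="invoice checked", reviewer_agrees=True))
```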
#4 — Go further, and think about a Discrimination Impact Assessment (DIA)
As a key accountability tool, GDPR Article 35 introduces the Data Protection Impact Assessment (DPIA). It is a way of showing that suitable measures have been put in place to address the risks involved in automated decision-making, and of demonstrating compliance with the GDPR.
However, the DPIA protects personal data and individual data subjects, but not groups.
Even if sensitive personal data (e.g. ethnic origin) is unknown or removed when training a machine learning model, the sensitive information may be implicitly contained in the remaining non-sensitive data, and contribute to biased decisions.
Let’s say an insurer trained a machine learning model to detect claims that can be settled without any human intervention. Even if this model doesn’t use any variable related to ethnic origin, how can we assess that ethnic minorities are treated equally by the system?
That’s why the French government’s Villani Report on AI introduced the idea of a Discrimination Impact Assessment, in addition to the GDPR DPIA.
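To give an idea of what such an assessment could look like in practice, here is a minimal sketch that compares automated-settlement rates across groups on an audit sample. The `group` labels and data are hypothetical, and a real Discrimination Impact Assessment would use proper statistical testing and several fairness metrics rather than this single gap.

```python
import pandas as pd

# Hypothetical audit sample: model decisions plus a group label collected for audit purposes only
audit = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "auto_settled": [1, 1, 0, 1, 0, 0, 0, 1],  # 1 = settled without human intervention
})

# Automated settlement rate per group
rates = audit.groupby("group")["auto_settled"].mean()
print(rates)

# A simple disparity indicator: the gap between the best- and worst-treated groups
disparity = rates.max() - rates.min()
print(f"Demographic parity difference: {disparity:.2f}")
```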
We will say more about this in a future article. Stay tuned!
Want to discover how these best practices are integrated into Zelros AI for Augmented Insurers platform? Request a demo!
Want to join our team? Check out our open positions!