Actuaries use data for good. So, how can actuaries help address issues of insurance discrimination? At the 2022 All-Actuaries Summit, Dr Fei Huang presented joint work with Xi Xin on the topic of anti-discrimination insurance pricing.
|You can find more details in the full paper via SSRN here.|
Insurance discrimination has been an important topic for the insurance industry for decades, and it is evolving in part due to insurers’ extensive use of Big Data: the increasing capacity and computational abilities of computers, the availability of new and innovative sources of data, and advanced algorithms that can detect previously unknown patterns in insurance activities. On the one hand, the fundamental issues of insurance discrimination have not changed with Big Data. On the other hand, issues regarding privacy and the use of algorithmic proxies take on increased importance as insurers’ data and computational abilities evolve.
On the issue of insurance discrimination, a grey area in regulation has resulted from the growing use of big data analytics by insurance companies: direct discrimination is prohibited, but indirect discrimination using proxies or more complex and opaque algorithms is not clearly specified or assessed. This phenomenon has recently attracted the attention of insurance regulators all over the world. Meanwhile, various quantitative fairness metrics have been proposed and have flourished in the machine learning literature with the rapid growth of artificial intelligence (AI) over the past decade.
In our paper, we aim to establish the linkage among potential and existing insurance regulations, fairness criteria, and anti-discrimination insurance pricing models. In particular, we:
- review anti-discrimination laws and regulations of different jurisdictions with a special focus on indirect discrimination in the general insurance industry;
- summarise the fairness criteria that are potentially applicable to insurance pricing, match them with different potential and existing anti-discrimination regulations, and implement them into a series of existing and newly proposed anti-discrimination insurance pricing models; and
- compare the outcome of different insurance pricing models via the fairness-accuracy trade-off and analyse the implications of using different pricing models on customer behaviour and cross-subsidies.
Under the current anti-discrimination legal framework, some jurisdictions have defined indirect discrimination (e.g. Australia and the EU), but the extent to which indirect discrimination is restricted in insurance is still unclear. In practice, insurance companies commonly avoid using, or even collecting, sensitive (or discriminatory) features. However, indirect discrimination may still occur when proxy variables (identifiable proxies) or opaque algorithms (unidentifiable proxies) are used. There is therefore an urgent global need for insurance regulators to propose clear standards for identifying and addressing indirect discrimination. Regulators and other stakeholders broadly share a common understanding of indirect discrimination, defined below.
|Indirect discrimination: After avoiding direct discrimination, indirect discrimination occurs when a person is still treated unfairly compared to another person by virtue of implicit inference from their protected characteristics, based on an apparently neutral practice (e.g. proxy variables, opaque algorithms).|
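To see how an apparently neutral practice can still transmit a protected characteristic, consider a small synthetic sketch. The data, the proxy variable, and all numbers below are invented for illustration; they are not from the paper or its French auto dataset. The protected attribute is never used in pricing, yet premiums still differ between protected groups because a correlated proxy is used.

```python
import random

random.seed(0)

# Illustrative synthetic portfolio (not from the paper): the protected
# attribute is never used in pricing, yet a correlated proxy variable
# (think vehicle type or occupation) transmits its effect anyway.
n = 10_000
rows = []
for _ in range(n):
    protected = random.random() < 0.5                       # e.g. gender
    # Hypothetical proxy, strongly correlated with the protected attribute
    proxy = int(random.random() < (0.8 if protected else 0.2))
    claim = random.gauss(100 + 50 * proxy, 10)              # simulated claim cost
    rows.append((protected, proxy, claim))

# "Fairness through unawareness": price on the proxy alone, i.e. charge
# each proxy class its average simulated claim cost.
premium = {}
for p in (0, 1):
    claims = [c for _, pr, c in rows if pr == p]
    premium[p] = sum(claims) / len(claims)

# Average premiums still differ between the protected groups, because
# the proxy is unevenly distributed across them.
def avg_premium(group):
    members = [premium[pr] for g, pr, _ in rows if g == group]
    return sum(members) / len(members)

gap = abs(avg_premium(True) - avg_premium(False))
print(f"premium gap between protected groups: {gap:.1f}")
```

Here the gap is roughly 30 premium units even though the protected attribute was excluded, which is exactly the identifiable-proxy form of indirect discrimination described above.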
In this paper, we summarised different regulation strategies and recent examples to mitigate indirect discrimination by reviewing several major insurance markets including the United States, the European Union, and Australia.
Researchers in fair machine learning have devoted considerable attention to algorithmic bias and fairness, introducing various fairness criteria. Most of these criteria fall into two main categories: individual fairness criteria and group fairness criteria. As their names suggest, these criteria aim to achieve fairness at either the individual or the group level, and an inevitable conflict may exist between the two.
In our paper, we summarised fairness criteria that are potentially applicable to insurance pricing, discussed their insurance implications, and matched them to corresponding regulation standards. Examples of fairness criteria discussed in this paper are presented below (the first two are individual fairness criteria and the remaining three are group fairness criteria).
- Fairness Through Unawareness is satisfied if the protected attribute is not explicitly used in pricing, which corresponds to avoiding direct discrimination.
- Fairness Through Awareness is satisfied if similar policyholders are charged similar premiums based on task-specific similarity metrics.
- Demographic Parity is satisfied if the premiums and the protected attribute are statistically independent.
- Disparate Impact (the Four-Fifths Rule) is a more flexible, approximate version of demographic parity, which accepts deviations in premiums between two groups within a predetermined threshold.
- Conditional Demographic Parity is satisfied if the premiums and the protected attribute are statistically independent after controlling for a set of legitimate rating factors.
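The two simplest group criteria above can be checked directly from a premium schedule. The following sketch uses made-up premiums and group labels (none of these numbers are from the paper) to show how a demographic parity gap and a four-fifths-style disparate impact ratio might be computed.

```python
# Hypothetical premiums and protected-group labels, invented for illustration
premiums = [500, 520, 480, 700, 690, 710]
protected = [0, 0, 0, 1, 1, 1]

def group_mean(values, labels, group):
    """Mean of the values belonging to the given group."""
    members = [v for v, g in zip(values, labels) if g == group]
    return sum(members) / len(members)

mean_0 = group_mean(premiums, protected, 0)
mean_1 = group_mean(premiums, protected, 1)

# Demographic parity asks for (approximately) equal mean premiums.
dp_gap = abs(mean_0 - mean_1)

# Disparate impact (four-fifths rule): the ratio of group means should
# stay above a threshold such as 0.8.
di_ratio = min(mean_0, mean_1) / max(mean_0, mean_1)
four_fifths_ok = di_ratio >= 0.8

print(dp_gap, round(di_ratio, 3), four_fifths_ok)
```

In this toy schedule the group means are 500 and 700, so the parity gap is 200 and the ratio is about 0.71, failing the 0.8 threshold. Conditional demographic parity would apply the same comparison within strata defined by legitimate rating factors.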
Anti-discrimination insurance pricing models
Anti-discrimination insurance pricing strategies can be categorised into pre-processing (on the training data), in-processing (during model training) and post-processing (on the outputs) methods. In our paper, we implemented fairness criteria into a series of existing and newly proposed anti-discrimination insurance pricing models based on both generalised linear models (GLMs) and Extreme Gradient Boosting (XGBoost), using both pre-processing and post-processing techniques. Details of the models’ implications for insurance pricing and their links to regulation examples can be found in the paper.
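As a flavour of what a post-processing step can look like, here is a minimal sketch of one simple idea: rescale a fitted model’s premiums so that every protected group has the same mean premium, i.e. demographic parity in the mean. This scheme and its inputs are assumptions for exposition, not the paper’s exact method.

```python
# Illustrative post-processing step (an assumption for exposition, not
# the paper's method): rescale each protected group's premiums so all
# group means equal the overall mean premium.

def equalise_group_means(premiums, groups):
    """Scale each group's premiums so every group mean equals the overall mean."""
    overall = sum(premiums) / len(premiums)
    scales = {}
    for g in set(groups):
        members = [p for p, gg in zip(premiums, groups) if gg == g]
        scales[g] = overall / (sum(members) / len(members))
    return [p * scales[g] for p, g in zip(premiums, groups)]

# Hypothetical model outputs (e.g. from a GLM or XGBoost) and group labels
adjusted = equalise_group_means([100, 120, 200, 220], [0, 0, 1, 1])
print([round(p, 1) for p in adjusted])
```

A convenient property of this particular rescaling is that total premium income is preserved: each group’s total scales to its size times the overall mean, so the portfolio total is unchanged while relativities within each group are kept.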
Our empirical analysis using a French auto insurance dataset and assuming gender as the protected attribute suggests that:
- The performance of different anti-discrimination insurance pricing models depends on multiple factors according to our scenario analysis. Generally speaking, the anti-discrimination pricing models considered can achieve a balance between (group) fairness and prediction accuracy.
- The effects of adverse selection under fair models are limited in the empirical example; in certain scenarios, insurers using fair models can attract more low-risk and fewer high-risk consumers than under the standard practice of simply excluding the protected variable (fairness through unawareness).
- Different pricing methods (e.g. GLM vs XGBoost) may have different degrees of sensitivity to the protected attribute, which insurance regulators and practitioners should be aware of.
Insurers benefit from the collection of more granular data and the use of more advanced analytics techniques in the age of Big Data, but they are also capable of discriminating against protected classes more efficiently in underwriting or pricing decisions, often unintentionally and indirectly. Our research contributes to the understanding and mitigation of indirect discrimination in the insurance industry by establishing a connection between various insurance regulations, fairness criteria and anti-discrimination insurance pricing models.
We would also like to call for future research in this area, which will help address insurance discrimination issues more broadly and empower actuaries to use data for good. For example, more multi-disciplinary research is needed to analyse insurance discrimination and pricing fairness for various lines of business from different perspectives. These collaborations will lead to more robust solutions that benefit policyholders, insurance markets, regulators, and society at large.
One practical difficulty in applying anti-discrimination insurance pricing models is the lack of information on protected attributes (such as race or ethnicity), which are usually not collected. More research is needed to address this issue. Additionally, explicit assessment and auditing tools are needed to help both regulators and insurers clarify regulatory policies. We hope that our work will stimulate discussion and contribute to future research.
|Read further coverage from the 2022 All-Actuaries Summit.|
CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.