How confident are you that your insurance pricing or underwriting models are not discriminatory?

As actuaries and insurance professionals, it’s perhaps not a question we often ask. Why would we? Isn’t insurance all about differentiating risk and finding the most accurate price for each customer?

So, when our work is challenged by anti-discrimination legislation, the first reaction might be confusion, surprise or, worse, dismissal.

Because it’s easy to say:

  • But that legislation does not apply to insurance; we’re special
  • But we have data, so it must be OK. Don’t we have exemptions for all this?
  • My personal favourite: it’s always been done this way

But Australian law is quite clear – such reasons are not good enough!

Anti-discrimination legislation makes it clear that discrimination on the basis of age, gender, race, disability and other protected attributes is unlawful in Australia, even for insurance, unless you meet the detailed criteria of specific insurance exemptions or the unjustifiable hardship exception under the Disability Discrimination Act.

Not only are actuaries and insurance professionals typically unaware of the details of these exemptions and exceptions, but the rise of artificial intelligence (AI) and machine learning in the industry magnifies the risk that customers will be discriminated against unintentionally.

Big data and machine learning analyse large volumes of granular data more quickly and produce more complex, potentially more accurate models. That complexity can introduce unexpected correlations with protected attributes and lead to unintended discrimination. So it’s no surprise that 70% of actuaries who responded to a recent survey felt at least moderately concerned about the risk of breaching anti-discrimination law when using AI, which highlights the need for further guidance in this area.
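To see how this can happen, here is a minimal, hypothetical sketch in Python. The data is synthetic and the variable names (such as postcode_score) are invented for illustration; nothing here is drawn from the Guidance Resource. A simple pricing model is fitted without any protected attribute, yet it still produces different average prices across protected groups, because an ordinary-looking rating factor acts as a proxy.

```python
# Hypothetical illustration with synthetic data (not from the Guidance Resource):
# a pricing model fitted WITHOUT a protected attribute can still produce
# systematically different prices across protected groups when an ordinary
# rating factor (here, a made-up "postcode risk score") correlates with it.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. membership of a protected group); never given to the model.
protected = rng.integers(0, 2, size=n)

# A legitimate-looking rating factor that happens to correlate with the protected attribute.
postcode_score = 0.8 * protected + rng.normal(0.0, 1.0, size=n)

# Observed claim cost driven by the rating factor plus noise.
claim_cost = 100 + 30 * postcode_score + rng.normal(0.0, 10.0, size=n)

# Fit a simple linear pricing model on the rating factor alone (no protected attribute).
X = np.column_stack([np.ones(n), postcode_score])
coef, *_ = np.linalg.lstsq(X, claim_cost, rcond=None)
predicted_price = X @ coef

# Basic disparity check: compare the average modelled price across protected groups.
for group in (0, 1):
    mean_price = predicted_price[protected == group].mean()
    print(f"group {group}: mean modelled price = {mean_price:.2f}")
```

The gap between the two group means is the kind of unintended disparity a practitioner would want to detect, understand and be able to justify before a model of this sort is used in pricing or underwriting.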

But such practical advice on discrimination and the effect of AI is hard to come by, especially for the insurance industry.

Last week, a Guidance Resource was launched jointly by the Australian Human Rights Commission and the Actuaries Institute.


The guidance has been prepared to help actuaries and insurers comply with federal anti-discrimination legislation when AI is used to price or underwrite insurance products.

The collaboration provides multi-disciplinary guidance for insurers on discrimination and the effects of AI, a complex issue facing society.

A must-read for any actuary or insurance professional involved in pricing or underwriting, this Paper gives guidance on:

  • The federal anti-discrimination laws and the issues and uncertainties that may arise in the insurance industry,
  • The risks and effects of using AI in insurance pricing and underwriting,
  • Most importantly, practical guidance through hypothetical case studies and examples that reflect some of the current areas of uncertainty for insurers, and how actuaries and insurers could mitigate the risk of breaching the law.

The AHRC was supported by the members of the Actuaries Institute Anti-Discrimination Working Group, chaired by Chris Dolman, with help from Elizabeth Baker, Raymond Bennett, Terrence Tong, Fei Huang, Will Turvey and Michael Storozhev, to make this collaboration possible.

The launch was attended by over 70 senior individuals representing most major Australian insurers, consultancies, industry bodies, academia, consumer groups, regulators and government agencies.

Following a thought-provoking introduction from Elayne Grace, CEO of the Actuaries Institute of Australia, Lorraine Finlay, Australia’s Human Rights Commissioner, shared with the audience the importance of the journey and why anti-discrimination is so critical to protect human rights.

Both were joined by Chris Dolman, Executive Manager, Data and Algorithmic Ethics, IAG, and Ingrid Thorn, Head of Underwriting Life & Health Products ANZ, SwissRe, on an interactive panel.

Elayne Grace, CEO of the Actuaries Institute of Australia

Elayne challenged the audience to consider: How conscious are we of the way in which pricing and underwriting decisions are made?

Her introduction was a reminder of why all actuaries involved in pricing and underwriting must be aware of anti-discrimination laws, and of how AI can affect compliance with them. Would we be comfortable explaining our decisions in front of a parliamentary inquiry or Royal Commission?

Corporate responsibility is a key agenda item for all Australian boards and organisations. Elayne reminded us that “boards need to understand AI, its applications and its limitations, to ensure it does not breach consumer rights and societal expectations.”

Lorraine Finlay, Australia’s Human Rights Commissioner

The emergence of AI does not mean that a new standalone legislative framework is required.

Instead, Lorraine explained that Australia’s protections against discrimination are enshrined in federal, state and territory laws, and that these already apply to decisions made or informed by AI, including the relevant decisions made by insurance companies.

However, emerging technology creates uncertainty for practitioners, and Lorraine stressed that partnerships with industry and the business community are an important way of protecting and promoting human rights.


Chris Dolman, Executive Manager, Data and Algorithmic Ethics, IAG

Imagine navigating gender-neutral directives in the EU without suitable guidance.

Chris’s experience in the UK with the directive reminded us of the problems that can arise when guidance is insufficient, and practitioners are left to ‘work things out for themselves’.

He unpacked the complexity of the issue, focusing on the case studies in the Guidance Resource that are so critical and relevant for those of us working in products like motor, life and travel insurance. He walked through how even an apparently simple question can hide multiple layers of complexity under the surface, showing how vital this guidance is for practitioners.

Ingrid Thorn, Head of Underwriting Life & Health Products ANZ, SwissRe

Providing a life insurance perspective, Ingrid shared the challenges the industry faces in obtaining the data it needs, noting that underwriters need an evidence base, drawing on both Australian and global experience, to quantify risk and make decisions.

Her advice was very practical, distilling what we should all take away from the launch into the “four Cs”:

  • Be curious,
  • Challenge what’s being asked,
  • Start that conversation, and
  • The most important one: collaboration.

Strong audience engagement showed just how many questions there are on this topic.

That engagement also suggests the insurance industry is ready to think more deeply about how it prices and underwrites insurance.

Questions about the type of culture that is needed, the guardrails AI requires, what fairness means, and how to find the right balance for a sustainable business showed that this is only the beginning of the journey.

In fact, Lorraine ended the launch by committing to measure the success of the guidance by the questions it generates.

We in the Anti-Discrimination Working Group are confident it will do just that!

Read the Guidance Resource: Artificial intelligence and discrimination in insurance pricing and underwriting here. 

CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.