Generative AI for Actuaries: Risks and opportunities

In the second article of this series, we dive deeper into some of the commercial and ethical aspects of generative AI for actuaries. We explore how LLMs can improve productivity, concentrating on the most value-adding tasks, and examine the risks of relying on this technology.

In Part 1 of this article series, we introduced the exciting new developments and potential applications of generative AI, a type of artificial intelligence that can create new content, such as text, images, audio and video. One of the most advanced examples of generative AI is ChatGPT, a chatbot powered by a Large Language Model (LLM) that can generate realistic and coherent text responses to any text prompt. We explored how LLMs work, what benefits they bring, and what challenges they pose for actuaries and their work. 

Applications for Actuaries

Communication

LLMs can help actuaries translate complex mathematical concepts and recommendations into language that is easily digestible for audiences such as the Board of a company – or the readers of Actuaries Digital.

Many sections of this article were written with the assistance of LLMs. Some editing and wordsmithing was still needed, but working in partnership with AI saved a lot of time in the initial writing stages. Translation was one of the first uses for LLMs and, harnessed properly, it can help actuaries communicate in languages other than their native tongue.

Education

LLMs can also be a useful tool for actuaries in education. When studying for actuarial exams, LLMs can help to clarify aspects of the subject material, rephrasing the same matter in different ways to help students better understand difficult topics. For example, let’s say you are studying the Data Science Applications subject and would like to understand how neural networks can be used to solve actuarial problems. Here’s what ChatGPT had to say on this:

Source: OpenAI ChatGPT, accessed March 2023.

In the sample output from ChatGPT above, while the response was verbose and repetitive in style, it was tailored to the traditional actuarial tasks requested. Our experience has been that ChatGPT will not always provide a perfect response to a question, so it is helpful to ask it to refine its answers or to expand on certain parts of the initial response with further prompting.

Coding

Coding in languages such as SQL, Python and R has increasingly become a business-as-usual task for many actuarial and data professionals, particularly entry-level analysts. As a result, a significant portion of working time is spent developing and reviewing scripts for purposes ranging from descriptive data analysis to more advanced modelling. LLMs can speed up this process and act as a personal coding assistant for actuaries.

Based on English instructions, we were able to get ChatGPT to write working Python code that performs a simple data science task: reading in financial time series data and transforming it to model a response. It was also able to explain the code in detail, as shown in the ChatGPT response snippet below.

Source: OpenAI ChatGPT, accessed March 2023.
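For illustration, below is a minimal sketch of the kind of script ChatGPT generated, not its verbatim output; the file name and column names are hypothetical.

    # A sketch of the kind of script ChatGPT generated (not its verbatim output).
    # The file "stock_prices.csv" and its column names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    # Read in the financial time series and sort it chronologically
    df = pd.read_csv("stock_prices.csv", parse_dates=["date"]).sort_values("date")

    # Transform: daily returns plus simple lagged features
    df["return"] = df["close"].pct_change()
    df["return_lag1"] = df["return"].shift(1)
    df["return_lag2"] = df["return"].shift(2)
    df = df.dropna()

    # Model the next day's return as a function of the lagged returns
    X = df[["return_lag1", "return_lag2"]]
    y = df["return"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, shuffle=False  # keep the time order intact
    )

    model = LinearRegression().fit(X_train, y_train)
    print(f"Out-of-sample R^2: {model.score(X_test, y_test):.3f}")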

An analyst can very easily use scripts like the one produced above, make minor tweaks where necessary, and focus more time on analysing the results and providing value-adding insights.

Text Summarisation and Generation

Actuaries are often required to convey technical concepts in simple terms. LLMs may be great at helping actuaries achieve this. To see this in action, we tried the following prompt on ChatGPT.

Source: OpenAI ChatGPT, accessed March 2023.
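The same task can also be scripted rather than typed into the chat interface. Below is a minimal sketch using the openai Python library (the pre-1.0 API current at the time of writing); the document variable is a placeholder for the text to be summarised.

    # A sketch of scripting a summarisation task via the API, assuming the
    # pre-1.0 openai Python library and an OPENAI_API_KEY environment variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    document = "..."  # placeholder, e.g. an extract of the Australian Privacy Principles

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You explain technical material in plain language."},
            {"role": "user",
             "content": f"Summarise the following for a non-technical audience:\n\n{document}"},
        ],
    )
    print(response["choices"][0]["message"]["content"])

Incidentally, the chat API itself is stateless: the conversational ‘memory’ described below is achieved by resending the earlier turns of the conversation in the messages list on each call.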

Apart from summarising texts, its abilities to remember the earlier conversation (in the second prompt, users did not need to mention the Australian Privacy Principles again) and to accept follow-up corrections to its responses (by clicking the thumbs-down icon) are powerful features.

Moreover, whilst ChatGPT is widely known to perform low-level automation tasks such as Q&A and text summarisation, it can also be used for creative tasks such as content creation. To see this in action, we tried the following prompt on ChatGPT.

Source: OpenAI ChatGPT, accessed March 2023.

In this ‘generative’ tweet, ChatGPT seems to have considered Twitter’s character limit, utilised emojis and hashtags suited to the profile of the tweet, and come up with a reasonably catchy slogan. Could this streamline some marketing-related tasks for actuaries, such as writing job descriptions or creating a hiring slogan that is more engaging for qualified applicants on LinkedIn?

Some have pointed out how AI’s ability both to generate text from a brief list of points and to quickly summarise large amounts of text can lead to some rather absurd situations…

Source: Sam Altman, CEO of OpenAI, via Twitter.

On a separate note, The Australian Financial Review invites submissions to its blog on how professionals are using ChatGPT. There have been some interesting submissions already.

Risks

These LLMs are new. Whilst the demonstrations above make them appear to be ready-to-go technology, there may be (known and unknown) inherent risks associated with their use. Here are some of the known risks:

  • Hallucinations

Large language models can hallucinate. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. Some examples of hallucinations in LLM-generated outputs include factual inaccuracies, unsupported claims, nonsensical statements, and improbable scenarios. 

Consequently, actuaries (and others) using LLMs must have expertise in the areas they ask the LLMs to write about and remain hyper-vigilant for errors in the reasoning or conclusions drawn.

  • Financial Advice

One of the concerns around using LLMs is the risk of giving inappropriate financial advice to customers. To prevent this, companies can limit the use of LLMs in chatbots to responding to general inquiries or simple customer service issues rather than allowing them to provide financial advice. Companies can also ensure that a chatbot has been thoroughly trained and vetted for accuracy and that there are human experts available to review and verify any advice given by the chatbot.
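As an illustration of the first mitigation, a chatbot front-end could screen incoming messages and escalate anything resembling a request for personal financial advice to a human adviser. The sketch below is purely illustrative: the trigger phrases and routing logic are hypothetical and would need proper legal and compliance review.

    # An illustrative guardrail, not a production control: the trigger phrases
    # and routing logic are hypothetical placeholders.
    ADVICE_TRIGGERS = ("should i invest", "which fund", "financial advice",
                       "buy shares", "retirement strategy")

    def route_message(message: str) -> str:
        """Send advice-like queries to a human; let the bot handle the rest."""
        lowered = message.lower()
        if any(trigger in lowered for trigger in ADVICE_TRIGGERS):
            return "human_adviser"  # escalate: potential personal financial advice
        return "llm_chatbot"        # general enquiry or simple service issue

    print(route_message("What are your opening hours?"))   # -> llm_chatbot
    print(route_message("Should I invest in this fund?"))  # -> human_adviser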

  • Jailbreaks

Although ChatGPT has generally been trained to remain politically correct (unlike Microsoft’s earlier, disastrous Tay chatbot), it has been possible to use long conversations or unusual prompts to take the conversation in strange or disturbing directions. This potentially presents brand and reputation risks for organisations looking to use LLMs in a customer-facing context.

  • Data Ethics and Privacy

Another key consideration is data ethics. There are concerns about the sensitivity of certain types of communication, particularly when it comes to topics like death and grief. For example, if an insurance company needs to draft a consolation letter to the family of someone who has passed away, using machine-written communication could be seen as insensitive or impersonal.

Furthermore, transparency about whether a customer is communicating with a bot or with a human is important for sound ethical conduct. If a customer were misled into believing they were speaking with a human when they were actually chatting with a bot, there is a higher chance of legal action against the company. To mitigate this, companies must be transparent about their use of ChatGPT and other LLM-powered solutions, ensure that they collect and use customer data ethically and in compliance with regulations, and have appropriate safeguards in place to protect customer privacy.[1]

To mitigate some of these risks and limitations, the following flowchart provides a useful thought process for deciding whether it is safe to use ChatGPT for a particular application:

Source: Tiulkanov, A. (2023), via Twitter.
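For readers without access to the image, the flowchart’s decision logic boils down to roughly the following; this is our paraphrase expressed as a Python sketch, so treat the exact conditions as an approximation of the original.

    # A paraphrase of the flowchart's logic, not a faithful reproduction.
    def safe_to_use_chatgpt(output_must_be_accurate: bool,
                            can_verify_accuracy: bool,
                            will_take_responsibility: bool) -> bool:
        if not output_must_be_accurate:
            return True  # e.g. brainstorming, where errors are tolerable
        # Accuracy matters: you need both the expertise to verify the output
        # and the willingness to take responsibility for missed errors
        return can_verify_accuracy and will_take_responsibility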

Concerns have been raised about submitting data to OpenAI, ChatGPT’s developer, but other LLMs can be run securely on local hardware without transmitting data to third parties. Whilst running state-of-the-art LLMs requires expensive hardware, recent innovations mean that high-quality models are likely to become increasingly accessible, even for smaller businesses.
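As a minimal sketch of this local approach, the open-source transformers library can run a small model such as GPT-2 entirely on your own machine, so no prompt data leaves it; GPT-2 is, of course, far less capable than ChatGPT.

    # A minimal sketch of fully local text generation with Hugging Face
    # transformers; the model downloads once and then runs offline.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "An actuary's main responsibilities include"
    result = generator(prompt, max_new_tokens=50, do_sample=True)
    print(result[0]["generated_text"])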

Concluding thoughts

Generative AI models have been a recurring highlight in our Data Science Newsletter for members. In 2019, we featured the GPT-2 model, but recent advances in scale have led to LLMs such as ChatGPT and GPT-4, with increasingly impressive performance, particularly in following human instructions.

For actuaries, these models offer opportunities to improve the productivity of everyday workflows as illustrated by the examples above. In the next article, we will deep-dive into a case study of using ChatGPT for exploratory data analysis.

References

[1] For example, organisations currently have obligations related to the disclosure, collection and usage of personal information stemming from the Privacy Act and GDPR. Further, the Privacy Act is currently under review, so these obligations will continue to evolve. See https://www.ag.gov.au/rights-and-protections/publications/privacy-act-review-report for further detail.

CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.