Generative AI for Actuaries: Exploring new possibilities

Generative AI is a type of artificial intelligence that can create new content, such as text, images, audio and video. One of the most advanced examples of generative AI is ChatGPT, a chatbot released by OpenAI which is powered by a large language model (LLM) that can generate realistic and coherent text responses to any text prompt.

ChatGPT has captured public attention with its impressive ability to mimic human language and engage in conversations on various topics. It reached 100 million monthly users just two months after public release.

The area is developing quickly. GPT-4, released on 14 March 2023, passes the US Bar exam with a score estimated to beat around 90% of test takers. The previous model, GPT-3.5, scored around the 10th percentile and failed the exam.

But what does this technology mean for actuaries and their work? In this article series, we explore some of the commercial implications of LLMs, and some of the ethical and professional challenges that generative AI poses for actuaries.

How do LLMs work?

LLMs process vast amounts of natural language data with the goal of generating text that is indistinguishable from human writing, and can be fine-tuned for specific tasks. They are based on a deep neural network architecture built around the concept of “attention”, which weighs the relationships between words in a passage and is used to predict the next word in a sentence, given the previous words. The more data the models are trained on, the more accurate and sophisticated their language abilities become.

The concept of “attention” was originally developed for language translation, to learn the relationships between words in a text, and was later made central to the Transformer architecture by Google researchers. A summary of the mathematical concepts is explained in this video.
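To make the mechanism concrete, below is a minimal sketch of scaled dot-product attention, the core operation behind the Transformer. It uses random vectors in place of learned word representations, so it illustrates only the shape of the computation, not a trained model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """For each position, weigh every other position's value vector
    by how relevant it is (the 'attention' weights), then combine."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # pairwise relevance scores
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights

# Toy example: 4 "words", each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(x, x, x)  # self-attention
print(weights.sum(axis=-1))  # each word's attention weights sum to 1
```

In a real LLM, Q, K and V are produced by learned linear projections of the input, many such attention "heads" run in parallel across many layers, and the final layer's output feeds a prediction of the next word.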

Model features

LLMs like those powering ChatGPT bring a number of unique benefits that make them suitable for commercial applications. 

  • Understanding and Generating Human-like Language: LLMs have been trained on vast amounts of text data and are capable of understanding human language and generating human-like responses.
  • Multilingual Capabilities: LLMs can be trained in multiple languages, making them useful for language translation and cross-language communication.
  • Time and Cost Savings: LLMs can process vast amounts of data and generate responses quickly, making them a time- and cost-efficient solution for many applications.
  • Personalisation: LLMs can be fine-tuned to specific tasks, such as producing text in an insurance setting, making them useful for personalised recommendations and predictions.
  • Automation: LLMs can automate many tasks that previously required human intervention, such as customer support, content creation, and data analysis.
  • Reasoning: LLMs can provide reasoning for their answers to questions. While simulated, in many real-world contexts this reasoning is sound and helpful, in line with the need for Explainable AI (XAI).


While these abilities are impressive, it is important to recognise current limitations. LLMs have a tendency to ‘hallucinate’: responses may be factually incorrect yet delivered with the same apparent confidence as correct ones. Results are ultimately unpredictable; the same prompt can generate different responses, so a consistent level of quality is not guaranteed. Training inputs are drawn from the web and may reflect the biases present there. The technology has obvious implications for cheating at school or university, given its strong performance on academic tasks. Its abilities can also be put to nefarious purposes, such as hackers using the tools to write malware more efficiently.

Commercial applications

There are a wide range of commercial use cases for LLMs. Below are a few of them, starting with the more actuarially relevant examples:

  • Data Analysis and Document Summarisation: LLMs can be used to analyse large volumes of unstructured data, such as claims descriptions, to identify trends and insights, which is very useful for the regular claims analysis performed by many actuaries. The ability to summarise documents can also help actuaries quickly work through lengthy product disclosure documents to understand and compare product features.
  • Text to Code: LLMs can generate ready-to-run code along with code commentaries with a simple text prompt, which is an interactive and potentially more hands-on way for actuaries to learn about coding. Even for actuaries who already know how to code, the code snippets that LLMs provide could accelerate the development of more advanced models.
  • Chatbots and Virtual Assistants: LLMs can power chatbots and virtual assistants that can understand natural language and respond to customer inquiries, providing 24/7 customer support. 
  • Content Creation: LLMs can be used to generate content for websites, social media, and other platforms. For example, LLMs can generate product descriptions, news articles, and social media posts. 
  • Sentiment Analysis: LLMs can analyse customer reviews and social media posts to determine sentiment and help businesses understand customer feedback.
  • Language Translation: LLMs can be trained in multiple languages and used to translate content for websites, marketing materials, and other communications. In a similar vein, actuaries can use LLMs to help ‘translate’ complex actuarial modelling into language that is more suitable for a non-technical audience.
  • Personalised Recommendations: LLMs can be used to provide personalised recommendations for products, services, and content, based on user behaviour and preferences.
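As a sketch of how the first use case might look in practice, the snippet below assembles a chat-style prompt asking an LLM to summarise free-text claims descriptions. The role/content message format follows the common chat-completion convention; the claims text is invented for illustration, and the API call at the end is shown only as a commented-out placeholder, since the exact client library and model name will vary:

```python
def build_claims_summary_prompt(claims: list) -> list:
    """Assemble a chat-style prompt asking an LLM to summarise
    free-text claims descriptions into themes an actuary can review."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(claims))
    return [
        {"role": "system",
         "content": "You are an assistant helping an actuary analyse "
                    "insurance claims. Summarise recurring causes of loss "
                    "and flag anything unusual."},
        {"role": "user",
         "content": f"Summarise the following claims descriptions:\n{numbered}"},
    ]

# Illustrative claims descriptions (invented for this example).
claims = [
    "Burst pipe in kitchen caused water damage to flooring.",
    "Hail storm dented vehicle roof and cracked windscreen.",
    "Kitchen fire from unattended stove, smoke damage throughout.",
]
messages = build_claims_summary_prompt(claims)
# These messages would then be sent to a chat-completion endpoint, e.g.:
# response = client.chat.completions.create(model="...", messages=messages)
```

In practice the prompt would also need attention to data privacy (claims text may contain personal information) and to the model's context-length limit when batching many claims.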


Overall, LLMs have many commercial applications and are becoming increasingly popular as businesses leverage the power of natural language processing (NLP) to improve customer experience, increase efficiency, and gain insights from data. Going one level deeper, how do these models actually help actuaries with their day-to-day work?

In a recent real-life example, one of the co-authors of this article was handed a piece of R script with little documentation. As some actuaries may relate, seeking explanations or documentation for R functions on the internet can be time-consuming, and community sites such as Stack Overflow and R-bloggers typically share knowledge in function silos (i.e. for a specific function or syntax). The author was particularly impressed when, after simply pasting a chunk of the R script containing multiple functions into a ChatGPT prompt, ChatGPT provided not only a comprehensive summary but also identified bugs and suggested fixes.

This offers a glimpse of the applications of LLMs for actuaries which will be discussed in detail in Part 2 of the article series (to be published next week). Overall, Data Science Actuaries should be looking to be part of how these models are designed, built, implemented and governed within organisations.

Looking ahead

Although ChatGPT has received significant media attention, it is not the only LLM-powered solution available. The field of LLMs, and Generative AI more broadly, is developing quickly, with many competing models and improvements released within a matter of months:

  • Microsoft has released Bing Chat (powered by GPT connected to search), which is not limited to a knowledge cut-off date and will include relevant links to sources with answers.
  • Similarly, Google has released Bard, connected to Google Search, along with AI functionality inside its Google Workspace, meaning a powerful LLM integrated directly into Gmail, Google Sheets, Google Docs, Google Slides and the rest of its suite.
  • Facebook / Meta has released LLaMA, which Stanford researchers have refined into the super-lightweight Alpaca, trained in just three hours.
  • OpenAI has released a newer version of their LLM, GPT-4, which already appears to have far surpassed its predecessors. It also includes plugins for ChatGPT to allow it to do things such as analyse uploaded data, search the internet, execute code, and perform actions on behalf of the user such as booking a restaurant.


The future state does look promising but what does this mean for actuaries? 

Using the topical GPT-4 as an example, it is powered by a multi-modal LLM, in that it handles not only natural language but also images. In OpenAI’s live demonstration for the release of GPT-4, it was shown turning images of hand drawings into functional websites in a matter of minutes. Its visual capabilities also allowed it to perform well in some standard tests (e.g. GPT-4 passed the US Bar exam as mentioned earlier, and scored in the 99th percentile in the Biology Olympiad). For actuaries specialising in insurance operations, a future state that could potentially be reached by these LLMs is underwriting policyholders in real time simply by requesting images or photographs.

With these incredibly fast developments comes one major question…

How can we possibly keep up? Independent researchers have put enormous effort into quickly investigating and experimenting with the GPT-3.5-based ChatGPT to understand its political correctness, biases, and areas most at risk of hallucination, and to refine “prompt engineering” techniques that produce the most effective results. With “ChatGPT Plus” now based on GPT-4, how much of that research remains relevant? And how much will remain relevant in a year?

It is easy to get caught up in the excitement, but as actuaries and data specialists, we need to be aware of the risks of integrating these new technologies into our lives and the way we work. As with the outputs of any data-driven tool, results need to be critically evaluated with careful consideration of the potential impact on stakeholders.

The capabilities and risks of LLMs will only continue to increase, but with proper care and attention, they can unlock new possibilities for innovation and progress in our field. Stay in the loop with Actuaries Digital: in the upcoming weeks, we will explore applications, especially for actuaries, and discuss the risks of using LLMs in more detail.

CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.