Stephen Lowe, former managing director at Willis Towers Watson and past President of the Casualty Actuarial Society, shares his views on the benefits of combining actuarial knowledge and data science. This is part one of a two-part series, focusing on the requisite skills and technological advances driving current data science innovations.

An analytical arms race is disrupting the traditional insurance company business model and changing the imperatives for success. Predictive modeling is steadily expanding, reaching beyond merely being a tool for product strategy to becoming an integral function within a new data and analytics-based business model.

Necessary Skills

Since introducing predictive models to personal auto insurance more than 20 years ago, insurers have gradually expanded their use of predictive models to other insurance lines and applications. The three most common predictive modeling applications are underwriting/risk selection, evaluating fraud potential, and deciding when to order reports (such as credit), according to Willis Towers Watson’s 2015 Predictive Modeling and Big Data Survey.

Additional applications in the ranking include: premium auditing, advertising strategy, claim triage, underwriting expense efficiency, determining litigation potential, agency management/compensation, loss control and agent placement/distribution management. Released in February, the report based its conclusions on the responses of 61 North American property/casualty insurers.

Capitalizing on the new technological landscape requires a team that possesses three primary skill sets. The first is data hacking, which in this context does not refer to criminal activity but describes a mindset geared toward developing approaches that yield solutions. Hacking skills include data sourcing knowledge, capabilities in data assembly and management, and experience in scrubbing and extracting information from raw data.

A facility with contemporary analytics tools built on new-era mathematics and statistics is the second necessary skill. These include Generalized Linear Models (GLMs), Classification and Regression Trees (CART), machine learning, data visualization, etc., which permit deeper insights into relationships evidenced within the data. However, access to abundant data and statistical prowess is not enough to build a truly analytics-based insurance company. For that, the third skill is required: contextual knowledge, referred to by some as domain knowledge, which includes a full appreciation of insurance risk.
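To make the GLM idea concrete, the sketch below fits a logistic regression (a GLM with a logit link) by plain gradient descent on a small, entirely hypothetical data set relating annual mileage to whether a claim occurred. The data, feature choice and learning rate are illustrative assumptions, not anything from the survey; in practice an insurer would use a statistical package such as R's `glm` or Python's statsmodels rather than hand-rolled code.

```python
import math

def sigmoid(z):
    """Numerically safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Fit a one-feature logistic regression (a GLM with logit link)
    by gradient descent. Returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(b0 + b1 * x)
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

def predict(b0, b1, x):
    """Predicted probability of a claim at feature value x."""
    return sigmoid(b0 + b1 * x)

# Hypothetical data: annual mileage (thousands) vs. claim indicator.
miles = [5.0, 8.0, 10.0, 12.0, 15.0, 18.0, 20.0, 25.0]
claim = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(miles, claim)
```

A fitted model of this kind turns raw exposure data into a claim-probability score, which is exactly the building block behind the underwriting and risk-selection applications the survey ranks highest.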

Context is the deep knowledge of the critical nuances and complexity of insurance that ensures a focus on relevant data rather than data for its own sake. No one can adequately and effectively analyze a set of data without fully understanding its context – the environment from which it emerged. Context, for example, is necessary for considering how predictive models should be developed for appropriate decision-making and what will happen if the external environment or the internal incentives of the decision makers change.

The skills and knowledge required to become a truly analytics-based insurer differ from traditional business skills primarily because of three incredibly rapid technological advances. First, the cost of computation and data storage is no longer a significant part of the strategic calculus. Thanks to low-cost cloud servers, insurers can gather, retain and manage massive amounts of data.

Second, data sources are plentiful and growing exponentially as monitoring devices have become ubiquitous. Telematics devices in automobiles allow insurers to capture location, acceleration and speed a dozen or more times a second and analyze how usage translates into accidents. By deploying drones, home insurers can capture roof condition before and after a storm to settle damage claims.
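As a small illustration of how raw telematics samples become a usage signal, the sketch below counts hard-braking events from a stream of speed readings taken several times a second. The 10 Hz sample rate, the -3 m/s² threshold and the example trace are all illustrative assumptions; a production pipeline would work from the device's actual sampling specification.

```python
def hard_brake_count(speeds_mps, hz=10, threshold=-3.0):
    """Count samples where deceleration exceeds `threshold` (m/s^2),
    given speed samples (m/s) taken `hz` times per second."""
    dt = 1.0 / hz
    count = 0
    for prev, cur in zip(speeds_mps, speeds_mps[1:]):
        accel = (cur - prev) / dt  # finite-difference acceleration
        if accel < threshold:
            count += 1
    return count

# One second of hypothetical speed samples at 10 Hz, with a sharp brake.
trace = [20.0, 20.0, 19.9, 19.0, 18.0, 17.9, 17.9, 17.9, 17.9, 17.9]
events = hard_brake_count(trace)
```

Aggregated over months of driving, simple features like this one become predictors in the frequency models discussed above.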

Third, the tools and applications to assemble, manipulate and analyze data are better than ever and continue to improve. Summarizing and segmenting data is no longer necessary to make analysis manageable. Today's predictive models can work directly with transaction-level data, even in volumes measured in terabytes.

Technological change has been profound. It has even shifted the focus of statistics away from traditional sampling theory, since an entire population can now easily be analyzed. State-of-the-art applications and contemporary programming languages such as R and Python allow insurers to handle very large and complex data sets, perform analytics, create meaningful data visualizations and build quite effective predictive models.

Further, analytic models are also changing, from merely descriptive to predictive and ultimately, to prescriptive. Claim triage applications, for example, are prescriptive because they analyze the attributes of a claim when it is reported and recommend the adjuster whose experience and expertise best match it.
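A prescriptive triage model can be as simple as a set of rules applied at first notice of loss. The sketch below is a minimal, hypothetical illustration: the claim attributes (`attorney_involved`, `est_severity`, `bodily_injury`), the $50,000 severity threshold and the adjuster tiers are invented for the example, and a real application would typically drive these recommendations from a fitted model rather than hand-written rules.

```python
def triage(claim):
    """Recommend an adjuster tier from attributes known at first report.

    `claim` is a dict with hypothetical keys:
      attorney_involved (bool), est_severity (float, dollars),
      bodily_injury (bool).
    """
    if claim["attorney_involved"] or claim["est_severity"] >= 50_000:
        return "senior_litigation"   # complex or high-severity claims
    if claim["bodily_injury"]:
        return "injury_specialist"   # injury claims need specialist handling
    return "fast_track"              # routine property damage

# Example: a represented claim is routed to a senior adjuster
# even though its initial severity estimate is low.
tier = triage({"attorney_involved": True,
               "est_severity": 1_000,
               "bodily_injury": False})
```

The prescriptive step is the recommendation itself: the model does not just score the claim, it tells the operation what to do with it.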
The availability of big data, coupled with technological innovation, is disrupting the traditional insurance company business model, moving it to one driven by analytics. Since data scientists and actuaries generally bring different skill sets to an analytics team, they will need to cross-pollinate until individual professionals can offer all three skills necessary for successful analytics: data hacking, modern statistical prowess and intimate insurance knowledge.

A discussion of some of the challenges insurers will face in incorporating these skills into their everyday operations follows in part two of the series.

CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.