Catastrophe modelling in a post-Katrina world

The reinsurance industry used catastrophe risk models long before Hurricane Katrina, but Katrina challenged the standards of these models. It called into question the quality of exposure data, how the models were used and their suitability for various business applications. ANZIIF’s Edward Vukovic reports.

Ten years on from Hurricane Katrina, we can confidently say that the quality of exposure data has improved dramatically. Companies across the world are taking a far more rational approach to model-based business decisions, and many multinational reinsurance companies and intermediaries have invested, and continue to invest, heavily in model research and evaluation.

But what was it like before Katrina wiped out ‘the Big Easy’, and how has that catastrophic event shaped cat modelling today?

Dr Jayanta Guin of AIR Worldwide (AIR) says the risk associated with hurricanes, particularly in the US, was the best understood of all natural catastrophe perils. Given the wealth of data available, insurers had considerable insight into the behaviour and effect of hurricanes.

However, as with anything, there was always room for improvement, and Katrina was a driver behind it.

“Katrina did not fundamentally change our approach to modelling the hurricane peril, but there were still lessons to be learned. The main focus fell on the problem of exposure data quality, which has led to significant improvements. Katrina also paved the way for enhancements in the way storm surge—and flooding in general—is modelled,” says Dr Guin.

Dr Robert Muir-Wood of Risk Management Solutions (RMS) says that before Katrina, hurricane modelling remained strongly influenced by Hurricane Andrew, which sat at the far end of the spectrum in having such a small proportion of storm surge losses.

“Katrina was the opposite of Andrew, creating more loss from flood than from wind. The flooding of New Orleans was itself a secondary consequence of the hurricane and became a catastrophe in its own right – what we now call a ‘Super-Cat’ when the secondary consequences become larger than the original catastrophe,” he adds.

The impact of Katrina on how hurricane risk models are developed has been huge, most notably changing the way modellers make assumptions.

“The understanding at the time was that intense storms at low latitudes were relatively small. Katrina, however, was enormous. That led us to make revisions in some of our assumptions,” says Dr Guin.  

He adds that Katrina also revealed insights into the vulnerability of commercial structures.

“A good example is the large number of casinos built on barges along the Mississippi coast. Today, there is much better recognition of the wide array of buildings that companies are insuring and our view of the vulnerability of commercial assets has increased as a result. In fact, I would say that overall our view of hurricane risk along the Gulf coast has increased.”

Dr Muir-Wood believes the biggest change in the modelling agenda relates to storm surge, with insurers recognising that it wasn’t just some add-on to a hurricane loss model.

“Storm surge losses are also far more concentrated than the wind losses, which gives much more opportunity to employ modelling, and in terms of ground-up losses storm surge could be just as important as the wind,” says Dr Muir-Wood. “This approach has been well validated in recent events such as Hurricane Ike and Superstorm Sandy, which further refined elements of our storm surge flood modelling capability, in particular around underground space.”

Modelling post-Katrina

The impact of Katrina went beyond the sheer physical devastation it caused: it was the driver behind the evolution of the storm surge modelling that is so commonplace today.

According to Dr Guin, storm surge modelling failed to get the attention it deserved because it wasn’t thought to be a major driver of overall hurricane losses. Katrina, along with storms like Ike and Sandy, has prompted a shift in thinking and helped develop more sophisticated models.

“At AIR we’ve brought to bear new science in terms of numerically based hydrodynamic modelling, the computer power necessary to handle high-resolution elevation data, and exhaustive analysis of detailed claims data to ensure that the model, the localised nature of the hazard, and improved exposure data combine in such a way as to validate well with datasets from multiple storms—not just one or two,” says Dr Guin.

“As developers of models, we need to be cautious and avoid over-calibrating to a single headline event; doing so will result in a model that will not validate well across an entire 10,000-year (or larger) catalogue of events.”
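
To make the catalogue idea concrete, here is a minimal sketch of how an occurrence exceedance probability (OEP) curve can be read off a simulated event set. The Poisson frequency and lognormal severity parameters below are invented for illustration; they are not AIR’s calibrated values or any vendor model’s.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stochastic catalogue: 10,000 simulated years of hurricane
# activity. The Poisson frequency and lognormal severity parameters are
# illustrative assumptions, not calibrated to any real model.
n_years = 10_000
annual_max = [
    rng.lognormal(mean=16.0, sigma=1.8, size=n).max() if n > 0 else 0.0
    for n in rng.poisson(lam=1.7, size=n_years)
]

# Occurrence exceedance probability (OEP): the probability that the largest
# single-event loss in a year exceeds a given level. Sorting the annual
# maxima in descending order gives the empirical OEP curve directly.
annual_max = np.sort(annual_max)[::-1]

for return_period in (100, 250, 500):
    # The loss exceeded with annual probability 1 / return_period.
    loss = annual_max[n_years // return_period - 1]
    print(f"{return_period}-year OEP loss: USD {loss:,.0f}")
```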

Dr Muir-Wood agrees, suggesting the old ways of modelling storm surges simply didn’t work.

“In the Gulf of Mexico, storm surges at landfall are commonly much higher than you would find by using the near-shore SLOSH model, because many storms lose intensity in the two days leading up to landfall, after the surge has already built up offshore. To capture the storm surge at landfall, one has to model the wind field, and the surface currents and waves generated by the wind, over far more of the storm’s life than just the period before landfall,” he says.

“There are really only two coupled ocean-atmosphere hydrodynamic models up to the task of being good enough for generating storm surge hazard information along the US coastline: the ADCIRC model and MIKE 21, developed by DHI.”
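
The arithmetic behind that critique can be shown with a deliberately crude proxy: if surge scales with time-integrated wind stress, a storm that weakens on approach carries far more surge than its landfall intensity alone suggests. The wind profile and the “surge index” below are illustrative assumptions, not any operational surge model.

```python
import numpy as np

# Toy comparison: surge generated over the full 48-hour approach versus a
# SLOSH-style estimate keyed only to landfall intensity. Values are
# Katrina-like but illustrative: Category 5 offshore (~75 m/s) weakening
# to Category 3 at landfall (~56 m/s).
hours = 48
wind = np.linspace(75.0, 56.0, hours)  # hourly wind speed on approach, m/s

# Surge index proxy: time-integrated wind stress, ~ wind speed squared.
full_track_index = float(np.sum(wind ** 2))          # whole approach window
landfall_only_index = float(wind[-1] ** 2 * hours)   # landfall wind throughout

print(f"Full-track surge index exceeds landfall-only by a factor of "
      f"{full_track_index / landfall_only_index:.2f}")
```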

While battered buildings and flooded homes seem relatively straightforward to model, Katrina raised the issue of non-modelled loss components such as damage due to polluted water, mould, tree falls and riots, none of which had previously been captured in models.

Indeed, the experience of Katrina triggered a revolution in thinking about the additional factors that drive up loss. Dr Muir-Wood tells us that there are four factors that tend to push losses beyond the simple hazard-exposure loss equation of traditional cat modelling.

“First there is ‘economic demand surge’ – when excess demand leads to price increases in materials and labour. Second there is ‘deterioration vulnerability’ – as seen widely in houses abandoned in New Orleans after Katrina. Even where a property was not flooded, if it had a hole in the roof, after a few weeks the whole interior was contaminated with mould,” he says.

Third comes the phenomenon of ‘claims inflation’, which arises when insurers are so overwhelmed with claims that they let claims below certain thresholds through without checking. Finally, there is ‘coverage expansion’.

“Coverage expansion typically occurs when political pressure forces insurers to pay beyond the terms of their policies – waiving deductibles, ignoring limits, and covering perils like flood. When the level of disruption is so high that urban areas are evacuated and BI losses simply run and run, as seen in the Christchurch 2010 and 2011 earthquakes, we call this a ‘Super-Cat’,” explains Dr Muir-Wood.

“In terms of our broader modelling agenda, we focus on trying to capture economic demand surge and claims inflation, and recommend stress tests or added defaults around coverage expansion. We also apply Super-Cat factors to the largest loss events affecting cities that could be prone to local evacuations.”
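
As a rough sketch of how such factors might be layered onto a base modelled loss, consider the following. Every multiplier, threshold and function name here is an invented placeholder for illustration, not an RMS parameter or method.

```python
# Minimal sketch of post-event loss amplification, in the spirit of the
# four factors Dr Muir-Wood describes. All values are illustrative
# assumptions, not calibrated parameters.

def amplified_loss(base_loss: float,
                   demand_surge: float = 1.15,        # price rises in materials/labour
                   deterioration: float = 1.05,       # e.g. mould in abandoned homes
                   claims_inflation: float = 1.08,    # unchecked small claims paid in full
                   coverage_expansion: float = 1.00,  # stress-test input, off by default
                   super_cat_threshold: float = 20e9, # apply Super-Cat factor above this
                   super_cat_factor: float = 1.25) -> float:
    """Scale a modelled ground-up loss by post-event amplification factors."""
    loss = base_loss * demand_surge * deterioration * claims_inflation
    loss *= coverage_expansion
    if base_loss > super_cat_threshold:
        # Largest events in evacuation-prone cities: BI losses "run and run".
        loss *= super_cat_factor
    return loss

# Example: USD 25bn base loss with a +10% coverage-expansion stress test.
print(f"Amplified loss: USD {amplified_loss(25e9, coverage_expansion=1.10):,.0f}")
```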

This article was originally published by ANZIIF on 10 September 2015.

CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.