Research on Hierarchical Compartmental Reserving Models published
Over the last year I worked with Jake Morris on a research paper for the Casualty Actuarial Society. We are delighted to see it published:
Gesmann, M., and Morris, J. “Hierarchical Compartmental Reserving Models.” Casualty Actuarial Society, CAS Research Papers, 19 Aug. 2020, https://www.casact.org/research/research-papers/Compartmental-Reserving-Models-GesmannMorris0820.pdf
The paper demonstrates how one can describe the dynamics of claims processes with differential equations and probability distributions. All of this is set in a Bayesian framework that allows us to combine expert judgement and historical data in a consistent way.
Unlike a ‘black-box’ machine learning approach, this is very much a transparent-box approach, in which all assumptions can be reviewed and challenged. And unlike some traditional reserving methods, which are often applied to the data first with judgement applied thereafter (shoot first, aim later), we encourage users to incorporate as much expertise as possible upfront into the model, while the data is used to judge the credibility of the input assumptions (aim first, shoot later). This is particularly helpful when entering a new product, line of business, or geography, or when changes to products and business processes make past data a less credible predictor.
The document gives a hands-on introduction to hierarchical compartmental reserving models, and where we use mathematical notation, we have tried to explain the intuition in plain English. We provide examples, with real-world case studies that the reader can replicate using the probabilistic programming language Stan via the brms package in R.
In our paper we bring together ideas that had been going through our heads for a number of years, which is very satisfying.
My four big takeaways from our research are:
- Thinking about the claims process as a flow of fluids, with premium or exposure moving from one claims stage to another, helps to describe the loss emergence process in a consistent way across the different metrics (Section 2).
- Modelling incremental claims payments is preferable to modelling cumulative data. Payments are incremental by nature, and as claims mature the payments get smaller and smaller, and so does the variance around them. Hence, modelling incremental payments simplifies the choice of a variance structure for the process distribution (Section 3.3).
- Hierarchical models provide an adaptive regularization framework in which the model learns how much weight subgroups of data should get, which can help to reduce overfitting. This is effectively a credibility weighting technique (Section 4.3).
- We have to distinguish between the expected and the ultimate loss ratio. The expected loss ratio (ELR) describes the mean of the outcome distribution if we could re-run the same accident year again and again; you may call it the pricing or planning loss ratio. Hence, it is not anchored to the latest cumulative paid position, unlike the ultimate loss ratio. We define the ultimate loss ratio (ULR) as the final actual loss ratio for a specific year: the latest cumulative paid position plus all future payments. So it is very specific to the claims experience to date. The ULR is crucial for estimating the level of reserves, while the ELR provides insight into pricing and the insurance cycle (Section 4.6).
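To make the "flow of fluids" idea in the first takeaway concrete, here is a minimal numerical sketch (not the paper's code, which uses Stan via brms in R) of one common compartmental formulation: exposure flows into outstanding claims, which flow into paid claims. The parameter names (`ker`, `kp`, `RLR`, `RRF`) follow typical compartmental-reserving notation, and the values below are made up for illustration.

```python
# Illustrative sketch of a three-compartment claims model, integrated with a
# simple Euler scheme. Not the paper's implementation; parameter values are
# invented for demonstration.

def compartmental_paid(premium=100.0, RLR=0.8, RRF=0.95,
                       ker=1.7, kp=0.5, t_max=10.0, dt=0.01):
    """Return (times, paid) for a single cohort, where
    dEX/dt = -ker * EX                   (exposure decays as claims occur)
    dOS/dt =  ker * RLR * EX - kp * OS   (claims reported, then settled)
    dPD/dt =  kp * RRF * OS              (outstanding claims are paid)
    """
    EX, OS, PD = premium, 0.0, 0.0
    times, paid = [0.0], [0.0]
    t = 0.0
    while t < t_max:
        dEX = -ker * EX
        dOS = ker * RLR * EX - kp * OS
        dPD = kp * RRF * OS
        EX += dEX * dt
        OS += dOS * dt
        PD += dPD * dt
        t += dt
        times.append(t)
        paid.append(PD)
    return times, paid
```

With these dynamics, cumulative paid claims approach `premium * RLR * RRF` as the cohort matures, and the incremental payments shrink over time, which also illustrates the second takeaway about incremental data.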
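The ELR/ULR distinction in the last takeaway can be shown with a few lines of arithmetic. All figures below are hypothetical and do not come from the paper; they simply show how the ULR is anchored to the claims experience to date while the ELR is not.

```python
# Hypothetical numbers to illustrate the ELR vs ULR distinction.
premium = 100.0
elr = 0.70                    # expected loss ratio: prior/pricing view of the mean
paid_to_date = 55.0           # latest cumulative paid position
expected_future_paid = 20.0   # estimate of all remaining payments

# ULR: latest cumulative paid plus all future payments, relative to premium
ulr = (paid_to_date + expected_future_paid) / premium   # 0.75

# Reserves follow from the ULR, not the ELR
reserve = ulr * premium - paid_to_date                  # 20.0
```

Here the year has run worse than priced (ULR of 0.75 against an ELR of 0.70), and it is the ULR that drives the reserve estimate.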