Stan

Notes from 4th Bayesian Mixer Meetup

Last Tuesday we got together for the 4th Bayesian Mixer Meetup. Product Madness kindly hosted us at their offices in Euston Square. About 50 Bayesians came along; the biggest turnout so far, including developers of PyMC3 (Peadar Coyle) and Stan (Michael Betancourt). The agenda had two feature talks by Dominic Steinitz and Volodymyr Kazantsev and a lightning talk by Jon Sedar.

Dominic Steinitz: Hamiltonian and Sequential MC samplers to model ecosystems

Dominic shared with us his experience of using Hamiltonian and Sequential Monte Carlo samplers to model ecosystems.

Fitting a distribution in Stan from scratch

Last week the French National Institute of Health and Medical Research (Inserm) organised, together with the Stan Group, a training programme on Bayesian Inference with Stan for Pharmacometrics in Paris. Daniel Lee and Michael Betancourt, who ran the course over three days, are not only members of Stan’s development team, but also excellent teachers. Both were supported by Eric Novik, who gave an Introduction to Stan at the Paris Dataiku User Group last week as well.

Notes from 3rd and 3.5th Bayesian Mixer Meetup

Two Bayesian Mixer meet-ups in a row. Can it get any better? Our third ‘regular’ meeting took place at Cass Business School on 24 June. Big thanks to Pietro and Andreas, who supported us from Cass. The next day, Jon Sedar of Applied AI managed to arrange a special summer PyMC3 event.

3rd Bayesian Mixer meet-up

First up was Luis Usier, who talked about cross-validation. Luis is a former student of Andrew Gelman, so, of course, his talk touched on Stan and the ‘loo’ (leave-one-out) package in R.

Notes from 2nd Bayesian Mixer Meetup

Last Friday the 2nd Bayesian Mixer Meetup (@BayesianMixer) took place at Cass Business School, thanks to Pietro Millossovich and Andreas Tsanakas, who helped to organise the event.

Bayesian Mixer at Cass

First up was Davide De March, talking about the challenges in biochemistry experimentation, which are often characterised by complex and emerging relations among components. The very limited prior knowledge about complex molecule bindings leaves a fertile field for probabilistic graphical models.

Bayesian Mixer on Meetup

We had our first successful Bayesian Mixer Meetup last Friday night at the Artillery Arms! We expected about 15-20 people to turn up when we booked the function room overlooking Bunhill Cemetery and Bayes’ grave. Now, looking at the photos taken during the evening, it seems that our prior belief was pretty good.

The event started with a talk from my side about some very basic Bayesian models, which I used a while back to get my head around the concepts in an insurance context.

Hierarchical Loss Reserving with Stan

I continue with the growth curve model for loss reserving from last week’s post. Today, following the ideas of James Guszcza [2], I will add a hierarchical component to the model by treating the ultimate loss cost of an accident year as a random effect. Initially, I will use the nlme R package, just as James did in his paper, and then move on to Stan/RStan [6], which will allow me to estimate the full distribution of future claims payments.
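To make the structure concrete, here is a minimal, hypothetical nlme sketch of such a model; the Weibull growth curve and the column names (`cum` for cumulative paid losses, `dev` for development period, `origin` for accident year) are my own illustrative assumptions, not necessarily Guszcza’s exact specification:

```r
library(nlme)

# Nonlinear mixed-effects model: cumulative losses follow a Weibull growth curve,
# with the ultimate loss 'ult' varying by accident year (random effect).
fit <- nlme(
  cum ~ ult * (1 - exp(-(dev / theta)^omega)),
  data   = losses,                      # hypothetical long-format loss triangle
  fixed  = ult + omega + theta ~ 1,     # population-level parameters
  random = ult ~ 1,                     # accident-year random effect on the ultimate
  groups = ~ origin,
  start  = c(ult = 5000, omega = 1.3, theta = 45)
)
summary(fit)
```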

Loss Developments via Growth Curves and Stan

Last week I posted a biological example of fitting a non-linear growth curve with Stan/RStan. Today, I want to apply a similar approach to insurance data, using ideas by David Clark [1] and James Guszcza [2]. Instead of predicting the growth of dugongs (sea cows), I would like to predict the growth of cumulative insurance loss payments over time, originating from different origin years. Loss payments of younger accident years are just like a new generation of dugongs: they start out small, grow as they get older, until the losses are fully settled.
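The basic structure, roughly following Clark’s curve-fitting approach, is that the expected cumulative loss of an accident year at development time $t$ is its ultimate loss scaled by a growth curve, for example the loglogistic curve he discusses:

$$\begin{aligned}
\operatorname{E}[CL_{AY}(t)] & = Ult_{AY} \cdot G(t \,|\, \omega, \theta)\\
G(t \,|\, \omega, \theta) & = \frac{t^\omega}{t^\omega + \theta^\omega}
\end{aligned}$$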

Non-linear growth curves with Stan

I suppose the go-to tool for fitting non-linear models in R is nls from the stats package. In this post I will show an alternative approach with Stan/RStan, as illustrated in the example Dugongs: “nonlinear growth curve”, which is part of Stan’s documentation. The original example itself is taken from OpenBUGS. The data describes the length and age measurements for 27 captured dugongs (sea cows). Carlin and Gelfand (1991) model the data using a nonlinear growth curve with no inflection point and an asymptote as $x_i$ tends to infinity:
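The curve in that example (using $x_i$ for age and $Y_i$ for length, as in the OpenBUGS write-up, to the best of my recollection) has the form

$$\begin{aligned}
Y_i & \sim \mathcal{N}(\mu_i, \sigma^2)\\
\mu_i & = \alpha - \beta \gamma^{x_i}, \qquad \alpha, \beta > 0,\ 0 < \gamma < 1
\end{aligned}$$

so that $\mu_i$ approaches the asymptote $\alpha$ as $x_i$ tends to infinity.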

Bayesian regression models using Stan in R

It seems the summer is coming to an end in London, so I shall take a final look at my ice cream data that I have been playing around with to predict sales statistics based on temperature for the last couple of weeks [1], [2], [3]. Here I will use the new brms (GitHub, CRAN) package by Paul-Christian Bürkner to derive the 95% prediction credible interval for the four models I introduced in my first post about generalised linear models.
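As a rough illustration of the brms workflow for one of those models, a Poisson regression of units sold on temperature might look like the sketch below; the data frame and column names (`icecream`, `units`, `temp`) are placeholders for the ice cream data:

```r
library(brms)

# Poisson GLM: units sold as a function of temperature (placeholder column names)
fit <- brm(units ~ temp, family = poisson(), data = icecream)

# 95% prediction credible intervals for a few new temperatures
newdata <- data.frame(temp = seq(10, 30, by = 5))
predict(fit, newdata = newdata, probs = c(0.025, 0.975))
```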

Visualising the predictive distribution of a log-transformed linear model

Last week I presented visualisations of theoretical distributions that predict ice cream sales statistics based on linear and generalised linear models, which I introduced in an earlier post.

Theoretical distributions

Today I will take a closer look at the log-transformed linear model and use Stan/rstan not only to model the sales statistics, but also to generate samples from the posterior predictive distribution. The posterior predictive distribution is what I am most interested in.
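To spell out the structure (my notation, with $x_i$ for temperature and $y_i$ for units sold), the log-transformed linear model assumes

$$\begin{aligned}
\log(y_i) & \sim \mathcal{N}(\mu_i, \sigma^2)\\
\mu_i & = \alpha + \beta x_i
\end{aligned}$$

and posterior predictive samples on the original scale can then be generated, e.g. in a Stan generated quantities block, by drawing $\tilde{y} = \exp(\tilde{z})$ with $\tilde{z} \sim \mathcal{N}(\alpha + \beta x, \sigma^2)$ for each posterior draw of $(\alpha, \beta, \sigma)$.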

Posterior predictive output with Stan

I continue my Stan experiments with another insurance example. Here I am particularly interested in the posterior predictive distribution from only three data points. Or, to put it differently, I have a customer with three years of claims history and I’d like to predict the expected claims cost for the next year to set or adjust the premium. The example is taken from section 16.17 in Loss Models: From Data to Decisions [1]. Some time ago I used the same example to get my head around a Bayesian credibility model.

Hello Stan!

In my previous post I discussed how Longley-Cook, an actuary at an insurance company in the 1950s, used Bayesian reasoning to estimate the probability of a mid-air collision of two planes. Here I will use the same model to get started with Stan/RStan, a probabilistic programming language for Bayesian inference. Last week my prior was given as a Beta distribution with parameters $\alpha=1, \beta=1$ and the likelihood was assumed to be a Bernoulli distribution with parameter $\theta$: $$\begin{aligned}
\theta & \sim \mbox{Beta}(\alpha=1, \beta=1)\\
y & \sim \mbox{Bernoulli}(\theta)
\end{aligned}$$
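As a minimal getting-started sketch of that Beta-Bernoulli model in RStan (the data below are purely illustrative, not Longley-Cook’s actual collision record), the code could look roughly like this:

```r
library(rstan)

model_code <- "
data {
  int<lower=0> N;                   // number of observed periods
  int<lower=0, upper=1> y[N];       // 1 = mid-air collision in that period, 0 = none
}
parameters {
  real<lower=0, upper=1> theta;     // collision probability
}
model {
  theta ~ beta(1, 1);               // flat Beta(1, 1) prior
  y ~ bernoulli(theta);             // Bernoulli likelihood
}
"

# Illustrative data only: five periods without a collision
dat <- list(N = 5, y = rep(0L, 5))

fit <- stan(model_code = model_code, data = dat, iter = 2000, chains = 4)
print(fit)
```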