I was reading Nassim Taleb’s most recent paper this morning, and a short passage caught my attention.
It reminded me of a simple risk pricing idea I had (and used) 15 years ago that nearly got me laughed out of the room. The idea isn’t applicable to general situations – just pretty simple ones in which you have low-probability, insurance-like risk. So a situation something like:
(i) The event you are trying to price has a low probability -> I’ll say 1/k, and
(ii) you have a known (and binary) payout if the event happens, and
(iii) you keep whatever premium you collect if the event doesn’t happen.
The original setting for the conversation was providing insurance for Pepsi’s “Play for a Billion” contest, in which there was a 1 in 1,000 chance that someone would win $1 billion.
Most of the traditional ways that people think about insurance risk fall flat here. Do you want 10% more than the “loss cost” of $1 million? So, an extra $100,000 to take a 1 in 1,000 shot at losing $1 billion? Maybe 50% more?
My idea was to imagine taking the bet over time and look at the probability that you would collect enough money to pay one loss before the first loss actually happened. The math behind this idea makes for a nice Calculus 1 example. Assume for simplicity that your payout will be $1.
Probability of loss -> 1/k
Premium collected each time -> x/k, so you are getting “x” times the loss cost.
It will thus take you k/x trials to collect enough money to pay a single loss. The probability of not paying out in the first k/x trials is:
(1 - 1/k)^(k/x) ≈ e^(-1/x) since k is large.
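As a quick check on that approximation (a Python sketch; k = 1000 as in the contest, and the sample markups x are my choices), the exact probability (1 - 1/k)^(k/x) is already very close to e^(-1/x):

```python
import math

def exact_prob(k, x):
    # Exact chance of no payout in the first k/x trials
    return (1 - 1 / k) ** (k / x)

def approx_prob(x):
    # Large-k limit: e^(-1/x)
    return math.exp(-1 / x)

k = 1000  # a 1-in-1,000 event, as in the contest
for x in (1.0, 3.5, 9.5):
    print(f"x = {x}: exact = {exact_prob(k, x):.4f}, approx = {approx_prob(x):.4f}")
```

For k = 1000 the two agree to about three decimal places, so the simple e^(-1/x) formula is all you need.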
This math gives us a very simple formula which is independent of the probability (as long as the probability is low). If, say, we want a 75% chance of collecting the $1 before paying out the $1, we have to charge 3.5x the loss cost, since e^(-1/3.5) ≈ 0.75.
For a 90% chance you’d have to charge about 9.5x the loss cost, since e^(-1/9.5) ≈ 0.90.
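Both multipliers come from inverting the approximation: hitting a target probability p requires x = -1/ln(p). A small Monte Carlo sketch (the trial count and seed below are arbitrary choices of mine) also confirms the time-average reading of the bet:

```python
import math
import random

def markup_for(p):
    # Invert e^(-1/x) = p  =>  x = -1/ln(p)
    return -1 / math.log(p)

def p_collect_before_loss(k, x, n_sims=20_000, seed=1):
    # Estimate the chance of banking a full $1 in premiums (x/k per trial)
    # before the first 1-in-k loss hits.
    rng = random.Random(seed)
    trials_needed = math.ceil(k / x)  # trials needed to collect $1
    wins = sum(
        all(rng.random() >= 1 / k for _ in range(trials_needed))
        for _ in range(n_sims)
    )
    return wins / n_sims

print(markup_for(0.75))  # ≈ 3.48x for a 75% chance
print(markup_for(0.90))  # ≈ 9.49x for a 90% chance
print(p_collect_before_loss(1000, 3.5))  # should land near 0.75
```

The simulation just plays the bet repeatedly at markup x and records how often you collect a full $1 in premiums before the first loss, which is exactly the question the formula answers.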
Anyway, as I said, this approach was laughed at by just about everyone on the other side of the table. I’m glad to see serious discussion about time averages rising to the top now!