I saw an interesting tweet from Chris Long earlier in the week:
In case the underlying tweet isn’t displaying properly, here’s the question he was answering:
As I was going to write up this post I saw that he wrote a short blog post about the problem, too:
So, seeing Long’s tweet reminded me of a problem I’ve been thinking about since my boss asked me about it back in 2002:
Which of these two bets would you prefer:
(i) I give you $10 and you pay me back $100 if a 1 in 50 chance happens.
(ii) I give you $60 and you pay me back $100 if a 1 in 2 chance happens.
There are lots of ways to think about these two bets. You might prefer the second bet because you have an expectation of +$10 vs. +$8 for the first one. Or maybe you like the first bet because you are getting paid five times the expected loss, versus only 1.2 times the expected loss in the second bet.
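As a quick sanity check on those numbers (my own sketch, not from Long's post), the expectations and premium-to-expected-loss ratios work out as follows:

```python
# Bet 1: receive $10, pay $100 with probability 1/50.
# Bet 2: receive $60, pay $100 with probability 1/2.
p1, premium1, loss = 1 / 50, 10, 100
p2, premium2 = 1 / 2, 60

ev1 = premium1 - p1 * loss   # expected value of bet 1: +$8
ev2 = premium2 - p2 * loss   # expected value of bet 2: +$10

# Premium received as a multiple of the expected loss.
ratio1 = premium1 / (p1 * loss)  # 10 / 2  = 5.0
ratio2 = premium2 / (p2 * loss)  # 60 / 50 = 1.2

print(ev1, ev2, ratio1, ratio2)
```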
Over the years, though, I never really made any progress on a better way to think about the problem. Seeing Long's tweet showed me a new way (and surely the right way) to think about it.
Long’s post approaches the problem via maximizing log utility – I’ll approach mine by looking at the geometric mean returns. The two approaches are the same.
If I start with $X, the geometric mean return on the first bet is:

((X + 10)/X)^(49/50) · ((X − 90)/X)^(1/50)
The geometric mean return on the second bet is:

((X + 60)/X)^(1/2) · ((X − 40)/X)^(1/2)
A quick calculation on Wolfram Alpha shows that the first bet is better if your starting capital is between $100 and roughly $560, and the second one is better above that.
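The same crossover can be found numerically instead of on Wolfram Alpha. A minimal sketch, comparing geometric mean ending wealth for each bet and bisecting for the point where the two are equal:

```python
# Geometric mean ending wealth starting from capital x.
# Bet 1: receive $10, pay $100 with probability 1/50.
def g1(x):
    return (x + 10) ** (49 / 50) * (x - 90) ** (1 / 50)

# Bet 2: receive $60, pay $100 with probability 1/2.
def g2(x):
    return (x + 60) ** 0.5 * (x - 40) ** 0.5

# Bet 1 wins for small capital, bet 2 for large; bisect for the crossover.
lo, hi = 200.0, 2000.0
for _ in range(100):
    mid = (lo + hi) / 2
    if g1(mid) > g2(mid):
        lo = mid
    else:
        hi = mid
print(round(lo))  # crossover near $560
```

Dividing both sides by x (to get returns rather than ending wealth) doesn't change the comparison, so the code compares wealths directly.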
Most betting problems I've seen are formulated in a slightly different way than this one. The usual framing is: you have $X and an advantage of y%, how much should you bet? Until I saw Long's tweet it never occurred to me to think about my problem as a Kelly betting problem. So, I'm happy to have seen his tweet and happy to finally have a good way to compare different types of insurance-like bets.