We’ve been traveling for a few days, and I finally got some downtime on the drive home from Boston today. I have Jordan Ellenberg’s “How Not to Be Wrong” on audiobook and was listening to chapter 10, “Are You There, God? It’s Me, Bayesian Inference.”
I really like one of the examples that he gives and want to figure out a way to turn it into a project with my kids. Here’s roughly what he goes through:
You have a coin drawn from a large population of coins. In that population, 10% of the coins flip heads 60% of the time (and tails 40% of the time), 80% are “fair” coins that flip heads and tails 50% of the time each, and the remaining 10% flip heads 40% of the time and tails 60% of the time.
The example he works through is this: you flip the coin you’ve selected 5 times and get heads every time. What is the probability that you selected one of the fair coins?
Working through Bayes’ rule with those numbers, the answer comes out to about 74%.
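As a sanity check, the calculation is short enough to script. This sketch just applies Bayes’ rule to the setup above (the coin-type labels and variable names are my own):

```python
# Posterior probability of a fair coin after seeing 5 heads in a row,
# using the population described above: 10% / 80% / 10% priors.
priors = {"60% heads": 0.10, "fair": 0.80, "40% heads": 0.10}
p_heads = {"60% heads": 0.60, "fair": 0.50, "40% heads": 0.40}

n_heads = 5
# Likelihood of 5 straight heads under each coin type.
likelihoods = {coin: p_heads[coin] ** n_heads for coin in priors}

# Bayes' rule: posterior = prior * likelihood, normalized over all types.
evidence = sum(priors[c] * likelihoods[c] for c in priors)
posterior_fair = priors["fair"] * likelihoods["fair"] / evidence

print(f"P(fair | 5 heads) = {posterior_fair:.3f}")  # ≈ 0.740
```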
Here’s what I wondered, though: suppose you keep getting heads on every flip. How many flips do you need before it is more likely that you’ve selected one of the 60%-heads coins than one of the fair coins?
I haven’t worked through the numbers yet, but I never seem to guess very well on problems like this. I’m trying to formulate a good guess before doing the number crunching.
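When it comes time for the number crunching, the comparison reduces to unnormalized posteriors: the 60%-heads coin overtakes the fair coin once 0.1 · 0.6ⁿ exceeds 0.8 · 0.5ⁿ. A short loop (assuming the 10% and 80% priors from the setup above) finds the crossover:

```python
# Smallest number of consecutive heads n at which a 60%-heads coin
# becomes more likely than a fair coin, given priors of 10% and 80%.
prior_biased, prior_fair = 0.10, 0.80

n = 0
while prior_biased * 0.6 ** n <= prior_fair * 0.5 ** n:
    n += 1

print(n)  # the unnormalized posteriors cross at 12 straight heads
```

Equivalently, you need (0.6/0.5)ⁿ > 8, i.e. n > log 8 / log 1.2 ≈ 11.4, so the twelfth head is the tipping point.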
Interestingly, I don’t see an intuitive reason why the fair coins “steal” likelihood from the heads-deficient coins faster than the heads-surplus coins do. Or is this a case where I should look at the increase in percentage terms?
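One way to probe that intuition is to print all three posteriors after each successive head. Note that the fair coins’ share doesn’t move at all after the first flip: the 60%- and 40%-heads coins carry equal prior weight and their heads probabilities average to the fair coins’ 0.5, so what the fair coins lose to one biased type they gain back from the other. In percentage terms, though, the 60%-heads coins grow by 20% on that first flip while the 40%-heads coins shrink by 20%.

```python
# Track the posterior over all three coin types after each successive head.
priors = [0.10, 0.80, 0.10]     # 60%-heads, fair, 40%-heads
p_heads = [0.60, 0.50, 0.40]

post = priors[:]
for flip in range(1, 6):
    # Bayesian update for one more observed head: multiply each type's
    # probability by its per-flip heads probability, then renormalize.
    post = [p * h for p, h in zip(post, p_heads)]
    total = sum(post)
    post = [p / total for p in post]
    print(flip, [round(p, 3) for p in post])
```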
Where I get stuck in real life is when the prior is not very clear. For example, suppose you know only that the population contains these three types of coins, but you don’t know the overall population distribution. What is your distribution after 5 heads then? With no other information, do you start with an equal distribution among the three types, or something else?