Today I dove in a little more to see if he could spot some of the patterns that emerge in the distribution. We started with a quick review and a look at data from a few simulations I ran:
Next we looked at the data from four simulations with averages of 1, 2, 3, and 4 events expected per year. It was a little hard for him to see the overall pattern, but after a few hints he was able to see what was going on:
To wrap up today, we looked at the pattern from the simulations and tried to write down the pattern that we’d expect to see for an event that happens 5 times per year on average. At the end of this video he was able to write down a formula for the general pattern!
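The general pattern here is the Poisson distribution: if an event happens λ times per year on average, the chance of seeing exactly k events in a year is λ^k e^(−λ) / k!. Here's a quick sketch checking that formula for the λ = 5 case he worked out:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k events when lam are expected on average."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Pattern for an event that happens 5 times per year on average:
for k in range(11):
    print(f"P({k} events) = {poisson_pmf(k, 5):.4f}")
```

The probabilities peak near k = 5 and tail off on either side, which matches the shape we saw in the simulation data.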
For our project today, I thought it would be fun to talk about the Poisson distribution. For me it is one of the most interesting and important ideas in probability. This question, for instance, is fascinating: if a random event happens on average once per time period, what is the probability that it happens twice?
I started the introduction with a version of the idea I mentioned above and asked my son for some estimates of what he thought the answer would be:
Then we looked at some simulations. Here I’m looking at the idea of a random event that happens on average once per year and chopping the year up into 52 weeks:
Next I chopped the year up into 365 days – would we get different answers?
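A minimal version of this kind of simulation might look like the sketch below (the names and structure are my own, not the program from the videos). Each slice of the year gets an independent 1-in-`slices` chance of the event, so the event happens once per year on average:

```python
import random

def simulate(slices, years=20_000, seed=0):
    """Chop a year into `slices` pieces. In each piece the event fires
    with probability 1/slices, so it happens once per year on average.
    Returns the fraction of simulated years with exactly two events."""
    rng = random.Random(seed)
    p = 1 / slices
    twice = sum(
        1 for _ in range(years)
        if sum(rng.random() < p for _ in range(slices)) == 2
    )
    return twice / years

print(simulate(52))   # 52 weeks
print(simulate(365))  # 365 days; both land near e^-1 / 2! ≈ 0.184
```

Chopping the year finer and finer barely changes the answer, which is exactly the limiting behavior that makes the Poisson distribution appear.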
This project turned out to be a little more interesting to my son than I was expecting – I’m looking forward to exploring Poisson distributions a bit more next week.
Last week I learned about a really interesting probability problem from Pasquale Cirillo:
Today I asked him to think about the problem while I was out and when I got back home he walked me through his solution:
The last video shows his general approach – now he calculated the answer:
To wrap up I showed him how to modify his original argument just a bit to avoid the infinite series calculation. This is a much shorter way to solve the problem, but does require a bit more mathematical sophistication:
I really love the problem and think that it is a great one to share with kids. Even if kids can’t quite solve it, it would be really fun to hear their thought process and how they might estimate the probability of winning.
Yesterday we did a project on this fun problem from Futility Closet:
Today we finished the project by talking about the 2nd part of the problem and then having a discussion about why the answers to the two questions were different. Unfortunately I made two camera goofs filming this project – forgetting to zoom out in part 1 and running out of memory in part 4 – but if you go through all 4 videos you’ll still get the main idea.
Here’s the introduction to the problem and my son’s solution to the 2nd part of the problem. Again, sorry for the poor camera work.
Next we went to the computer to verify that the calculations were correct – happily, we agreed with the answer given by Futility Closet.
In the last video my son was struggling to see why the answers to the two questions were so different, so I’d written two simulations to highlight the difference. In this part we talked through the simulations, but he was still confused.
Here we try to finish the conversation about the difference, and we did get most of the way to the end. Probably just needed 30 extra seconds of recording time 😦 But, at least my son was able to see why the answers to the two questions are different and the outputs from the simulations finally made sense to him.
So, not the best project from the technical side, but still a fun problem and a really interesting idea to talk through with kids.
This problem teaches a couple of good counting lessons. Today we focused on the first part – if you have at least one ace, what is the probability that you have more than one? First, though, we talked through the problem to make sure my son understood it:
Next I asked my son to work through the calculation for the number of hands that have “at least one ace.” He made a pretty common error in that calculation, and we discussed why his calculation wasn’t quite correct:
Now we talked about how to correct the error from the last video via complementary counting:
Now that we had the number of hands that had at least one ace, we wanted to count the number of hands with more than one ace. My son was able to work through this complementary counting problem, which was really nice to see:
Finally, since we had all of our numbers written down as binomial coefficients and these numbers were going to be difficult to compute directly, we went to Mathematica for a final calculation:
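For readers without Mathematica, the same binomial coefficient calculation can be done with Python's `math.comb`. I'm assuming a standard 13-card bridge hand here; adjust the hand size if the problem used a different deal:

```python
from math import comb

deck, hand, aces = 52, 13, 4
total = comb(deck, hand)

# Complementary counting: hands with at least one ace = all hands minus
# the hands drawn entirely from the 48 non-aces.
at_least_one = total - comb(deck - aces, hand)

# Exactly one ace: choose which ace, then fill the rest from the non-aces.
exactly_one = aces * comb(deck - aces, hand - 1)

more_than_one = at_least_one - exactly_one
print(more_than_one / at_least_one)  # ≈ 0.3696
```

Working with the counts as exact integers and dividing only at the end avoids any floating-point trouble with these enormous binomial coefficients.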
Excited to continue this project tomorrow and hear my son’s explanation for the seeming paradox.
Yesterday’s discussion helped the boys understand the problem that Ellenberg discusses in chapter 10 of his book a bit better (hopefully, anyway!). Today we took a crack at replicating the calculations in the book relating to the roulette wheel example.
First we revisited the example from the book to make sure we had a good handle on the problem:
Next we talked through the details of the process that we’ll have to follow to replicate the calculations that Ellenberg does. Following the discussion here the boys did the calculations off camera:
Here we talk through the numbers that the boys found off camera – happily we agreed with the numbers in the book.
At the end of this video I introduce a slight variation on the problem – instead of getting R, R, R, R, R in a test of 5 rolls, we get an alternating sequence of R and B for 20 rolls:
Here are their answers – and a discussion of why they think the answers make sense – for the new case I introduced in part 3 of the project:
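The updating process the boys followed can be sketched in a few lines of code. The priors and wheel biases below are placeholders I picked for illustration – they are not the numbers from Ellenberg's book:

```python
# Three competing hypotheses about the wheel, each with its chance of red.
# These particular values and the equal priors are made up for the sketch.
hypotheses = {"fair": 0.5, "red-biased": 0.6, "black-biased": 0.4}
priors = {h: 1 / 3 for h in hypotheses}

def posterior(rolls):
    """Bayes' theorem: posterior is proportional to prior x likelihood."""
    weights = {}
    for h, p_red in hypotheses.items():
        likelihood = 1.0
        for roll in rolls:
            likelihood *= p_red if roll == "R" else 1 - p_red
        weights[h] = priors[h] * likelihood
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

print(posterior("RRRRR"))    # five reds: the red-biased wheel gains
print(posterior("RB" * 10))  # alternating rolls favor the fair wheel
```

Running both observation sequences through the same update rule is what makes the contrast in part 3 so clear: a streak of reds shifts belief toward a biased wheel, while the alternating sequence shifts it back toward a fair one.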
This two-project combination was really fun. My younger son said that he was confused by the roulette wheel example, but I think after these two projects he understands it. I think it is a challenging example for a 9th grader to understand, but with a little discussion it is an accessible example. It certainly makes for a nice way to share some introductory ideas about Bayesian inference.
My younger son is reading Jordan Ellenberg’s How Not to Be Wrong and the chapter talking about Bayes’ Theorem caught his attention this week. Looking around for something related to talk about in a project, I found this interesting problem on Wikipedia:
Before talking through that problem, though, we talked about the roulette wheel example from Ellenberg’s book:
Next we began to talk through the problem from Wikipedia. This part of the project shows the initial reaction and some thoughts on the problem from the boys:
Finally, with the initial thoughts out of the way we moved on to solving the problem. My older son was seeing these ideas cold, but what was really neat to me in this part is that the ideas from Ellenberg’s book really helped my younger son see how to solve this problem:
I feel like I got a bit lucky with this project. The ideas about updating probabilities looked a bit too difficult to go through in a 15 minute project – especially since my older son was seeing them for the first time. With this introduction, though, I think we can compute / verify the updated probabilities in the roulette wheel example from Ellenberg’s book in a project tomorrow.
This morning I accidentally stumbled on an old coin flipping game we looked at last year:
I thought it would be fun to take a look at the problem again since the last look was long enough ago that the boys probably wouldn’t remember it. Here are their initial thoughts on the problem. After a bit of discussion, the boys came up with a good argument for why HHHT only would appear more often than HHHH only.
Next we looked through a simple computer program I wrote to model the situation. This isn’t the best or most clever way to write the program, but I thought it was an easy one to explain:
Finally, we looked at how the numbers would change if the sequence had 50 flips instead of 20. It was interesting to hear the boys explain why the numbers had changed – I think this extra discussion helped them understand the original problem a bit better:
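Here's a minimal sketch of that kind of simulation (my own version, not the program from the video): for each trial we generate a random sequence of flips and check whether each pattern appears at least once.

```python
import random

def run(n_flips, trials=20_000, seed=1):
    """Count in how many random flip sequences each pattern appears."""
    rng = random.Random(seed)
    counts = {"HHHT": 0, "HHHH": 0}
    for _ in range(trials):
        flips = "".join(rng.choice("HT") for _ in range(n_flips))
        for pattern in counts:
            if pattern in flips:
                counts[pattern] += 1
    return counts

print(run(20))  # HHHT appears in noticeably more sequences than HHHH
print(run(50))  # with 50 flips both are more common, but the gap remains
```

The intuition is that occurrences of HHHH can overlap with each other (HHHHH contains two), so they cluster into fewer sequences, while HHHT occurrences spread out more evenly.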
Today I had my son explore a little further. He was interested to see if different starting positions led to different distributions of endings. He looked at five different starting positions. Here’s the first (with a quick review of the problem) when the urn starts with 5 black and 5 white balls and we play the game 1,000 times:
Next he looked at how the starting position with 1 black ball and 5 white balls evolved. The way the distribution of the number of white balls at the end changes is pretty amazing:
Now for the most surprising one of all – the starting position with 1 white ball and 1 black ball – it seems that ending with 1 white ball or 1001 white balls (or any amount in between!) is equally likely:
Finally he looked at the starting position with 1 black ball and 10 white balls. This one is a little less surprising having already seen the 1 black ball and 5 white ball game, but still it was neat to see:
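For readers who want to experiment, here's a short sketch of the game, assuming it follows the classic Pólya urn rule: draw a ball at random, then return it along with one new ball of the same color.

```python
import random

def play(white, black, draws=1000, rng=random):
    """Polya urn: draw a ball at random, put it back along with one
    more ball of the same color. Returns the final white-ball count."""
    for _ in range(draws):
        if rng.random() < white / (white + black):
            white += 1
        else:
            black += 1
    return white

rng = random.Random(2)
finals = [play(1, 1, rng=rng) for _ in range(2000)]
# Starting from 1 white and 1 black, every ending from 1 to 1001 white
# balls is equally likely, so the average lands near the middle (~501).
print(sum(finals) / len(finals))
```

Changing the starting counts in `play` reproduces the other experiments: starting positions with more balls of one color pull the distribution of endings away from uniform.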
This is a fun little game for kids to study. It is also a nice introductory programming exercise, too. Thanks so much to Ole and Marcos for sharing their ideas!