# Dave Radcliffe’s polynomial activity part 2

Last week I saw some really neat tweets from Dave Radcliffe. For example:

Those tweets led to a fun project yesterday:

Dave Radcliffe’s polynomial activity day 1

Today I had each of the boys explore $(1 + x + x^2)^n$ mod 2 and mod 3. This is a harder exploration to do by hand (and made harder because I was out this morning and they worked on it alone). Still, it was interesting to hear what they had to say.

My younger son chose the more complicated activity of looking at the powers mod 3. Here’s what he found:

We then went to the computer to check if any of the patterns he thought were there would continue. He had some ideas but unluckily none of them worked. We’ll play more later to see if we can crack the code on the patterns:

Next I talked to my older son. He looked at powers of the polynomial $(1 + x + x^2)$ mod 2.

Here’s what he noticed:

He didn’t have any conjectures, so I showed him the picture that Dave Radcliffe tweeted and that led to him seeing some additional patterns in what he’d written down on the sheet of paper:

So, I’m glad I saw Dave’s tweets because this project is a great computer math exercise. Exploring powers of these polynomials would have been next to impossible without the computer help, but with the computer help we were able to explore a few patterns. It’ll be fun to try to find ways to explore the patterns a bit more and see what we can find.
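We did all of the computing in Mathematica, but if you'd like to play along without it, here's a little Python sketch (my own code, not something from the project) that computes the coefficients of $(1 + x + x^2)^n$ mod a number:

```python
# A sketch of the computation behind this project (we actually used
# Mathematica): compute the coefficients of (1 + x + x^2)^n mod m
# with plain coefficient lists, lowest degree first.

def poly_mult(p, q, m):
    """Multiply two polynomials (coefficient lists) and reduce mod m."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] = (result[i + j] + a * b) % m
    return result

def trinomial_power_mod(n, m):
    """Coefficients of (1 + x + x^2)^n reduced mod m."""
    result = [1]
    for _ in range(n):
        result = poly_mult(result, [1, 1, 1], m)
    return result

# Print the first several rows mod 2 to hunt for patterns.
for n in range(8):
    print(trinomial_power_mod(n, 2))
```

Swapping the 2 for a 3 in the loop at the bottom gives the mod 3 rows my younger son was exploring.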

# Dave Radcliffe’s polynomial activity day 1

Saw this really fun tweet from Dave Radcliffe yesterday:

This looked like a fun project for kids, though it wasn’t obvious how to get started. It turns out that Mathematica has a handy function called PolynomialMod[] that tells you what a polynomial looks like modulo an integer – so that made life easier!

I decided that for today’s project we’d explore $(1 + x)^n$ using Mathematica and see what patterns we could find. The introduction to today’s project involved introducing basic polynomial multiplication. Luckily, a natural way to multiply polynomials looks a lot like multiplying 2-digit numbers. I used that connection to introduce the project:
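The connection between polynomial multiplication and multi-digit multiplication can be made concrete in a few lines of code (a Python sketch of my own, not part of the project): the number 12 is just the polynomial $2 + x$ evaluated at $x = 10$, so multiplying polynomials column by column mirrors multiplying numbers digit by digit.

```python
# Sketch of the "polynomials are like multi-digit numbers" idea.
# Coefficient lists are lowest degree first, so 12 -> [2, 1] (2 + 1*x).

def poly_mult(p, q):
    """Multiply two polynomials given as coefficient lists."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

def evaluate(p, x):
    """Evaluate a polynomial at x."""
    return sum(c * x**i for i, c in enumerate(p))

# 12 * 13 as (2 + x) * (3 + x) with x = 10:
product = poly_mult([2, 1], [3, 1])   # [6, 5, 1], i.e. 6 + 5x + x^2
print(evaluate(product, 10))          # 156, which is 12 * 13
```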

After the introduction I had the boys play on Mathematica and compute various powers of $(1 + x)^n$ starting with $(1 + x)^0$. We got a little confused between Fibonacci numbers and Pascal’s triangle, but here is what they saw:

For the last part of the project today we used PolynomialMod[] to look at the various powers of $(1 + x)^n$ mod 2. I wanted to get them used to this Mathematica function to make it easier to explore $(1 + x + x^2)^n$ mod 2 tomorrow. After they explored the powers of $(1 + x)^n$ mod 2 up to n = 8, we talked about patterns in the numbers:
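For anyone who wants to try the same exploration without Mathematica's PolynomialMod[], here's a small Python sketch (my own, using the fact that the coefficients of $(1 + x)^n$ are the binomial coefficients):

```python
from math import comb

# The coefficients of (1 + x)^n are the binomial coefficients C(n, k),
# so reducing the polynomial mod 2 is the same as reducing a row of
# Pascal's triangle mod 2.

def row_mod(n, m):
    """Coefficients of (1 + x)^n reduced mod m."""
    return [comb(n, k) % m for k in range(n + 1)]

# n = 0 through 8, as in today's project.
for n in range(9):
    print("".join(str(c) for c in row_mod(n, 2)))
```

Printing the rows one above the other makes the pattern in the mod 2 coefficients much easier to spot.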

So, a fun little computer math project. It was fun to hear the kids talk about the patterns and also fun to talk about some basic ideas like polynomial multiplication and modular arithmetic. Definitely excited to explore some of the more complicated patterns tomorrow.

# Sharing neural networks with kids day 2

I asked the boys what they wanted to do for our math project today and they both wanted to learn more about neural networks. Our project from yesterday is here:

Sharing a fun neural network program with kids

and the program we are using is here:

A Neural Network Playground

The things the boys were most interested in were the drop-down features at the top of the program. Essentially their questions were:

(1) What does the activation feature do?

(2) What does regularization mean?

(3) What is the difference between classification and regression?

So . . . all of these questions are a little bit beyond 5th and 7th grade, but I did my best (though I punted on (2) to try to give a better explanation of the others).

In the first part we talked about the difference between classification and regression:

In the second part we talked about what activation functions do. This idea is probably well beyond what kids can understand in any detail, but the program actually proved to be a good tool to illustrate the point.
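For reference, here's a tiny Python sketch (mine, not part of the Playground itself) of a few common activation functions like the ones the Playground lets you choose. Each one is just a rule for how strongly a neuron "fires" for a given input:

```python
import math

# Three common activation functions, sketched from scratch.

def relu(x):
    """Zero for negative inputs, the input itself otherwise."""
    return max(0.0, x)

def sigmoid(x):
    """Squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes any input into the range (-1, 1)."""
    return math.tanh(x)

# Compare how each function responds to the same inputs.
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(f"x={x:5.1f}  relu={relu(x):5.2f}  "
          f"sigmoid={sigmoid(x):.2f}  tanh={tanh(x):5.2f}")
```

Plotting (or just tabulating) these side by side gives a feel for why changing the activation function changes the shapes the network can learn.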

It is actually amazing to hear what kids have to say when they are trying to digest some of these ideas.

So, I let them play around with the program and investigate which activation functions worked well / not well with different data sets. Again, the ideas here are difficult for kids to grasp, but they did a pretty good job thanks to the help from the program.

So, a fun couple of days playing around with some simple neural network ideas with the boys. Not sure what I’m going to do if they want to learn more – ha ha!

# Choosing 3 million points on a 300-dimensional sphere

[sorry this post is a little sloppy – I had a hard stop at 7:00 pm and wanted to get it out the door.]

During the last day of my machine learning class we discussed “word2vec.” I’d heard of it before because of a couple of tweets and blog posts from Jordan Ellenberg. For example:

Jordan Ellenberg’s Messing Around with word2vec

I was reviewing this blog post during one of the breaks and stumbled on this passage:

During class I couldn’t figure out how to think through this problem, but on the bike ride home I had an idea that seemed to work.

I don’t actually know if I’ve arrived at a “close enough” answer by coincidence, because higher-dimensional geometry is strange. Here are a few example projects that I’ve done with the boys which range from fun to mind-blowing:

Did you know that there is a 30-60-90 triangle in a hyper-cube?

Carl Sagan on the 4th dimension

Using snap cubes to talk about the 4th dimension

Sharing 4d shapes with kids

Counting Geometric Properties in 4 and 6 dimensions

A Strange Problem I overheard Bjorn Poonen discussing

Bjorn Poonen’s n-dimensional sphere problem with kids

A fun surprise in Bjorn Poonen’s n-dimensional sphere problem

One strange thing that comes to mind immediately in the statement from Ellenberg’s blog is that it must be somehow hard to find vectors in higher dimensions that meet at angles close to 0 degrees or 180 degrees. But why?

My solution to the 3 million points on a 300 dimensional sphere problem on the way home went something like this:

(1) Use a 2x2x . . . x2 cube with center at the origin and with all vertices having coordinates in every dimension that are either +1 or -1.

(2) Assume that the first vector is the 300-dimensional vector (1,1, . . . ,1)

(3) Now form 3 million 300-dimensional vectors with +1 or -1 randomly chosen for each position in the vector.

(4) By the dot product formula for vectors, the smallest dot product will produce the largest angle, so now we just have to figure out how many -1’s the vector with the most -1’s should have.

(5) The number of -1’s in our vectors should be described by the binomial distribution with a mean of 150 and a standard deviation of $\sqrt{300 \cdot \frac{1}{2} \cdot \frac{1}{2}} = 5\sqrt{3}$.

(6) To get to a 1 in 3 million chance we have to go out about 5 standard deviations (so about 43 away from the mean) but just to double check I ran this handy dandy Mathematica code:

Sure enough, with 3,000,000 trials we expect about 1 vector to have 192 to 193 -1’s. Let’s say 192.

(7) So, that vector is going to have 84 more -1’s than +1’s, so its dot product with our vector (1,1,1, . . . , 1) is going to be -84. Since each vector has length $\sqrt{300}$, the cosine of the angle between the two vectors will be $-84/300 = -0.28$.
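The check in steps (5) through (7) is easy to redo in Python using the exact binomial tail (we used Mathematica in the project; the helper below is my own):

```python
from math import comb

# Expected number of random +-1 vectors in 300 dimensions, out of
# 3,000,000, that have at least k coordinates equal to -1.

N_DIM = 300
N_VECTORS = 3_000_000
TOTAL = 2 ** N_DIM  # number of equally likely +-1 vectors

def expected_count(k):
    """Expected count of vectors with at least k coordinates equal to -1."""
    tail = sum(comb(N_DIM, j) for j in range(k, N_DIM + 1)) / TOTAL
    return N_VECTORS * tail

print(expected_count(192))  # roughly a couple of vectors
print(expected_count(193))  # roughly one vector

# The most extreme vector should have about 192 to 193 coordinates equal
# to -1, giving a dot product with (1,1,...,1) of about 300 - 2*192 = -84
# and a cosine of about -84/300 = -0.28.
print(-84 / 300)
```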

So, close to what Ellenberg stated in his blog.

I’m not sure that this method is 100% right, but think it captures the general idea.  It certainly helped me understand why it is hard for random vectors to be parallel (or anti-parallel) in high dimensions.

# Sharing a fun neural network program with kids

I just finished a summer class at MIT on machine learning.  It was a week-long professional education class and it was great.  On the 4th day of the class we discussed neural networks and played around a little with this program:

A Neural Network Playground

Most of the ideas in the class involve math concepts from linear algebra and calculus and are way beyond anything that you would expect kids to know.  This program, however, seemed like something kids could play with – so we played with it this morning.

First I introduced them – in the most basic way possible – to the idea of classification:

After that short introduction I let the boys play with the program for 15 minutes. They (of course) built the most complicated model that they could and we talked about what they saw. They actually had a pretty good intuition for what was going on – it is too bad that the math ideas required to really dive into the topic are so advanced. Even though we didn’t talk about the math, it was really fun to hear their thoughts about how the model was working.

For the last part of the project we watched a video about a computer that learned to control a model helicopter. The original video is here:

and here’s how my kids reacted to it:

So, a really fun week for me learning about data analysis and machine learning. It was nice to have a small way to share some of what I learned with the boys.

# 5 great plays / lessons from the All-Stars vs. Riot game

The 2016 All-Star tour kicked off last night in Seattle. Lots of hard work went into putting the second season together and it was great to see the games finally getting started. The game film is here:

Here are 5 things that caught my eye watching the game last night (and sorry for the slightly fuzzy videos, I was having some odd computer issues this morning):

(1) 3 good hucks to study

Early in the 3rd point Riot tries to go deep to Hana Kawai, but the throw is pretty difficult. Later in the point the All-Stars score on a beautiful huck from Jesse Shofner to Kate Scarth. I love Scarth’s cut in this clip – watch it a few times to see how she sets it up.

Later in the game (at 3-3) Riot again goes deep to Kawai. This throw has far more margin for error than the first one and Kawai is able to track it down.

The two deep throws to Kawai illustrate two different ideas:

(i) same-third hucks, and
(ii) throwing hucks to space.

Comparing these two throws helps illustrate some of the pros and cons of these two ideas:

(2) Riot working the break side

Riot’s goal to go up 3-2 is a great example of a team doing work on the break side. Other than the scoring pass, no pass Riot throws on this possession is remotely difficult. The movement on the break side on this possession is an especially great lesson for anyone learning to play ultimate. I love the pass from Sarah Griffith to (former All-Star) Jaclyn Verzuh right before the goal. Actually, it is worth watching this clip a few times to study Griffith’s positioning during the entire possession.

Also, the scoring cut from Julia Snyder is great. I love the way she plays.

(3) A great hustle play from Jesse Shofner

The play here is a nice look from Claire Revere (who just blows me away every time I see her play) to Kate Scarth. That throw isn’t complete, but watch the great hustle play from Shofner. Watch especially where she is on the field when the throw goes up. It would have been totally easy for her to just watch the throw – instead the All-Stars get a goal!

(4) Great athleticism from Shiori Ogawa

Everything about this play is nice – Fitzgerald to Revere to Ogawa to Fitzgerald to Ode!!! I’m sure I’ll write plenty about all of them during the tour, but Ogawa really caught my eye here.

I love the attacking idea of the throw and cut, and I love the path Ogawa takes to the disc – it forced Qxhna Titcomb to take a path that wasn’t nearly as good. I absolutely love how calm Ogawa was after the catch – both with the throw to Fitzgerald and her clear to the endzone afterwards. She’s such a great addition to the All-Star tour!

(5) Kaylor and Walhroos attacking the far side of the field

This play is definitely aggressive, but I think it is still worth studying. The play reminded me a lot of how Australia attacked Japan in the Dream Cup final, which I wrote about here:

Australia’s Michelle Phillips and Moe Sameshima – I’m speechless

I love the aggressive attacking style on display here. There’s no doubt that this throw is risky, but is also extremely hard to defend. That difficulty is illustrated here since the defender – Sarah Griffith – is one of the two or three best defenders in the game.

So, a great start to the All-Star tour last night. I can’t wait to see more games!!

# Learning about tiling pentagons from Laura Taalman and Evelyn Lamb

You can find Taalman’s program here:

and our project is here:

During the project my younger son found a different tiling pattern for pentagon #10 than the one in Taalman’s program. I suspected that the tiling pattern was actually related to pentagon #1 but wasn’t sure.

When the 15th tiling pentagon was discovered last year Evelyn Lamb wrote this great article which mentioned that each pentagon was actually part of an infinite family of tiling pentagons:

Tonight I used Taalman’s program to show my younger son how to make his tiling pattern from pentagon #1.

First I had him recreate the two tiling patterns:

Next we used the amazing functionality in Taalman’s tiling pentagon program to find this tiling pattern in pentagon #1:

So, thanks to Laura Taalman and Evelyn Lamb for teaching us something about tiling pentagons tonight!