My older son is studying linear algebra out of Gil Strang’s book this year. Currently he’s in the chapter on determinants and we’ve spent the last couple of days talking about Cramer’s Rule.

As we talked about the proof of Cramer’s Rule, I was struck by how similar the ideas were to the ideas used in Fourier analysis. This morning we had a fun discussion showing how the ideas are connected.

I started by asking him to talk about Cramer's Rule. He did a nice job, especially since his knowledge of this rule is only a few days old.

Next we played around on Mathematica with a 4×4 example and found that the solutions you get from Cramer’s Rule do indeed match the solutions you get from other methods!
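For readers who want to try the same experiment without Mathematica, here's a small sketch in Python. The matrix and right-hand side are made-up stand-ins for the ones we used, but the check is the same: solve the system with Cramer's Rule and compare against a standard solver.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's Rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # swap column i for the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

# A hypothetical 4x4 system (nonsingular, det = 72)
A = np.array([[2., 1., 0., 1.],
              [1., 3., 1., 0.],
              [0., 1., 4., 1.],
              [1., 0., 1., 5.]])
b = np.array([1., 2., 3., 4.])

x_cramer = cramer_solve(A, b)
x_direct = np.linalg.solve(A, b)
print(np.allclose(x_cramer, x_direct))  # True: the two methods agree
```

Of course, for anything bigger than a small example this is a terrible way to actually solve a system — it's the conceptual formula, not a practical algorithm.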

Next I gave a really short introduction to a problem that initially seems very different but has a lot of the same mathematical ideas hiding in the background: pulling a signal out of noise.

Finally, we went back to Mathematica to play with a few examples of signals hiding in noise. We saw how the ideas from Fourier Analysis could often pull out the signal even though it wasn’t obvious at all that a signal was hiding in our data in the first place!
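A sketch of that kind of experiment, again in Python rather than Mathematica (the frequency and noise level here are assumed values, not the ones from our session): the sine wave is completely invisible in the raw data, but the Fourier transform concentrates all of its energy into one frequency bin while the noise spreads thinly across every bin.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
freq = 50                               # hidden frequency (cycles per sample window)
signal = np.sin(2 * np.pi * freq * t / n)
noise = rng.normal(0, 2.0, n)           # noise much larger than the signal
data = signal + noise

# The FFT of the data has a spike at the signal's frequency bin;
# white noise contributes roughly equally to every bin.
spectrum = np.abs(np.fft.rfft(data))
peak = np.argmax(spectrum[1:]) + 1      # skip the DC bin
print(peak)                             # the hidden frequency pops out
```

Plotting `data` versus `spectrum` makes the point vividly: the first looks like pure static, the second has one unmistakable spike.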

One thought on “Sharing basic ideas about Cramer’s Rule and Fourier Analysis with my older son”

My retrodiction of Cramer’s Rule was lost in the g+ purge. A close approximation is here: https://mathoverflow.net/questions/89069/should-the-formula-for-the-inverse-of-a-2x2-matrix-be-obvious/89079#89079

The basic idea is to start with the natural transformation

V @ Alt^{n-1} V -> Alt^n V

(where n = dim V) and swap factors around

V -> (Alt^{n-1} V)^* @ Alt^n V

Now given a linear isomorphism M : V -> W, we get a commuting square with M down the left column and (Alt^{n-1} M)^* @ (det M)^{-1} up the right column. The Alt^{n-1} explains why we take all those subdeterminants, the * explains why we take the transpose, and finally, we see the (det M)^{-1}.
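In concrete matrix terms this is the classical adjugate formula M^{-1} = adj(M) / det(M): the (n-1)x(n-1) subdeterminants are the Alt^{n-1} part, the transpose is the dualization (*), and then you divide by det(M). A quick numerical check of that reading (the matrix is just an example):

```python
import numpy as np

def adjugate(M):
    """Classical adjugate: transpose of the cofactor matrix.
    Each cofactor is an (n-1)x(n-1) subdeterminant (the Alt^{n-1} part);
    the final transpose is the dualization (the * part)."""
    n = M.shape[0]
    C = np.empty_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

M = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])

# subdeterminants, transpose, then the (det M)^{-1} factor
inv = adjugate(M) / np.linalg.det(M)
print(np.allclose(inv, np.linalg.inv(M)))  # True
```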

Good luck sharing it with kids though!