## Sunday, August 22, 2010

### What is a matrix? (Part 2)

Now we have a new kind of mathematical toy to play with, the matrix. As I said in the previous post, the easiest way to get a sense of what matrices do is to use them for a while. In this post, then, I just want to go over a couple of useful examples.
Suppose you wish to make all vectors in ℝ² longer or shorter by some factor s ≠ 0. You can represent this by a function f(v) = sv. With a moment's work, we can verify that this is a linear function because of the distributive law. Thus, we can represent f by a matrix. To do so, remember that we calculate f for each element of a basis. For simplicity, we will use the elementary basis {x, y}. Then, f(x) = sx and f(y) = sy. By using coordinates, we can write this as f([1; 0]) = [s; 0] and f([0; 1]) = [0; s]. The matrix representation of f then becomes [s 0; 0 s].
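Here's a quick sketch of this scaling matrix in action in Python, using plain nested lists (the `mat_vec` helper and the sample vectors are mine, just for illustration):

```python
def mat_vec(M, v):
    """Multiply a 2x2 matrix M by a 2-vector v, both as nested lists."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

s = 3
scale = [[s, 0],
         [0, s]]  # the matrix [s 0; 0 s]

print(mat_vec(scale, [1, 0]))  # f(x) = sx -> [3, 0]
print(mat_vec(scale, [2, 5]))  # every vector is stretched by s -> [6, 15]
```

Applying the matrix to the basis vectors recovers exactly the columns we wrote down, which is a handy way to double-check a matrix representation.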
Note that if s = 1, the function f doesn't do anything. Representing f(v) = v as a matrix, we get the very special matrix called the identity matrix, written as I, 𝟙 or 𝕀: in coordinates, 𝟙 = [1 0; 0 1].
The identity matrix has the property that for any matrix M, M𝟙 = 𝟙M = M, much as the number 1 acts under ordinary multiplication.
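We can check this identity property numerically; here's a minimal sketch (the `mat_mul` helper and the example matrix M are my own, purely illustrative):

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0],
     [0, 1]]  # the identity matrix

M = [[2, 7],
     [1, 4]]  # an arbitrary example matrix

print(mat_mul(M, I) == M)  # True
print(mat_mul(I, M) == M)  # True
```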

Of course, there's no requirement that we stretch x and y by the same amount. The matrix [a 0; 0 b], for instance, stretches x by a and y by b. If one or both of a and b is negative, then we flip the direction of x or y, respectively, since -v is the vector of the same length as v but pointing in the opposite direction.
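A sketch of this anisotropic stretch, including a sign flip (again, `mat_vec` and the particular values of a and b are mine, for illustration):

```python
def mat_vec(M, v):
    """Multiply a 2x2 matrix M by a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

a, b = 2, -1
stretch = [[a, 0],
           [0, b]]  # the matrix [a 0; 0 b]

print(mat_vec(stretch, [1, 0]))  # x stretched by a -> [2, 0]
print(mat_vec(stretch, [0, 1]))  # y flipped by b = -1 -> [0, -1]
```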

A more complicated example shows how matrices can "mix up" the different parts of a vector by rotating one into the other. Consider, for instance, a rotation of the 2D plane by some angle θ (counterclockwise, of course). This is more difficult to write down as a function, and so a picture may be useful:

By referencing this picture, we see that f(x) = cos θ x + sin θ y, while f(y) = - sin θ x + cos θ y. Thus, we can obtain the famous rotation matrix Rθ = [cos θ, -sin θ; sin θ, cos θ].
As a sanity check, note that if θ = 0, then Rθ = 𝟙, as we would expect for a matrix that "does nothing."
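Here's the same sanity check in Python, plus a second one: rotating x by 90° should give y (the `rotation` and `mat_vec` helpers are mine, for illustration; rounding just suppresses floating-point noise):

```python
import math

def mat_vec(M, v):
    """Multiply a 2x2 matrix M by a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def rotation(theta):
    """The 2D rotation matrix [cos t, -sin t; sin t, cos t]."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

# theta = 0 gives the identity matrix (up to floating-point signed zeros)
print([[round(c, 10) for c in row] for row in rotation(0)])

# rotating x by 90 degrees should land on y
v = mat_vec(rotation(math.pi / 2), [1, 0])
print([round(c, 10) for c in v])  # [0.0, 1.0]
```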
One very important note that needs to be made about matrices is that multiplication of matrices is not always (or even often) commutative. To see this, let the matrix S swap the roles of x and y; that is, S = [0 1; 1 0]. Then, consider A = SRθ and B = S. Since applying S twice does nothing (that is, S² = 𝟙), we have that BA = Rθ. On the other hand, if we calculate AB = SRθS, we find that AB = R−θ.
We conclude that AB ≠ BA unless sin θ = 0, neatly demonstrating that not all the typical rules of multiplication carry over to matrices.
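The whole argument can be verified numerically in a few lines (the helper functions and the choice θ = π/6 are mine, just to make the example concrete):

```python
import math

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rotation(theta):
    """The 2D rotation matrix [cos t, -sin t; sin t, cos t]."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

S = [[0, 1],
     [1, 0]]  # swaps the roles of x and y

theta = math.pi / 6  # any angle with sin(theta) != 0 works
A = mat_mul(S, rotation(theta))
B = S

BA = mat_mul(B, A)  # S S R_theta = R_theta
AB = mat_mul(A, B)  # S R_theta S, a rotation by -theta

print(AB == BA)  # False: the products differ whenever sin(theta) != 0
```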

I'll leave it here for now, but hopefully seeing a few useful matrices makes them seem less mysterious. Until next time!