*linear* in its arguments:

*f*(*x* + *y*) = *f*(*x*) + *f*(*y*) and *f*(*cx*) = *c* · *f*(*x*)

One can think of a linear function as one that leaves addition and scalar multiplication alone. To see where the name comes from, let's look at a few properties of a linear function *f*. Applying the addition rule to 0 = 0 + 0 gives:

*f*(0) = *f*(0 + 0) = *f*(0) + *f*(0)

This implies that *f*(0) = 0 for any linear function. Next, suppose that *f*(*x*) = 1 for some *x*. Then, for any multiple *cx*:

*f*(*cx*) = *c* · *f*(*x*) = *c* = (*cx*) / *x*

This means that *f* represents a line passing through 0 having slope *m* = 1 / *x*.
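These properties are easy to check numerically. Here's a quick Python sketch using the made-up example *f*(*t*) = 2*t*, a line through 0 with slope 2:

```python
# A made-up linear function: a line through 0 with slope 2.
def f(t):
    return 2 * t

# f leaves addition and scalar multiplication alone:
assert f(3 + 5) == f(3) + f(5)
assert f(4 * 3) == 4 * f(3)

# Linearity forces f(0) = 0:
assert f(0) == 0

# Here f(x) = 1 at x = 0.5, and the slope is indeed m = 1 / x = 2.
assert f(0.5) == 1.0
```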

So what does all this have to do with matrices? Suppose we have a linear function which takes *vectors* as inputs. (To avoid formatting problems, I'll write vectors as lowercase letters that are italicized and underlined when they appear in text, such as __v__.) In particular, let's consider a vector __v__ in ℝ². If we use the {__x__, __y__} basis discussed last time, then we can write __v__ = *a*__x__ + *b*__y__. Now, suppose we have a linear function *f*: ℝ² → ℝ² (that means it takes ℝ² vectors as inputs and produces ℝ² vectors as outputs). We can use the linear property to specify how *f* acts on any arbitrary vector by just specifying a few values:

*f*(__v__) = *f*(*a*__x__ + *b*__y__) = *a* · *f*(__x__) + *b* · *f*(__y__)
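To make this concrete, here's a small Python sketch (with a made-up choice of *f*) showing that recording the two values *f*(__x__) and *f*(__y__) is enough to compute *f* on any vector:

```python
# A made-up linear map on R^2, pinned down entirely by its values on the
# basis vectors x = (1, 0) and y = (0, 1).
f_x = (0.0, 1.0)   # the value f(x), chosen for illustration
f_y = (-1.0, 0.0)  # the value f(y), chosen for illustration

def f(v):
    a, b = v  # coefficients of v in the {x, y} basis
    # Linearity: f(a*x + b*y) = a*f(x) + b*f(y)
    return (a * f_x[0] + b * f_y[0],
            a * f_x[1] + b * f_y[1])

assert f((1, 0)) == (0.0, 1.0)   # reproduces f(x)
assert f((3, 2)) == (-2.0, 3.0)  # 3*f(x) + 2*f(y)
```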

This makes it plain that *f*(__x__) and *f*(__y__) contain all of the necessary information to describe *f*. Since each of these may itself be written in the {__x__, __y__} basis, we may as well just keep the coefficients of *f*(__x__) and *f*(__y__) in that basis:

*f*(__x__) = *F*_{11}__x__ + *F*_{21}__y__ and *f*(__y__) = *F*_{12}__x__ + *F*_{22}__y__

We call the object **F** made up of the coefficients of *f*(__x__) and *f*(__y__) a matrix, and say that it has four *elements*. The element in the *i*th row and *j*th column is often written *F*_{ij}. Application of the function *f* to a vector __v__ can now be written as the matrix **F** multiplied by the column vector representation of __v__:

**F**__v__ = *f*(__v__) = (*F*_{11}*a* + *F*_{12}*b*)__x__ + (*F*_{21}*a* + *F*_{22}*b*)__y__
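Here's a sketch of this bookkeeping in Python (the matrix entries below are made up): storing the coefficients of *f*(__x__) and *f*(__y__) as the columns of **F** lets us apply *f* by combining the columns with the coefficients of __v__:

```python
# A made-up 2x2 matrix: column 0 holds the coefficients of f(x),
# column 1 holds the coefficients of f(y).
F = [[0.0, -1.0],
     [1.0,  0.0]]

def apply(M, v):
    a, b = v
    # (M v)_i = M[i][0] * a + M[i][1] * b
    return (M[0][0] * a + M[0][1] * b,
            M[1][0] * a + M[1][1] * b)

# Applying F to v = 3x + 2y:
assert apply(F, (3, 2)) == (-2.0, 3.0)
```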

If we have a second linear function *g*: ℝ² → ℝ², then we can write out the composition (*g* ∘ *f*)(__v__) = *g*(*f*(__v__)) in the same way:

(*g* ∘ *f*)(__v__) = *g*(*a* · *f*(__x__) + *b* · *f*(__y__)) = *a* · *g*(*f*(__x__)) + *b* · *g*(*f*(__y__))
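A quick numeric sketch of composition (with made-up matrices for *f* and *g*): applying *f* and then *g* means applying their matrices one after the other:

```python
def apply(M, v):
    # Apply a 2x2 matrix to a vector: (M v)_i = sum_j M[i][j] * v[j]
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

F = [[1, 2],
     [3, 4]]   # made-up matrix for f
G = [[0, 1],
     [1, 0]]   # made-up matrix for g (swaps the two coordinates)

v = (5, 6)
# (g o f)(v) = g(f(v)): apply F first, then G.
assert apply(F, v) == (17, 39)
assert apply(G, apply(F, v)) == (39, 17)
```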

That means that we can find a matrix for *g* ∘ *f* from the matrices for *g* and *f*. The process for doing so is what we call matrix multiplication. Concretely, if we want to find (**AB**)_{ij}, the element in the *i*th row and *j*th column of the product **AB**, we take the *dot product* of the *i*th row of **A** and the *j*th column of **B**, where the dot product of two lists of numbers is the sum of their products:

(**AB**)_{ij} = *A*_{i1}*B*_{1j} + *A*_{i2}*B*_{2j}

To find the dot product of any two vectors, we write them each out in the same basis and use this formula. It can be shown that which basis you use doesn't change the answer.
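Here's a short Python sketch of this recipe (the matrices are made up): each entry of **AB** is the dot product of a row of **A** with a column of **B**, and the resulting matrix really does represent the composition:

```python
def dot(u, w):
    # Dot product of two lists of numbers: the sum of their products.
    return sum(a * b for a, b in zip(u, w))

def matmul(A, B):
    # (AB)_ij = dot(i-th row of A, j-th column of B)
    n = len(A)
    return [[dot(A[i], [B[k][j] for k in range(n)]) for j in range(n)]
            for i in range(n)]

def apply(M, v):
    return tuple(dot(row, v) for row in M)

A = [[0, 1], [1, 0]]  # made-up example
B = [[1, 2], [3, 4]]  # made-up example

AB = matmul(A, B)
assert AB == [[3, 4], [1, 2]]

# Applying AB to a vector agrees with applying B first, then A:
v = (5, 6)
assert apply(AB, v) == apply(A, apply(B, v))
```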

If this all seems arcane, try reading through it a few times; rest assured, it will make much more sense with a bit of practice. Next time, we'll look at some particular matrices that have some very useful applications.
