**189-251B:** Honors Linear Algebra

## Blog

**Week 1**. *January 6-10*.

This week was devoted to the basic definitions and properties
of *abstract vector spaces* over a field *F*, and of the
linear transformations between them.
We discussed what it means for a set of vectors to span a vector space,
to be linearly independent, and to be a basis for the vector space.
This led us to the notion of dimension of a finite-dimensional vector space.
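If you want to experiment with these notions on a computer, here is a small Python sketch (the specific vectors, and the use of NumPy's rank routine, are my own illustration, not from class). The rank of the matrix whose columns are the given vectors is the dimension of their span:

```python
import numpy as np

# Columns of A are three vectors in R^3; they are linearly independent
# exactly when the matrix they form has rank 3.
A = np.column_stack([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
assert np.linalg.matrix_rank(A) == 3      # independent, hence a basis of R^3

# Replace the third vector by the sum of the first two: a dependent set.
B = np.column_stack([[1, 0, 1], [0, 1, 1], [1, 1, 2]])
assert np.linalg.matrix_rank(B) == 2      # these three vectors span only a plane
```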

Next week, we will discuss the final exam of last semester, and I will give
some solutions to the problems there. I advise that you have a look at the
final exam before then, in order to refresh your
memory about the questions whose solutions I will be presenting.

**Week 2**. *January 13-17*.

Monday and Wednesday saw the proof of
basic facts concerning linear independence
of vectors and spanning sets. A basis of a vector space *V*
was defined to be a minimal spanning set,
or, equivalently, a maximal linearly independent subset,
or, equivalently, a subset *B* of *V* such
that every vector in *V* can be
expressed *uniquely* as a (*finite*, by definition)
linear combination of the vectors in *B*.
A vector space which has a finite basis is called finite-dimensional.
We proved that the cardinality of a basis for a
given finite-dimensional vector space *V* depends only on *V*
and not on the choice of basis. This cardinality is therefore an invariant of
*V* called its *dimension*.

We proved various useful facts about bases and linearly independent sets,
based on the slightly technical but useful *Steinitz substitution lemma*.
One particularly useful corollary is that every linearly independent subset of a finite-dimensional vector space *V* can be *completed to a basis* for *V* (and hence, its cardinality is at most the dimension of *V*).
This flexibility in the choice of basis was a key ingredient in the
*rank-nullity theorem* proved on Wednesday, which, given a linear
transformation defined on a finite-dimensional vector space *V*,
relates the dimensions of its image and kernel to the dimension of *V*.
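The rank-nullity theorem is easy to check on examples. Here is a small SymPy sketch (the matrix is my own choice) in which the dimensions of the image and the kernel are computed independently and then compared:

```python
from sympy import Matrix

# A linear map T: R^4 -> R^3, given by a 3x4 matrix whose third row
# is the sum of the first two, so the rank drops below 3.
T = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1]])

rank = T.rank()                        # dim im(T)
nullity = len(T.nullspace())           # dim ker(T), computed independently
assert rank + nullity == T.shape[1]    # rank-nullity: 2 + 2 = 4 = dim(V)
```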

Friday's lecture was devoted to a quick review of the questions in the final exam from last semester.

**Week 3**. *January 20-24*.

This week was devoted to some of the basic material covered in
chapter 4 of the on-line notes, most notably the idea of representing
vectors in finite-dimensional vector spaces by coordinates,
and linear transformations between such spaces as matrices,
*after a suitable choice of bases* of the abstract vector spaces involved.
We discussed quotients, the isomorphism theorem for vector spaces, and the
theorem about the dimension of the kernel and image. Hopefully this discussion
left you with a feeling of *déjà vu* from our very similar treatment
of groups last semester; if not, go carefully through sections 4.2 and 4.3 of the notes.
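As a concrete illustration of representing a linear transformation by a matrix after a choice of basis, here is a SymPy sketch with an example of my own (not from the notes): differentiation on polynomials of degree at most 2, in the ordered basis {1, *x*, *x*²}, becomes a 3×3 matrix.

```python
from sympy import Matrix, diff, symbols, sympify

x = symbols('x')

def coords(p):
    """Coordinate vector of a polynomial of degree <= 2 in the basis {1, x, x**2}."""
    p = sympify(p).expand()
    return Matrix([p.coeff(x, k) for k in range(3)])

# Columns of the matrix of d/dx: coordinates of the derivative of each basis vector.
basis = [sympify(1), x, x**2]
D = Matrix.hstack(*[coords(diff(b, x)) for b in basis])
# D = [0 1 0; 0 0 2; 0 0 0]

# The matrix now "is" the transformation, in coordinates:
p = 3 + 5*x + 7*x**2
assert D * coords(p) == coords(diff(p, x))
```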

We illustrated some of the concrete applications of the material seen in class
by discussing linear codes, which are simply subspaces of
**F**_{2}^{n},
viewed as a vector space over the field **F**_{2} = *{0,1}* with two elements.
Friday's lecture concluded with the construction of the remarkable
*Hamming code*, the (essentially unique)
error correcting code of length 7 and dimension 4.
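Here is a Python sketch of how the Hamming code corrects a single error (the particular codeword is my own choice). The parity-check matrix has the binary expansions of 1 through 7 as its columns, so the syndrome of a single-bit error literally spells out the position of the flipped bit:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the
# binary expansion of j, for j = 1, ..., 7.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

codeword = np.array([1, 0, 1, 1, 0, 1, 0])
assert not (H @ codeword % 2).any()          # zero syndrome: a valid codeword

received = codeword.copy()
received[4] ^= 1                             # corrupt position 5 (1-based)
s = H @ received % 2                         # syndrome = 5th column of H
position = 4 * s[0] + 2 * s[1] + s[2]        # read the syndrome in binary: 5
received[position - 1] ^= 1                  # flip the offending bit back
assert (received == codeword).all()
```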

**Week 4**. *January 27-31*.

This week was devoted to the study of a *single* linear transformation
*T* from a vector space *V* to *itself*.
We saw that one can associate to such a *T* its *minimal polynomial*, and we proved that, when *V* has dimension *n*, the
degree of this minimal polynomial is at most *n*.
We also defined eigenvalues and eigenvectors, and proved that the set of eigenvalues of a linear transformation is equal to the set of roots of its minimal
polynomial.
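The proof suggests an algorithm: the minimal polynomial records the first linear dependence among the powers *I*, *T*, *T*², .... Here is a SymPy sketch of that search (the helper function is my own illustration):

```python
from sympy import Matrix, Poly, eye, symbols

def minpoly_of(A):
    """Minimal polynomial of a square matrix: the monic polynomial of least
    degree killing A, found via the first dependence among I, A, A^2, ..."""
    n = A.shape[0]
    x = symbols('x')
    powers = [eye(n)]
    for k in range(1, n + 1):
        powers.append(powers[-1] * A)
        # Stack vec(I), vec(A), ..., vec(A^k) as columns; look for a relation.
        M = Matrix.hstack(*[P.reshape(n * n, 1) for P in powers])
        kernel = M.nullspace()
        if kernel:
            c = kernel[0] / kernel[0][k]    # normalise: monic of degree k
            return Poly(sum(c[i] * x**i for i in range(k + 1)), x)

A = Matrix([[2, 0, 0],
            [0, 2, 0],
            [0, 0, 3]])
p = minpoly_of(A)
# p = x**2 - 5*x + 6 = (x - 2)(x - 3): degree 2, which is less than n = 3,
# and its roots {2, 3} are exactly the eigenvalues of A.
```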

**Week 5**. *February 3-7*.

We continued our discussion of eigenvalues, eigenvectors, and diagonalisability, and their connection with the characteristic polynomial. The most important theorem we proved this week is the *primary decomposition theorem*, which asserts
that a vector space *V* on which a linear transformation *T* is defined
can be broken up into a direct sum of *T*-stable subspaces indexed by
the irreducible factors of the minimal polynomial. If *p(x)* is such an
irreducible factor, and
*p(x)*^{e} divides
the minimal polynomial exactly, then the restriction of *T* to the associated stable subspace has
*p(x)*^{e} as its minimal polynomial.
From this we deduced that a linear transformation is diagonalisable if and only if
its minimal polynomial factors into a product of *distinct* linear factors.
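This criterion is easy to test in practice, since a polynomial has a repeated factor exactly when it shares a factor with its derivative. A small SymPy sketch (the two matrices, whose minimal polynomials I assert by hand, are my own examples):

```python
from sympy import Matrix, diff, gcd, symbols

x = symbols('x')

# Jordan block: minimal polynomial (x - 2)**2 has a repeated linear factor,
# so the criterion says "not diagonalisable"; SymPy agrees.
J = Matrix([[2, 1],
            [0, 2]])
m = (x - 2)**2
assert gcd(m, diff(m, x)) != 1
assert not J.is_diagonalizable()

# Distinct linear factors in the minimal polynomial: diagonalisable.
A = Matrix([[2, 0],
            [0, 3]])
m = (x - 2) * (x - 3)
assert gcd(m, diff(m, x)) == 1
assert A.is_diagonalizable()
```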

**Week 6**. *February 10-14*.

This week was devoted to a discussion of determinants, which were treated
somewhat differently from the on-line notes. By way of motivation, we started
with a discussion of the notion of *volume* on the real vector space
**R**^{n}. (This volume is usually
called the length when *n=1*,
the area when *n=2*, the volume when *n=3*, and continues to be
designated as a volume for all larger values of *n*; this is just for
lack of further words in the English vocabulary, and the
*n*-dimensional
volume differs just as much from its three-dimensional avatar as the surface
area does from the length....)

We saw that the function which associates to *(v*_{1}, ..., v_{n})
the signed volume
of the *n*-dimensional parallelepiped spanned by
these *n* vectors satisfies two properties: it is *multilinear* in
the *v*_{i}, and it is *alternating*. Remarkably, these
two properties are purely algebraic and
make sense over an arbitrary field *F*; indulging in the
mathematician's
taste for abstraction, a *volume function* on an *n*-dimensional
vector space *V* over *F*
can just be defined to be a multilinear, alternating function from
*V*^{n} to *F*.

Even more remarkably, such volume functions are *essentially unique*:
any two multilinear
alternating functions in *n* variables on an *n*-dimensional
vector space just differ by a constant of proportionality.
(This important uniqueness property was proved in class by induction.)

The determinant of a transformation *T: V --> V* can then be defined as
the ratio between the volume
function *D(T v*_{1},...,T v_{n})
and
*D( v*_{1},...,v_{n}),
for *any* non-zero volume function *D*. In effect, the determinant
of *T* represents the amount by which *T* distorts (any)
volume on
*V*.
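This definition can be checked numerically: taking *D* to be the standard signed volume on **R**², the ratio comes out the same for every basis, and equals the usual determinant. A NumPy sketch (the matrix and the random basis are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # usual determinant: 6

# D = standard signed area on R^2; v holds a basis v1, v2 as its columns
# (a random pair of vectors is almost surely a basis).
v = rng.standard_normal((2, 2))
ratio = np.linalg.det(T @ v) / np.linalg.det(v)   # D(T v1, T v2) / D(v1, v2)
assert np.isclose(ratio, np.linalg.det(T))        # the distortion factor of T
```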

We used this somewhat abstract definition of the determinant to describe
the well-known algorithm for computing it via row-reduction, and gave a closed
formula for the determinant of a matrix in terms of its entries, as a sum
of *n!* terms indexed by the elements of the symmetric group
on *n* letters.
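For a small matrix, that closed formula can be implemented directly and compared against a library routine. A sketch (my own implementation; the sign of each permutation is computed by counting inversions):

```python
import itertools

import numpy as np

def leibniz_det(A):
    """Determinant as the sum of n! signed products over the symmetric group."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # sign = (-1)^(number of inversions of the permutation)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        total += sign * np.prod([A[i, perm[i]] for i in range(n)])
    return total

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [0.0, 2.0, 1.0]])
assert np.isclose(leibniz_det(A), np.linalg.det(A))
```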

With the determinant behind our belt, we were finally
able to define the
*characteristic polynomial* of a linear transformation *T*.
This polynomial, denoted *f*_{T}(x), is simply the
determinant of the transformation *x I - T*,
where *I* denotes the identity transformation on *V*.
The explicit formula for the determinant of a matrix shows that this function of
*x* is in fact a polynomial in *x* whose degree is always equal
to *n=dim(V)*.
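For instance, a SymPy sketch with a matrix of my own choosing:

```python
from sympy import Matrix, det, eye, symbols

x = symbols('x')
A = Matrix([[2, 1],
            [0, 3]])

f = det(x * eye(2) - A)               # characteristic polynomial det(xI - A)
assert f.expand() == x**2 - 5*x + 6   # monic of degree n = 2
# Its roots, 2 and 3, are the eigenvalues of A.
```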