
189-251B: Honors Linear Algebra

Blog




Week 1. January 6-10. (Chapter 1, sections 1-3 of Kostrikin-Manin).

This week was devoted to the basic definitions and properties of abstract vector spaces over a field F, and of the linear transformations between them. We discussed what it means for a set of vectors to span a vector space, to be linearly independent, and to be a basis for the vector space. This led us to the notion of dimension of a finite-dimensional vector space.
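For those who like to experiment, here is a minimal numerical sanity check of these notions (the vectors are my own example over $F = {\mathbb R}$; a Python/numpy sketch, not part of the course material): a list of vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors.

```python
import numpy as np

# Hypothetical example: three vectors in R^3, the third being the sum
# of the first two, so the list is linearly dependent.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])        # = v1 + v2
A = np.column_stack([v1, v2, v3])

rank = np.linalg.matrix_rank(A)
print(rank)                # 2: the span of (v1, v2, v3) is a plane in R^3
print(rank == A.shape[1])  # False: the three vectors are not independent
```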






Assignment 1 (Due Wednesday, January 15). From the textbook by Kostrikin-Manin,
Page 7, Exercises 3 and 6.
Page 15, Exercises 1, 3.
Page 22, Exercise 2.




Week 2. January 13-17. (Chapter 1, sections 5,6,7 of Kostrikin-Manin).

Monday and Wednesday saw the proof of basic facts concerning linear independence of vectors and spanning sets. A basis of a vector space V was shown to be a maximal linearly independent subset, or, equivalently, a subset B of V such that every vector in V can be expressed uniquely as a (finite, by definition) linear combination of the vectors in B. We showed that every vector space has a basis, using Zorn's Lemma.
A vector space which has a finite basis is called finite-dimensional. We proved that the cardinality of a basis for a given finite-dimensional vector space V depends only on V and not on the choice of basis. This cardinality is therefore an invariant of V called its dimension.
One of the particularly useful corollaries of our general construction of a basis is that every linearly independent subset of a finite-dimensional vector space V can be completed to a basis for V (and hence, its cardinality is at most the dimension of V). This flexibility in the choice of basis was a key ingredient in the rank-nullity theorem proved on Friday, which, given a linear transformation defined on a finite-dimensional vector space V, relates the dimensions of its image and kernel to the dimension of V.
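If you want to see the rank-nullity theorem in action, here is a small computational sanity check (the matrix is my own example; a sketch in Python with sympy, not part of the course):

```python
import sympy as sp

# For T: Q^5 -> Q^3 given by the matrix A below, rank-nullity says
# dim(im T) + dim(ker T) = dim(domain) = 5.
A = sp.Matrix([[1, 2, 0, -1, 3],
               [0, 1, 1,  2, 0],
               [1, 3, 1,  1, 3]])     # third row = first row + second row

rank = A.rank()                       # dimension of the image
nullity = len(A.nullspace())          # dimension of the kernel
print(rank, nullity, rank + nullity)  # 2, 3, 5
```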




Assignment 2 (Due Wednesday, January 22). From the textbook by Kostrikin-Manin,
Page 31, Exercise 3. (But replace the last two sentences in the problem by: ``Show that $X=d/dx$ and $Y = x$ acting by left multiplication on the space of infinitely differentiable functions on $\mathbb R$ give a solution to this equation".)
Page 33, Exercises 13, 14.



Week 3. January 20-24. (Chapter 1, sections 4 and 6 of Kostrikin-Manin).

This week we started by discussing quotients, the isomorphism theorem for vector spaces, and the theorem about the dimension of the kernel and image. Hopefully this discussion left you with a feeling of déjà vu from our very similar treatment of groups last semester...

We then focussed on linear transformations on finite dimensional vector spaces. Thanks to the idea of representing vectors in finite dimensional vector spaces by coordinates, any linear transformation between two such vector spaces can be represented by a matrix, after a suitable choice of bases of the abstract vector spaces involved.
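To make this concrete, here is a small sketch (my own example, assuming Python with sympy) that computes the matrix of the derivative map $d/dx$ on the space of polynomials of degree at most $3$, relative to the basis $(1, x, x^2, x^3)$ of that space:

```python
import sympy as sp

# Column j of the matrix records the coordinates, in the chosen basis,
# of d/dx applied to the j-th basis vector.
x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2, x**3]

cols = []
for b in basis:
    image = sp.diff(b, x)
    coeffs = sp.Poly(image, x).all_coeffs()[::-1] if image != 0 else []
    cols.append([coeffs[j] if j < len(coeffs) else 0 for j in range(4)])

M = sp.Matrix(cols).T   # transpose so that images go in as columns
sp.pprint(M)            # a nilpotent matrix with 1, 2, 3 above the diagonal
```

Each column of the resulting matrix records the coordinates of the image of a basis vector, which is exactly the general recipe described above.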



Assignment 3 (Due Wednesday, January 29). From the textbook by Kostrikin-Manin,
Question 1. Page 43, Exercise 6.
Question 2. Page 43, Exercise 7. Part (b) of this exercise is phrased in a confusing way, and should be reinterpreted in the following way. "Given integers $k_1, k_2$, and $d$, compute the number $N_q(k_1,k_2,d)$ of pairs $(V_1, V_2)$ of subspaces of $V$ of dimensions $k_1$ and $k_2$ respectively, and satisfying $\dim(V_1\cap V_2)=d$."
(c) For extra credit, give a combinatorial interpretation of the limit of $N_q(k_1,k_2,d)$ as $q$ tends to $1$.

Question 3. Given a linear transformation $T$ from a finite-dimensional vector space $V$ to itself, show that there is a polynomial $p(x)$ of degree at most $n=\dim(V)$ such that $p(T)=0$. (Hint: prove this first under the assumption that $T$ admits a cyclic vector, i.e., a vector $v$ for which $(v, T(v), \ldots, T^{n-1}(v))$ generates $V$ as a vector space. Then show that if the statement is true for a $T$-stable subspace $W$ of $V$, and for the quotient $V/W$ with its induced transformation ${\bar T}$, it is also true for $V$. Conclude by arguing by induction on the dimension of $V$.)


Week 4. January 27-31.

This week opened with a discussion of the determinant, based on the general notion of (alternating) multilinear forms on a vector space. This allowed us to define the determinant of a linear transformation from an $n$-dimensional vector space $V$ to itself, in terms of what it does to the one-dimensional space of alternating $n$-multilinear forms on $V$. This in turn allowed us to define the characteristic polynomial of a linear endomorphism of a vector space.

Among the more tangential (but nonetheless germane to the assignments!) topics we discussed were the Grassmannian of $k$-dimensional subspaces of an $n$-dimensional vector space, and the strategies you might employ to count its cardinality when the underlying field of scalars is a finite field. We saw that the answer to this counting problem can be expressed in terms of the Gaussian binomial coefficient or $q$-binomial coefficient, and resonates in a tantalising way with ``combinatorial" counting problems involving the more familiar binomial coefficients.
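If you enjoy brute-force verification, the following sketch (my own code, fixing $q=2$, $n=4$, $k=2$ to keep the enumeration feasible) counts the $2$-dimensional subspaces of $({\mathbb F}_2)^4$ directly and compares the answer with the Gaussian binomial coefficient:

```python
from itertools import product

# The number of k-dimensional subspaces equals the number of ordered
# independent k-tuples in (F_2)^n divided by the number of ordered
# bases of a fixed k-dimensional space.
q, n, k = 2, 4, 2

def add(u, v):                        # vector addition in (F_2)^dim
    return tuple((a + b) % 2 for a, b in zip(u, v))

def count_indep_tuples(dim):
    """Count ordered linearly independent k-tuples in (F_2)^dim."""
    vectors = list(product(range(2), repeat=dim))
    total = 0
    def extend(span, depth):
        nonlocal total
        if depth == k:
            total += 1
            return
        for v in vectors:
            if v not in span:                 # v is independent of the span
                new_span = set(span)
                for w in span:
                    new_span.add(add(v, w))   # close up the span under +v
                extend(new_span, depth + 1)
    extend({tuple([0] * dim)}, 0)
    return total

num_subspaces = count_indep_tuples(n) // count_indep_tuples(k)
# closed form of the Gaussian binomial [n choose 2]_q:
gaussian = ((q**n - 1) * (q**(n - 1) - 1)) // ((q**k - 1) * (q - 1))
print(num_subspaces, gaussian)   # both print 35
```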





Assignment 4 (Due Wednesday, February 5).
Question 1. Let $V$ be the vector space of infinitely differentiable real-valued functions on the real line, and let $D$ and $I$ be the linear transformations from $V$ to $V$ given by $$ D(f) = \frac{df}{dx}, \quad \qquad I(f)(x) := \int_0^x f(t) dt.$$ (a) Show that $DI$ is the identity, while $p:= ID$ is not.
(b) Show that $p$ is a projection, i.e., $p^2=p$, and compute the kernel and image of $p$. Describe the decomposition $V = Ker(p) \oplus {\rm Im}(p)$ explicitly.

Question 2. Let $V$ be a vector space of dimension $n$.
(a) Show that the set of $k$-multilinear forms on $V$ is a vector space. What is its dimension?
(b) What is the dimension of the set of alternating $k$-multilinear forms on $V$?

Question 3. (Optional, for extra credit.) The $q$-binomial coefficient is defined to be $$ \binom{n}{k}_{\!\!q} := \frac{(q^n-1)(q^{n-1}-1) \cdots (q-1)} {(q^{n-k}-1) \cdots (q-1) \cdot (q^k-1) \cdots (q-1)}.$$ (In last Friday's lecture, it was shown to be equal to the cardinality of the Grassmannian of $k$-dimensional subspaces of an $n$-dimensional vector space, where $q$ is the cardinality of the underlying finite field of scalars.) Assume now that $q$ is an element of a field $F$, and let $x$ and $y$ be elements of a non-commutative algebra $A$ over $F$ satisfying the relation $yx=qxy$. Prove the identity $$ (x+y)^n = \sum_{k=0}^n \binom{n}{k}_{\!\!q} x^{n-k}y^k.$$ (This identity can be viewed as a non-commutative generalisation of the binomial theorem, which one recovers upon setting $q=1$.)
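If you want to convince yourself of the identity experimentally before proving it, the following sketch (my own code, a numerical sanity check and emphatically not a proof) expands $(x+y)^n$ into words in $x$ and $y$, normal-orders each word using $yx = qxy$, and compares coefficients with the $q$-binomial coefficients:

```python
import sympy as sp
from itertools import product

q = sp.symbols('q')
n = 4

def q_binom(nn, kk):
    # the q-binomial coefficient [nn choose kk]_q as a polynomial in q
    num = den = sp.Integer(1)
    for i in range(kk):
        num *= q**(nn - i) - 1
        den *= q**(kk - i) - 1
    return sp.cancel(num / den)

for k in range(n + 1):
    coeff = sp.Integer(0)
    for word in product('xy', repeat=n):
        if word.count('y') != k:
            continue
        # moving each y past each x to its right costs one factor of q
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if word[i] == 'y' and word[j] == 'x')
        coeff += q**inversions
    assert sp.simplify(coeff - q_binom(n, k)) == 0
print("identity verified for n =", n)
```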


Week 5. February 3-7.
One of the opening themes of this week was the usefulness of the notion of quotient of vector spaces, as a way to streamline certain constructions and arguments, and avoid the seductive trap of resorting too quickly to coordinates. Thus we presented a somewhat cleaner solution to the infamous question 2 of assignment 3, and explained the proof that a linear endomorphism of a vector space of dimension $n$ always satisfies a polynomial of degree at most $n$, following the strategy proposed in question 3 of assignment 3.

We then continued our study of a single linear transformation T from a vector space V to itself. We defined eigenvalues and eigenvectors, and proved that the set of eigenvalues of a linear transformation is equal to the set of roots of its minimal polynomial. This material is quite standard and can be found in any linear algebra textbook: Chapter I.8 of Kostrikin-Manin, or Section II.8 of Knapp's book for a slightly more leisurely pace that reflects more closely what we've covered in the notes.
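Here is a small sympy illustration of the relationship between the two polynomials (the matrix is my own example, not from the lectures):

```python
import sympy as sp

# The minimal polynomial can be a proper divisor of the characteristic
# polynomial, and the eigenvalues are exactly the roots of the minimal one.
x = sp.symbols('x')
T = sp.Matrix([[2, 0, 0],
               [0, 2, 0],
               [0, 0, 3]])

# the characteristic polynomial factors as (x-2)^2 (x-3):
print(sp.factor(T.charpoly(x).as_expr()))

# (x-2)(x-3) already annihilates T, so it is the minimal polynomial:
print((T - 2*sp.eye(3)) * (T - 3*sp.eye(3)) == sp.zeros(3, 3))   # True
print(T.eigenvals())   # {2: 2, 3: 1}: the eigenvalues are 2 and 3
```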

On the Friday, those who were not snowed in saw one of the concrete applications of the material seen in class so far, to the notion of voting paradoxes. Namely, we explained how the principle of linear superposition that is at the heart of linear algebra provides the ideal language to analyze concrete things like voter preferences. Mundane facts from linear algebra can then lead to rather surprising and seemingly paradoxical conclusions: for instance, that in an election to choose the leader of the opposition party, the most popular candidate is not necessarily the best choice to beat the incumbent.

Assignment 5 (Due Wednesday, February 12).
Question 1. Let $V$ be the vector space of polynomials over a field $F$, and let $q$ be a non-zero element of $F$.

(a) Let $D$ and $Q$ be the functions from $V$ to $V$ given by $$ D(f) = \frac{df}{dx}, \quad \qquad Q(f)(x) := f(qx).$$ Show that $D$ and $Q$ are linear transformations, and that they satisfy the commutation relation $D Q = q Q D$ evoked in question 3 of assignment 4.

(b) Let $A$ be a finite-dimensional algebra over a field $F$, and let $D$ and $Q$ be invertible elements satisfying the relation $DQ = qQD$. Show that if $D$ and $Q$ act on a vector space $V$, then $D$ maps the $\lambda$-eigenspace for $Q$ to the $\lambda/q$-eigenspace for $Q$.

(c) With assumptions as in (b), show that $q$ is necessarily a root of unity in the field $F$.

Question 2. Let $T:V\rightarrow V$ be an endomorphism of a finite dimensional vector space over the field ${\mathbb Z}/p{\mathbb Z}$ with $p$ elements, satisfying the equation $T^p=T$. Show that $T$ is diagonalisable.

Question 3. If $T$ is an endomorphism of finite order, i.e., $T^n=1$ for some $n\ge 1$, give a necessary and sufficient condition on the field $F$ for $T$ to be diagonalisable.

Question 4. Give an example of a linear transformation $T$ from a finite dimensional vector space $V$ to itself, for which $V$ is not equal to the direct sum of the kernel of $T$ and the image of $T$. (Even though the dimensions of these two vector spaces always add up to the dimension of $V$.)


Week 6. February 10-14.
References: Knapp, Chapter II.8 and Chapter V. Kostrikin-Manin, Ch. I.8.

We continued our discussion of eigenvalues, eigenvectors, and diagonalisability and its connection with the minimal and characteristic polynomials. The most important theorem we proved is the primary decomposition theorem, which asserts that a vector space V endowed with a linear endomorphism T can be broken up into a direct sum of T-stable subspaces indexed by the irreducible factors of the minimal polynomial. If p(x) is such an irreducible factor, and p(x)^e divides the minimal polynomial exactly, then the restriction of T to the associated stable subspace has p(x)^e as its minimal polynomial. From this we deduced that a linear transformation is diagonalisable if and only if its minimal polynomial factors into a product of distinct linear factors.
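A quick computational illustration of the diagonalisability criterion (the matrices are my own examples, using sympy):

```python
import sympy as sp

D = sp.Matrix([[2, 0], [0, 3]])   # minimal polynomial (x-2)(x-3): distinct linear factors
N = sp.Matrix([[2, 1], [0, 2]])   # minimal polynomial (x-2)^2: a repeated factor

print(D.is_diagonalizable())      # True
print(N.is_diagonalizable())      # False
# indeed N - 2I is nonzero while (N - 2I)^2 = 0:
print(N - 2*sp.eye(2))            # Matrix([[0, 1], [0, 0]])
print((N - 2*sp.eye(2))**2)       # the zero matrix
```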

Assignment 6 (Due Wednesday, February 19).

Question 1.
Let $V$ be an $n$-dimensional vector space over a field $F$, and let $R$ be an $F$-subalgebra of the endomorphism algebra End$_F(V)$. (I.e., a subring, which is also a vector space over $F$.) Pick a non-zero vector $v\in V$. Show that the function $\phi_v: R \rightarrow V$ sending $T\in R$ to $Tv$ is a linear map of $F$-vector spaces.

Assume for the rest of the questions that $R$ is a commutative $F$-algebra.

Question 2.
Show that the kernel of $\phi_v$ (as a linear map) is also an ideal of the ring $R$. Show that if this map is surjective, it is also injective.

Question 3.
The commutative subalgebra $R\subset {\rm End}_F(V)$ is said to be semisimple if any $R$-stable subspace $W\subset V$ admits an $R$-stable complementary subspace. Using the result of question 2, show that the dimension of a commutative semisimple subalgebra $R$ of ${\rm End}_F(V)$ is at most $n$, by abstracting the approach used to tackle question 3 of assignment 3. (You have thus shown that a commutative semisimple $F$-subalgebra of End$_F(V)$ has dimension at most $n=\dim(V)$, rather than the cruder estimate of $n^2$, valid even if $R$ is not commutative.)
For extra credit: show that the semisimplicity assumption is necessary by constructing a $5$-dimensional commutative $F$-subalgebra of $M_4(F)$.

Question 4.
Show that the following matrices with complex entries $$ M_1 = \left( \begin{array}{ccccc} 5297 & 2163 & 3121 & 444 & 511 \\ 153 & 421 & -312 & 222 & 111 \\ 17& 271 & -5 & 2134 & 1789 \\ 1312 & 212 & 916 & 412 & 9871 \\ 809 & 341 & 746 & 611 & 9651 \end{array}\right), \qquad M_2 = \left( \begin{array}{ccccc} 5297 & 2163 & 3121 & 444 & 511 \\ 153 & 421 & -312 & 222 & 111 \\ 17& 271 & -5 & 2134 & 1789 \\ 1312 & 212 & 916 & 412 & 9871 \\ 809 & 341 & 746 & 611 & 9650 \end{array}\right) $$ are not conjugate to each other.



Week 7. February 17-21.

This is the week of the midterm exam, so we will refrain from seriously embarking on any new material.

On Monday, I will try to do a bit of review of the material of the previous weeks, in preparation for the midterm.

On Wednesday we will definitely have a review session for the exam. Come with questions!

I will be absent right after the morning lecture on Wednesday, and hence will be unable to hold office hours on that day. Haining Wang will be administering the midterm on Thursday evening.

On Friday, we started with a discussion of duality, bilinear forms, and inner products.



Some food for thought.

The following questions are worth mulling over, as part of your preparation for the midterm exam this Thursday.

Remember that this exam is scheduled this week, on Thursday, February 20, from 6 to 9 PM, in ENGMC 204.

Most of these questions should be similar, in terms of level of difficulty, to those that will come up in the midterm. Those with a * are meant to make you think a bit longer.

1. If $S$ is a set and $F$ is a field, recall that ${\cal F}(S,F)$ denotes the vector space of $F$-valued functions on $S$, and that ${\cal F}_0(S,F)$ denotes the vector space of functions on $S$ with finite support, i.e., functions that are zero outside a finite subset of $S$.

(a) Show that every vector space $V$ over $F$ is isomorphic to ${\cal F}_0(S,F)$, for a suitable set $S$. Show then that the dual of $V$ is isomorphic to ${\cal F}(S,F)$, for the same $S$.

(*b) Show that there are vector spaces $V$ over $F$ that are not isomorphic to ${\cal F}(S,F)$ for any set $S$.



2. Let $T:V \rightarrow {\bar W}$ be a linear transformation, and let $p:W\rightarrow {\bar W}$ be a surjective linear transformation.

(a) Show that there is a linear transformation ${\tilde T}:V \rightarrow W$ for which $ p\circ {\tilde T} = T$.

(b) Show that a transformation ${\tilde T}$ with this property is uniquely determined, up to adding to it an element of ${\rm hom}(V,\ker(p))$.



3. Let $a$ and $b$ be real numbers, and let $V$ be the set of $\bf R$-valued sequences $(a_n)_{n\ge 0}$ satisfying the linear recurrence relation of order two $$ a_{n+1} = -a a_n - b a_{n-1}, \qquad \mbox{ for all } n \ge 1.$$

(a) Show that $V$ is a vector subspace of the real vector space of real valued sequences, and compute its dimension.

(b) Compute the minimal polynomial $p_T(x)$ of the ``shift operator" $T$ sending a sequence $(a_n)$ to the sequence $(a_{n+1})_{n \ge 0}$, and show that it is equal to the characteristic polynomial of $T$.

(c) Show that every sequence in $V$ tends to $0$ if and only if the (complex) roots of $p_T(x)$ have absolute value strictly less than $1$.

(d) Show that there are sequences in $V$ that tend to $\infty$ exponentially, if and only if $p_T(x)$ has at least one complex root of absolute value strictly greater than $1$.

(e) If $p_T(x) = (x-\lambda)^2$, show that every sequence in $V$ is of the form $a_n = \lambda^n(cn+d)$ for suitable real numbers $c$ and $d$.

(*f) Generalise this discussion to the set $V$ of sequences satisfying a linear recurrence relation of order $r$, in the obvious sense.



4. Let $T$ be a linear transformation on a vector space $V$ of dimension $n$ over a field $F$, and let $A$ be the set of linear transformations on $V$ that commute with $T$.

(a) Show that $A$ is an algebra over $F$: i.e., an $F$-vector space which is also closed under composition of endomorphisms.

(b) Assume that $T$ admits a cyclic vector, i.e., there is a vector $v\in V$ for which $(v, Tv, T^2 v, \ldots, T^{n-1}v)$ spans $V$. Show that the algebra $A$ is commutative, has dimension equal to $n$, and is generated by $T$ as an $F$-algebra.



5.
(a) Let $G=GL_n(F)$ be the group of invertible $n\times n$ matrices with entries in the finite field $F$ with $q$ elements. What is the cardinality of $G$?

(b) Same question for the group $SL_n(F)$ of matrices in $G$ of determinant $1$.

(c) Same question for the quotient of $G$ by its center (this group is often denoted $PGL_n(F) = GL_n(F)/F^\times$).

(*d) What is the kernel of the natural homomorphism from $SL_n(F)$ to $PGL_n(F)$? Use this to calculate the cardinality of the image.



Assignment 7 (Due Wednesday, February 26).

Question 1.
Let $W$ be an $F$-vector subspace of a finite dimensional vector space $V$. Show that $((V/W)^*)^\perp = W$. What can you say when the finite dimensionality assumption is dropped?

Question 2.
Let $V_1$ and $V_2$ be subspaces of a vector space $V$. Show that $(V_1+V_2)^\perp = V_1^\perp \cap V_2^\perp$, and that $(V_1\cap V_2)^\perp = V_1^\perp + V_2^\perp$.



Week 8. February 24-28.
Main reference: Chapter III of Knapp, and this chapter of Sheldon Axler's book, ``Linear Algebra Done Right".
This week was devoted to a few digressions concerning the midterm, to the notion of actions of groups on sets, and to the orbit-stabiliser theorem. We just touched on the rudiments of bilinear forms on vector spaces in the Monday lecture.



Assignment 8 (Due Wednesday, March 11).

Question 1.
Show that a bilinear form $B:V\times V \rightarrow F $ on a finite-dimensional vector space $V$ is left degenerate if and only if it is right degenerate.

Question 2.
Let $V$ be the space of trace zero endomorphisms of a two-dimensional vector space $W$ over a field $F$, and let $\langle \ , \ \rangle:V\times V\rightarrow F$ be the function defined by $\langle S,T\rangle = {\rm Trace}(S\circ T)$. Show that this pairing is a symmetric, non-degenerate bilinear form on $V$, and write down its matrix relative to a suitable basis for $V$.

Question 3.
Show that the space $(V,\langle \ , \ \rangle)$ of Question 2, with $F={\mathbb R}$, is not isomorphic to the Euclidean space ${\bf R}^3$ with the standard dot product. Conclude that the matrix you calculated in Question 2 is not of the form $A A^t$ (where $A^t$ denotes the transpose of $A$) for any $3\times 3$ matrix with real entries.

Question 4.
Let $V$ be the inner product space of Questions 2 and 3, and let $G={\rm Aut}_F(W) \simeq GL_2(F)$ be the group of linear automorphisms of $W$. Show that any $g \in G$ acts by conjugation on $V$, sending $T$ to $ g\star T := g \circ T \circ g^{-1}$, and that this action preserves the bilinear pairing on $V$, in the sense that $$ \langle g\star T_1, g\star T_2 \rangle = \langle T_1, T_2\rangle, \quad \mbox{ for all } g\in G, \mbox{ and all } T_1, T_2\in V.$$ Use this to construct an injective homomorphism from ${\rm PGL}_2(F)$ (the quotient of ${\rm GL}_2(F)$ by the normal subgroup $F^\times$ of scalar matrices) to the orthogonal group of $(V,\langle \ , \ \rangle)$.





Week 9. March 9-13.
Main reference: Chapter III of Knapp, and this chapter of Sheldon Axler's book, ``Linear Algebra Done Right".
Building on the generalities concerning duality and bilinear pairings on vector spaces, we focussed this week on inner product spaces, also known as Euclidean spaces over the field $F={\mathbb R}$ of real numbers, and also introduced the notion of Hermitian spaces over the field of complex numbers. We proved the basic facts about inner product spaces: the Cauchy-Schwarz inequality, the parallelogram law, the triangle inequality,...

We would have concluded with a discussion of orthogonal projection and minimisation problems, with an application to linear regression, if not for the prophylactic measures taken by the university on Friday. Since what I was planning to cover is quite necessary for the assignment, which you are (still) urged to finish by this coming Wednesday, let me quickly summarise it in writing. All the details are well explained in the references above (and you will learn quite efficiently by reading through them on your own).

At the end of Wednesday's lecture, we mentioned that every finite dimensional inner product space has an orthonormal basis. The process whereby a basis $(v_1,\ldots, v_n)$ can be turned into an orthonormal one is known as the Gram-Schmidt orthonormalisation process. It applies to finite-dimensional spaces, but breaks down for infinite dimensional ones. The problem is that, while every maximal linearly independent collection of vectors in $V$ necessarily spans it, the same cannot be said of a maximal orthonormal system of vectors. (Finding an example of a maximal orthonormal system of vectors in an inner product space that fails to span it is an amusing way to pass the time if you find yourself in quarantine.)
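For the curious, here is a minimal sketch of the Gram-Schmidt process (assuming the standard dot product on ${\mathbb R}^n$; my own Python/numpy code, not an official solution to anything):

```python
import numpy as np

def gram_schmidt(vectors):
    """Each step subtracts off the projections onto the vectors already
    produced, then normalises (assumes the input list is independent)."""
    ortho = []
    for v in vectors:
        w = v.astype(float)
        for e in ortho:
            w = w - np.dot(w, e) * e   # remove the component along e
        w /= np.linalg.norm(w)         # normalise
        ortho.append(w)
    return ortho

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
E = gram_schmidt(basis)
print(np.round([[np.dot(a, b) for b in E] for a in E], 10))  # identity matrix
```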

The existence of orthonormal bases implies that if $W$ is a finite-dimensional subspace of a (not necessarily finite dimensional) inner product space $V$, then every vector $v$ can be written uniquely as $w + w'$, with $w\in W$ and $w'$ in $W^\perp$, the orthogonal complement of $W$. To see this, choose an orthonormal basis $(e_1,\ldots, e_n)$ for $W$, set $w = (v,e_1)e_1 + \cdots + (v,e_n)e_n$, and let $w'$ be what is left over. The vector $w$ is called the orthogonal projection of $v$ onto $W$.

An important property of the orthogonal projection of $v$ onto $W$ is that it is the unique vector $w\in W$ for which the distance $||v-w||$ is minimised. This is simply because, if $u$ is any other vector in $W$, we have $$||v-u||^2 = || (v-w) + (w-u) ||^2 = || v-w||^2 + ||w-u||^2 \ge ||v-w||^2,$$ where the second equality follows from the Pythagorean theorem, in light of the fact that $v-w$ belongs to $W^\perp$ and $w-u$ belongs to $W$, so that these vectors are orthogonal to each other.

A nice practical application of this simple principle occurs in understanding the formulas for linear regression. Suppose you are given $n$ data points $(x_1,y_1)$, $\ldots$, $(x_n,y_n)$ consisting of pairs of real numbers and you want to find the linear equation $y = ax+b$ that provides a best fit for this data in the sense of least squares, i.e., for which the quantity $$ \sum_{j=1}^n (y_j - (ax_j+b))^2$$ is minimised. The trick for finding $a$ and $b$ is to let $W$ be the two-dimensional subspace of $\mathbb R^n$ spanned by $v_1 := (x_1,\ldots, x_n)$ and $v_2:=(1,\ldots, 1)$, and to observe that $(a,b)$ are the coordinates, relative to the basis $(v_1,v_2)$, of the orthogonal projection of the vector $(y_1,\ldots, y_n)$ onto $W$. I leave it to you to work out the exact equations for $a$ and $b$, since this will be useful in handling Question 1 of Assignment 9 below.
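Here is a numerical version of this recipe (the data points are made up for illustration; this does not replace working out the equations by hand):

```python
import numpy as np

# Project y onto span{(x_1,...,x_n), (1,...,1)} and read off the
# least-squares line y = a x + b from the coordinates of the projection.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 2.9, 5.2, 6.8])

A = np.column_stack([xs, np.ones_like(xs)])   # columns v_1 and v_2
(a, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(a, b)   # slope and intercept of the best-fitting line

# equivalently, solve the normal equations A^T A (a, b)^T = A^T y:
print(np.linalg.solve(A.T @ A, A.T @ ys))
```

The second print statement solves the normal equations directly; unwinding them symbolically gives exactly the "exact equations for $a$ and $b$" alluded to above.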



Assignment 9 (Due Wednesday, April 1). Note the revised due date. Your assignment should be uploaded onto MyCourses. Although the university prevents us from requesting that you hand in your assignment earlier, you are of course free to upload it to MyCourses at any time before the April 1 due date.

Question 1. Let $(x_1,y_1), \ldots, (x_n,y_n)$ be a collection of $n$ points in ${\mathbb R}^2$. Find the quadratic equation $y = ax^2+bx+c$ which provides the best fit to this data in the sense of least squares, i.e., find the real constants $(a,b,c)$ for which $$ \sum_{i=1}^n (y_i - (ax_i^2+b x_i + c))^2$$ is minimised.

Question 2. Let $T:V\rightarrow V$ be a linear map on an inner product space $V$. Show that ${\rm ker}(T) = {\rm Image}(T^*)^\perp$.

Question 3. Show, without doing any calculation, that there is a polynomial $p(x)$ of degree $\le n$ with real coefficients satisfying $$ \int_0^1 p(x)f(x) dx = f(0),$$ for all polynomials $f$ of degree $\le n$. Compute $p(x)$ when $n=2$.











Two week hiatus. March 16-27.

The university authorities have decreed that there shall be no formal classes, either on or off line, for this period. I encourage you to make use of this unexpected pause in the course schedule to review the material we have covered so far, and to interact with me and your fellow students using the discussion group I have opened on MyCourses. Your goal should be to be fresh and well prepared when we resume classes in two weeks, and to avoid forgetting what you learned in the previous 9 weeks!

Here are a few suggestions, to help you combat the anxiety that might come from being deprived of linear algebra lectures for the next two weeks.

$\bullet$ I encourage you to review the chapter from Axler's book which we covered in the prematurely interrupted week 9 (an easy read) and to go over Section III.1 of Knapp (a somewhat more challenging read, presupposing a greater degree of maturity, but still more amenable to independent study than Kostrikin-Manin).

$\bullet$ This YouTube channel set up by Sheldon Axler contains some videos that are informative and pleasant to watch. The first 23 lectures (of around 15 minutes each) contain a review of material we've already covered, and lectures 24-34 are closely related to what we will be covering in the first two weeks once we resume lectures.

$\bullet$ If you want something a bit more challenging, why not have another go at Kostrikin-Manin, particularly chapter 2.

$\bullet$ If you are not too averse to doing your mathematics in the language of Molière, let me recommend, again, the book of Colmez: anything you might read there, related either directly or indirectly to the course, is guaranteed to lift your spirits!

$\bullet$ If, on the other hand, you feel linear algebra being crowded out by more pressing concerns, take a look at the following Wikipedia article, and try to read more about it.

$\bullet$ You may also want to read ahead, in order to get a head start on the material that will be covered in the last two weeks of the course; to this end I am giving you a rough preview, below, of how the course will conclude once we resume.

$\bullet$ Of course, exchanging questions and ideas on the MyCourses discussion group is an excellent way to stay in touch. I will strive to respond to your questions and remarks reasonably promptly, as they come in.





Week 10. March 30-April 3.
Main reference: Chapter III.2 and III.3 of Knapp.
Week 10 was devoted to an in-depth discussion of the notion of adjoints of linear transformations on inner product spaces, and the proof of the spectral theorem, following the discussion in Sections III.2 and III.3 of Knapp.
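As a numerical companion to the spectral theorem (a random example, using numpy; a sketch, not a substitute for the proof in Knapp):

```python
import numpy as np

# A real symmetric matrix has real eigenvalues and an orthonormal basis of
# eigenvectors, so A = Q diag(lambda) Q^T with Q orthogonal.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # symmetrise

eigenvalues, Q = np.linalg.eigh(A)     # eigh is for symmetric/Hermitian input
print(np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A))   # True
print(np.allclose(Q.T @ Q, np.eye(4)))                  # True: Q is orthogonal
```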
Here are the notes for the on-line lectures.

Monday, March 30.
Wednesday, April 1.
Friday, April 3.

The recording of the Friday lecture is available on MyCourses. Unfortunately, the Monday and Wednesday lectures were lost because of technical difficulties with the recording feature of Zoom.



Assignment 10. (Due Wednesday, April 8).



Question 1.
(a) Let $V$ be the space of endomorphisms of a two-dimensional vector space $W$ over a field $F$, and let $\overline T := {\rm trace}(T)-T$. Show that $T\mapsto \overline T$ is a linear involution (a transformation of order $2$) which satisfies $\overline{T_1 T_2} = \overline T_2 \overline T_1$, and that $\overline T T = T\overline T$ belongs to $F$ (i.e., is a scalar multiple of the identity transformation). What is this scalar?

(b) Let $\langle \ , \ \rangle:V\times V\rightarrow F$ be the function defined by $\langle S,T\rangle = {\rm Trace}(S\circ \overline T)$. Show that this pairing is a symmetric, non-degenerate bilinear form on $V$, and write down its matrix relative to a suitable basis for $V$.

(c) Let $G$ be the subgroup of ${\rm Aut}_F(W) \times {\rm Aut}_F(W)$ consisting of pairs $(a,b)$ for which $a\bar a = b\bar b$. Construct a non-trivial homomorphism from $G$ to the orthogonal group of $V$ endowed with the quadratic form constructed in (b).

Question 2. Let $M$ be a complex invertible $n\times n$ matrix. Show that it can be written as a product $P U$ where $U$ is an upper-triangular matrix and $P$ is a unitary matrix. (Hint: Letting $(e_1,\ldots, e_n)$ be the standard basis of ${\mathbb C}^n$, apply the Gram-Schmidt procedure to the basis $(M e_1, \ldots, M e_n)$ of ${\mathbb C}^n$ to get an orthonormal basis $(e_1',\ldots, e_n')$ for ${\mathbb C}^n$, and let $P$ be the matrix relating $(e_1,\ldots, e_n)$ to $(e_1', \ldots, e_n')$. )


Question 3. From first principles (i.e., using only the definitions), show that a nilpotent self-adjoint operator is zero. Use this to show that, if $T$ is self-adjoint and $W\subset V$ is the generalised eigenspace for $T$ with the eigenvalue $\lambda$, then $W$ is equal to the eigenspace attached to this eigenvalue.



Week 11. April 6-8.
Main reference: The end of Chapter III.3 of Knapp.
This week will only consist of two lectures, on Monday and Wednesday, because of the Easter holidays on Friday and the following Monday. (Note however that next week's Tuesday will follow a Monday schedule, and we will have a final meeting on that day.)

On Monday, we will discuss the polar decomposition of normal operators on an inner product space, and then talk about some applications of the spectral theorem to Fourier analysis on finite groups and to spectral graph theory.
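As a preview, here is a sketch of the polar decomposition computed from the singular value decomposition (a random numpy example of my own; the lecture may take a different route):

```python
import numpy as np

# Polar decomposition M = U P, with U unitary (here: orthogonal)
# and P positive semidefinite, obtained from the SVD M = W diag(s) Vt.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))

W, s, Vt = np.linalg.svd(M)
U = W @ Vt                     # the unitary factor
P = Vt.T @ np.diag(s) @ Vt     # positive semidefinite: P = sqrt(M^T M)
print(np.allclose(U @ P, M))            # True
print(np.allclose(U.T @ U, np.eye(3)))  # True
```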

On Wednesday, I will finish whatever I didn't manage to cover on Monday.

Here are the notes for the on-line lectures.

Monday, April 6.
Wednesday, April 8.



Final week. Tuesday, April 14.
We devoted the last session on Tuesday to finishing our rudimentary discussion of spectral graph theory, saying a few things about Assignment 10, and doing a review of the material, notably the Jordan canonical form.

Here are the notes for the on-line lecture.

Tuesday, April 14.