
MATH 113: COMMENTS / SOLUTIONS TO THE MIDTERM

Here are some excellent solutions from your classmates - hopefully this convinces
you that it's possible to produce good write-ups in timed conditions! (Though of
course you won't be penalised for the quality of your write-up, as long as you
include all the important points.) I didn't grade questions 4, 5 and 7, so I can't
comment on common mistakes in those. Nor can I guarantee that the solutions here
would get full marks (although I consider them perfect).

1. Let V be a finite dimensional vector space, and let U, W be subspaces of V
of dimensions 3 and 4 respectively. What are the possible dimensions of U + W?

Solution (Sewon Jang).

dim(U + W) = dim U + dim W − dim(U ∩ W)

0 ≤ dim(U ∩ W) ≤ min(dim U, dim W) = 3

3 + 4 − 3 ≤ dim(U + W) ≤ 3 + 4
4 ≤ dim(U + W) ≤ 7

I marked this question leniently because I wrote U ∪ W on the exam when I meant
U + W. A lot of you gave examples where dim(U + W) = 4 or 7, but that doesn't
explain why 4, 5, 6, 7 are the only possibilities. The neat way is to use
inequalities: recall that U ∩ W ⊂ U implies dim(U ∩ W) ≤ dim U.
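
If you want to convince yourself that every value from 4 to 7 really occurs, here
is a small sympy sketch (my own illustration, not something you were asked for):
take U and W spanned by standard basis vectors of R^7 and let their overlap shrink
from 3 vectors down to 0.

    import sympy as sp

    def dim_sum(U_idx, W_idx, n=7):
        # dim(U + W), where U and W are spanned by the listed standard basis vectors of R^n
        e = lambda i: sp.Matrix([1 if j == i else 0 for j in range(n)])
        cols = [e(i) for i in U_idx] + [e(i) for i in W_idx]
        return sp.Matrix.hstack(*cols).rank()

    U = [0, 1, 2]                                                        # dim U = 3
    for W in ([0, 1, 2, 3], [0, 1, 3, 4], [0, 3, 4, 5], [3, 4, 5, 6]):   # dim W = 4
        print(dim_sum(U, W))                                             # prints 4, 5, 6, 7 in turn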


2. Write down the matrix corresponding to the differentiation map D : P2(R) →
P1(R), with respect to the basis {1, z, z^2} of P2(R) and the basis {1, z} of P1(R).
(Here D(p) = p′, and you may assume that D is linear.)

Solution (Dawson Zhou). D(1) = 0; D(z) = 1; D(z^2) = 2z. Thus we have

D(1)   = 0(1) + 0(z)
D(z)   = 1(1) + 0(z)
D(z^2) = 0(1) + 2(z)

Our matrix is

M(D) = [ 0  1  0 ]
       [ 0  0  2 ]
You just have to apply the algorithm here: the image of the first basis vector
goes in the first column, the image of the second basis vector goes in the second
column, and so on. Some of you stated this algorithm in your answer, which
isn’t necessary, but then you’re sure to get partial credit if you make mistakes in
applying the map.
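
As a sanity check (my own, outside the exam): in coordinates, a polynomial
a + bz + cz^2 has coordinate vector (a, b, c), its derivative b + 2cz has coordinate
vector (b, 2c), and multiplying by M(D) should take the first to the second. A short
sympy verification:

    import sympy as sp

    M = sp.Matrix([[0, 1, 0],
                   [0, 0, 2]])              # M(D) from the solution above

    z, a, b, c = sp.symbols('z a b c')
    p = a + b*z + c*z**2

    coords = sp.Matrix([a, b, c])           # coordinates of p in the basis {1, z, z^2}
    image = M * coords                      # should be the coordinates of p' in {1, z}

    assert sp.expand(sp.diff(p, z) - (image[0] + image[1]*z)) == 0
    print(image)                            # Matrix([[b], [2*c]])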

3. Give, with proof, a basis of P3 (R) consisting of polynomials of degree exactly 3.


(You may use without proof that P3 (R) is 4-dimensional.)

Solution (David Wu). Consider the candidate basis of length 4

{x^3, x^3 + x^2, x^3 + x, x^3 + 1}
(all are polynomials of degree 3 and thus members of P3 .)
Consider a linear combination of these vectors

a1 x^3 + a2 (x^3 + x^2) + a3 (x^3 + x) + a4 (x^3 + 1) = 0;   a1, a2, a3, a4 ∈ R

⇒ (a1 + a2 + a3 + a4) x^3 + a2 x^2 + a3 x + a4 = 0
Matching coefficients of each power in the polynomial:
a4 = 0; a3 = 0; a2 = 0 and a1 + a2 + a3 + a4 = 0
Thus a1 = −a2 − a3 − a4 = 0 − 0 − 0 = 0
Hence this collection of vectors is linearly independent. Since P3 has dimension
4 and we know that every linearly independent collection of vectors in P3 with
length 4 is a basis for P3 , our candidate basis is indeed a basis.

Checking either linear independence or spanning is fine, provided you explain
why that's enough.
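
If you want to double-check the linear independence by machine (again, just an
illustration, not part of the expected proof), write each candidate polynomial's
coefficients as a row of a 4 × 4 matrix and compute its rank with sympy:

    import sympy as sp

    x = sp.symbols('x')
    candidates = [x**3, x**3 + x**2, x**3 + x, x**3 + 1]

    # One row per polynomial: coefficients of x^3, x^2, x, 1 in that order.
    A = sp.Matrix([sp.Poly(p, x).all_coeffs() for p in candidates])
    print(A.rank())    # 4, so the four polynomials are linearly independent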

4. Let V be a finite dimensional vector space, and suppose S ∈ L(V). Prove
that S is invertible if and only if S is injective.

I didn't mark this question so I didn't look for good answers, sorry. "Invertible"
here means "has a linear inverse", so you were expected to construct a linear
inverse map for an injective S ∈ L(V) (after using rank-nullity to deduce that
it's also surjective) - see Proposition 3.17.
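
In coordinates the finite-dimensional mechanism looks like this (a concrete
illustration only, not a proof): for a square matrix, a trivial null space forces
full rank by rank-nullity, and then an inverse exists. For example, with sympy:

    import sympy as sp

    S = sp.Matrix([[1, 2, 0],
                   [0, 1, 3],
                   [0, 0, 1]])       # an operator on R^3 in some basis

    assert S.nullspace() == []       # injective: only 0 is sent to 0
    assert S.rank() == S.rows        # rank-nullity then gives full rank, i.e. surjectivity
    print(S.inv())                   # so the (linear) inverse exists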

5. Let V be a finite dimensional vector space over C, and S, T ∈ L(V). Assume
that TS = ST, and show that T and S then have a common eigenvector.

Solution (Yi-Shi Wei). T is an operator on V ⇒ T has an eigenvalue λ (since
V is a finite-dimensional vector space over C)
⇒ ∃ v ≠ 0, v ∈ null(T − λI).
Let’s prove that null( T − λI ) is invariant under S. Take any u ∈ null( T − λI ).
Then Tu = λu. By assumption

T (Su) = TS(u) = ST (u) = S( Tu) = S(λu) = λSu

∴ Su ∈ null( T − λI ), so null( T − λI ) is invariant under S.


Now we can consider S as an operator on null(T − λI), which is a nonzero
finite-dimensional complex vector space, so S has an eigenvector v ∈ null(T − λI).
Since v ∈ null(T − λI), we also have Tv = λv, so this v is a common eigenvector
of S and T.
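
Here is a tiny worked instance in sympy (my own example, purely illustrative):
S is chosen as a polynomial in T, so the two certainly commute, and the vector
spanning null(T − 2I) is an eigenvector of both, exactly as in the proof.

    import sympy as sp

    T = sp.Matrix([[2, 1],
                   [0, 3]])
    S = T**2 + sp.eye(2)          # a polynomial in T, so S and T commute

    assert T*S == S*T

    v = sp.Matrix([1, 0])         # spans null(T - 2I), which is invariant under S
    print(T*v, S*v)               # T*v = 2*v and S*v = 5*v: a common eigenvector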

6. Let V be a finite dimensional vector space, and suppose S, T ∈ L(V). Show
that dim null S + dim null T − dim null(ST) ≥ 0.

Solution (based on Lucy Schoen's). Look at S′, the restriction of S to range T.
By rank-nullity:

dim range T = dim null S′ + dim range S′    (1)
dim V = dim null T + dim range T    (2)
dim V = dim null ST + dim range ST    (3)

(1) + (2) − (3) ⇒ 0 = dim null S′ + dim range S′ + dim null T − dim null ST −
dim range ST.

range S′ = range ST: for all w ∈ range S′, w = S′(v) for some v ∈ range T. So
v = T(u) for some u ∈ V, hence w = S′(Tu) = ST(u) ∈ range ST. Conversely,
if w ∈ range ST, then w = ST(u) for some u ∈ V, so w = S(Tu) = S′(Tu) ∈ range S′.
So dim range S′ = dim range ST, and 0 = dim null S′ + dim null T − dim null ST.

null S′ ⊂ null S: if v ∈ null S′, then Sv = S′v = 0, so v ∈ null S. So
dim null S′ ≤ dim null S, and 0 ≤ dim null S + dim null T − dim null ST.

This is a different solution from the one I gave for pset 3. A couple of people
attempted a third method, which uses bases and doesn't need restriction maps or
rank-nullity. Whichever way you decide to do this, you should explain all
inequalities between dimensions by proving the appropriate inclusions of subspaces.
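
A quick numerical spot-check of the inequality (illustration only; it obviously
doesn't replace the proof): compute the three nullities for a concrete pair of
matrices with sympy.

    import sympy as sp

    def nullity(M):
        return len(M.nullspace())    # dim null M = size of a basis of the null space

    S = sp.Matrix([[1, 0, 0],
                   [0, 0, 0],
                   [0, 0, 1]])
    T = sp.Matrix([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])

    print(nullity(S), nullity(T), nullity(S*T))       # 1, 1, 2: here 1 + 1 - 2 = 0
    assert nullity(S) + nullity(T) - nullity(S*T) >= 0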

7. Let V be the vector space of 2 × 2 matrices.


(i) What is the dimension of V?
(ii) Let A be a fixed 2 × 2 matrix. Show that T ( B) = AB defines a linear operator
on V.
(iii) Show that there exists a T ∈ L(V ) that is not given by matrix multiplication;
in other words, there is no 2 × 2 matrix A such that T ( B) = AB for all B ∈ V.
Hint: it is easier than it may seem, and you need not write down T explicitly.

Solution (based on Armadeep Singh's). (i) Any M ∈ V can be represented as a
linear combination of

[ 1 0 ]   [ 0 1 ]   [ 0 0 ]   [ 0 0 ]
[ 0 0 ],  [ 0 0 ],  [ 1 0 ],  [ 0 1 ],

and these are linearly independent, so this is a basis for V. So dim V = 4.
(ii) A is a 2 × 2 matrix, and so are the elements of V, so the product AB is
defined for every B ∈ V. The product of two 2 × 2 matrices is again a 2 × 2 matrix,
so T maps V into V; together with the linearity checked below, this gives T ∈ L(V).

T ( B1 + B2 ) = A( B1 + B2 )
= AB1 + AB2 matrix multiplication is distributive
= T ( B1 ) + T ( B2 )

T (αB) = A(αB)
= αAB scalar matrices commute with other matrices
= αT ( B)
(iii) Define T• : V → L(V), T•(A) = TA, where TA(B) = AB for all B ∈ V. By (ii),
TA indeed belongs to L(V).
We check that T• is a linear map:

TA1+A2(B) = (A1 + A2)B
          = A1 B + A2 B          matrix multiplication is distributive
          = TA1(B) + TA2(B)

TαA(B) = (αA)B
       = α(AB)                   matrix multiplication is associative
       = αTA(B)

dim V = 4 and dim L(V) = (dim V)^2 = 16, so dim range T• ≤ dim V = 4 < 16 by
rank-nullity, and T• cannot be surjective. So there is some T ∈ L(V) \ range T•;
that is, T is not given by B ↦ AB for any A ∈ V.
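
For a coordinate picture of this dimension count (my own illustration; none of
this was expected in the exam): if you flatten a 2 × 2 matrix into a vector of
length 4, row by row, then B ↦ AB is represented by the 4 × 4 matrix kron(A, I),
and these matrices sweep out only a 4-dimensional subspace of the 16-dimensional
L(V). In numpy:

    import numpy as np

    def left_mult_matrix(A):
        # 4x4 matrix representing B -> A @ B on row-major-flattened 2x2 matrices
        return np.kron(A, np.eye(2))

    A = np.array([[1., 2.], [3., 4.]])
    B = np.array([[5., 6.], [7., 8.]])

    # The representation is correct: flatten(A @ B) == left_mult_matrix(A) @ flatten(B).
    assert np.allclose((A @ B).flatten(), left_mult_matrix(A) @ B.flatten())

    # Push the four standard basis matrices E_ij through A -> left_mult_matrix(A):
    images = []
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2))
            E[i, j] = 1.0
            images.append(left_mult_matrix(E).flatten())
    print(np.linalg.matrix_rank(np.array(images)))   # 4, far short of dim L(V) = 16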

Amy Pang, 2010
