
Linear Algebra

A Solution Manual for


Axler (1997), Lax (2007), and Roman
(2008)
Jianfei Shen
School of Economics, The University of New South Wales
Sydney, Australia
2009
I hear, I forget;
I see, I remember;
I do, I understand.
Old Proverb
Contents
Part I Linear Algebra Done Right (Axler, 1997)
1 Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Finite-Dimensional Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3 Linear Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4 Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5 Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6 Inner-Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
7 Operators on Inner-Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Part II Linear Algebra and Its Applications (Lax, 2007)
8 Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
9 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
10 Linear Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
11 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
12 Determinant and Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Acronyms

Notation    Description
U ≤ V       U is a subspace of V
L(V)        The set of operators on V
R(T)        The range of T
N(T)        The null space of T
F           The field on which a vector (linear) space is defined
V ≅ U       V is isomorphic to U
x̄ = x + Y   The coset of Y containing x; x is called a coset representative for x̄
V/Y         The quotient space modulo Y
Pₙ(F)       The set of polynomials of degree ≤ n, whose coefficients are in F
Sym(A)      The set of all permutations of the set A: the symmetric group on A
sign(σ)     The signature of a permutation σ
Part I
Linear Algebra Done Right (Axler, 1997)
1
VECTOR SPACES
As You Should Verify
Remark 1.1. U = {p ∈ P(F) : p(3) = 0} is a subspace of P(F).

Proof. The additive identity 0 of P(F) is in the set; if p, q ∈ U, then (p + q)(3) = p(3) + q(3) = 0; and for any a ∈ F and p ∈ U, we have (ap)(3) = a · 0 = 0. ∎
Remark 1.2. If U₁, …, Uₙ are subspaces of V, then the sum U₁ + ⋯ + Uₙ is a subspace of V.

Proof. First, 0 ∈ Uᵢ for all i implies that 0 = 0 + ⋯ + 0 ∈ U₁ + ⋯ + Uₙ. Now let u, v ∈ U₁ + ⋯ + Uₙ. Then u = u₁ + ⋯ + uₙ and v = v₁ + ⋯ + vₙ, where uᵢ, vᵢ ∈ Uᵢ, and so u + v = (u₁ + v₁) + ⋯ + (uₙ + vₙ) ∈ U₁ + ⋯ + Uₙ since uᵢ + vᵢ ∈ Uᵢ for all i. Finally, let u = u₁ + ⋯ + uₙ ∈ U₁ + ⋯ + Uₙ and a ∈ F. Then au = au₁ + ⋯ + auₙ ∈ U₁ + ⋯ + Uₙ. ∎
Exercises
Exercise 1.3 (1.1). Suppose a and b are real numbers, not both 0. Find real numbers c and d such that 1/(a + bi) = c + di.

Solution. Note that for z ∈ C with z ≠ 0, there exists a unique w ∈ C such that zw = 1; that is, w = 1/z. Let z = a + bi and w = c + di. Then

(a + bi)(c + di) = (ac − bd) + (ad + bc)i = 1 + 0i

yields

ac − bd = 1,  ad + bc = 0  ⟹  c = a/(a² + b²),  d = −b/(a² + b²). ∎
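As a quick numerical sanity check (ours, not part of the original manual), the formulas for c and d can be verified against Python's built-in complex arithmetic:

```python
def reciprocal(a, b):
    """Return (c, d) with 1/(a + bi) = c + di; requires (a, b) != (0, 0)."""
    denom = a * a + b * b
    return a / denom, -b / denom

# (a + bi)(c + di) should be 1 + 0i up to rounding.
c, d = reciprocal(3.0, 4.0)
assert abs(complex(3, 4) * complex(c, d) - 1) < 1e-12
```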
Exercise 1.4 (1.2). Show that (−1 + √3 i)/2 is a cube root of 1 (meaning that its cube equals 1).
Proof. We have

((−1 + √3 i)/2)³ = ((−1 + √3 i)/2)² · ((−1 + √3 i)/2) = (−1/2 − (√3/2)i) · (−1/2 + (√3/2)i) = 1/4 + 3/4 = 1,

where the middle step uses ((−1 + √3 i)/2)² = (1 − 2√3 i − 3)/4 = −1/2 − (√3/2)i. ∎
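A one-line numerical confirmation (ours, not from the manual):

```python
import math

# w = (-1 + sqrt(3) i) / 2
w = complex(-1, math.sqrt(3)) / 2
assert abs(w ** 3 - 1) < 1e-12   # w cubes to 1 ...
assert abs(w - 1) > 1            # ... but w itself is not 1
```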
Exercise 1.5 (1.3). Prove that −(−v) = v for every v ∈ V.

Proof. We have (−v) + v = 0, so by the uniqueness of the additive inverse, the additive inverse of −v, i.e., −(−v), is v. ∎
Exercise 1.6 (1.4). Prove that if a ∈ F, v ∈ V, and av = 0, then a = 0 or v = 0.

Proof. Suppose that v ≠ 0 and a ≠ 0. Then v = 1 · v = (1/a)(av) = (1/a) · 0 = 0, a contradiction. ∎
Exercise 1.7 (1.5). For each of the following subsets of F³, determine whether it is a subspace of F³:

a. U = {(x₁, x₂, x₃) ∈ F³ : x₁ + 2x₂ + 3x₃ = 0};

b. U = {(x₁, x₂, x₃) ∈ F³ : x₁ + 2x₂ + 3x₃ = 4};

c. U = {(x₁, x₂, x₃) ∈ F³ : x₁x₂x₃ = 0};

d. U = {(x₁, x₂, x₃) ∈ F³ : x₁ = 5x₃}.
Solution. (a) Additive identity: 0 ∈ U. Closed under addition: let x, y ∈ U; then x + y = (x₁ + y₁, x₂ + y₂, x₃ + y₃), and (x₁ + y₁) + 2(x₂ + y₂) + 3(x₃ + y₃) = (x₁ + 2x₂ + 3x₃) + (y₁ + 2y₂ + 3y₃) = 0 + 0 = 0; that is, x + y ∈ U. Closed under scalar multiplication: pick any a ∈ F and x ∈ U. Then ax₁ + 2(ax₂) + 3(ax₃) = a(x₁ + 2x₂ + 3x₃) = 0, i.e., ax ∈ U. In sum, U is a subspace of F³, and actually, U is a hyperplane through 0.

(b) U is not a subspace because 0 ∉ U.

(c) Let x = (1, 1, 0) and y = (0, 0, 1). Then x, y ∈ U, but x + y = (1, 1, 1) ∉ U.

(d) 0 ∈ U. Let x, y ∈ U; then x₁ + y₁ = 5(x₃ + y₃), so x + y ∈ U. Let a ∈ F and x ∈ U; then ax₁ = a · 5x₃ = 5(ax₃), so ax ∈ U. Hence U is a subspace of F³. ∎
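A small numerical illustration (ours, with sample vectors of our choosing) of why (a) is closed under addition while (c) is not:

```python
def in_a(x):
    """Membership test for U_a = {x : x1 + 2*x2 + 3*x3 = 0}."""
    return x[0] + 2 * x[1] + 3 * x[2] == 0

def in_c(x):
    """Membership test for U_c = {x : x1 * x2 * x3 = 0}."""
    return x[0] * x[1] * x[2] == 0

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

# (a): sums of members stay in the set for these samples ...
x, y = (1, 1, -1), (3, 0, -1)
assert in_a(x) and in_a(y) and in_a(add(x, y))

# ... while (c) fails: (1,1,0) and (0,0,1) lie in U_c but their sum does not.
u, v = (1, 1, 0), (0, 0, 1)
assert in_c(u) and in_c(v) and not in_c(add(u, v))
```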
Exercise 1.8 (1.6). Give an example of a nonempty subset U of R² such that U is closed under addition and under taking additive inverses (meaning −u ∈ U whenever u ∈ U), but U is not a subspace of R².

Solution. Let U = Z², the set of points with both coordinates integers. It is closed under addition and additive inverses, but it is not closed under scalar multiplication: for instance, (1/2)(1, 0) ∉ U. ∎
Exercise 1.9 (1.7). Give an example of a nonempty subset U of R² such that U is closed under scalar multiplication, but U is not a subspace of R².

Solution. Let

U = {(x, y) ∈ R² : x = y} ∪ {(x, y) ∈ R² : x = −y}.

In this case, (x, x) + (x, −x) = (2x, 0) ∉ U unless x = 0. ∎
Exercise 1.10 (1.8). Prove that the intersection of any collection of subspaces of V is a subspace of V.

Proof. Let {Uᵢ} be a collection of subspaces of V. (i) Every Uᵢ is a subspace, so 0 ∈ Uᵢ for all i and hence 0 ∈ ⋂ Uᵢ. (ii) Let x, y ∈ ⋂ Uᵢ. Then x, y ∈ Uᵢ for all i, and so x + y ∈ Uᵢ for all i, which implies that x + y ∈ ⋂ Uᵢ. (iii) Let a ∈ F and x ∈ ⋂ Uᵢ. Then ax ∈ Uᵢ for all i implies that ax ∈ ⋂ Uᵢ. ∎
Exercise 1.11 (1.9). Prove that the union of two subspaces of V is a subspace of V if and only if one of the subspaces is contained in the other.

Proof. Let U and W be two subspaces of V. The "if" part is trivial, so we focus on the "only if" part. Let U ∪ W be a subspace, and suppose U ⊄ W and W ⊄ U. Pick x ∈ U ∖ W and y ∈ W ∖ U. Then x + y ∉ U, for otherwise y = (x + y) − x ∈ U; similarly, x + y ∉ W. But then x + y ∉ U ∪ W, which contradicts the fact that x, y ∈ U ∪ W and U ∪ W is a subspace.

A nontrivial vector space V over an infinite field F is not the union of a finite number of proper subspaces; see Roman (2008, Theorem 1.2). ∎
Exercise 1.12 (1.10). Suppose that U is a subspace of V. What is U + U?

Solution. Since U + U is the smallest subspace containing U, and U is itself a subspace containing U, we have U + U ⊆ U; on the other hand, U ⊆ U + U is clear. Hence, U + U = U. ∎
Exercise 1.13 (1.11). Is the operation of addition on the subspaces of V commutative? Associative?

Solution. Yes. Let U₁, U₂, and U₃ be subspaces of V. Then

U₁ + U₂ = {u₁ + u₂ : u₁ ∈ U₁, u₂ ∈ U₂} = {u₂ + u₁ : u₂ ∈ U₂, u₁ ∈ U₁} = U₂ + U₁.

Similarly for associativity. ∎
Exercise 1.14 (1.12). Does the operation of addition on the subspaces of V have an additive identity? Which subspaces have additive inverses?

Solution. The set {0} is the additive identity: U + {0} = {u + 0 : u ∈ U} = U. Only {0} has an additive inverse. Suppose that U is a subspace with additive inverse W, i.e., U + W = {u + w : u ∈ U and w ∈ W} = {0}. Since 0 ∈ U, we have 0 + w = 0 for all w ∈ W, which means that W = {0}. But it is clear that U + {0} = {0} iff U = {0}. ∎
Exercise 1.15 (1.13). Prove or give a counterexample: if U₁, U₂, W are subspaces of V such that U₁ + W = U₂ + W, then U₁ = U₂.

Solution. The statement is false in general. Suppose U₁, U₂ ⊆ W; then U₁ + W = W = U₂ + W for any such U₁ and U₂, even when U₁ ≠ U₂ (for instance, U₁ = {0} and U₂ = W ≠ {0}). ∎
Exercise 1.16 (1.14). Suppose U is the subspace of P(F) consisting of all polynomials p of the form p(z) = az² + bz⁵, where a, b ∈ F. Find a subspace W of P(F) such that P(F) = U ⊕ W.

Solution. Let W be the subspace of polynomials whose z² and z⁵ coefficients both vanish:

W = {p ∈ P(F) : p(z) = a₀ + a₁z + a₃z³ + a₄z⁴ + a₆z⁶ + ⋯}.

Then U ∩ W = {0} and U + W = P(F), so P(F) = U ⊕ W. ∎
Exercise 1.17 (1.15). Prove or give a counterexample: if U₁, U₂, W are subspaces of V such that V = U₁ ⊕ W and V = U₂ ⊕ W, then U₁ = U₂.

Solution. Let V = R², W = {(x, 0) ∈ R² : x ∈ R}, U₁ = {(x, x) ∈ R² : x ∈ R}, and U₂ = {(x, −x) ∈ R² : x ∈ R}. Then

U₁ + W = {(x + y, x) ∈ R² : x, y ∈ R} = R² = V,
U₂ + W = {(x + y, −x) ∈ R² : x, y ∈ R} = R² = V,
Uᵢ ∩ W = {(0, 0)}, i = 1, 2.

Therefore, V = Uᵢ ⊕ W for i = 1, 2, but U₁ ≠ U₂. ∎
2
FINITE-DIMENSIONAL VECTOR SPACES
As You Should Verify
Remark 2.1 (p. 22). The span of any list of vectors in V is a subspace of V.

Proof. If U = ( ) is the empty list, define span(U) = {0}, which is a subspace of V. Now let U = (v₁, …, vₙ) be a list of vectors in V. Then span(U) = {∑ᵢ₌₁ⁿ aᵢvᵢ : aᵢ ∈ F}. (i) 0 = ∑ᵢ₌₁ⁿ 0vᵢ ∈ span(U). (ii) Let u = ∑ᵢ₌₁ⁿ aᵢvᵢ and v = ∑ᵢ₌₁ⁿ bᵢvᵢ. Then u + v = ∑ᵢ₌₁ⁿ (aᵢ + bᵢ)vᵢ ∈ span(U). (iii) For every u = ∑ᵢ₌₁ⁿ aᵢvᵢ and a ∈ F, we have au = ∑ᵢ₌₁ⁿ (aaᵢ)vᵢ ∈ span(U). ∎
Remark 2.2 (p. 23). Pₘ(F) is a subspace of P(F).

Proof. (i) The zero polynomial of P(F) is in Pₘ(F) since its degree is −∞ < m by definition. (ii) Let p = ∑ᵢ₌₀ᵏ aᵢzⁱ and q = ∑ⱼ₌₀ⁿ bⱼzʲ, where k, n ≤ m and aₖ, bₙ ≠ 0. Without loss of generality, suppose k ≥ n. Then p + q = ∑ᵢ₌₀ⁿ (aᵢ + bᵢ)zⁱ + ∑ⱼ₌ₙ₊₁ᵏ aⱼzʲ ∈ Pₘ(F). (iii) It is easy to see that if p ∈ Pₘ(F) then ap ∈ Pₘ(F). ∎
Exercises
Exercise 2.3 (2.1). Prove that if (v₁, …, vₙ) spans V, then so does the list (v₁ − v₂, v₂ − v₃, …, vₙ₋₁ − vₙ, vₙ) obtained by subtracting from each vector (except the last one) the following vector.

Proof. We first show that span(v₁, …, vₙ) ⊆ span(v₁ − v₂, …, vₙ₋₁ − vₙ, vₙ). Suppose that V = span(v₁, …, vₙ). Then, for any v ∈ V, there exist a₁, …, aₙ ∈ F such that

v = a₁v₁ + a₂v₂ + ⋯ + aₙvₙ
 = a₁(v₁ − v₂) + (a₁ + a₂)v₂ + a₃v₃ + ⋯ + aₙvₙ
 = a₁(v₁ − v₂) + (a₁ + a₂)(v₂ − v₃) + (a₁ + a₂ + a₃)v₃ + a₄v₄ + ⋯ + aₙvₙ
 = ∑ᵢ₌₁ⁿ⁻¹ (∑ⱼ₌₁ⁱ aⱼ)(vᵢ − vᵢ₊₁) + (∑ⱼ₌₁ⁿ aⱼ)vₙ
 ∈ span(v₁ − v₂, v₂ − v₃, …, vₙ₋₁ − vₙ, vₙ).

For the converse direction, let u ∈ span(v₁ − v₂, v₂ − v₃, …, vₙ₋₁ − vₙ, vₙ). Then there exist b₁, …, bₙ ∈ F such that

u = b₁(v₁ − v₂) + b₂(v₂ − v₃) + ⋯ + bₙ₋₁(vₙ₋₁ − vₙ) + bₙvₙ
 = b₁v₁ + (b₂ − b₁)v₂ + (b₃ − b₂)v₃ + ⋯ + (bₙ − bₙ₋₁)vₙ ∈ span(v₁, …, vₙ). ∎
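The telescoping expansion used in the converse direction can be sanity-checked numerically (our own check, on random vectors in R³):

```python
# Verify: sum_{i<n} b_i (v_i - v_{i+1}) + b_n v_n
#      == b_1 v_1 + sum_{i>=2} (b_i - b_{i-1}) v_i
import random

n = 5
v = [[random.random() for _ in range(3)] for _ in range(n)]  # vectors in R^3
b = [random.random() for _ in range(n)]

def axpy(c, x, acc):
    """Return acc + c*x, componentwise."""
    return [a + c * xi for a, xi in zip(acc, x)]

lhs = [0.0, 0.0, 0.0]
for i in range(n - 1):
    diff = [p - q for p, q in zip(v[i], v[i + 1])]
    lhs = axpy(b[i], diff, lhs)
lhs = axpy(b[n - 1], v[n - 1], lhs)

rhs = axpy(b[0], v[0], [0.0, 0.0, 0.0])
for i in range(1, n):
    rhs = axpy(b[i] - b[i - 1], v[i], rhs)

assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```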
Exercise 2.4 (2.2). Prove that if (v₁, …, vₙ) is linearly independent in V, then so is the list (v₁ − v₂, v₂ − v₃, …, vₙ₋₁ − vₙ, vₙ) obtained by subtracting from each vector (except the last one) the following vector.

Proof. Let

0 = ∑ᵢ₌₁ⁿ⁻¹ aᵢ(vᵢ − vᵢ₊₁) + aₙvₙ = a₁v₁ + (a₂ − a₁)v₂ + ⋯ + (aₙ − aₙ₋₁)vₙ.

Since (v₁, …, vₙ) is linearly independent, we have a₁ = a₂ − a₁ = ⋯ = aₙ − aₙ₋₁ = 0, i.e., a₁ = a₂ = ⋯ = aₙ = 0. ∎
Exercise 2.5 (2.3). Suppose (v₁, …, vₙ) is linearly independent in V and w ∈ V. Prove that if (v₁ + w, …, vₙ + w) is linearly dependent, then w ∈ span(v₁, …, vₙ).

Proof. If (v₁ + w, …, vₙ + w) is linearly dependent, then there exists a list (a₁, …, aₙ) ≠ 0 such that

∑ᵢ₌₁ⁿ aᵢ(vᵢ + w) = ∑ᵢ₌₁ⁿ aᵢvᵢ + (∑ᵢ₌₁ⁿ aᵢ)w = 0. (2.1)

We must have ∑ᵢ₌₁ⁿ aᵢ ≠ 0: otherwise (2.1) would give ∑ᵢ₌₁ⁿ aᵢvᵢ = 0 with (a₁, …, aₙ) ≠ 0, contradicting the linear independence of (v₁, …, vₙ). It then follows from (2.1) that

w = ∑ᵢ₌₁ⁿ (−aᵢ / ∑ⱼ₌₁ⁿ aⱼ)vᵢ ∈ span(v₁, …, vₙ). ∎
Exercise 2.6 (2.4). Suppose m is a positive integer. Is the set consisting of 0 and all polynomials with coefficients in F and with degree equal to m a subspace of P(F)?

Solution. No. Consider p, q with

p(z) = a₀ + a₁z + ⋯ + aₘzᵐ,
q(z) = b₀ + b₁z + ⋯ − aₘzᵐ,

where aₘ ≠ 0. Then p(z) + q(z) = (a₀ + b₀) + (a₁ + b₁)z + ⋯ + (aₘ₋₁ + bₘ₋₁)zᵐ⁻¹, whose degree is less than or equal to m − 1. Hence, this set of polynomials with degree equal to m is not closed under addition. ∎
Exercise 2.7 (2.5). Prove that F^∞ is infinite dimensional.

Proof. Suppose that F^∞ were finite dimensional, say of dimension m. Then the length of every linearly independent list of vectors in F^∞ would be at most m. Consider the list of m + 1 vectors

((1, 0, 0, 0, …), (0, 1, 0, 0, …), (0, 0, 1, 0, …), …, (0, …, 0, 1, 0, …)),

where the k-th vector has a 1 in its k-th coordinate and 0 elsewhere. It is easy to show that this list is linearly independent, and its length is m + 1 > m, a contradiction. ∎
Exercise 2.8 (2.6). Prove that the real vector space consisting of all continuous real-valued functions on the interval [0, 1] is infinite dimensional.

Proof. Consider {p restricted to [0, 1] : p ∈ P(R)}, which is a subspace of this space. It is infinite dimensional, since the functions 1, x, x², … are linearly independent; and a vector space containing an infinite-dimensional subspace is itself infinite dimensional. ∎
Exercise 2.9 (2.7). Prove that V is infinite dimensional if and only if there is a sequence v₁, v₂, … of vectors in V such that (v₁, …, vₙ) is linearly independent for every positive integer n.

Proof. Let V be infinite dimensional. Clearly, there exists a nonzero vector v₁ ∈ V; for otherwise V = {0} and V would be finite dimensional. Since V is infinite dimensional, span(v₁) ≠ V; hence there exists v₂ ∈ V ∖ span(v₁); similarly, span(v₁, v₂) ≠ V, so we can choose v₃ ∈ V ∖ span(v₁, v₂). Continuing in this way, we construct an infinite sequence v₁, v₂, ….

We then use the induction principle to prove that for every positive integer n, the list (v₁, …, vₙ) is linearly independent. Obviously, v₁ is linearly independent since v₁ ≠ 0. Assume that (v₁, …, vₙ) is linearly independent for some positive integer n. We now show that (v₁, …, vₙ, vₙ₊₁) is linearly independent. If not, then there exist a₁, …, aₙ, aₙ₊₁ ∈ F, not all 0, such that ∑ᵢ₌₁ⁿ⁺¹ aᵢvᵢ = 0. We must have aₙ₊₁ ≠ 0: if aₙ₊₁ = 0, then ∑ᵢ₌₁ⁿ aᵢvᵢ = 0 implies that a₁ = ⋯ = aₙ = aₙ₊₁ = 0, since (v₁, …, vₙ) is linearly independent by the induction hypothesis. Hence,

vₙ₊₁ = ∑ᵢ₌₁ⁿ (−aᵢ/aₙ₊₁)vᵢ,

i.e., vₙ₊₁ ∈ span(v₁, …, vₙ), which contradicts the construction of (v₁, …, vₙ₊₁).

Conversely, assume that there exists an infinite sequence v₁, v₂, … of vectors in V such that (v₁, …, vₙ) is linearly independent for every positive integer n. Suppose V were finite dimensional; that is, there is a spanning list of vectors (u₁, …, uₘ) of V, so that the length of every linearly independent list of vectors is less than or equal to m (by Theorem 2.6). This contradicts the independence of (v₁, …, vₘ₊₁). ∎
Exercise 2.10 (2.8). Let U be the subspace of R⁵ defined by

U = {(x₁, x₂, x₃, x₄, x₅) ∈ R⁵ : x₁ = 3x₂ and x₃ = 7x₄}.

Find a basis of U.

Proof. A particular basis of U is ((3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1)). ∎
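A quick check (ours, not part of the manual) that these three vectors lie in U and that an arbitrary element of U is one of their combinations:

```python
basis = [(3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1)]

def in_U(x):
    """Membership test for U = {x in R^5 : x1 = 3*x2 and x3 = 7*x4}."""
    return x[0] == 3 * x[1] and x[2] == 7 * x[3]

assert all(in_U(v) for v in basis)

# A generic element of U is (3b, b, 7c, c, d), which equals
# b*(3,1,0,0,0) + c*(0,0,7,1,0) + d*(0,0,0,0,1), so the list spans U.
b, c, d = 2, -1, 5
combo = tuple(b * v1 + c * v2 + d * v3 for v1, v2, v3 in zip(*basis))
assert combo == (3 * b, b, 7 * c, c, d)
```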
Exercise 2.11 (2.9). Prove or disprove: there exists a basis (p₀, p₁, p₂, p₃) of P₃(F) such that none of the polynomials p₀, p₁, p₂, p₃ has degree 2.

Proof. Notice that 1, z, z², z³ form the standard basis of P₃(F), but z² has degree 2. So we let p₀ = 1, p₁ = z, p₂ = z² + z³, and p₃ = z³, none of which has degree 2. Since z² = p₂ − p₃, we have span(p₀, p₁, p₂, p₃) ⊇ span(1, z, z², z³) = P₃(F), and so (p₀, p₁, p₂, p₃) is a basis of P₃(F) by Theorem 2.16. ∎
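Representing elements of P₃ by their coefficient 4-tuples, the key identity z² = p₂ − p₃ can be checked mechanically (our own illustration):

```python
# Polynomials in P_3 as coefficient tuples (a0, a1, a2, a3).
p0 = (1, 0, 0, 0)   # 1
p1 = (0, 1, 0, 0)   # z
p2 = (0, 0, 1, 1)   # z^2 + z^3  (degree 3, not 2)
p3 = (0, 0, 0, 1)   # z^3

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

# z^2 = p2 - p3, so the standard basis lies in span(p0, p1, p2, p3).
assert sub(p2, p3) == (0, 0, 1, 0)
```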
Exercise 2.12 (2.10). Suppose that V is finite dimensional, with dim V = n. Prove that there exist one-dimensional subspaces U₁, …, Uₙ of V such that V = U₁ ⊕ ⋯ ⊕ Uₙ.

Proof. Let (v₁, …, vₙ) be a basis of V. For each i = 1, …, n, let Uᵢ = span(vᵢ). Then each Uᵢ is a one-dimensional subspace of V, and so U₁ + ⋯ + Uₙ ⊆ V. Clearly, ∑ᵢ₌₁ⁿ dim Uᵢ = n = dim V. By Proposition 2.19, it suffices to show that V ⊆ U₁ + ⋯ + Uₙ. This holds because for every v ∈ V,

v = ∑ᵢ₌₁ⁿ aᵢvᵢ ∈ U₁ + ⋯ + Uₙ. ∎
Exercise 2.13 (2.11). Suppose that V is finite dimensional and U is a subspace of V such that dim U = dim V. Prove that U = V.

Proof. Let (u₁, …, uₙ) be a basis of U. Since (u₁, …, uₙ) is linearly independent in V and the length of (u₁, …, uₙ) is equal to dim V, it is a basis of V. Therefore, V = span(u₁, …, uₙ) = U. ∎
Exercise 2.14 (2.12). Suppose that p₀, p₁, …, pₘ are polynomials in Pₘ(F) such that pⱼ(2) = 0 for each j. Prove that (p₀, p₁, …, pₘ) is not linearly independent in Pₘ(F).

Proof. dim Pₘ(F) = m + 1 since (1, z, …, zᵐ) is a basis of Pₘ(F). If (p₀, …, pₘ) were linearly independent, it would be a basis of Pₘ(F) by Proposition 2.17. Then every p ∈ Pₘ(F) could be written p = ∑ᵢ₌₀ᵐ aᵢpᵢ, and hence would satisfy p(2) = ∑ᵢ₌₀ᵐ aᵢpᵢ(2) = 0. Taking an arbitrary p ∈ Pₘ(F) with p(2) ≠ 0 (e.g., p = 1) gives a contradiction. ∎
Exercise 2.15 (2.13). Suppose U and W are subspaces of R⁸ such that dim U = 3, dim W = 5, and U + W = R⁸. Prove that U ∩ W = {0}.

Proof. Since R⁸ = U + W and dim R⁸ = 8 = dim U + dim W, we have R⁸ = U ⊕ W by Proposition 2.19; then Proposition 1.9 implies that U ∩ W = {0}. ∎
Exercise 2.16 (2.14). Suppose that U and W are both five-dimensional subspaces of R⁹. Prove that U ∩ W ≠ {0}.

Proof. If U ∩ W = {0}, then dim(U + W) = dim U + dim W − dim(U ∩ W) = 5 + 5 − 0 = 10 > 9; but U + W ⊆ R⁹, a contradiction. ∎
Exercise 2.17 (2.15). Prove or give a counterexample:

dim(U₁ + U₂ + U₃) = dim U₁ + dim U₂ + dim U₃ − dim(U₁ ∩ U₂) − dim(U₁ ∩ U₃) − dim(U₂ ∩ U₃) + dim(U₁ ∩ U₂ ∩ U₃).

Solution. We construct a counterexample to show the proposition is false. Let

U₁ = {(x, 0) ∈ R² : x ∈ R},
U₂ = {(0, x) ∈ R² : x ∈ R},
U₃ = {(x, x) ∈ R² : x ∈ R}.

Then U₁ ∩ U₂ = U₁ ∩ U₃ = U₂ ∩ U₃ = U₁ ∩ U₂ ∩ U₃ = {(0, 0)}; hence

dim(U₁ ∩ U₂) = dim(U₁ ∩ U₃) = dim(U₂ ∩ U₃) = dim(U₁ ∩ U₂ ∩ U₃) = 0,

so the right-hand side equals 1 + 1 + 1 = 3. But dim(U₁ + U₂ + U₃) = 2 since U₁ + U₂ + U₃ = R². ∎
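The arithmetic of the counterexample can be tabulated directly (our own bookkeeping of the dimensions derived above):

```python
# Three distinct lines through 0 in R^2.
dim_U = [1, 1, 1]      # dim U1, dim U2, dim U3
dim_pair = [0, 0, 0]   # all pairwise intersections are {0}
dim_triple = 0         # so is the triple intersection

rhs = sum(dim_U) - sum(dim_pair) + dim_triple
lhs = 2                # U1 + U2 + U3 = R^2

assert rhs == 3 and lhs != rhs   # the proposed identity fails
```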
Exercise 2.18 (2.16). Prove that if V is finite dimensional and U₁, …, Uₘ are subspaces of V, then dim(U₁ + ⋯ + Uₘ) ≤ ∑ᵢ₌₁ᵐ dim Uᵢ.

Proof. For each i = 1, …, m, let Bᵢ be a basis of Uᵢ, so that ∑ᵢ₌₁ᵐ dim Uᵢ is the total number of vectors in B₁, …, Bₘ. Let B be the list obtained by concatenating B₁, …, Bₘ. Clearly, U₁ + ⋯ + Uₘ = span(B), and dim span(B) is at most the length of B by Theorem 2.10. Therefore, dim(U₁ + ⋯ + Uₘ) ≤ ∑ᵢ₌₁ᵐ dim Uᵢ. ∎
Exercise 2.19 (2.17). Suppose V is finite dimensional. Prove that if U₁, …, Uₘ are subspaces of V such that V = U₁ ⊕ ⋯ ⊕ Uₘ, then dim V = ∑ᵢ₌₁ᵐ dim Uᵢ.

Proof. For each i = 1, …, m, let Bᵢ = (vᵢ,₁, …, vᵢ,ₙᵢ) be a basis of Uᵢ, so that ∑ᵢ₌₁ᵐ dim Uᵢ = ∑ᵢ₌₁ᵐ nᵢ. Let B be the concatenation of B₁, …, Bₘ. Then span(B) = V. We show that B is linearly independent. Suppose

0 = (a₁,₁v₁,₁ + ⋯ + a₁,ₙ₁v₁,ₙ₁) + ⋯ + (aₘ,₁vₘ,₁ + ⋯ + aₘ,ₙₘvₘ,ₙₘ),

and write uᵢ = aᵢ,₁vᵢ,₁ + ⋯ + aᵢ,ₙᵢvᵢ,ₙᵢ ∈ Uᵢ for each i. Then u₁ + ⋯ + uₘ = 0, and so uᵢ = 0 for each i = 1, …, m (since V = U₁ ⊕ ⋯ ⊕ Uₘ). But then every aᵢ,ⱼ = 0, because Bᵢ is a basis of Uᵢ. Thus B is linearly independent and spans V, i.e., it is a basis of V, and dim V equals the length of B, which is ∑ᵢ₌₁ᵐ dim Uᵢ. ∎
3
LINEAR MAPS
As You Should Verify
Remark 3.1 (p. 40). Given a basis (v₁, …, vₙ) of V and any choice of vectors w₁, …, wₙ ∈ W, we can construct a linear map T : V → W such that

T(a₁v₁ + ⋯ + aₙvₙ) = a₁w₁ + ⋯ + aₙwₙ,

where a₁, …, aₙ are arbitrary elements of F. Then T is linear.

Proof. Let u, v ∈ V with u = ∑ᵢ₌₁ⁿ aᵢvᵢ and v = ∑ᵢ₌₁ⁿ bᵢvᵢ; let a ∈ F. Then

T(u + v) = T(∑ᵢ₌₁ⁿ (aᵢ + bᵢ)vᵢ) = ∑ᵢ₌₁ⁿ (aᵢ + bᵢ)wᵢ = ∑ᵢ₌₁ⁿ aᵢwᵢ + ∑ᵢ₌₁ⁿ bᵢwᵢ = Tu + Tv,

and

T(au) = T(∑ᵢ₌₁ⁿ (aaᵢ)vᵢ) = ∑ᵢ₌₁ⁿ (aaᵢ)wᵢ = a(∑ᵢ₌₁ⁿ aᵢwᵢ) = aTu. ∎

Remark 3.2 (p. 40-41). Let S, T ∈ L(V, W). Then S + T, aT ∈ L(V, W).

Proof. As for S + T, we have (S + T)(u + v) = S(u + v) + T(u + v) = Su + Sv + Tu + Tv = (S + T)(u) + (S + T)(v), and (S + T)(av) = S(av) + T(av) = a(S + T)(v).

As for aT, we have (aT)(u + v) = a(T(u + v)) = a[Tu + Tv] = aTu + aTv = (aT)u + (aT)v, and (aT)(bv) = a(T(bv)) = abTv = b(aT)v. ∎
Exercises
Exercise 3.3 (3.1). Show that every linear map from a one-dimensional vector space to itself is multiplication by some scalar. More precisely, prove that if dim V = 1 and T ∈ L(V, V), then there exists a ∈ F such that Tv = av for all v ∈ V.

Proof. Let (w) be a basis of V. Then Tw = aw for some a ∈ F. For an arbitrary v ∈ V, there exists b ∈ F such that v = bw. Then

Tv = T(bw) = b(Tw) = b(aw) = a(bw) = av. ∎
Exercise 3.4 (3.2). Give an example of a function f : R² → R such that f(av) = af(v) for all a ∈ R and all v ∈ R², but f is not linear.

Proof. For any v = (v₁, v₂) ∈ R², let

f(v₁, v₂) = v₁ if v₁ = v₂, and f(v₁, v₂) = 0 if v₁ ≠ v₂.

Then f(av) = af(v) for all a ∈ R and v ∈ R². Now consider u, v ∈ R² with u₁ ≠ u₂ and v₁ ≠ v₂, but u₁ + v₁ = u₂ + v₂ > 0 (for example, u = (1, 2) and v = (2, 1)). Notice that

f(u + v) = u₁ + v₁ > 0 = f(u) + f(v).

Hence, f is not linear. ∎
Exercise 3.5 (3.3). Suppose that V is finite dimensional. Prove that any linear map on a subspace of V can be extended to a linear map on V. In other words, show that if U is a subspace of V and S ∈ L(U, W), then there exists T ∈ L(V, W) such that Tu = Su for all u ∈ U.

Proof. Let (u₁, …, uₘ) be a basis of U, and extend it to a basis of V:

(u₁, …, uₘ, v₁, …, vₙ).

Choose n vectors w₁, …, wₙ from W. Define a map T : V → W by letting

T(∑ᵢ₌₁ᵐ aᵢuᵢ + ∑ⱼ₌₁ⁿ bⱼvⱼ) = ∑ᵢ₌₁ᵐ aᵢSuᵢ + ∑ⱼ₌₁ⁿ bⱼwⱼ.

It is trivial to see that Su = Tu for all u ∈ U, so we only show that T is a linear map. Let u, v ∈ V with u = ∑ᵢ₌₁ᵐ aᵢuᵢ + ∑ⱼ₌₁ⁿ bⱼvⱼ and v = ∑ᵢ₌₁ᵐ cᵢuᵢ + ∑ⱼ₌₁ⁿ dⱼvⱼ; let a ∈ F. Then

T(u + v) = T(∑ᵢ₌₁ᵐ (aᵢ + cᵢ)uᵢ + ∑ⱼ₌₁ⁿ (bⱼ + dⱼ)vⱼ)
 = ∑ᵢ₌₁ᵐ (aᵢ + cᵢ)Suᵢ + ∑ⱼ₌₁ⁿ (bⱼ + dⱼ)wⱼ
 = (∑ᵢ₌₁ᵐ aᵢSuᵢ + ∑ⱼ₌₁ⁿ bⱼwⱼ) + (∑ᵢ₌₁ᵐ cᵢSuᵢ + ∑ⱼ₌₁ⁿ dⱼwⱼ)
 = Tu + Tv,

and

T(au) = T(∑ᵢ₌₁ᵐ (aaᵢ)uᵢ + ∑ⱼ₌₁ⁿ (abⱼ)vⱼ) = ∑ᵢ₌₁ᵐ (aaᵢ)Suᵢ + ∑ⱼ₌₁ⁿ (abⱼ)wⱼ = a(∑ᵢ₌₁ᵐ aᵢSuᵢ + ∑ⱼ₌₁ⁿ bⱼwⱼ) = aTu. ∎
Exercise 3.6 (3.4). Suppose that T is a linear map from V to F. Prove that if u ∈ V is not in N(T), then

V = N(T) ⊕ {au : a ∈ F}.

Proof. Let T ∈ L(V, F). Since u ∈ V ∖ N(T), we get u ≠ 0 and Tu ≠ 0. Thus, dim R(T) ≥ 1. Since dim R(T) ≤ dim F = 1, we get dim R(T) = 1. It follows from Theorem 3.4 that

dim V = dim N(T) + 1 = dim N(T) + dim{au : a ∈ F}. (3.1)

Let (v₁, …, vₙ) be a basis of N(T). Then (v₁, …, vₙ, u) is linearly independent since u ∉ span(v₁, …, vₙ) = N(T). It follows from (3.1) that (v₁, …, vₙ, u) is a basis of V (by Proposition 2.17). Therefore

V = span(v₁, …, vₙ, u) = {∑ᵢ₌₁ⁿ aᵢvᵢ + au : a₁, …, aₙ, a ∈ F}
 = {∑ᵢ₌₁ⁿ aᵢvᵢ : a₁, …, aₙ ∈ F} + {au : a ∈ F}
 = N(T) + {au : a ∈ F}. (3.2)

It follows from (3.1) and (3.2) that V = N(T) ⊕ {au : a ∈ F} by Proposition 2.19. ∎
Exercise 3.7 (3.5). Suppose that T ∈ L(V, W) is injective and (v₁, …, vₙ) is linearly independent in V. Prove that (Tv₁, …, Tvₙ) is linearly independent in W.

Proof. Let

0 = ∑ᵢ₌₁ⁿ aᵢTvᵢ = T(∑ᵢ₌₁ⁿ aᵢvᵢ).

Then ∑ᵢ₌₁ⁿ aᵢvᵢ = 0 since N(T) = {0}. The linear independence of (v₁, …, vₙ) implies that a₁ = ⋯ = aₙ = 0. ∎
Exercise 3.8 (3.6). Prove that if S₁, …, Sₙ are injective linear maps such that S₁ ⋯ Sₙ makes sense, then S₁ ⋯ Sₙ is injective.

Proof. We use mathematical induction to prove this claim. It holds for n = 1 trivially. Suppose that S₁ ⋯ Sₙ is injective whenever S₁, …, Sₙ are. Now assume that S₁, …, Sₙ₊₁ are all injective linear maps, and let T = S₁ ⋯ Sₙ₊₁. For every v ∈ N(T) we have

0 = Tv = (S₁ ⋯ Sₙ)(Sₙ₊₁v).

But the above display implies that Sₙ₊₁v = 0, since S₁ ⋯ Sₙ is injective by the induction hypothesis, which implies further that v = 0 since Sₙ₊₁ is injective. This proves that N(T) = {0}, and so T is injective. ∎
Exercise 3.9 (3.7). Prove that if (v₁, …, vₙ) spans V and T ∈ L(V, W) is surjective, then (Tv₁, …, Tvₙ) spans W.

Proof. Since T is surjective, for any w ∈ W there exists v ∈ V such that Tv = w; since V = span(v₁, …, vₙ), there exists (a₁, …, aₙ) ∈ Fⁿ such that v = ∑ᵢ₌₁ⁿ aᵢvᵢ. Hence,

w = T(∑ᵢ₌₁ⁿ aᵢvᵢ) = ∑ᵢ₌₁ⁿ aᵢTvᵢ;

that is, W = span(Tv₁, …, Tvₙ). ∎
Exercise 3.10 (3.8). Suppose that V is finite dimensional and that T ∈ L(V, W). Prove that there exists a subspace U of V such that U ∩ N(T) = {0} and R(T) = {Tu : u ∈ U}.

Proof. Let (u₁, …, uₘ) be a basis of N(T), which can be extended to a basis (u₁, …, uₘ, v₁, …, vₙ) of V. Let U = span(v₁, …, vₙ). Then U ∩ N(T) = {0} (see the proof of Proposition 2.13).

To see R(T) = {Tu : u ∈ U}, take an arbitrary v ∈ V and write v = ∑ᵢ₌₁ᵐ aᵢuᵢ + ∑ⱼ₌₁ⁿ bⱼvⱼ. Then

Tv = T(∑ᵢ₌₁ᵐ aᵢuᵢ + ∑ⱼ₌₁ⁿ bⱼvⱼ) = T(∑ⱼ₌₁ⁿ bⱼvⱼ) = Tu

for u = ∑ⱼ₌₁ⁿ bⱼvⱼ ∈ U. ∎
Exercise 3.11 (3.9). Prove that if T is a linear map from F⁴ to F² such that

N(T) = {(x₁, x₂, x₃, x₄) ∈ F⁴ : x₁ = 5x₂ and x₃ = 7x₄},

then T is surjective.

Proof. Let v₁ = (5, 1, 0, 0) and v₂ = (0, 0, 7, 1). It is easy to see that (v₁, v₂) is a basis of N(T); that is, dim N(T) = 2. Then

dim R(T) = dim F⁴ − dim N(T) = 4 − 2 = 2 = dim F²,

and so T is surjective. ∎
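For concreteness, here is one particular map with this null space (the choice of T below is ours, not the manual's; any T with this null space behaves the same way), together with checks of the two claims:

```python
def T(x):
    """A sample linear map F^4 -> F^2 with null space
    {x : x1 = 5*x2 and x3 = 7*x4}."""
    x1, x2, x3, x4 = x
    return (x1 - 5 * x2, x3 - 7 * x4)

# The stated basis of the null space is killed by T ...
assert T((5, 1, 0, 0)) == (0, 0) and T((0, 0, 7, 1)) == (0, 0)

# ... and T is surjective: (c1, c2) has the explicit preimage (c1, 0, c2, 0).
c1, c2 = 3, -4
assert T((c1, 0, c2, 0)) == (c1, c2)
```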
Exercise 3.12 (3.10). Prove that there does not exist a linear map from F⁵ to F² whose null space equals

{(x₁, x₂, x₃, x₄, x₅) ∈ F⁵ : x₁ = 3x₂ and x₃ = x₄ = x₅}.

Proof. It is easy to see that the two vectors v₁ = (3, 1, 0, 0, 0) and v₂ = (0, 0, 1, 1, 1) constitute a basis of this subspace. If it were N(T) for some T ∈ L(F⁵, F²), then dim N(T) = 2 and so dim R(T) = 5 − 2 = 3 > dim F², which is impossible. ∎
Exercise 3.13 (3.11). Prove that if there exists a linear map on V whose null space and range are both finite dimensional, then V is finite dimensional.

Proof. If dim N(T) < ∞ and dim R(T) < ∞, then dim V = dim N(T) + dim R(T) < ∞. ∎
Exercise 3.14 (3.12). Suppose that V and W are both finite dimensional. Prove that there exists a surjective linear map from V onto W if and only if dim W ≤ dim V.

Proof. If there exists a surjective linear map T ∈ L(V, W), then dim W = dim R(T) = dim V − dim N(T) ≤ dim V.

Now let dim W ≤ dim V. Let (v₁, …, vₙ) be a basis of V, and let (w₁, …, wₘ) be a basis of W, with m ≤ n. Define T ∈ L(V, W) by letting

T(∑ᵢ₌₁ᵐ aᵢvᵢ + ∑ⱼ₌ₘ₊₁ⁿ aⱼvⱼ) = ∑ᵢ₌₁ᵐ aᵢwᵢ.

Then for every w = ∑ᵢ₌₁ᵐ aᵢwᵢ ∈ W, the vector v = ∑ᵢ₌₁ᵐ aᵢvᵢ satisfies Tv = w, i.e., T is surjective. ∎
Exercise 3.15 (3.13). Suppose that V and W are finite dimensional and that U is a subspace of V. Prove that there exists T ∈ L(V, W) such that N(T) = U if and only if dim U ≥ dim V − dim W.

Proof. For every T ∈ L(V, W), if N(T) = U, then dim U = dim V − dim R(T) ≥ dim V − dim W.

Now let dim U ≥ dim V − dim W. Let (u₁, …, uₘ) be a basis of U, which can be extended to a basis (u₁, …, uₘ, v₁, …, vₙ) of V. Let (w₁, …, wₚ) be a basis of W. Then m ≥ (m + n) − p implies that n ≤ p. Define T ∈ L(V, W) by letting

T(∑ᵢ₌₁ᵐ aᵢuᵢ + ∑ⱼ₌₁ⁿ bⱼvⱼ) = ∑ⱼ₌₁ⁿ bⱼwⱼ.

Then N(T) = U. ∎
Exercise 3.16 (3.14). Suppose that W is finite dimensional and T ∈ L(V, W). Prove that T is injective if and only if there exists S ∈ L(W, V) such that ST is the identity map on V.

Proof. Suppose first that ST = Id_V. If u, v ∈ V satisfy Tu = Tv, then u = S(Tu) = S(Tv) = v; hence T is injective.

For the converse direction, let T be injective. Then dim V ≤ dim W < ∞ by Corollary 3.5. Let (v₁, …, vₙ) be a basis of V. It follows from Exercise 3.7 that (Tv₁, …, Tvₙ) is linearly independent, and so can be extended to a basis (Tv₁, …, Tvₙ, w₁, …, wₘ) of W. Define S ∈ L(W, V) by letting

S(Tvᵢ) = vᵢ and S(wⱼ) = 0.

Then (ST)vᵢ = vᵢ for each i, and so ST = Id_V. ∎
Exercise 3.17 (3.15). Suppose that V is finite dimensional and T ∈ L(V, W). Prove that T is surjective if and only if there exists S ∈ L(W, V) such that TS is the identity map on W.

Proof. If TS = Id_W, then for any w ∈ W we have T(Sw) = Id_W(w) = w; that is, Sw ∈ V satisfies T(Sw) = w, and so T is surjective.

If T is surjective, then R(T) = W, which is finite dimensional since V is. Let (w₁, …, wₘ) be a basis of W; by surjectivity, for each i we can pick vᵢ ∈ V with Tvᵢ = wᵢ. Define S ∈ L(W, V) by letting Swᵢ = vᵢ. Then (TS)wᵢ = wᵢ for each i, and so TS = Id_W. ∎
Exercise 3.18 (3.16¹). Suppose that U and V are finite-dimensional vector spaces and that S ∈ L(V, W), T ∈ L(U, V). Prove that

dim N(ST) ≤ dim N(S) + dim N(T).

Proof. We have W ←(S)− V ←(T)− U. Since

R(ST) = (ST)[U] = S[T[U]] = S[R(T)],

we have dim R(ST) = dim S[R(T)]. Let N be a complement of R(T), so that V = R(T) ⊕ N; then

dim V = dim R(T) + dim N, (3.3)

and

R(S) = S[V] = S[R(T)] + S[N].

It follows from Theorem 2.18 that

dim R(S) = dim S[R(T)] + dim S[N] − dim(S[R(T)] ∩ S[N])
 ≤ dim S[R(T)] + dim S[N]
 ≤ dim S[R(T)] + dim N
 = dim R(ST) + dim N,

and hence that

dim V − dim N(S) = dim R(S) ≤ dim R(ST) + dim N = dim R(ST) + dim V − dim R(T), (3.4)

where the last equality is from (3.3). Hence, (3.4) becomes

dim R(T) − dim N(S) ≤ dim R(ST),

or equivalently,

dim U − dim N(T) − dim N(S) ≤ dim U − dim N(ST);

that is, dim N(ST) ≤ dim N(S) + dim N(T). ∎
Exercise 3.19 (3.17). Prove that the distributive property holds for matrix addition and matrix multiplication.

Proof. Let A = [aᵢⱼ] ∈ Mat(m, n, F), B = [bᵢⱼ] ∈ Mat(n, p, F), and C = [cᵢⱼ] ∈ Mat(n, p, F). Then B + C = [bᵢⱼ + cᵢⱼ] ∈ Mat(n, p, F), and it is evident that A(B + C), AB, and AC are all m × p matrices. For every entry,

[A(B + C)]ᵢⱼ = ∑ₖ₌₁ⁿ aᵢₖ(bₖⱼ + cₖⱼ) = ∑ₖ₌₁ⁿ aᵢₖbₖⱼ + ∑ₖ₌₁ⁿ aᵢₖcₖⱼ = [AB]ᵢⱼ + [AC]ᵢⱼ.

Hence A(B + C) = AB + AC. ∎

¹ See Halmos (1995, Problem 95, p. 270).

Exercise 3.20 (3.18). Prove that matrix multiplication is associative.

Proof. Similar to Exercise 3.19. ∎
Exercise 3.21 (3.19). Suppose T ∈ L(Fⁿ, Fᵐ) and that

M(T) = [aᵢⱼ], an m × n matrix,

where we are using the standard bases. Prove that

T(x₁, …, xₙ) = (∑ᵢ₌₁ⁿ a₁ᵢxᵢ, …, ∑ᵢ₌₁ⁿ aₘᵢxᵢ)

for every (x₁, …, xₙ) ∈ Fⁿ.

Proof. We need to prove that Tx = M(T)x for any x ∈ Fⁿ. Let (e₁ⁿ, …, eₙⁿ) be the standard basis of Fⁿ, and let (e₁ᵐ, …, eₘᵐ) be the standard basis of Fᵐ. Then

T(x₁, …, xₙ) = T(∑ᵢ₌₁ⁿ xᵢeᵢⁿ) = ∑ᵢ₌₁ⁿ xᵢTeᵢⁿ = ∑ᵢ₌₁ⁿ xᵢ ∑ⱼ₌₁ᵐ aⱼᵢeⱼᵐ = (∑ᵢ₌₁ⁿ a₁ᵢxᵢ, …, ∑ᵢ₌₁ⁿ aₘᵢxᵢ). ∎
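The identity Tx = M(T)x is easy to watch in action on a small example (the particular T and matrix below are our own illustration):

```python
def mat_vec(A, x):
    """Compute M(T)x: entry i of the result is sum_k A[i][k] * x[k]."""
    return tuple(sum(a * xk for a, xk in zip(row, x)) for row in A)

# A map T : F^3 -> F^2 given by its matrix in the standard bases:
A = [[1, 2, 3],
     [4, 5, 6]]

def T(x):
    x1, x2, x3 = x
    return (x1 + 2 * x2 + 3 * x3, 4 * x1 + 5 * x2 + 6 * x3)

x = (7, -1, 2)
assert T(x) == mat_vec(A, x)
```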
Exercise 3.22 (3.20). Suppose (v₁, …, vₙ) is a basis of V. Prove that the function T : V → Mat(n, 1, F) defined by Tv = M(v) is an invertible linear map of V onto Mat(n, 1, F); here M(v) is the matrix of v ∈ V with respect to the basis (v₁, …, vₙ).

Proof. For every v = ∑ᵢ₌₁ⁿ bᵢvᵢ ∈ V, we have M(v) = (b₁, …, bₙ)ᵀ. Since av = ∑ᵢ₌₁ⁿ (abᵢ)vᵢ for any a ∈ F, we have M(av) = aM(v). Further, for any u = ∑ᵢ₌₁ⁿ aᵢvᵢ ∈ V and any v = ∑ᵢ₌₁ⁿ bᵢvᵢ ∈ V, we have u + v = ∑ᵢ₌₁ⁿ (aᵢ + bᵢ)vᵢ; hence, M(u + v) = M(u) + M(v). Therefore, T is a linear map.

We now show that T is invertible by proving that T is bijective. (i) If Tv = (0, …, 0)ᵀ, then v = ∑ᵢ₌₁ⁿ 0vᵢ = 0; that is, N(T) = {0}. Hence, T is injective. (ii) Take any M = (a₁, …, aₙ)ᵀ ∈ Mat(n, 1, F) and let v = ∑ᵢ₌₁ⁿ aᵢvᵢ. Then Tv = M; that is, T is surjective. ∎
Exercise 3.23 (3.21). Prove that every linear map from Mat(n, 1, F) to Mat(m, 1, F) is given by a matrix multiplication. In other words, prove that if T ∈ L(Mat(n, 1, F), Mat(m, 1, F)), then there exists an m × n matrix A such that TB = AB for every B ∈ Mat(n, 1, F).

Proof. A basis of Mat(m, n, F) consists of those m × n matrices that have 0 in all entries except for a 1 in one entry. In particular, a basis for Mat(n, 1, F) consists of the standard basis (e₁ⁿ, …, eₙⁿ) of Fⁿ written as columns, where, for example, e₁ⁿ = (1, 0, …, 0)ᵀ. For any T ∈ L(Mat(n, 1, F), Mat(m, 1, F)), let A be the m × n matrix

A = (Te₁ⁿ ⋯ Teₙⁿ)

whose columns are the Teᵢⁿ. Then for any B = ∑ᵢ₌₁ⁿ aᵢeᵢⁿ ∈ Mat(n, 1, F), we have

TB = T(∑ᵢ₌₁ⁿ aᵢeᵢⁿ) = ∑ᵢ₌₁ⁿ aᵢTeᵢⁿ = AB. ∎
Exercise 3.24 (3.22). Suppose that V is finite dimensional and S, T ∈ L(V). Prove that ST is invertible if and only if both S and T are invertible.

Proof. First assume that both S and T are invertible. Then (ST)(T⁻¹S⁻¹) = S Id S⁻¹ = Id and (T⁻¹S⁻¹)(ST) = Id. Hence, ST is invertible and (ST)⁻¹ = T⁻¹S⁻¹.

Now suppose that ST is invertible; in particular it is injective. Take any u, v ∈ V with u ≠ v; then (ST)u ≠ (ST)v; that is,

u ≠ v ⟹ S(Tu) ≠ S(Tv). (3.5)

But then Tu ≠ Tv, which implies that T is invertible by Theorem 3.21. Finally, for any u, v ∈ V with u ≠ v, there exist u′, v′ ∈ V with u′ ≠ v′ such that u = Tu′ and v = Tv′ (since T is bijective). Hence, by (3.5), u ≠ v implies that

Su = S(Tu′) ≠ S(Tv′) = Sv;

that is, S is injective, too. Applying Theorem 3.21 once again, we know that S is invertible. ∎
I Exercise 3.25 (3.23). Suppose that V is finite dimensional and S, T ∈ L(V). Prove that ST = Id if and only if TS = Id.

Proof. We only prove the "only if" part; the "if" part can be proved similarly. If ST = Id, then ST is bijective and so invertible. Then by Exercise 3.24, both S and T are invertible. Therefore,

ST = Id ⟹ S⁻¹ST = S⁻¹Id ⟹ T = S⁻¹ ⟹ TS = S⁻¹S = Id. ∎
I Exercise 3.26 (3.24). Suppose that V is finite dimensional and T ∈ L(V). Prove that T is a scalar multiple of the identity if and only if ST = TS for every S ∈ L(V).

Proof. If T = aId for some a ∈ F, then for any S ∈ L(V), we have

ST = aS Id = aS = a Id S = TS.

For the converse direction, assume that ST = TS for all S ∈ L(V). Take any nonzero v ∈ V and extend (v) to a basis (v, u_1, ..., u_{n−1}) of V. Define S ∈ L(V) by Sv = v and Su_i = 0 for each i; then range S = span(v). Since

Tv = T(Sv) = (TS)v = (ST)v = S(Tv) ∈ range S = span(v),

every nonzero vector of V is an eigenvector of T. By Exercise 5.13, T is a scalar multiple of the identity. ∎
I Exercise 3.27 (3.25). Prove that if V is finite dimensional with dim V > 1, then the set of noninvertible operators on V is not a subspace of L(V).

Proof. Since every finite-dimensional vector space is isomorphic to some F^n, we just focus on F^n. For simplicity, consider F². Let S, T ∈ L(F²) with

S(a, b) = (a, 0) and T(a, b) = (0, b).

Obviously, both S and T are noninvertible since they are not injective; however, S + T = Id is invertible, so the set of noninvertible operators is not closed under addition. ∎
I Exercise 3.28 (3.26). Suppose n is a positive integer and a_{jk} ∈ F for j, k = 1, ..., n. Prove that the following are equivalent:

a. The trivial solution x_1 = ⋯ = x_n = 0 is the only solution to the homogeneous system of equations

∑_{k=1}^n a_{1k} x_k = 0
⋮
∑_{k=1}^n a_{nk} x_k = 0.

b. For every c_1, ..., c_n ∈ F, there exists a solution to the system of equations

∑_{k=1}^n a_{1k} x_k = c_1
⋮
∑_{k=1}^n a_{nk} x_k = c_n.

Proof. Let A = (a_{jk}) be the n×n matrix of coefficients. If we let Tx = Ax, then by Exercise 3.22, T ∈ L(F^n, F^n). Statement (a) says precisely that null T = {0}; hence

dim range T = n − 0 = n.

Since range T is a subspace of F^n, we have range T = F^n; that is, T is surjective: for any (c_1, ..., c_n), there is a solution (x_1, ..., x_n). Conversely, if (b) holds, then T is surjective, so dim null T = n − dim range T = 0, and the homogeneous system has only the trivial solution. ∎
4
POLYNOMIALS

I Exercise 4.1 (4.1). Suppose m and n are positive integers with m ≤ n. Prove that there exists a polynomial p ∈ P_n(F) with exactly m distinct roots.

Proof. Let

p(z) = ∏_{i=1}^m (z − z_i)^{k_i},

where z_1, ..., z_m ∈ F are distinct and k_1, ..., k_m are positive integers with ∑_{i=1}^m k_i = n. Then p ∈ P_n(F) has exactly the m distinct roots z_1, ..., z_m. ∎
I Exercise 4.2 (4.2). Suppose that z_1, ..., z_{m+1} are distinct elements of F and that w_1, ..., w_{m+1} ∈ F. Prove that there exists a unique polynomial p ∈ P_m(F) such that p(z_j) = w_j for j = 1, ..., m+1.

Proof. Let p_i(x) = ∏_{j≠i} (x − z_j). Then deg p_i = m and p_i(z_j) ≠ 0 if and only if i = j. Define

p(x) = ∑_{i=1}^{m+1} [w_i / p_i(z_i)] p_i(x).

Then deg p ≤ m and

p(z_j) = [w_1/p_1(z_1)] p_1(z_j) + ⋯ + [w_j/p_j(z_j)] p_j(z_j) + ⋯ + [w_{m+1}/p_{m+1}(z_{m+1})] p_{m+1}(z_j) = w_j,

since every term with i ≠ j vanishes. For uniqueness, if p and q in P_m(F) both satisfy the interpolation conditions, then p − q ∈ P_m(F) has the m + 1 distinct roots z_1, ..., z_{m+1}, so p − q = 0. ∎
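The interpolation construction above can be sketched numerically; the nodes and values below are hypothetical data, chosen only to exercise the formula:

```python
import numpy as np

# Hypothetical data: m+1 = 4 distinct nodes and target values, so p has degree <= 3.
z = np.array([0.0, 1.0, 2.0, 4.0])
w = np.array([1.0, -1.0, 3.0, 0.5])

def lagrange(x, z, w):
    """Evaluate the interpolating polynomial p with p(z_j) = w_j at x."""
    total = 0.0
    for i in range(len(z)):
        # p_i(x) / p_i(z_i): the i-th basis polynomial, 1 at z_i and 0 at the other nodes.
        num = np.prod([x - z[j] for j in range(len(z)) if j != i])
        den = np.prod([z[i] - z[j] for j in range(len(z)) if j != i])
        total += w[i] * num / den
    return total

# p reproduces the prescribed value at every node.
assert all(np.isclose(lagrange(zj, z, w), wj) for zj, wj in zip(z, w))
```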
I Exercise 4.3 (4.3). Prove that if p, q ∈ P(F), with p ≠ 0, then there exist unique polynomials s, r ∈ P(F) such that q = sp + r and deg r < deg p.

Proof. Existence is the division algorithm. For uniqueness, assume that there also exist s′, r′ ∈ P(F) such that q = s′p + r′ and deg r′ < deg p. Then

(s − s′)p = r′ − r.

If s ≠ s′, then deg[(s − s′)p] = deg(s − s′) + deg p ≥ deg p > deg(r′ − r), so the two sides cannot be equal; a contradiction. Hence, s = s′ and so r = r′. ∎
I Exercise 4.4 (4.4). Suppose p ∈ P(C) has degree m. Prove that p has m distinct roots if and only if p and its derivative p′ have no roots in common.

Proof. Suppose first that p has m distinct roots. If λ is any root of p, then p(z) = (z − λ)q(z) with q(λ) ≠ 0, since λ is not a multiple root. Then

p′(z) = q(z) + (z − λ)q′(z),

so p′(λ) = q(λ) ≠ 0; hence no root of p is a root of p′.

Conversely, if p does not have m distinct roots, then some root λ is a multiple root, so p(z) = (z − λ)²h(z) for some h ∈ P(C). Then p′(z) = 2(z − λ)h(z) + (z − λ)²h′(z), so p′(λ) = 0; that is, λ is a common root of p and p′. ∎
I Exercise 4.5 (4.5). Prove that every polynomial with odd degree and real coefficients has a real root.

Proof. Let p ∈ P(R) with deg p odd; without loss of generality, its leading coefficient is positive. Then p(x) → −∞ as x → −∞ and p(x) → +∞ as x → +∞, so p takes both negative and positive values. Since p is continuous, the intermediate value theorem yields x* ∈ R such that p(x*) = 0. ∎
5
EIGENVALUES AND EIGENVECTORS

As You Should Verify

Remark 5.1 (p. 80). Fix an operator T ∈ L(V); then the function from P(F) to L(V) given by p ↦ p(T) is linear.

Proof. Let the mapping be ψ : P(F) → L(V) with ψ(p) = p(T). For any p, q ∈ P(F), we have ψ(p + q) = (p + q)(T) = p(T) + q(T) = ψ(p) + ψ(q). For any a ∈ F, we have ψ(ap) = (ap)(T) = a p(T) = a ψ(p). ∎
Exercises

I Exercise 5.2 (5.1). Suppose T ∈ L(V). Prove that if U_1, ..., U_m are subspaces of V invariant under T, then U_1 + ⋯ + U_m is invariant under T.

Proof. Take an arbitrary u ∈ U_1 + ⋯ + U_m; then u = u_1 + ⋯ + u_m, where u_i ∈ U_i for every i = 1, ..., m. Therefore, Tu = Tu_1 + ⋯ + Tu_m ∈ U_1 + ⋯ + U_m since Tu_i ∈ U_i. ∎
I Exercise 5.3 (5.2). Suppose T ∈ L(V). Prove that the intersection of any collection of subspaces of V invariant under T is invariant under T.

Proof. Let {U_i ⊆ V : i ∈ I} be a collection of subspaces of V invariant under T, where I is an index set, and let U = ⋂_{i∈I} U_i. If u ∈ U, then u ∈ U_i for every i ∈ I, and so Tu ∈ U_i for every i ∈ I. Then Tu ∈ U; that is, U is invariant under T. ∎
I Exercise 5.4 (5.3). Prove or give a counterexample: if U is a subspace of V that is invariant under every operator on V, then U = {0} or U = V.

Proof. Assume that U ≠ {0} and U ≠ V. Let (u_1, ..., u_m) be a basis of U, which then can be extended to a basis (u_1, ..., u_m, v_1, ..., v_n) of V, where n ≥ 1 since U ≠ V. Define an operator T ∈ L(V) by letting

T(a_1 u_1 + ⋯ + a_m u_m + b_1 v_1 + ⋯ + b_n v_n) = (a_1 + ⋯ + a_m + b_1 + ⋯ + b_n) v_1.

Then Tu_1 = v_1 ∉ U, so U fails to be invariant under T. ∎
I Exercise 5.5 (5.4). Suppose that S, T ∈ L(V) are such that ST = TS. Prove that null(T − λId) is invariant under S for every λ ∈ F.

Proof. If u ∈ null(T − λId), then (T − λId)(u) = Tu − λu = 0; hence

S(Tu − λu) = S0 ⟹ STu − λSu = 0 ⟹ TSu − λSu = 0 ⟹ (T − λId)(Su) = 0;

that is, Su ∈ null(T − λId). ∎
I Exercise 5.6 (5.5). Define T ∈ L(F²) by T(w, z) = (z, w). Find all eigenvalues and eigenvectors of T.

Proof. Tu = λu implies that (z, w) = (λw, λz), so z = λw and w = λz, whence w = λ²w; a nonzero eigenvector therefore requires λ² = 1. Hence, λ_1 = 1 and λ_2 = −1, with corresponding eigenvectors the nonzero multiples of (1, 1) and (1, −1), respectively. Since dim F² = 2, these are all the eigenvalues and eigenvectors of T. ∎
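These eigenvalues and eigenvectors can be double-checked numerically from the matrix of T in the standard basis:

```python
import numpy as np

# Matrix of T(w, z) = (z, w) with respect to the standard basis.
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])

vals = np.linalg.eigvals(T)
# The eigenvalues are 1 and -1.
assert sorted(np.round(vals.real, 8)) == [-1.0, 1.0]
# (1, 1) and (1, -1) are eigenvectors for 1 and -1 respectively.
assert np.allclose(T @ [1, 1], np.array([1, 1]))
assert np.allclose(T @ [1, -1], -np.array([1, -1]))
```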
I Exercise 5.7 (5.6). Define T ∈ L(F³) by T(z_1, z_2, z_3) = (2z_2, 0, 5z_3). Find all eigenvalues and eigenvectors of T.

Proof. If λ ∈ F is an eigenvalue of T and (z_1, z_2, z_3) ≠ 0 is a corresponding eigenvector, then T(z_1, z_2, z_3) = λ(z_1, z_2, z_3); that is,

2z_2 = λz_1, (i)
0 = λz_2, (ii)
5z_3 = λz_3. (iii)  (5.1)

• If z_2 ≠ 0, then λ = 0 from (ii); but then z_2 = 0 from (i). A contradiction. Hence, z_2 = 0 and (5.1) becomes

0 = λz_1, (i′)
5z_3 = λz_3. (iii′)  (5.2)

• If z_3 ≠ 0, then λ = 5 from (iii′); then (i′) implies that z_1 = 0. Hence, λ = 5 is an eigenvalue, and the corresponding eigenvectors are the nonzero multiples of (0, 0, 1).

• If z_1 ≠ 0, then λ = 0 from (i′); then (iii′) implies that z_3 = 0. Hence, λ = 0 is an eigenvalue, and the corresponding eigenvectors are the nonzero multiples of (1, 0, 0). ∎
I Exercise 5.8 (5.7). Suppose n is a positive integer and T ∈ L(F^n) is defined by

T(x_1, ..., x_n) = (∑_{i=1}^n x_i, ..., ∑_{i=1}^n x_i);

in other words, T is the operator whose matrix (with respect to the standard basis) consists of all 1's. Find all eigenvalues and eigenvectors of T.

Proof. If λ ∈ F is an eigenvalue of T and x = (x_1, ..., x_n) ≠ 0 is a corresponding eigenvector, then

∑_{i=1}^n x_i = λx_1 = ⋯ = λx_n.

If λ = 0, the condition is ∑_{i=1}^n x_i = 0; for n ≥ 2 there exist nonzero such x, so 0 is an eigenvalue, and its eigenvectors are exactly the nonzero x with ∑_{i=1}^n x_i = 0. If λ ≠ 0, then x_1 = ⋯ = x_n; writing x_i = c ≠ 0 gives nc = λc, so λ = n, with eigenvectors the nonzero multiples of (1, ..., 1). ∎
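A numerical sanity check of the spectrum of the all-1's matrix, for a hypothetical size n = 5:

```python
import numpy as np

n = 5
T = np.ones((n, n))  # the all-1's matrix

vals = np.linalg.eigvals(T)
# The eigenvalues are n (once) and 0 (with multiplicity n - 1).
assert np.isclose(max(vals.real), n)
assert sum(np.isclose(v, 0) for v in vals.real) == n - 1
# (1, ..., 1) is an eigenvector for n; any vector summing to 0 is an eigenvector for 0.
assert np.allclose(T @ np.ones(n), n * np.ones(n))
x = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
assert np.allclose(T @ x, 0 * x)
```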
I Exercise 5.9 (5.8). Find all eigenvalues and eigenvectors of the backward shift operator T ∈ L(F^∞) defined by T(z_1, z_2, z_3, ...) = (z_2, z_3, ...).

Proof. For any λ ∈ F with λ ≠ 0, we have T(λ, λ², λ³, ...) = (λ², λ³, ...) = λ(λ, λ², ...); hence, every λ ≠ 0 is an eigenvalue of T. We now show that λ = 0 is also an eigenvalue: let z = (z_1, 0, 0, ...) with z_1 ≠ 0. Then Tz = (0, 0, ...) = 0z. Thus every λ ∈ F is an eigenvalue of T. ∎
I Exercise 5.10 (5.9). Suppose T ∈ L(V) and dim range T = k. Prove that T has at most k + 1 distinct eigenvalues.

Proof. Suppose that T has at least k + 2 distinct eigenvalues, and take k + 2 of them: λ_1, ..., λ_{k+2}. Then there are k + 2 corresponding nonzero eigenvectors u_1, ..., u_{k+2} satisfying Tu_1 = λ_1 u_1, ..., Tu_{k+2} = λ_{k+2} u_{k+2}. Since eigenvectors corresponding to distinct eigenvalues are linearly independent, the list (λ_1 u_1, ..., λ_{k+2} u_{k+2}), after discarding the at most one zero vector (at most one λ_i can be 0), contains at least k + 1 linearly independent vectors. This list lies in range T, which means that dim range T ≥ k + 1. A contradiction. ∎
I Exercise 5.11 (5.10). Suppose T ∈ L(V) is invertible and λ ∈ F \ {0}. Prove that λ is an eigenvalue of T if and only if 1/λ is an eigenvalue of T⁻¹.

Proof. If λ ≠ 0 is an eigenvalue of T, then there exists a nonzero u ∈ V such that Tu = λu. Therefore,

T⁻¹(Tu) = T⁻¹(λu) ⟹ u = λT⁻¹u ⟹ T⁻¹u = u/λ;

that is, 1/λ is an eigenvalue of T⁻¹. The other direction can be proved in the same way. ∎
I Exercise 5.12 (5.11). Suppose S, T ∈ L(V). Prove that ST and TS have the same eigenvalues.

Proof. Let λ be an eigenvalue of ST, and u ≠ 0 a corresponding eigenvector. Then (ST)u = λu. Therefore,

T(STu) = T(λu) ⟹ (TS)(Tu) = λ(Tu).

Hence, if Tu ≠ 0, then λ is an eigenvalue of TS, with corresponding eigenvector Tu; if Tu = 0, then (ST)u = S(Tu) = 0 implies that λ = 0 (since u ≠ 0). In this case, T is not injective, and so TS is not injective (by Exercise 3.24). But this means that there exists v ≠ 0 such that (TS)v = 0 = 0v; that is, 0 is an eigenvalue of TS. The other direction can be proved in the same way. ∎
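A numerical spot check with random matrices; comparing characteristic polynomials, which determine the eigenvalues with multiplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))

# ST and TS have the same characteristic polynomial, hence the same eigenvalues.
assert np.allclose(np.poly(S @ T), np.poly(T @ S))
```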
I Exercise 5.13 (5.12). Suppose T ∈ L(V) is such that every vector in V is an eigenvector of T. Prove that T is a scalar multiple of the identity operator.

Proof. Let (v_1, ..., v_n) be a basis of V and take arbitrary v_i and v_j from it. Then there are λ_i and λ_j such that Tv_i = λ_i v_i and Tv_j = λ_j v_j. Since v_i + v_j is also an eigenvector, there is λ ∈ F such that T(v_i + v_j) = λ(v_i + v_j). Therefore,

λ_i v_i + λ_j v_j = λv_i + λv_j;

that is, (λ_i − λ)v_i + (λ_j − λ)v_j = 0. Since (v_i, v_j) is linearly independent, we have λ_i = λ_j = λ. Hence, for any v = ∑_{i=1}^n a_i v_i ∈ V, we have

Tv = T(∑_{i=1}^n a_i v_i) = ∑_{i=1}^n a_i λ v_i = λ(∑_{i=1}^n a_i v_i) = λv,

i.e., T = λId. ∎
I Exercise 5.14 (5.13). Suppose T ∈ L(V) is such that every subspace of V with dimension dim V − 1 is invariant under T. Prove that T is a scalar multiple of the identity operator.

Proof. Let dim V = n and let (v_1, v_2, ..., v_n) be a basis of V. We first show that there exists λ_1 ∈ F such that Tv_1 = λ_1 v_1.

Let V_1 = {av_1 : a ∈ F} and U_1 = span(v_2, ..., v_n). Since V = V_1 + U_1 and dim V = dim V_1 + dim U_1, we have V = V_1 ⊕ U_1 by Proposition 2.19, which implies that V_1 ∩ U_1 = {0} by Proposition 1.9. Suppose, by way of contradiction, that Tv_1 ∉ V_1, and write

Tv_1 = c_1 v_1 + ∑_{i=2}^n c_i v_i.

Since Tv_1 ∉ V_1, the coefficients c_2, ..., c_n are not all zero; without loss of generality, suppose that c_n ≠ 0.

Let U_n = span(v_1, ..., v_{n−1}). Then dim U_n = n − 1, so U_n is invariant under T. But v_1 ∈ U_n, so Tv_1 ∈ U_n; that is, Tv_1 = ∑_{j=1}^{n−1} d_j v_j, which means that c_n = 0. A contradiction. We have thus proved that Tv_1 ∈ V_1, i.e., there is λ_1 ∈ F such that Tv_1 = λ_1 v_1.

The same argument applies to every v_i, and indeed to every nonzero vector of V (extend it to a basis and repeat). Therefore, every nonzero vector of V is an eigenvector of T. By Exercise 5.13, T is a scalar multiple of the identity operator. ∎
I Exercise 5.15 (5.14). Suppose S, T ∈ L(V) and S is invertible. Prove that if p ∈ P(F) is a polynomial, then p(STS⁻¹) = S p(T) S⁻¹.

Proof. Let p(z) = a_0 + a_1 z + a_2 z² + ⋯ + a_m z^m. Then

p(STS⁻¹) = a_0 Id + a_1 (STS⁻¹) + a_2 (STS⁻¹)² + ⋯ + a_m (STS⁻¹)^m.

We also have

(STS⁻¹)^m = (STS⁻¹)(STS⁻¹)(STS⁻¹)^{m−2} = (ST²S⁻¹)(STS⁻¹)^{m−2} = ⋯ = ST^m S⁻¹.

Therefore,

S p(T) S⁻¹ = S(a_0 Id + a_1 T + a_2 T² + ⋯ + a_m T^m)S⁻¹
= a_0 Id + a_1 (STS⁻¹) + a_2 (ST²S⁻¹) + ⋯ + a_m (ST^m S⁻¹)
= p(STS⁻¹). ∎
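A numerical spot check with a hypothetical polynomial p(z) = 2 + 3z + z² and random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))  # generically invertible
Sinv = np.linalg.inv(S)

def p(A):
    """p(z) = 2 + 3z + z^2, applied to a matrix A."""
    return 2 * np.eye(3) + 3 * A + A @ A

# p(S T S^{-1}) equals S p(T) S^{-1}.
assert np.allclose(p(S @ T @ Sinv), S @ p(T) @ Sinv)
```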
I Exercise 5.16 (5.15). Suppose F = C, T ∈ L(V), p ∈ P(C), and a ∈ C. Prove that a is an eigenvalue of p(T) if and only if a = p(λ) for some eigenvalue λ of T.

Proof. If λ is an eigenvalue of T, then there exists v ≠ 0 such that Tv = λv. Thus, writing p(z) = a_0 + a_1 z + ⋯ + a_m z^m,

[p(T)](v) = (a_0 Id + a_1 T + a_2 T² + ⋯ + a_m T^m)v
= a_0 v + a_1 Tv + a_2 T²v + ⋯ + a_m T^m v
= a_0 v + (a_1 λ)v + (a_2 λ²)v + ⋯ + (a_m λ^m)v
= p(λ)v;

that is, p(λ) is an eigenvalue of p(T).

Conversely, let a ∈ C be an eigenvalue of p(T) = a_0 Id + a_1 T + ⋯ + a_m T^m, and let v ≠ 0 be a corresponding eigenvector. Then [p(T)](v) = av; that is,

[(a_0 − a)Id + a_1 T + ⋯ + a_m T^m]v = 0.

It follows from Corollary 4.8 that the above display can be rewritten as

[c(T − λ_1 Id)⋯(T − λ_m Id)]v = 0, (5.3)

where c, λ_1, ..., λ_m ∈ C and c ≠ 0. Since v ≠ 0, some factor T − λ_i Id is not injective; that is, λ_i is an eigenvalue of T. Since the polynomial p(z) − a factors as c(z − λ_1)⋯(z − λ_m), we have p(λ_i) − a = 0, i.e., a = p(λ_i). ∎
I Exercise 5.17 (5.16). Show that the result in the previous exercise does not hold if C is replaced with R.

Proof. Let T ∈ L(R²) be defined by T(w, z) = (−z, w). Then T has no eigenvalue (see p. 78). But T²(w, z) = T(−z, w) = (−w, −z), so T² = −Id has an eigenvalue: (−w, −z) = λ(w, z) gives

−w = λw, −z = λz,

hence λ = −1. Thus −1 is an eigenvalue of p(T) for p(x) = x², yet −1 ≠ p(λ) for any eigenvalue λ of T, since T has none. ∎
I Exercise 5.18 (5.17). Suppose V is a complex vector space and T ∈ L(V). Prove that T has an invariant subspace of dimension j for each j = 1, ..., dim V.

Proof. Suppose that dim V = n. Let (v_1, ..., v_n) be a basis of V with respect to which T has an upper-triangular matrix (by Theorem 5.13), say with diagonal entries λ_1, ..., λ_n and zeros below the diagonal. Then it follows from Proposition 5.12 that span(v_1, ..., v_j) is invariant under T for each j = 1, ..., n, and dim span(v_1, ..., v_j) = j. ∎
I Exercise 5.19 (5.18). Give an example of an operator whose matrix with respect to some basis contains only 0's on the diagonal, but the operator is invertible.

Proof. Let T ∈ L(R²) be the operator whose matrix with respect to the standard basis ((1, 0), (0, 1)) of R² is

M(T) = [0 1; 1 0].

Then T(x, y) = (y, x), which is invertible (it is its own inverse), yet both diagonal entries of M(T) are 0. ∎
CHAPTER 5 EIGENVALUES AND EIGENVECTORS 33
I Exercise 5.20 (5.19). Give an example of an operator whose matrix with
respect to some basis contains only nonzero numbers on the diagonal, but the
operator is not invertible.
Proof. Consider the standard basis
_
(1. 0) . (0. 1)
_
of R
2
. Let T L(R
2
) be de-
ned as T(.. ,) = (.. 0). Then T is not injective and so is not invertible. Its
matrix is
M(T) =
_
1 0
0 0
_
. |L
I Exercise 5.21 (5.20). Suppose that T ∈ L(V) has dim V distinct eigenvalues and that S ∈ L(V) has the same eigenvectors as T (not necessarily with the same eigenvalues). Prove that ST = TS.

Proof. Let dim V = n, let λ_1, ..., λ_n be the n distinct eigenvalues of T, and let (v_1, ..., v_n) be corresponding eigenvectors. Then (v_1, ..., v_n) is linearly independent and so is a basis of V; the matrix of T with respect to (v_1, ..., v_n) is diagonal with diagonal entries λ_1, ..., λ_n. Since S has the same eigenvectors as T, for each v_i there is some μ_i such that Sv_i = μ_i v_i. For every v = ∑_{i=1}^n a_i v_i ∈ V we have

(ST)(v) = S(∑_{i=1}^n a_i Tv_i) = S(∑_{i=1}^n a_i λ_i v_i) = ∑_{i=1}^n (a_i λ_i) Sv_i = ∑_{i=1}^n (a_i λ_i μ_i) v_i,

and

(TS)(v) = T(∑_{i=1}^n a_i Sv_i) = T(∑_{i=1}^n (a_i μ_i) v_i) = ∑_{i=1}^n (a_i μ_i) Tv_i = ∑_{i=1}^n (a_i μ_i λ_i) v_i.

Hence, ST = TS. ∎
I Exercise 5.22 (5.21). Suppose P ∈ L(V) and P² = P. Prove that V = null P ⊕ range P.

Proof. By Theorem 3.4, dim V = dim null P + dim range P, so it suffices to show that V = null P + range P by Proposition 2.19. Take an arbitrary v ∈ V. Since P² = P, we have

P²v = Pv ⟹ P(v − Pv) = 0 ⟹ v − Pv ∈ null P.

Therefore,

v = (v − Pv) + Pv ∈ null P + range P. ∎
I Exercise 5.23 (5.22). Suppose V = U ⊕ W, where U and W are nonzero subspaces of V. Find all eigenvalues and eigenvectors of P_{U,W}.

Proof. We first show that λ = 0 is an eigenvalue of P_{U,W}. Since W ≠ {0}, we can take w ∈ W with w ≠ 0. Then w ∈ V and w is written uniquely as w = 0 + w. Then

P_{U,W}(w) = 0 = 0w;

that is, 0 is an eigenvalue of P_{U,W}, and any w ∈ W with w ≠ 0 is an eigenvector corresponding to 0.

Now let us check whether there is an eigenvalue λ ≠ 0. If λ ≠ 0 is an eigenvalue of P_{U,W}, then there exists v = u + w ≠ 0, where u ∈ U and w ∈ W, such that P_{U,W}(v) = λv; that is,

u = λ(u + w).

Then w = (1 − λ)u/λ ∈ U since λ ≠ 0, which implies that w = 0 since V = U ⊕ W forces U ∩ W = {0}. Therefore, v = u ≠ 0 and

P_{U,W}(v) = P_{U,W}(u) = u = 1·u;

that is, λ = 1 is the unique nonzero eigenvalue of P_{U,W}, and its eigenvectors are the nonzero vectors of U. ∎
I Exercise 5.24 (5.23). Give an example of an operator T ∈ L(R⁴) such that T has no (real) eigenvalues.

Proof. Our example is based on the rotation operator of Exercise 5.17. Let T ∈ L(R⁴) be defined by

T(x_1, x_2, x_3, x_4) = (−x_2, x_1, −x_4, x_3).

Suppose that λ is a (real) eigenvalue of T; then

λx_1 = −x_2, λx_2 = x_1, λx_3 = −x_4, λx_4 = x_3.

If λ = 0, then (x_1, x_2, x_3, x_4) = 0, so λ ≠ 0. It is evident that x_1 = 0 ⟺ x_2 = 0 and x_3 = 0 ⟺ x_4 = 0. Suppose that x_1 ≠ 0; then x_2 ≠ 0, and the first two equations give

λ²x_2 = −x_2 ⟹ λ² = −1,

which has no solution in R. Hence, x_1 = x_2 = 0, and similarly x_3 = x_4 = 0. But then any candidate eigenvector is 0; a contradiction. So T has no real eigenvalue. ∎
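Numerically, this operator's eigenvalues are ±i (each occurring twice), so none is real:

```python
import numpy as np

# Matrix of T(x1, x2, x3, x4) = (-x2, x1, -x4, x3): two 2x2 rotation blocks.
T = np.array([[0, -1, 0, 0],
              [1,  0, 0, 0],
              [0,  0, 0, -1],
              [0,  0, 1, 0]], dtype=float)

vals = np.linalg.eigvals(T)
# Every eigenvalue has nonzero imaginary part, so T has no real eigenvalue.
assert np.all(np.abs(vals.imag) > 1e-12)
```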
I Exercise 5.25 (5.24). Suppose V is a real vector space and T ∈ L(V) has no eigenvalues. Prove that every subspace of V invariant under T has even dimension.

Proof. If U is invariant under T and dim U is odd, then the restriction T|_U ∈ L(U) is an operator on an odd-dimensional real vector space, and so T|_U has an eigenvalue. But an eigenvalue of T|_U is also an eigenvalue of T. A contradiction. ∎
6
INNER-PRODUCT SPACES

As You Should Verify

Remark 6.1 (p. 113). The orthogonal projection P_U has the following properties:

a. P_U ∈ L(V);
b. range P_U = U;
c. null P_U = U⊥;
d. v − P_U v ∈ U⊥ for every v ∈ V;
e. P_U² = P_U;
f. ‖P_U v‖ ≤ ‖v‖ for every v ∈ V;
g. P_U v = ∑_{i=1}^m ⟨v, e_i⟩ e_i for every v ∈ V, where (e_1, ..., e_m) is an orthonormal basis of U.

Proof. (a) For any v, v′ ∈ V, write v = u + w and v′ = u′ + w′ with u, u′ ∈ U and w, w′ ∈ U⊥. Then

P_U(v + v′) = P_U[(u + w) + (u′ + w′)] = P_U[(u + u′) + (w + w′)] = u + u′ = P_U v + P_U v′.

Also it is true that P_U(av) = aP_U v. Therefore, P_U ∈ L(V).

(b) Write every v ∈ V as v = u + w, where u ∈ U and w ∈ U⊥. Since P_U v = u, we have one direction, range P_U ⊆ U. For the other direction, notice that U = P_U[U] ⊆ range P_U.

(c) If v ∈ null P_U, then 0 = P_U v = u; that is, v = 0 + w with w ∈ U⊥. This proves that null P_U ⊆ U⊥. The other inclusion direction is clear.

(d) For every v ∈ V, we have v = u + w, where u ∈ U and w ∈ U⊥. Hence, v − P_U v = (u + w) − u = w ∈ U⊥.

(e) For every v ∈ V, we have P_U²v = P_U(P_U v) = P_U u = u = P_U v.

(f) We can write every v ∈ V as v = u + w with u ∈ U and w ∈ U⊥; therefore, ‖v‖² = ‖u + w‖² (*)= ‖u‖² + ‖w‖² ≥ ‖u‖² = ‖P_U v‖², where (*) holds by the Pythagorean theorem since u ⊥ w.

(g) It follows from Axler (1997, 6.31, p. 112). ∎
Remark 6.2 (pp. 119-120). Verify that the function T ↦ T* has the following properties:

a. (S + T)* = S* + T* for all S, T ∈ L(V, W);
b. (aT)* = ā T* for all a ∈ F and T ∈ L(V, W);
c. (T*)* = T for all T ∈ L(V, W);
d. Id* = Id, where Id is the identity operator on V;
e. (ST)* = T*S* for all T ∈ L(V, W) and S ∈ L(W, U).

Proof. (a) ⟨(S + T)v, w⟩ = ⟨Sv, w⟩ + ⟨Tv, w⟩ = ⟨v, S*w⟩ + ⟨v, T*w⟩ = ⟨v, (S* + T*)w⟩.

(b) ⟨(aT)v, w⟩ = a⟨Tv, w⟩ = a⟨v, T*w⟩ = ⟨v, (ā T*)(w)⟩.

(c) ⟨T*w, v⟩ = conj⟨v, T*w⟩ = conj⟨Tv, w⟩ = ⟨w, Tv⟩, so (T*)* = T.

(d) ⟨Id v, w⟩ = ⟨v, w⟩ = ⟨v, Id w⟩.

(e) ⟨(ST)v, w⟩ = ⟨S(Tv), w⟩ = ⟨Tv, S*w⟩ = ⟨v, (T*S*)w⟩. ∎
Exercises

I Exercise 6.3 (6.1). Prove that if x, y are nonzero vectors in R², then ⟨x, y⟩ = ‖x‖‖y‖ cos θ, where θ is the angle between x and y.

Proof. Using notation as in Figure 6.1, the law of cosines states that

‖x − y‖² = ‖x‖² + ‖y‖² − 2‖x‖‖y‖ cos θ. (6.1)

[Figure 6.1. The law of cosines: a triangle with sides x, y, and x − y, with angle θ between x and y.]

After inserting ‖x − y‖² = ⟨x − y, x − y⟩ = ‖x‖² + ‖y‖² − 2⟨x, y⟩ into (6.1), we get the conclusion. ∎
I Exercise 6.4 (6.2). Suppose u, v ∈ V. Prove that ⟨u, v⟩ = 0 if and only if ‖u‖ ≤ ‖u + av‖ for all a ∈ F.

Proof. If ⟨u, v⟩ = 0, then ⟨u, av⟩ = 0 and so

‖u + av‖² = ⟨u + av, u + av⟩ = ‖u‖² + ‖av‖² ≥ ‖u‖².

Now suppose that ‖u‖ ≤ ‖u + av‖ for all a ∈ F. If v = 0, then ⟨u, v⟩ = 0 holds trivially. Thus we assume that v ≠ 0. We first have

‖u + av‖² = ⟨u + av, u + av⟩
= ⟨u, u + av⟩ + ⟨av, u + av⟩
= ‖u‖² + ā⟨u, v⟩ + a conj⟨u, v⟩ + ‖av‖²
= ‖u‖² + ‖av‖² + 2Re[ā⟨u, v⟩].

Therefore, ‖u‖ ≤ ‖u + av‖ for all a ∈ F implies that for all a ∈ F,

−2Re[ā⟨u, v⟩] ≤ ‖av‖² = |a|²‖v‖². (6.2)

Take a = −t⟨u, v⟩ with t > 0; then (6.2) becomes

2t|⟨u, v⟩|² ≤ t²|⟨u, v⟩|²‖v‖². (6.3)

Let t = 1/‖v‖². Then (6.3) becomes

2|⟨u, v⟩|²/‖v‖² ≤ |⟨u, v⟩|²/‖v‖²,

so |⟨u, v⟩|² ≤ 0. Hence, ⟨u, v⟩ = 0. ∎
I Exercise 6.5 (6.3). Prove that (∑_{j=1}^n a_j b_j)² ≤ (∑_{j=1}^n j a_j²)(∑_{j=1}^n b_j²/j) for all a_j, b_j ∈ R.

Proof. Since a_j, b_j ∈ R, define vectors a′, b′ ∈ R^n by a′_j = √j a_j and b′_j = b_j/√j. Then

∑_{j=1}^n a_j b_j = ∑_{j=1}^n a′_j b′_j = ⟨a′, b′⟩,
∑_{j=1}^n j a_j² = ∑_{j=1}^n (a′_j)² = ‖a′‖²,

and

∑_{j=1}^n b_j²/j = ∑_{j=1}^n (b′_j)² = ‖b′‖².

Hence, by the Cauchy-Schwarz inequality,

(∑_{j=1}^n a_j b_j)² = ⟨a′, b′⟩² ≤ ‖a′‖²‖b′‖² = (∑_{j=1}^n j a_j²)(∑_{j=1}^n b_j²/j). ∎
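A quick numerical spot check of the inequality with random data:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(6)
b = rng.standard_normal(6)
j = np.arange(1, 7)  # the weights 1, ..., n

lhs = np.dot(a, b) ** 2
rhs = np.sum(j * a**2) * np.sum(b**2 / j)
assert lhs <= rhs + 1e-12
```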
I Exercise 6.6 (6.4). Suppose u, v ∈ V are such that ‖u‖ = 3, ‖u + v‖ = 4, and ‖u − v‖ = 6. What number must ‖v‖ equal?

Solution. By the parallelogram equality, ‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²), i.e., 16 + 36 = 2(9 + ‖v‖²), so we have ‖v‖ = √17. ∎
I Exercise 6.7 (6.5). Prove or disprove: there is an inner product on R² such that the associated norm is given by ‖(x_1, x_2)‖ = |x_1| + |x_2| for all (x_1, x_2) ∈ R².

Proof. There is no such inner product on R². For example, let u = (1, 0) and v = (0, 1). Then ‖u‖ = ‖v‖ = 1 and ‖u + v‖ = ‖u − v‖ = 2, so ‖u + v‖² + ‖u − v‖² = 8 ≠ 4 = 2(‖u‖² + ‖v‖²). But then the parallelogram equality fails, so this norm cannot come from an inner product. ∎
I Exercise 6.8 (6.6). Prove that if V is a real inner-product space, then ⟨u, v⟩ = (‖u + v‖² − ‖u − v‖²)/4 for all u, v ∈ V.

Proof. If V is a real inner-product space, then for any u, v ∈ V,

(‖u + v‖² − ‖u − v‖²)/4 = [⟨u + v, u + v⟩ − ⟨u − v, u − v⟩]/4
= {[‖u‖² + 2⟨u, v⟩ + ‖v‖²] − [‖u‖² − 2⟨u, v⟩ + ‖v‖²]}/4
= ⟨u, v⟩. ∎
I Exercise 6.9 (6.7). Prove that if V is a complex inner-product space, then

⟨u, v⟩ = [‖u + v‖² − ‖u − v‖² + ‖u + iv‖² i − ‖u − iv‖² i]/4

for all u, v ∈ V.

Proof. If V is a complex inner-product space, then for any u, v ∈ V we have

‖u + v‖² = ⟨u + v, u + v⟩ = ‖u‖² + ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖²,
‖u − v‖² = ⟨u − v, u − v⟩ = ‖u‖² − ⟨u, v⟩ − ⟨v, u⟩ + ‖v‖²,
‖u + iv‖² i = ⟨u + iv, u + iv⟩ i = [‖u‖² − i⟨u, v⟩ + i⟨v, u⟩ + ‖v‖²] i = ‖u‖² i + ⟨u, v⟩ − ⟨v, u⟩ + ‖v‖² i,

and

‖u − iv‖² i = ⟨u − iv, u − iv⟩ i = [‖u‖² + i⟨u, v⟩ − i⟨v, u⟩ + ‖v‖²] i = ‖u‖² i − ⟨u, v⟩ + ⟨v, u⟩ + ‖v‖² i.

Hence,

[‖u + v‖² − ‖u − v‖² + ‖u + iv‖² i − ‖u − iv‖² i]/4 = [2⟨u, v⟩ + 2⟨v, u⟩ + 2⟨u, v⟩ − 2⟨v, u⟩]/4 = ⟨u, v⟩. ∎
I Exercise 6.10 (6.10). On P₂(R), consider the inner product given by

⟨p, q⟩ = ∫₀¹ p(x)q(x) dx.

Apply the Gram-Schmidt procedure to the basis (1, x, x²) to produce an orthonormal basis of P₂(R).

Solution. It is clear that e_1 = 1 since ‖1‖² = ∫₀¹ 1·1 dx = 1. As for e_2, let

e_2 = [x − ⟨x, e_1⟩e_1] / ‖x − ⟨x, e_1⟩e_1‖.

Since

⟨x, e_1⟩ = ∫₀¹ x dx = 1/2,

we have

e_2 = (x − 1/2)/‖x − 1/2‖ = (x − 1/2)/√(∫₀¹ (x − 1/2)² dx) = √3(2x − 1).

As for e_3,

e_3 = [x² − ⟨x², e_1⟩e_1 − ⟨x², e_2⟩e_2] / ‖x² − ⟨x², e_1⟩e_1 − ⟨x², e_2⟩e_2‖.

Since

⟨x², e_1⟩ = ∫₀¹ x² dx = 1/3, ⟨x², e_2⟩ = ∫₀¹ x²·√3(2x − 1) dx = √3/6,

and

‖x² − ⟨x², e_1⟩e_1 − ⟨x², e_2⟩e_2‖ = √(∫₀¹ (x² − x + 1/6)² dx) = 1/(6√5),

we know that

e_3 = (x² − x + 1/6)·6√5 = √5(6x² − 6x + 1). ∎
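The orthonormality of (1, √3(2x − 1), √5(6x² − 6x + 1)) can be verified symbolically:

```python
import sympy as sp

x = sp.symbols('x')
e1 = sp.Integer(1)
e2 = sp.sqrt(3) * (2 * x - 1)
e3 = sp.sqrt(5) * (6 * x**2 - 6 * x + 1)

def ip(p, q):
    """Inner product <p, q> = integral of p*q over [0, 1]."""
    return sp.integrate(p * q, (x, 0, 1))

# Unit norms and pairwise orthogonality.
assert ip(e1, e1) == 1 and ip(e2, e2) == 1 and ip(e3, e3) == 1
assert ip(e1, e2) == 0 and ip(e1, e3) == 0 and ip(e2, e3) == 0
```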
I Exercise 6.11 (6.11). What happens if the Gram-Schmidt procedure is applied to a list of vectors that is not linearly independent?

Solution. If (v_1, ..., v_m) is not linearly independent, then for some j the vector

e_j = [v_j − ⟨v_j, e_1⟩e_1 − ⋯ − ⟨v_j, e_{j−1}⟩e_{j−1}] / ‖v_j − ⟨v_j, e_1⟩e_1 − ⋯ − ⟨v_j, e_{j−1}⟩e_{j−1}‖

is not well defined: if v_j ∈ span(v_1, ..., v_{j−1}), then

‖v_j − ⟨v_j, e_1⟩e_1 − ⋯ − ⟨v_j, e_{j−1}⟩e_{j−1}‖ = 0,

and the procedure attempts to divide by zero. ∎
I Exercise 6.12 (6.12). Suppose V is a real inner-product space and (v_1, ..., v_m) is a linearly independent list of vectors in V. Prove that there exist exactly 2^m orthonormal lists (e_1, ..., e_m) of vectors in V such that

span(v_1, ..., v_j) = span(e_1, ..., e_j)

for all j ∈ {1, ..., m}.

Proof. Given the linearly independent list (v_1, ..., v_m), the Gram-Schmidt procedure yields an orthonormal list (e_1, ..., e_m) such that span(v_1, ..., v_j) = span(e_1, ..., e_j) for all j ∈ {1, ..., m}. Now, for every choice of signs ε_1, ..., ε_m ∈ {+1, −1}, the list (ε_1 e_1, ..., ε_m e_m) is also an orthonormal list with the same spans. This shows that there are at least 2^m orthonormal lists satisfying the requirement.

On the other hand, if (f_1, ..., f_m) is an orthonormal list satisfying

span(v_1, ..., v_j) = span(f_1, ..., f_j)

for all j ∈ {1, ..., m}, then span(v_1) = span(f_1) together with ‖f_1‖ = 1 implies that

f_1 = ±v_1/‖v_1‖ = ±e_1.

Similarly, span(f_1, f_2) = span(v_1, v_2) = span(e_1, e_2) implies that

f_2 = a_1 e_1 + a_2 e_2 for some a_1, a_2 ∈ R.

Then the orthonormality implies that

⟨e_1, a_1 e_1 + a_2 e_2⟩ = 0 ⟹ a_1 = 0,
⟨a_2 e_2, a_2 e_2⟩ = 1 ⟹ a_2 = ±1;

that is, f_2 = ±e_2. By induction, f_i = ±e_i for all i = 1, ..., m, so there are exactly 2^m such lists. ∎
I Exercise 6.13 (6.13). Suppose (e_1, ..., e_m) is an orthonormal list of vectors in V. Let v ∈ V. Prove that ‖v‖² = |⟨v, e_1⟩|² + ⋯ + |⟨v, e_m⟩|² if and only if v ∈ span(e_1, ..., e_m).

Proof. It follows from Corollary 6.25 that the list (e_1, ..., e_m) can be extended to an orthonormal basis (e_1, ..., e_m, f_1, ..., f_n) of V. Then by Theorem 6.17, every vector v ∈ V can be represented uniquely as v = ∑_{i=1}^m ⟨v, e_i⟩e_i + ∑_{j=1}^n ⟨v, f_j⟩f_j, and so

‖v‖² = ⟨∑_{i=1}^m ⟨v, e_i⟩e_i + ∑_{j=1}^n ⟨v, f_j⟩f_j, ∑_{i=1}^m ⟨v, e_i⟩e_i + ∑_{j=1}^n ⟨v, f_j⟩f_j⟩ = ∑_{i=1}^m |⟨v, e_i⟩|² + ∑_{j=1}^n |⟨v, f_j⟩|².

Hence,

‖v‖² = |⟨v, e_1⟩|² + ⋯ + |⟨v, e_m⟩|² ⟺ ⟨v, f_j⟩ = 0 for all j = 1, ..., n
⟺ v = ∑_{i=1}^m ⟨v, e_i⟩e_i
⟺ v ∈ span(e_1, ..., e_m). ∎
I Exercise 6.14 (6.14). Find an orthonormal basis of P₂(R) such that the differentiation operator on P₂(R) has an upper-triangular matrix with respect to this basis.

Solution. Consider the orthonormal basis (e_1, e_2, e_3) = (1, √3(2x − 1), √5(6x² − 6x + 1)) from Exercise 6.10. Let T be the differentiation operator on P₂(R). We have

Te_1 = 0 ∈ span(e_1),
Te_2 = [√3(2x − 1)]′ = 2√3 ∈ span(e_1, e_2),

and

Te_3 = [√5(6x² − 6x + 1)]′ = 12√5 x − 6√5 ∈ span(e_1, e_2, e_3).

It follows from Proposition 5.12 that T has an upper-triangular matrix with respect to this basis. ∎
I Exercise 6.15 (6.15). Suppose U is a subspace of V. Prove that dim U⊥ = dim V − dim U.

Proof. We have V = U ⊕ U⊥; hence,

dim V = dim U + dim U⊥ − dim(U ∩ U⊥) = dim U + dim U⊥;

that is, dim U⊥ = dim V − dim U. ∎
I Exercise 6.16 (6.16). Suppose U is a subspace of V. Prove that U⊥ = {0} if and only if U = V.

Proof. If U⊥ = {0}, then V = U ⊕ U⊥ = U ⊕ {0} = U. To see the converse direction, let U = V. For any w ∈ U⊥, we have ⟨w, w⟩ = 0 since w ∈ U⊥ ⊆ V = U; then w = 0, that is, U⊥ = {0}. ∎
I Exercise 6.17 (6.17). Prove that if P ∈ L(V) is such that P² = P and every vector in null P is orthogonal to every vector in range P, then P is an orthogonal projection.

Proof. For every w ∈ range P, there exists v_w ∈ V such that Pv_w = w. Hence,

Pw = P(Pv_w) = P²v_w = Pv_w = w.

By Exercise 5.22, V = null P ⊕ range P since P² = P. Then any v ∈ V can be uniquely written as v = u + w with u ∈ null P and w ∈ range P, and

Pv = P(u + w) = Pw = w.

Since null P ⊥ range P, this is exactly the orthogonal decomposition of v with respect to range P; hence P = P_{range P}, the orthogonal projection onto range P. ∎
I Exercise 6.18 (6.18). Prove that if P ∈ L(V) is such that P² = P and ‖Pv‖ ≤ ‖v‖ for every v ∈ V, then P is an orthogonal projection.

Proof. It follows from the previous exercise that if P² = P, then Pv = w for every v ∈ V, where v is uniquely written as v = u + w with u ∈ null P and w ∈ range P.

It now suffices to show that null P ⊥ range P. Take an arbitrary v = u + w ∈ V, where u ∈ null P and w ∈ range P. Then ‖Pv‖ ≤ ‖v‖ implies that

⟨Pv, Pv⟩ = ⟨w, w⟩ ≤ ⟨u + w, u + w⟩ ⟹ 0 ≤ ‖u‖² + 2Re⟨u, w⟩.

If ⟨u, w⟩ ≠ 0, this inequality fails for a suitable scalar multiple of u (replace u by au, which is still in null P, and argue as in Exercise 6.4). Therefore, null P ⊥ range P and P = P_{range P}. ∎
I Exercise 6.19 (6.19). Suppose T ∈ L(V) and U is a subspace of V. Prove that U is invariant under T if and only if P_U T P_U = T P_U.

Proof. It follows from Theorem 6.29 that V = U ⊕ U⊥.

Only if: Suppose that U is invariant under T. For any v = u + w with u ∈ U and w ∈ U⊥, we have

(P_U T P_U)(v) = (P_U T)(u) = P_U(Tu) = Tu,

where the last equality holds since u ∈ U and U is invariant under T, so Tu ∈ U. We also have (T P_U)(v) = Tu. Hence P_U T P_U = T P_U.

If: Now suppose that P_U T P_U = T P_U. Take any u ∈ U; then

Tu = T(P_U(u)) = (T P_U)(u) = (P_U T P_U)(u) = P_U(Tu) ∈ U

by the definition of P_U. This proves that U is invariant under T. ∎
I Exercise 6.20 (6.20). Suppose T ∈ L(V) and U is a subspace of V. Prove that U and U⊥ are both invariant under T if and only if P_U T = T P_U.

Proof. Suppose first that U and U⊥ are both invariant under T. Then for any v = u + w, where u ∈ U and w ∈ U⊥, we have

(P_U T)(v) = (P_U T)(u + w) = P_U(Tu + Tw) = Tu,

since Tu ∈ U and Tw ∈ U⊥, and (T P_U)(v) = Tu. Hence P_U T = T P_U.

Now suppose P_U T = T P_U. For any u ∈ U, we have Tu = (T P_U)(u) = (P_U T)(u) = P_U(Tu) ∈ U, so U is invariant under T. For any w ∈ U⊥, we have P_U(Tw) = (P_U T)(w) = (T P_U)(w) = T(0) = 0, so Tw ∈ null P_U = U⊥; that is, U⊥ is invariant under T. ∎
I Exercise 6.21 (6.21). In R⁴, let U = span((1, 1, 0, 0), (1, 1, 1, 2)). Find u ∈ U such that ‖u − (1, 2, 3, 4)‖ is as small as possible.

Solution. We first need to find an orthonormal basis of U. Using the Gram-Schmidt procedure, we have

e_1 = (1, 1, 0, 0)/‖(1, 1, 0, 0)‖ = (√2/2, √2/2, 0, 0),

and

e_2 = [(1, 1, 1, 2) − ⟨(1, 1, 1, 2), e_1⟩e_1] / ‖(1, 1, 1, 2) − ⟨(1, 1, 1, 2), e_1⟩e_1‖ = (0, 0, √5/5, 2√5/5).

Then by 6.35,

u = P_U(1, 2, 3, 4) = ⟨(1, 2, 3, 4), e_1⟩e_1 + ⟨(1, 2, 3, 4), e_2⟩e_2 = (3/2, 3/2, 11/5, 22/5).

Remark 6.22. We can use Maple to obtain the orthonormal basis easily:

>with(LinearAlgebra):
>v1:=<1,1,0,0>:
>v2:=<1,1,1,2>:
>GramSchmidt({v1,v2}, normalized) ∎
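The same computation can be cross-checked in Python (Gram-Schmidt by hand, then the projection formula):

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0, 0.0])
u2 = np.array([1.0, 1.0, 1.0, 2.0])
v = np.array([1.0, 2.0, 3.0, 4.0])

# Gram-Schmidt on (u1, u2).
e1 = u1 / np.linalg.norm(u1)
w = u2 - (u2 @ e1) * e1
e2 = w / np.linalg.norm(w)

# Orthogonal projection of v onto U = span(u1, u2).
proj = (v @ e1) * e1 + (v @ e2) * e2
assert np.allclose(proj, [3/2, 3/2, 11/5, 22/5])
```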
I Exercise 6.23 (6.22). Find p ∈ P₃(R) such that p(0) = 0, p′(0) = 0, and ∫₀¹ |2 + 3x − p(x)|² dx is as small as possible.

Proof. p(0) = p′(0) = 0 implies that p(x) = ax² + bx³, where a, b ∈ R. We want to find the point of U ≔ span(x², x³) whose distance from q = 2 + 3x is as small as possible. With the Gram-Schmidt procedure, an orthonormal basis of U is

e_1 = x²/‖x²‖ = x²/√(∫₀¹ x²·x² dx) = √5 x²,

and

e_2 = [x³ − ⟨x³, e_1⟩e_1] / ‖x³ − ⟨x³, e_1⟩e_1‖ = (x³ − (5/6)x²)/(√7/42) = 6√7 x³ − 5√7 x².

Hence,

p = P_U(2 + 3x) = [∫₀¹ (2 + 3x)√5x² dx] √5x² + [∫₀¹ (2 + 3x)(6√7x³ − 5√7x²) dx] (6√7x³ − 5√7x²).

Evaluating the integrals gives p(x) = 24x² − (203/10)x³. ∎
Exercise 6.24 (6.24). Find a polynomial q ∈ P₂(R) such that

p(1/2) = ∫₀¹ p(x) q(x) dx

for every p ∈ P₂(R).

Solution. Define a function T: P₂(R) → R by letting Tp = p(1/2). It is clear that T ∈ L(P₂(R), R).

It follows from Exercise 6.10 that (e₁, e₂, e₃) = (1, √3(2x − 1), √5(6x² − 6x + 1)) is an orthonormal basis of P₂(R). Then

Tp = T(⟨p, e₁⟩e₁ + ⟨p, e₂⟩e₂ + ⟨p, e₃⟩e₃) = ⟨p, T(e₁)e₁ + T(e₂)e₂ + T(e₃)e₃⟩;

hence,

q(x) = e₁(1/2)e₁ + e₂(1/2)e₂ + e₃(1/2)e₃ = 1 + 0 + (−√5/2)·√5(6x² − 6x + 1) = −3/2 + 15x − 15x². ∎
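Since both sides of the defining identity are linear in p, it suffices to check q on the monomials 1, x, x². A small sketch (not from the text) with exact arithmetic:

```python
from fractions import Fraction as F

# q(x) = -3/2 + 15x - 15x^2; verify ∫_0^1 x^k q(x) dx = (1/2)^k for k = 0,1,2.
coeffs = [F(-3, 2), F(15), F(-15)]          # coefficients of 1, x, x^2 in q
for k in range(3):
    # ∫_0^1 x^k * c_j x^j dx = c_j / (k + j + 1)
    integral = sum(c * F(1, k + j + 1) for j, c in enumerate(coeffs))
    assert integral == F(1, 2)**k
print("q reproduces evaluation at 1/2 on P2(R)")
```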
Exercise 6.25 (6.25). Find a polynomial q ∈ P₂(R) such that

∫₀¹ p(x)(cos πx) dx = ∫₀¹ p(x) q(x) dx

for every p ∈ P₂(R).

Solution. As in the previous exercise, we let Tp = ∫₀¹ p(x)(cos πx) dx for every p ∈ P₂(R). Then T ∈ L(P₂(R), R). Let

q(x) = T(e₁)e₁ + T(e₂)e₂ + T(e₃)e₃ = 12/π² − 24x/π². ∎
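By linearity it again suffices to check the identity on the monomial basis of P₂(R); this sketch (an added check, assuming sympy is available) does the integrals symbolically:

```python
import sympy as sp

# q(x) = (12 - 24x)/pi^2 should represent p -> ∫_0^1 p(x) cos(pi x) dx on P2(R).
x = sp.symbols('x')
q = (12 - 24*x)/sp.pi**2
for p in (1, x, x**2):
    lhs = sp.integrate(p*sp.cos(sp.pi*x), (x, 0, 1))
    rhs = sp.integrate(p*q, (x, 0, 1))
    assert sp.simplify(lhs - rhs) == 0
print("q represents integration against cos(pi x) on P2(R)")
```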
Exercise 6.26 (6.26). Fix a vector v ∈ V and define T ∈ L(V, F) by Tu = ⟨u, v⟩. For a ∈ F, find a formula for T*a.

Proof. Take any u ∈ V. We have ⟨Tu, a⟩ = ⟨⟨u, v⟩, a⟩ = ⟨u, v⟩ ā = ⟨u, av⟩; thus, T*a = av. ∎
Exercise 6.27 (6.27). Suppose n is a positive integer. Define T ∈ L(Fⁿ) by T(z₁, ..., zₙ) = (0, z₁, ..., z_{n−1}). Find a formula for T*(z₁, ..., zₙ).

Solution. Take the standard basis of Fⁿ, which is also an orthonormal basis of Fⁿ. We then have

T(1, 0, 0, ..., 0) = (0, 1, 0, 0, ..., 0),
T(0, 1, 0, ..., 0) = (0, 0, 1, 0, ..., 0),
⋮
T(0, 0, ..., 0, 1) = (0, 0, 0, 0, ..., 0).

Therefore, M(T) is the matrix with 1's on the subdiagonal and 0's elsewhere:

M(T) =
⎡0 0 ⋯ 0 0⎤
⎢1 0 ⋯ 0 0⎥
⎢0 1 ⋯ 0 0⎥
⎢⋮ ⋮ ⋱ ⋮ ⋮⎥
⎣0 0 ⋯ 1 0⎦,

and so M(T*) is its conjugate transpose, with 1's on the superdiagonal:

M(T*) =
⎡0 1 0 ⋯ 0⎤
⎢0 0 1 ⋯ 0⎥
⎢⋮ ⋮ ⋮ ⋱ ⋮⎥
⎢0 0 0 ⋯ 1⎥
⎣0 0 0 ⋯ 0⎦.

Then T*(z₁, ..., zₙ) = M(T*)(z₁, ..., zₙ)ᵗ = (z₂, z₃, ..., zₙ, 0). ∎
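The matrix computation above is easy to sanity-check numerically; a small sketch (added for illustration, taking n = 5):

```python
import numpy as np

# Forward shift on F^5: ones on the subdiagonal. Its adjoint (here: transpose)
# is the backward shift, so T*(z1,...,zn) = (z2,...,zn,0).
n = 5
M = np.eye(n, k=-1)
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(M @ z)     # T z  = [0. 1. 2. 3. 4.]
print(M.T @ z)   # T* z = [2. 3. 4. 5. 0.]
```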
Exercise 6.28 (6.28). Suppose T ∈ L(V) and λ ∈ F. Prove that λ is an eigenvalue of T if and only if λ̄ is an eigenvalue of T*.

Proof. λ is an eigenvalue of T if and only if T − λI is not injective, i.e., not invertible. An operator S is invertible if and only if S* is: taking adjoints in S⁻¹S = SS⁻¹ = I shows (S⁻¹)* is an inverse of S*, and the converse follows by applying this to S* and using (S*)* = S. Hence T − λI is not invertible if and only if (T − λI)* = T* − λ̄I is not invertible, that is, if and only if λ̄ is an eigenvalue of T*. ∎
Exercise 6.29 (6.29). Suppose T ∈ L(V) and U is a subspace of V. Prove that U is invariant under T if and only if U^⊥ is invariant under T*.

Proof. Take any u ∈ U and w ∈ U^⊥. If U is invariant under T, then Tu ∈ U and so

0 = ⟨Tu, w⟩ = ⟨u, T*w⟩;

since u ∈ U is arbitrary, T*w ∈ U^⊥. Applying this argument to T* and U^⊥, and using (T*)* = T and (U^⊥)^⊥ = U, we obtain the converse direction. ∎
Exercise 6.30 (6.30). Suppose T ∈ L(V, W). Prove that

a. T is injective if and only if T* is surjective;

b. T is surjective if and only if T* is injective.

Proof. (a) If T is injective, then dim null T = 0. Then

dim range T* = dim range T = dim V − dim null T = dim V,

i.e., T* ∈ L(W, V) is surjective. If T* is surjective, then dim range T* = dim V and so

dim null T = dim V − dim range T = dim V − dim range T* = 0,

that is, T ∈ L(V, W) is injective.

(b) Using the fact that (T*)* = T and the result in part (a), we get (b) immediately. ∎
Exercise 6.31 (6.31). Prove that dim null T* = dim null T + dim W − dim V and dim range T* = dim range T for every T ∈ L(V, W).

Proof. It follows from Proposition 6.46 that null T* = (range T)^⊥. Since range T is a subspace of W, and W = range T ⊕ (range T)^⊥, we thus have

dim null T* = dim W − dim range T = dim W − (dim V − dim null T) = dim null T + dim W − dim V,   (6.4)

which proves the first claim. As for the second equality, we first have

dim range T = dim V − dim null T,
dim range T* = dim W − dim null T*.

Subtracting and using (6.4), dim range T* − dim range T = dim W − dim null T* − dim V + dim null T = 0; that is, dim range T* = dim range T. ∎
Exercise 6.32 (6.32). Suppose A is an m × n matrix of real numbers. Prove that the dimension of the span of the columns of A (in Rᵐ) equals the dimension of the span of the rows of A (in Rⁿ).

Proof. Let T ∈ L(Rⁿ, Rᵐ) be the linear map induced by A, where A is the matrix of T with respect to the standard (orthonormal) bases of Rⁿ and Rᵐ; that is, Tx = Ax for all x ∈ Rⁿ. By Proposition 6.47, we know that for any y ∈ Rᵐ,

T*y = Aᵗy,

where Aᵗ is the (conjugate) transpose of A. Write A in terms of its columns a₁, ..., aₙ and its rows b₁, ..., bₘ; then the columns of Aᵗ are b₁ᵗ, ..., bₘᵗ. It is easy to see that

span(a₁, ..., aₙ) = range T, and span(b₁ᵗ, ..., bₘᵗ) = range T*.

It follows from Exercise 6.31 that dim range T = dim range T*, which is the desired equality of column rank and row rank. ∎
7
OPERATORS ON INNER-PRODUCT SPACES
As You Should Verify
Remark 7.1 (p.131). If T is normal, then T − λI is normal, too.

Proof. Note that (T − λI)* = T* − λ̄I. For any v ∈ V,

(T − λI)(T* − λ̄I)v = (T − λI)(T*v − λ̄v)
= T(T*v − λ̄v) − λ(T*v − λ̄v)
= TT*v − λ̄Tv − λT*v + |λ|²v,

and

(T* − λ̄I)(T − λI)v = (T* − λ̄I)(Tv − λv)
= T*(Tv − λv) − λ̄(Tv − λv)
= T*Tv − λT*v − λ̄Tv + |λ|²v.

Hence, (T − λI)(T − λI)* = (T − λI)*(T − λI) since TT* = T*T. ∎
Exercises
Exercise 7.2 (7.1). Make P₂(R) into an inner-product space by defining ⟨p, q⟩ = ∫₀¹ p(x)q(x) dx. Define T ∈ L(P₂(R)) by T(a₀ + a₁x + a₂x²) = a₁x.

a. Show that T is not self-adjoint.

b. The matrix of T with respect to the basis (1, x, x²) is

⎡0 0 0⎤
⎢0 1 0⎥
⎣0 0 0⎦.

This matrix equals its conjugate transpose, even though T is not self-adjoint. Explain why this is not a contradiction.

Proof. (a) Suppose T is self-adjoint, that is, T = T*. Take any p, q ∈ P₂(R) with p(x) = a₀ + a₁x + a₂x² and q(x) = b₀ + b₁x + b₂x². Then ⟨Tp, q⟩ = ⟨p, T*q⟩ = ⟨p, Tq⟩ implies that

∫₀¹ (a₁x)(b₀ + b₁x + b₂x²) dx = ∫₀¹ (a₀ + a₁x + a₂x²)(b₁x) dx,

that is,

a₁b₀/2 + a₁b₁/3 + a₁b₂/4 = a₀b₁/2 + a₁b₁/3 + a₂b₁/4.   (7.1)

Let a₁ = a₂ = 0; then (7.1) becomes 0 = a₀b₁/2, which fails to hold whenever a₀b₁ ≠ 0. Therefore, T ≠ T*.

(b) (1, x, x²) is not an orthonormal basis; the criterion "M(T) equals its conjugate transpose if and only if T is self-adjoint" requires the matrix to be taken with respect to an orthonormal basis. See Proposition 6.47. ∎
Exercise 7.3 (7.2). Prove or give a counterexample: the product of any two self-adjoint operators on a finite-dimensional inner-product space is self-adjoint.

Proof. The claim is incorrect. Let S, T ∈ L(V) be two self-adjoint operators. Then (ST)* = T*S* = TS. It is not necessarily the case that ST = TS, since operator multiplication is not commutative.

For example, let S, T ∈ L(R²) be defined by the following matrices (with respect to the standard basis of R²):

M(S) = ⎡0 1⎤   M(T) = ⎡1 0⎤
       ⎣1 0⎦,         ⎣0 0⎦.

Then both S and T are self-adjoint, but ST is not, since M(S)M(T) ≠ M(T)M(S). ∎
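The counterexample is immediate to verify; a quick illustrative check (not part of the original proof):

```python
import numpy as np

# Product of two symmetric matrices that is not symmetric.
S = np.array([[0, 1], [1, 0]])
T = np.array([[1, 0], [0, 0]])
ST = S @ T
print(ST)                  # [[0 0] [1 0]], not equal to its transpose
print((ST == ST.T).all())  # → False
```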
Exercise 7.4 (7.3). a. Show that if V is a real inner-product space, then the set of self-adjoint operators on V is a subspace of L(V).

b. Show that if V is a complex inner-product space, then the set of self-adjoint operators on V is not a subspace of L(V).

Proof. (a) Let L_sa(V) be the set of self-adjoint operators. Obviously, 0 = 0*, since for any v, w we have ⟨0v, w⟩ = 0 = ⟨v, 0w⟩ = ⟨v, 0*w⟩. To see L_sa(V) is closed under addition, let S, T ∈ L_sa(V). Then (S + T)* = S* + T* = S + T implies that S + T ∈ L_sa(V). Finally, for any a ∈ R and T ∈ L_sa(V), we have (aT)* = aT* = aT, so aT ∈ L_sa(V).

(b) If V is a complex inner-product space and T ≠ 0 is self-adjoint, then (iT)* = −iT* = −iT ≠ iT; so L_sa(V) is not closed under scalar multiplication, and hence not a subspace of L(V). ∎
Exercise 7.5 (7.4). Suppose P ∈ L(V) is such that P² = P. Prove that P is an orthogonal projection if and only if P is self-adjoint.

Proof. If P² = P, then V = null P ⊕ range P (by Exercise 5.22), and Pw = w for every w ∈ range P (by Exercise 6.17).

Suppose first that P = P*. Take arbitrary u ∈ null P and w ∈ range P. Then

⟨u, w⟩ = ⟨u, Pw⟩ = ⟨u, P*w⟩ = ⟨Pu, w⟩ = ⟨0, w⟩ = 0.

Hence, null P ⊥ range P and so P = P_{range P}.

Now suppose that P is an orthogonal projection. Then there exists a subspace U of V such that V = U ⊕ U^⊥ and Pv = u if v = u + w with u ∈ U and w ∈ U^⊥. Take arbitrary v₁, v₂ ∈ V with v₁ = u₁ + w₁ and v₂ = u₂ + w₂. Then ⟨Pv₁, v₂⟩ = ⟨u₁, u₂ + w₂⟩ = ⟨u₁, u₂⟩. Similarly, ⟨v₁, Pv₂⟩ = ⟨u₁ + w₁, u₂⟩ = ⟨u₁, u₂⟩. Thus, P = P*. ∎
Exercise 7.6 (7.5). Show that if dim V ≥ 2, then the set of normal operators on V is not a subspace of L(V).

Proof. Let L_n(V) denote the set of normal operators on V. For S, T ∈ L_n(V),

(S + T)(S + T)* = (S + T)(S* + T*),

which need not equal (S* + T*)(S + T), since the cross terms ST* + TS* and S*T + T*S can differ. Concretely, let S and T act on a two-dimensional subspace (and as zero on its orthogonal complement) with matrices

M(S) = ⎡ 1 1⎤   M(T) = ⎡1 0⎤
       ⎣−1 1⎦,         ⎣0 0⎦.

S is normal (M(S)M(S)ᵗ = M(S)ᵗM(S) = 2I) and T is self-adjoint, hence normal, but a direct computation shows M(S + T)M(S + T)ᵗ ≠ M(S + T)ᵗM(S + T), so S + T is not normal. ∎
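The failure of closure under addition can be checked directly; an illustrative sketch (the specific pair of matrices is the one chosen above):

```python
import numpy as np

# Two normal 2x2 real matrices whose sum is not normal.
S = np.array([[1, 1], [-1, 1]])
T = np.array([[1, 0], [0, 0]])

def is_normal(A):
    # A is normal iff A A^T == A^T A (real case).
    return np.array_equal(A @ A.T, A.T @ A)

print(is_normal(S), is_normal(T), is_normal(S + T))  # → True True False
```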
Exercise 7.7 (7.6). Prove that if T ∈ L(V) is normal, then range T = range T*.

Proof. T ∈ L(V) is normal if and only if ‖Tv‖ = ‖T*v‖ for all v ∈ V (by Proposition 7.6). Then v ∈ null T ⟺ ‖Tv‖ = 0 ⟺ ‖T*v‖ = 0 ⟺ v ∈ null T*, i.e., null T = null T*. It follows from Proposition 6.46 that

range T* = (null T)^⊥ = (null T*)^⊥ = range T. ∎
Exercise 7.8 (7.7). Prove that if T ∈ L(V) is normal, then null T^k = null T and range T^k = range T for every positive integer k.

Proof. It is evident that null T ⊆ null T^k. So we take any v ∈ null T^k with v ≠ 0 (if null T^k = {0}, there is nothing to prove). Then

⟨T*T^{k−1}v, T*T^{k−1}v⟩ = ⟨TT*T^{k−1}v, T^{k−1}v⟩ = ⟨T*TT^{k−1}v, T^{k−1}v⟩ = ⟨T*T^k v, T^{k−1}v⟩ = 0,

and so (T*T^{k−1})v = 0. Now

⟨T^{k−1}v, T^{k−1}v⟩ = ⟨T^{k−2}v, T*T^{k−1}v⟩ = 0

implies that T^{k−1}v = 0, that is, v ∈ null T^{k−1}. With the same logic, we can show that v ∈ null T^{k−2}, ..., v ∈ null T; hence null T^k = null T. For the ranges, note that T* is also normal and (T^k)* = (T*)^k, so

range T^k = (null (T*)^k)^⊥ = (null T*)^⊥ = range T. ∎
Exercise 7.9 (7.8). Prove that there does not exist a self-adjoint operator T ∈ L(R³) such that T(1, 2, 3) = (0, 0, 0) and T(2, 5, 7) = (2, 5, 7).

Proof. Suppose there exists such an operator T ∈ L(R³). Then

⟨T(1, 2, 3), (2, 5, 7)⟩ = ⟨(0, 0, 0), (2, 5, 7)⟩ = 0,

but

⟨(1, 2, 3), T(2, 5, 7)⟩ = ⟨(1, 2, 3), (2, 5, 7)⟩ = 33 ≠ 0,

contradicting self-adjointness. ∎
Exercise 7.10 (7.9). Prove that a normal operator on a complex inner-product space is self-adjoint if and only if all its eigenvalues are real.

Proof. It follows from Proposition 7.1 that every eigenvalue of a self-adjoint operator is real, so the "only if" part is clear.

To see the "if" part, let T ∈ L(V) be a normal operator all of whose eigenvalues are real. Then by the Complex Spectral Theorem, V has an orthonormal basis consisting of eigenvectors of T. Hence, M(T) is diagonal with respect to this basis, and the conjugate transpose of M(T) equals M(T) since all eigenvalues are real; thus T = T*. ∎
Exercise 7.11 (7.10). Suppose V is a complex inner-product space and T ∈ L(V) is a normal operator such that T⁹ = T⁸. Prove that T is self-adjoint and T² = T.

Proof. Let v ∈ V. Then by Exercise 7.8,

T⁸(Tv − v) = 0 ⟹ Tv − v ∈ null T⁸ = null T ⟹ T(Tv − v) = 0 ⟹ T² = T.

By the Complex Spectral Theorem, there exists an orthonormal basis of V such that M(T) is diagonal, and the entries on the diagonal are the eigenvalues (λ₁, ..., λₙ) of T. Now T² = T implies that M(T)M(T) = M(T); that is,

λᵢ² = λᵢ, i = 1, ..., n.

Then each λᵢ ∈ {0, 1} ⊆ R. It follows from Exercise 7.10 that T is self-adjoint. ∎
Exercise 7.12 (7.11). Suppose V is a complex inner-product space. Prove that every normal operator on V has a square root.

Proof. Let T be normal. By the Complex Spectral Theorem, there exists an orthonormal basis of V such that M(T) is diagonal, and the entries on the diagonal are the eigenvalues (λ₁, ..., λₙ) of T. Let S ∈ L(V) be the operator whose matrix with respect to this basis is

M(S) = diag(√λ₁, √λ₂, ..., √λₙ),

where √λᵢ denotes a complex square root of λᵢ. Then S² = T; that is, S is a square root of T. ∎
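The proof's construction (take square roots on the diagonal of an eigendecomposition) can be mimicked numerically; an illustrative sketch, with the 90° rotation as the normal operator:

```python
import numpy as np

# Square root of a diagonalizable (here: normal) matrix via T = U diag(w) U^{-1}.
T = np.array([[0., -1.], [1., 0.]])          # rotation by 90 degrees, normal
w, U = np.linalg.eig(T)                      # eigenvalues are +i and -i
S = U @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(U)
print(np.allclose(S @ S, T))                 # → True
```

Using `np.linalg.inv(U)` rather than the conjugate transpose keeps the reconstruction valid even if the returned eigenvector matrix is not exactly unitary.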
Exercise 7.13 (7.12). Give an example of a real inner-product space V and T ∈ L(V) and real numbers α, β with α² < 4β such that T² + αT + βI is not invertible.

Proof. We use a normal, but not self-adjoint, operator on V = R² (see Lemma 7.15). Let

M(T) = ⎡0 −1⎤
       ⎣1  0⎦.

Then

M(T²) = ⎡−1  0⎤
        ⎣ 0 −1⎦.

If we let α = 0 and β = 1 (so α² = 0 < 4 = 4β), then

(T² + αT + βI)(x, y) = (−x, −y) + (x, y) = (0, 0)

for all (x, y) ∈ R². Thus, T² + αT + βI = 0 is not injective, and so is not invertible. ∎
Exercise 7.14 (7.13). Prove or give a counterexample: every self-adjoint operator on V has a cube root.

Proof. By the Spectral Theorem, for any self-adjoint operator T on V there is an orthonormal basis (e₁, ..., eₙ) such that

M(T) = diag(λ₁, ..., λₙ),

with each λᵢ ∈ R (some possibly zero). Since every real number has a real cube root, the matrix

M(S) = diag(∛λ₁, ..., ∛λₙ)

satisfies [M(S)]³ = M(T). Let S be the operator with matrix M(S); then S is a cube root of T. ∎
Exercise 7.15 (7.14). Suppose T ∈ L(V) is self-adjoint, λ ∈ F, and ε > 0. Prove that if there exists v ∈ V such that ‖v‖ = 1 and ‖Tv − λv‖ < ε, then T has an eigenvalue λ′ such that |λ − λ′| < ε.

Proof. By the Spectral Theorem, there exists an orthonormal basis (e₁, ..., eₙ) consisting of eigenvectors of T; let λ₁, ..., λₙ be the corresponding eigenvalues. Write v = Σᵢ₌₁ⁿ aᵢeᵢ, where aᵢ ∈ F. Since ‖v‖ = 1, we have

1 = ‖Σᵢ₌₁ⁿ aᵢeᵢ‖² = |a₁|² + ⋯ + |aₙ|².

Suppose that |λ − λᵢ| ≥ ε for all eigenvalues λᵢ. Then

‖Tv − λv‖² = ‖T(Σᵢ aᵢeᵢ) − λ Σᵢ aᵢeᵢ‖² = ‖Σᵢ aᵢλᵢeᵢ − Σᵢ aᵢλeᵢ‖² = ‖Σᵢ aᵢ(λᵢ − λ)eᵢ‖² = Σᵢ |aᵢ|²|λᵢ − λ|² ≥ Σᵢ |aᵢ|² ε² = ε²,

that is, ‖Tv − λv‖ ≥ ε. A contradiction. Thus, there exists some eigenvalue λ′ with |λ − λ′| < ε. ∎
Exercise 7.16 (7.15). Suppose U is a finite-dimensional real vector space and T ∈ L(U). Prove that U has a basis consisting of eigenvectors of T if and only if there is an inner product on U that makes T into a self-adjoint operator.

Proof. Suppose first that U has a basis (e₁, ..., eₙ) consisting of eigenvectors of T, with corresponding eigenvalues (λ₁, ..., λₙ). Then

M(T) = diag(λ₁, ..., λₙ).

Define ⟨·, ·⟩ : U × U → R by letting ⟨eᵢ, eⱼ⟩ = δᵢⱼ and extending bilinearly. Then, for arbitrary u = Σᵢ aᵢeᵢ and w = Σᵢ bᵢeᵢ in U,

⟨Tu, w⟩ = ⟨Σᵢ aᵢTeᵢ, Σᵢ bᵢeᵢ⟩ = Σᵢ Σⱼ aᵢbⱼ⟨Teᵢ, eⱼ⟩ = Σᵢ Σⱼ aᵢλᵢbⱼ⟨eᵢ, eⱼ⟩ = Σᵢ aᵢλᵢbᵢ.

Similarly, ⟨u, Tw⟩ = Σᵢ aᵢλᵢbᵢ. Hence T = T*.

The other direction follows from the Real Spectral Theorem directly. ∎
Exercise 7.17 (7.16). Give an example of an operator T on an inner-product space such that T has an invariant subspace whose orthogonal complement is not invariant under T.

Solution. Let (e₁, ..., eₘ) be an orthonormal basis of U, and extend it to an orthonormal basis (e₁, ..., eₘ, f₁, ..., fₙ) of V. Suppose U is invariant under T. Then M(T) takes the block form

M(T) = ⎡A B⎤
       ⎣0 C⎦,

where the rows and columns are indexed by e₁, ..., eₘ, f₁, ..., fₙ. Since (f₁, ..., fₙ) is an orthonormal basis of U^⊥, the subspace U^⊥ fails to be invariant precisely when B ≠ 0.

For example, let V = R², let U be the x-axis, so that U^⊥ is the y-axis, and let (e₁, e₂) be the standard basis of R². Let

M(T) = ⎡1 1⎤
       ⎣0 1⎦.

Then Te₁ = e₁ ∈ U, so U is invariant, but Te₂ = e₁ + e₂ ∉ U^⊥. Notice that T is not normal:

M(T)M(T*) = ⎡2 1⎤   but M(T*)M(T) = ⎡1 1⎤
            ⎣1 1⎦,                  ⎣1 2⎦. ∎
Exercise 7.18 (7.17). Prove that the sum of any two positive operators on V is positive.

Proof. Let S, T ∈ L(V) be positive. Then

(S + T)* = S* + T* = S + T;

that is, S + T is self-adjoint. Also, for an arbitrary v ∈ V,

⟨(S + T)v, v⟩ = ⟨Sv, v⟩ + ⟨Tv, v⟩ ≥ 0.

Hence, S + T is positive. ∎
Exercise 7.19 (7.18). Prove that if T ∈ L(V) is positive, then so is T^k for every positive integer k.

Proof. It is evident that T^k is self-adjoint. Pick an arbitrary v ∈ V. If k = 2, then ⟨T²v, v⟩ = ⟨Tv, Tv⟩ = ‖Tv‖² ≥ 0. Now suppose that ⟨Tˡw, w⟩ ≥ 0 for every integer l < k and every w ∈ V. Then

⟨T^k v, v⟩ = ⟨T^{k−1}v, Tv⟩ = ⟨T^{k−2}(Tv), Tv⟩ ≥ 0

by the induction hypothesis applied to the vector Tv. ∎
Exercise 7.20 (7.19). Suppose that T is a positive operator on V. Prove that T is invertible if and only if ⟨Tv, v⟩ > 0 for every v ∈ V ∖ {0}.

Proof. First assume that ⟨Tv, v⟩ > 0 for every v ∈ V ∖ {0}. Then Tv ≠ 0 whenever v ≠ 0; that is, T is injective, which means that T is invertible.

Now suppose that T is invertible. Since T is self-adjoint, there exists an orthonormal basis (v₁, ..., vₙ) consisting of eigenvectors of T by the Real Spectral Theorem. Let (λ₁, ..., λₙ) be the corresponding eigenvalues; positivity gives λᵢ ≥ 0, and since T is injective we have Tvᵢ ≠ 0, hence λᵢ > 0 for all i = 1, ..., n.

For every v ∈ V ∖ {0}, there exists a list (a₁, ..., aₙ), not all zero, such that v = Σᵢ aᵢvᵢ. Then

⟨Tv, v⟩ = ⟨Σᵢ aᵢTvᵢ, Σᵢ aᵢvᵢ⟩ = ⟨Σᵢ aᵢλᵢvᵢ, Σᵢ aᵢvᵢ⟩ = Σᵢ λᵢ|aᵢ|² > 0. ∎
Exercise 7.21 (7.20). Prove or disprove: the identity operator on F² has infinitely many self-adjoint square roots.

Proof. For each θ ∈ R, let

M(S_θ) = ⎡cos θ   sin θ⎤
         ⎣sin θ  −cos θ⎦.

Each S_θ is self-adjoint, and a direct computation shows S_θ² = I. Hence, the identity has infinitely many self-adjoint square roots. ∎
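The family of reflections above is easy to check; an illustrative sketch (added, not part of the original proof):

```python
import numpy as np

# Each S_theta is symmetric and squares to the identity.
for theta in np.linspace(0.0, np.pi, 7):
    c, s = np.cos(theta), np.sin(theta)
    S = np.array([[c, s], [s, -c]])
    assert np.allclose(S, S.T)            # self-adjoint
    assert np.allclose(S @ S, np.eye(2))  # squares to the identity
print("all reflections are self-adjoint square roots of I")
```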
Part II
Linear Algebra and Its Application (Lax,
2007)
8
FUNDAMENTALS
Exercise 8.1 (1.1). Show that the zero of vector addition is unique.

Proof. Suppose that 0 and 0′ are both additive identities. Then

0′ = 0′ + 0 = 0. ∎
Exercise 8.2 (1.2). Show that the vector with all components zero serves as the zero element of classical vector addition.

Proof. Let 0 = (0, ..., 0). Then x + 0 = (x₁, ..., xₙ) + (0, ..., 0) = (x₁, ..., xₙ) = x. ∎
Example 8.3 (Examples of Linear Spaces).

(i) Set of all row vectors (a₁, ..., aₙ), aⱼ ∈ K; addition and multiplication defined componentwise. This space is denoted by Kⁿ.

(ii) Set of all real-valued functions f(x) defined on the real line, K = R.

(iii) Set of all functions with values in K, defined on an arbitrary set S.

(iv) Set of all polynomials of degree less than n with coefficients in K.
Exercise 8.4 (1.3). Show that (i) and (iv) are isomorphic.

Proof. Let P_{n−1}(K) denote the set of all polynomials of degree less than n with coefficients in K, that is,

P_{n−1}(K) = { a₁ + a₂x + ⋯ + aₙx^{n−1} : a₁, ..., aₙ ∈ K }.

Then (a₁, ..., aₙ) ↦ a₁ + a₂x + ⋯ + aₙx^{n−1} is an isomorphism. ∎
Exercise 8.5 (1.4). Show that if S has n elements, (i) and (iii) are isomorphic.

Proof. Let |S| = n with S = {s₁, ..., sₙ}. Then any function f ∈ K^S can be identified with the row vector

(f(s₁), ..., f(sₙ)) = (a₁, ..., aₙ),

and f ↦ (f(s₁), ..., f(sₙ)) is an isomorphism. ∎
Exercise 8.6 (1.5). Show that when K = R, (iv) is isomorphic with (iii) when S consists of n distinct points of R.

Proof. We need to show that R^S is isomorphic to P_{n−1}(R). We can write each f ∈ R^S as (a₁, ..., aₙ) with aᵢ = f(sᵢ), and consider the map (a₁, ..., aₙ) ↦ a₁ + a₂x + ⋯ + aₙx^{n−1}. ∎
Exercise 8.7 (1.6). Prove that Y + Z is a linear subspace of X if Y and Z are.

Proof. If y₁ + z₁, y₂ + z₂ ∈ Y + Z, then (y₁ + z₁) + (y₂ + z₂) = (y₁ + y₂) + (z₁ + z₂) ∈ Y + Z; if y + z ∈ Y + Z and k ∈ K, then k(y + z) = ky + kz ∈ Y + Z. ∎
Exercise 8.8 (1.7). Prove that if Y and Z are linear subspaces of X, so is Y ∩ Z.

Proof. If x, y ∈ Y ∩ Z, then x + y ∈ Y and x + y ∈ Z, which imply that x + y ∈ Y ∩ Z; if x ∈ Y ∩ Z, then x ∈ Y and x ∈ Z; since both Y and Z are subspaces of X, we have kx ∈ Y and kx ∈ Z for all k ∈ K, that is, kx ∈ Y ∩ Z. ∎

Exercise 8.9 (1.8). Show that the set {0} consisting of the zero element of a linear space X is a subspace of X. It is called the trivial subspace.

Proof. Trivial: 0 + 0 = 0 and k0 = 0 for every k ∈ K. ∎
Exercise 8.10 (1.9). Show that the set of all linear combinations of x₁, ..., xⱼ is a subspace of X, and that it is the smallest subspace of X containing x₁, ..., xⱼ. This is called the subspace spanned by x₁, ..., xⱼ.

Proof. Let span(x₁, ..., xⱼ) ≔ { x : x = Σᵢ₌₁ʲ kᵢxᵢ }. Let x = Σᵢ kᵢxᵢ and x′ = Σᵢ kᵢ′xᵢ. Then

x + x′ = Σᵢ₌₁ʲ (kᵢ + kᵢ′)xᵢ,

and

kx = Σᵢ₌₁ʲ (kkᵢ)xᵢ.

Hence, the set of all linear combinations of x₁, ..., xⱼ is a subspace of X.

Since xᵢ = 1·xᵢ + Σ_{l≠i} 0·xₗ, each xᵢ is a linear combination of (x₁, ..., xⱼ). Thus, span(x₁, ..., xⱼ) contains each xᵢ. Conversely, because subspaces are closed under scalar multiplication and addition, every subspace of X containing each xᵢ must contain span(x₁, ..., xⱼ). ∎
Exercise 8.11 (1.10). Show that if the vectors x₁, ..., xⱼ are linearly independent, then none of the xᵢ is the zero vector.

Proof. Suppose that some xᵢ = 0. Then

k·xᵢ + Σ_{l≠i} 0·xₗ = 0 for every k ≠ 0;

that is, the list (x₁, ..., xⱼ) is linearly dependent. ∎
Exercise 8.12 (1.11). Prove that if X is finite dimensional and the direct sum of Y₁, ..., Yₘ, then dim X = Σⱼ₌₁ᵐ dim Yⱼ.

Proof. Let (y¹₁, ..., y¹_{n₁}) be a basis of Y₁, ..., and (yᵐ₁, ..., yᵐ_{nₘ}) be a basis of Yₘ. We show that the list B = (y¹₁, ..., y¹_{n₁}, ..., yᵐ₁, ..., yᵐ_{nₘ}) is a basis of X = Y₁ ⊕ ⋯ ⊕ Yₘ. To see X = span(B), note that for any x ∈ X, there exists a unique list (y₁, ..., yₘ) with yᵢ ∈ Yᵢ such that x = Σᵢ yᵢ. But each yᵢ can be uniquely represented as yᵢ = Σⱼ aⁱⱼ yⁱⱼ; thus,

x = Σᵢ₌₁ᵐ Σⱼ₌₁^{nᵢ} aⁱⱼ yⁱⱼ.

To see that the list B is linearly independent, suppose that there exists a list of scalars (b¹₁, ..., b¹_{n₁}, ..., bᵐ₁, ..., bᵐ_{nₘ}) such that

b¹₁y¹₁ + ⋯ + b¹_{n₁}y¹_{n₁} + ⋯ + bᵐ₁yᵐ₁ + ⋯ + bᵐ_{nₘ}yᵐ_{nₘ} = 0_X.

But 0_X = 0y¹₁ + ⋯ + 0yᵐ_{nₘ}, and the uniqueness of representations in the direct sum X = Y₁ ⊕ ⋯ ⊕ Yₘ, together with the linear independence of each basis, implies that all the scalars are zero; that is, B is linearly independent. Therefore, dim X = Σⱼ₌₁ᵐ dim Yⱼ. ∎
Exercise 8.13 (1.12). Show that every finite-dimensional space X over K is isomorphic to Kⁿ, n = dim X. Show that this isomorphism is not unique when n > 1.

Proof. Let (x₁, ..., xₙ) be a basis of X, and (e₁, ..., eₙ) be a basis of Kⁿ. Define a linear map T ∈ L(X, Kⁿ) by letting Txᵢ = eᵢ. Then for any x = Σᵢ aᵢxᵢ ∈ X, we have

Tx = T(Σᵢ aᵢxᵢ) = Σᵢ aᵢTxᵢ = Σᵢ aᵢeᵢ.

We first show that T is surjective. For any k ∈ Kⁿ, there exists (k₁, ..., kₙ) such that k = Σᵢ kᵢeᵢ, and so there exists x_k = Σᵢ kᵢxᵢ ∈ X such that Tx_k = Σᵢ kᵢeᵢ = k. To see T is injective, let

T(Σᵢ aᵢxᵢ) = T(Σᵢ bᵢxᵢ),

that is,

Σᵢ aᵢeᵢ = Σᵢ bᵢeᵢ ⟹ Σᵢ (aᵢ − bᵢ)eᵢ = 0 ⟹ aᵢ = bᵢ for all i = 1, ..., n,

since (e₁, ..., eₙ) is linearly independent. Thus, Σᵢ aᵢxᵢ = Σᵢ bᵢxᵢ.

The isomorphism is not unique when n > 1, since different bases yield different isomorphisms (already for n = 2, swapping the two basis vectors gives a different one). ∎
Exercise 8.14 (1.13). Congruence mod Y is an equivalence relation. Show further that if x₁ ≡ x₂, then kx₁ ≡ kx₂ for every scalar k.

Proof. (i) If x₁ ≡ x₂, then x₁ − x₂ ∈ Y, which means that x₂ − x₁ = −(x₁ − x₂) ∈ Y since Y is a subspace; (ii) x − x = 0 ∈ Y; (iii) if x₁ − x₂ ∈ Y and x₂ − x₃ ∈ Y, then x₁ − x₃ = (x₁ − x₂) + (x₂ − x₃) ∈ Y, that is, x₁ ≡ x₃.

If x₁ ≡ x₂ mod Y, then x₁ − x₂ ∈ Y and so k(x₁ − x₂) ∈ Y since Y is a subspace of X. But then kx₁ − kx₂ ∈ Y, i.e., kx₁ ≡ kx₂ mod Y. ∎
Exercise 8.15 (1.14). Show that two congruence classes are either identical or disjoint.

Proof. Suppose [x₁] and [x₂] are not disjoint, and let x₃ ∈ [x₁] ∩ [x₂]. Then x₁ ≡ x₃ and x₃ ≡ x₂. By transitivity of ≡ we have x₁ ≡ x₂, that is, [x₁] = [x₂]. ∎
Exercise 8.16 (1.15). Show that the above definition of addition and multiplication by scalars is independent of the choice of representatives in the congruence class.¹

Proof. By definition, [x] + [z] = [x + z] = (x + z) + Y, and k[x] = [kx] = kx + Y. Note that [x′] = [x] if x′ ∈ [x]. ∎
Exercise 8.17 (1.16). Denote by X the linear space of all polynomials p(t) of degree < n, and denote by Y the set of polynomials that are zero at t₁, ..., tⱼ, j < n.

a. Show that Y is a subspace of X.

b. Determine dim Y.

c. Determine dim X/Y.

Proof.

a. Any p ∈ P_{n−1}(K) with roots t₁, ..., tⱼ can be written in the form

p(t) = q(t) ∏ᵢ₌₁ʲ (t − tᵢ),

where q(t) ∈ P_{n−1−j}(K). These clearly form a vector space.

b. dim Y = n − j.

c. dim X/Y = dim X − dim Y = n − (n − j) = j. ∎

¹ We have [x] = x + Y. Proof: If z ∈ [x], then there exists y ∈ Y such that z − x = y; then z = x + y ∈ x + Y. Conversely, if z ∈ x + Y, then z = x + y for some y ∈ Y; hence, z − x = y ∈ Y, i.e., z ∈ [x].
Exercise 8.18 (1.17). A subspace Y of a finite-dimensional linear space X whose dimension is the same as the dimension of X is all of X.

Proof. Suppose that Y ≠ X; then there exists x ∈ X ∖ Y, and x ≠ 0_X since 0_X ∈ Y. Consider [x] = x + Y. Then [x] ∈ X/Y and [x] ≠ Y = 0_{X/Y}, so dim X/Y ≥ 1, which implies that dim Y = dim X − dim X/Y < dim X by Theorem 1.6. A contradiction. ∎
Exercise 8.19 (1.18). Show that dim(X₁ ⊕ X₂) = dim X₁ + dim X₂.

Proof. X₁ ⊕ X₂ implies that X₁ ∩ X₂ = {0}, that is, dim(X₁ ∩ X₂) = 0. Therefore,

dim(X₁ ⊕ X₂) = dim X₁ + dim X₂.

See Exercise 8.12. ∎
Exercise 8.20 (1.19). X is a linear space, Y a subspace. Show that Y ⊕ X/Y is isomorphic to X.

Proof. According to Exercise 8.13, we only need to show that dim(Y ⊕ X/Y) = dim X. This holds since

dim(Y ⊕ X/Y) = dim Y + dim X/Y = dim X. ∎
Exercise 8.21 (1.20). Which of the following sets of vectors x = (x₁, ..., xₙ) ∈ Rⁿ are a subspace of Rⁿ? Explain your answer.

a. All x such that x₁ ≥ 0.

b. All x such that x₁ + x₂ = 0.

c. All x such that x₁ + x₂ + 1 = 0.

d. All x such that x₁ = 0.

e. All x such that x₁ is an integer.

Proof.

a. No. −x is not in the set if x₁ > 0.

b. Yes.

c. No. kx is not in the set if k ≠ 1; in particular, 0 is not in the set.

d. Yes.

e. No. kx need not be in the set when k is not an integer, e.g., k = 1/2 and x₁ = 1. ∎
Exercise 8.22 (1.21). Let U, V, and W be subspaces of some finite-dimensional vector space X. Is the statement

dim(U + V + W) = dim U + dim V + dim W − dim(U ∩ V) − dim(U ∩ W) − dim(V ∩ W) + dim(U ∩ V ∩ W)

true or false? If true, prove it. If false, provide a counterexample.

Proof. It is false. For example, take X = R² and let U, V, W be three distinct lines through the origin. Then the left-hand side is dim R² = 2, while every intersection is {0}, so the right-hand side is 1 + 1 + 1 = 3. See also Exercise 2.17. ∎
9
DUALITY
Remark 9.1 (Theorem 3). The bilinear function (x, ℓ) gives a natural identification of X with X′′.

Proof. Fix x = x₀; then we observe that the function of the vectors ℓ in X′, whose value at ℓ is (x₀, ℓ) = ℓ(x₀), is a scalar-valued function that happens to be linear. [Proof: Let z₀ ∈ X′′ be so defined. For any ℓ, ℓ′ ∈ X′, we have z₀(ℓ + ℓ′) = (x₀, ℓ + ℓ′) = (ℓ + ℓ′)(x₀) = ℓ(x₀) + ℓ′(x₀) = z₀(ℓ) + z₀(ℓ′). For any k ∈ K and ℓ ∈ X′, we have z₀(kℓ) = (x₀, kℓ) = (kℓ)(x₀) = kℓ(x₀) = kz₀(ℓ).]

Thus, (x₀, ·) defines a linear functional on X′, and consequently an element of X′′. By this method we have exhibited some linear functionals on X′; have we exhibited them all? For the finite-dimensional case the following theorem furnishes the affirmative answer.

If X is a finite-dimensional vector space, then corresponding to every linear functional z₀ on X′ there is a vector x₀ ∈ X such that z₀(ℓ) = (x₀, ℓ) = ℓ(x₀) for every ℓ ∈ X′; the correspondence z₀ ↔ x₀ between X′′ and X is an isomorphism.

Proof: To every x₀ ∈ X, we make correspond a vector z_{x₀} ∈ X′′ defined by z_{x₀}(ℓ) = ℓ(x₀) for every ℓ ∈ X′. We first show that the transformation x₀ ↦ z_{x₀} is linear. For any x₀, x₁ ∈ X, we have x₀ + x₁ ↦ z_{x₀+x₁}; by definition, z_{x₀+x₁}(ℓ) = ℓ(x₀ + x₁) = ℓ(x₀) + ℓ(x₁) = z_{x₀}(ℓ) + z_{x₁}(ℓ) for any ℓ ∈ X′. For any k ∈ K and x₀ ∈ X, we have kx₀ ↦ z_{kx₀} and so z_{kx₀}(ℓ) = ℓ(kx₀) = kℓ(x₀) = kz_{x₀}(ℓ) for any ℓ ∈ X′.

We shall show that this transformation is injective. Take any z_{x₁}, z_{x₂} ∈ X′′ with z_{x₁} = z_{x₂}. To say that z_{x₁} = z_{x₂} means that (x₁, ℓ) = (x₂, ℓ) for every ℓ ∈ X′. But then x₁ = x₂ by Exercise 9.3 (iii).

Therefore, the set Z ≔ {z_x : x ∈ X} is a subspace of X′′ (since Z is the range of a linear map), and Z is isomorphic to X, and so dim Z = dim X. Since dim X = dim X′ = dim X′′, we have dim Z = dim X′′. It follows that X′′ = Z by Exercise 8.18. ∎
Remark 9.2 (p. 16). Y^⊥ is isomorphic to (X/Y)′.

Proof. Given ℓ ∈ Y^⊥, we make correspond a linear functional L_ℓ ∈ (X/Y)′ defined by

L_ℓ[x] = ℓ(x).

We show first that L_ℓ is well defined.¹ Let [x₁] = [x]; then there exists y₁ ∈ Y such that x₁ = x + y₁. Thus,

ℓ(x₁) = ℓ(x + y₁) = ℓ(x) + ℓ(y₁) = ℓ(x),

that is, L_ℓ[x₁] = L_ℓ[x] if [x₁] = [x]. We then show that L_ℓ so defined is linear. For any [x], [y] ∈ X/Y, we have L_ℓ([x] + [y]) = L_ℓ[x + y] = ℓ(x + y) = ℓ(x) + ℓ(y) = L_ℓ[x] + L_ℓ[y], where the first equality holds since [x] + [y] = (x + Y) + (y + Y) = (x + y) + Y = [x + y]. To see L_ℓ is homogeneous, take any k ∈ K and [x] ∈ X/Y. Then L_ℓ(k[x]) = L_ℓ[kx] = ℓ(kx) = kℓ(x) = kL_ℓ[x].

[Figure 9.1: the congruence classes [x] = x + Y and [z] = z + Y as parallel translates of the subspace Y. Caption: Y^⊥ ≅ (X/Y)′.]

Conversely, given any L ∈ (X/Y)′, define a linear functional ℓ_L on X as

ℓ_L(x) = L[x].

It follows from this definition that ℓ_L ∈ Y^⊥: for any y ∈ Y, we have [y] = y + Y = Y and so ℓ_L(y) = L[Y] = 0. This also proves that the correspondence between Y^⊥ and (X/Y)′ is surjective. We finally show that the mapping ℓ ↦ L_ℓ is injective. Take ℓ₁, ℓ₂ ∈ Y^⊥ such that L_{ℓ₁} = L_{ℓ₂}. This means that L_{ℓ₁}[x] = L_{ℓ₂}[x] for all [x] ∈ X/Y, which means that ℓ₁(x) = ℓ₂(x) for all x ∈ X, i.e., ℓ₁ = ℓ₂. Thus, ℓ ↦ L_ℓ is injective. ∎

¹ To see the relation between congruence classes and affine sets (linear manifolds), refer to Rockafellar (1970).
Exercise 9.3 (2.1). Given a nonzero vector x₁ ∈ X, show that there is a linear function ℓ such that ℓ(x₁) ≠ 0.

Proof. See Halmos (1974, Sec. 15).

(i) If X is an n-dimensional vector space, if (x₁, ..., xₙ) is a basis of X, and if (a₁, ..., aₙ) is any list of n scalars, then there is one and only one linear functional ℓ on X such that (xᵢ, ℓ) = aᵢ for i = 1, ..., n.

Proof: Every x ∈ X can be represented uniquely as x = Σᵢ kᵢxᵢ, where kᵢ ∈ K. If ℓ is any linear functional, then

(x, ℓ) = (Σᵢ kᵢxᵢ, ℓ) = k₁(x₁, ℓ) + ⋯ + kₙ(xₙ, ℓ).

From this relation the uniqueness of ℓ is clear: if (xᵢ, ℓ) = aᵢ, then the value of (x, ℓ) is determined, for every x, by (x, ℓ) = Σᵢ kᵢaᵢ. The argument can also be turned around: if we define ℓ by

(x, ℓ) = k₁a₁ + ⋯ + kₙaₙ,

then ℓ is indeed a linear functional, and (xᵢ, ℓ) = aᵢ.

(ii) If X is an n-dimensional vector space and if B = (x₁, ..., xₙ) is a basis of X, then there is a uniquely determined basis B′ in X′, B′ = (ℓ₁, ..., ℓₙ), with the property that (xᵢ, ℓⱼ) = δᵢⱼ. Consequently the dual space of an n-dimensional space is n-dimensional.

Proof: It follows from (i) that, for each j = 1, ..., n, a unique ℓⱼ ∈ X′ can be found so that (xᵢ, ℓⱼ) = δᵢⱼ; we have only to prove that the list B′ = (ℓ₁, ..., ℓₙ) is a basis in X′. In the first place, B′ is linearly independent, for if we had a₁ℓ₁ + ⋯ + aₙℓₙ = 0, in other words, if

(x, a₁ℓ₁ + ⋯ + aₙℓₙ) = a₁(x, ℓ₁) + ⋯ + aₙ(x, ℓₙ) = 0

for all x ∈ X, then we should have, for x = xᵢ,

0 = Σⱼ aⱼ(xᵢ, ℓⱼ) = Σⱼ aⱼδᵢⱼ = aᵢ.

In the second place, X′ = span(ℓ₁, ..., ℓₙ). To prove this, write (xᵢ, ℓ) = aᵢ; then, for x = Σᵢ kᵢxᵢ, we have

(x, ℓ) = Σᵢ kᵢ(xᵢ, ℓ) = Σᵢ kᵢaᵢ.

On the other hand,

(x, ℓⱼ) = Σᵢ kᵢ(xᵢ, ℓⱼ) = kⱼ,

so that, substituting in the preceding equation, we get

(x, ℓ) = Σᵢ kᵢaᵢ = Σᵢ aᵢ(x, ℓᵢ) = (x, Σᵢ aᵢℓᵢ).

Consequently ℓ = Σᵢ aᵢℓᵢ, and the proof of (ii) is complete.

(iii) To any non-zero vector x ∈ X there corresponds an ℓ ∈ X′ such that (x, ℓ) ≠ 0.

Proof: Let (x₁, ..., xₙ) be a basis of X, and let (ℓ₁, ..., ℓₙ) be the dual basis in X′. If x = Σᵢ kᵢxᵢ, then (x, ℓⱼ) = kⱼ. Hence if (x, ℓ) = 0 for all ℓ, in particular, if (x, ℓⱼ) = 0 for j = 1, ..., n, then kⱼ = 0 for all j and so x = 0_X. ∎
Exercise 9.4 (2.2). Verify that Y^⊥ is a subspace of X′.

Proof. (i) Obviously 0 ∈ Y^⊥, since (x, 0) = 0 for any x ∈ X, including every y ∈ Y ⊆ X. (ii) Let ℓ, m ∈ Y^⊥. Then (y, ℓ) = 0 = (y, m) for all y ∈ Y and so (y, ℓ + m) = (y, ℓ) + (y, m) = 0, i.e., ℓ + m ∈ Y^⊥. (iii) If ℓ ∈ Y^⊥, then (y, kℓ) = k(y, ℓ) = 0 for any y ∈ Y, and so kℓ ∈ Y^⊥. Thus Y^⊥ is a subspace of X′. ∎
Exercise 9.5 (2.3). Denote by Y the smallest subspace containing S. Then S^⊥ = Y^⊥.

Proof. It is clear that Y^⊥ ⊆ S^⊥. If S = ∅, then Y = {0} and the conclusion is obvious. Similarly, the proof is trivial if S = {0}. So we suppose that S ≠ ∅ and S ≠ {0}. Take any y₁ ∈ S with y₁ ≠ 0. If S ⊆ span(y₁), let Y = span(y₁); if there is y₂ ∈ S ∖ span(y₁), let Y = span(y₁, y₂); and so on. Since the ambient vector space is finite-dimensional, the process ends with a list (y₁, ..., yₘ) with y₁, ..., yₘ ∈ S, and this list is a basis of Y. Then for any ℓ ∈ S^⊥ and any y = Σᵢ kᵢyᵢ ∈ Y, we have

(y, ℓ) = (Σᵢ kᵢyᵢ, ℓ) = Σᵢ kᵢ(yᵢ, ℓ) = 0

since ℓ(yᵢ) = 0. Thus, ℓ ∈ Y^⊥, so S^⊥ ⊆ Y^⊥. ∎
Exercise 9.6 (2.4). In Theorem 7 take the interval I to be [−1, 1], and take n = 3. Choose the three points to be t₁ = −a, t₂ = 0, and t₃ = a.

a. Determine the weights m₁, m₂, m₃ so that

∫₋₁¹ p(t) dt = m₁p(t₁) + m₂p(t₂) + m₃p(t₃)

holds for all polynomials p ∈ P₂(I).

b. Show that for a > √(1/3), all three weights are positive.

c. Show that for a = √(3/5), (9) holds for all p ∈ P₅(I).

Proof.

a. If p(t) = t, then ∫₋₁¹ t dt = 0, and so 0 = m₁(−a) + m₃a, i.e., m₁ = m₃. Then (9) can be rewritten as

∫₋₁¹ p(t) dt = m₁[p(−a) + p(a)] + m₂p(0). (9.1)

Take p(t) = 1 now. Then 2 = ∫₋₁¹ dt = 2m₁ + m₂, i.e., m₂ = 2(1 − m₁). So we rewrite (9.1) as

∫₋₁¹ p(t) dt = m₁[p(−a) + p(a)] + 2(1 − m₁)p(0). (9.2)

Now let p(t) = t², and hence p(0) = 0. We then have

2/3 = ∫₋₁¹ t² dt = 2m₁a²,

which implies that

m₁ = m₃ = 1/(3a²) and m₂ = 2 − 2/(3a²).

b. By part (a), m₁ = m₃ = 1/(3a²) > 0 for every a ≠ 0, while m₂ = 2 − 2/(3a²) > 0 if and only if 3a² > 1, i.e., a > √(1/3).

c. With a = √(3/5), part (a) gives m₁ = m₃ = 5/9 and m₂ = 8/9. The rule is exact for 1 and t² by construction, and both sides of (9) vanish for the odd powers t, t³, t⁵ by the symmetry of the nodes and weights. For p(t) = t⁴ the left side is ∫₋₁¹ t⁴ dt = 2/5, and the right side is 2m₁a⁴ = 2(5/9)(9/25) = 2/5. Since 1, t, …, t⁵ span P₅(I), (9) holds for all p ∈ P₅(I). ∎
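The computed weights can be verified numerically. The following Python sketch (my own check, not part of the manual) confirms that the rule with m₁ = m₃ = 1/(3a²) and m₂ = 2 − 2/(3a²) is exact on P₂ for any a ∈ (0, 1], and exact on all of P₅ at the Gauss value a = √(3/5):

```python
from math import isclose, sqrt

def weights(a):
    """Weights from part (a): m1 = m3 = 1/(3a^2), m2 = 2 - 2/(3a^2)."""
    m1 = 1.0 / (3.0 * a * a)
    return m1, 2.0 - 2.0 * m1, m1

def quad(p, a):
    """Three-point rule with nodes t = -a, 0, a."""
    m1, m2, m3 = weights(a)
    return m1 * p(-a) + m2 * p(0.0) + m3 * p(a)

def exact_moment(k):
    """Integral of t^k over [-1, 1]."""
    return 0.0 if k % 2 else 2.0 / (k + 1)

# Exact on P_2 for any choice of a:
for a in (0.3, 0.5, 1.0):
    for k in (0, 1, 2):
        assert isclose(quad(lambda t: t**k, a), exact_moment(k), abs_tol=1e-12)

# Part (c): at a = sqrt(3/5) the rule is exact on P_5.
a = sqrt(3.0 / 5.0)
for k in range(6):
    assert isclose(quad(lambda t: t**k, a), exact_moment(k), abs_tol=1e-12)
```

At a = √(3/5) the weights come out as m₁ = m₃ = 5/9 and m₂ = 8/9, matching the classical three-point Gauss rule.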
10
LINEAR MAPPINGS
Exercise 10.1 (3.1). The image of a subspace of X under a linear map T : X → U is a subspace of U. The inverse image of a subspace of U, that is, the set of all vectors in X mapped by T into the subspace, is a subspace of X.

Proof. Let Y be a subspace of X; then 0_X ∈ Y and so 0_U = T0_X ∈ T[Y]. To see that T[Y] is closed under addition, take any Tx, Ty ∈ T[Y] with x, y ∈ Y. Then x + y ∈ Y and Tx + Ty = T(x + y) ∈ T[Y]. To see that T[Y] is closed under scalar multiplication, take any k ∈ K and Tx ∈ T[Y]; then kx ∈ Y and kTx = T(kx) ∈ T[Y]. Thus T[Y] is a subspace of U.

We then show that T⁻¹[V] is a subspace of X if V is a subspace of U. (i) 0_X ∈ T⁻¹[V] since 0_U ∈ V and T0_X = 0_U. (ii) For any x, y ∈ T⁻¹[V], we have Tx, Ty ∈ V and so T(x + y) = Tx + Ty ∈ V, i.e., x + y ∈ T⁻¹[V]. (iii) For any k ∈ K and x ∈ T⁻¹[V], we have Tx ∈ V and so T(kx) = kTx ∈ V, that is, kx ∈ T⁻¹[V]. ∎
Exercise 10.2 (3.3).

a. The composite of linear mappings is also a linear mapping.

b. Composition is distributive with respect to the addition of linear maps, that is, (R + S) ∘ T = R ∘ T + S ∘ T and S ∘ (T + P) = S ∘ T + S ∘ P, where R and S map U → V and P and T map X → U.

Proof.

a. Let T ∈ L(X, U) and S ∈ L(U, V), and consider S ∘ T. To see that S ∘ T is additive, take any x, y ∈ X; then (S ∘ T)(x + y) = S(T(x + y)) = S(Tx + Ty) = S(Tx) + S(Ty) = (S ∘ T)x + (S ∘ T)y. To see that S ∘ T is homogeneous, take any k ∈ K and x ∈ X. Then (S ∘ T)(kx) = S(T(kx)) = S(kTx) = kS(Tx) = k(S ∘ T)x.

b. Let P, T : X → U and R, S : U → V. For any x ∈ X, we have ((R + S) ∘ T)(x) = (R + S)(Tx) = R(Tx) + S(Tx) = (R ∘ T)x + (S ∘ T)x. The other claim is proved similarly. ∎
Exercise 10.3 (3.7). Show that whenever meaningful,

(ST)′ = T′S′, (T + R)′ = T′ + R′, and (T⁻¹)′ = (T′)⁻¹.

Proof. For a generic linear mapping T ∈ L(X, U), transposition reverses the arrow: T : X → U induces T′ : U′ → X′.

For the first equality¹, let T : X → U and S : U → V, so that ST : X → V and (ST)′ : V′ → X′. For any ℓ ∈ V′ and x ∈ X, we have

((T′S′)ℓ, x) = (T′(S′ℓ), x) = (S′ℓ, Tx) = (ℓ, S(Tx)) = (ℓ, (ST)x) = ((ST)′ℓ, x),

and this establishes the first equality. As for the second equality, with T, R : X → U and ℓ ∈ U′,

((T′ + R′)ℓ, x) = (T′ℓ, x) + (R′ℓ, x) = (ℓ, Tx) + (ℓ, Rx) = (ℓ, (T + R)x) = ((T + R)′ℓ, x).

Finally, let T : X → U be invertible; then T′ : U′ → X′ and (T′)⁻¹ : R_{T′} → U′. Take any m ∈ R_{T′}; then there exists ℓ ∈ U′ such that (T′)⁻¹(m) = ℓ, or equivalently, T′ℓ = m. Now consider (T⁻¹)′(m). By the first equality, (T⁻¹)′T′ = (T T⁻¹)′, and hence

(T⁻¹)′(m) = (T⁻¹)′(T′ℓ) = (T T⁻¹)′ℓ = Id′ℓ = ℓ,

since Id′ = Id. Thus (T⁻¹)′ = (T′)⁻¹. ∎
Exercise 10.4 (3.8). Show that if X′′ is identified with X and U′′ with U, then T′′ = T.

Proof. For any ℓ ∈ U′ and x ∈ X, we have

(ℓ, Tx) = (T′ℓ, x) = (ℓ, T′′x),

where the second step uses the identification of X′′ with X and of U′′ with U. Since ℓ and x are arbitrary, T′′ = T. ∎
Exercise 10.5 (3.9). Show that if A ∈ L(X) is a left inverse of B ∈ L(X), that is, AB = Id, then it is also a right inverse: BA = Id.
¹ Notation Warning: We occasionally use F, instead of K, to denote the field. From now on we also go back to Lax's notation for the pairing of a functional and a vector, that is, ℓ(x) ≔ (ℓ, x).
Proof. Since AB = Id, the map A is surjective; because X is finite-dimensional, A is therefore bijective and has an inverse A⁻¹. Then B = A⁻¹(AB) = A⁻¹, and hence BA = A⁻¹A = Id. ∎
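A concrete matrix instance of the claim, with a hand-picked pair satisfying AB = Id (the matrices are illustrative, not from the text):

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

F = Fraction
I2 = [[F(1), F(0)], [F(0), F(1)]]

# Illustrative pair with A B = Id:
A = [[F(2), F(1)], [F(1), F(1)]]
B = [[F(1), F(-1)], [F(-1), F(2)]]

assert matmul(A, B) == I2   # A is a left inverse of B ...
assert matmul(B, A) == I2   # ... and automatically a right inverse too
```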
Exercise 10.6 (3.10). Show that if M is invertible, and similar to K, then K also is invertible, and K⁻¹ is similar to M⁻¹.

Proof. That M is similar to K means that K = SMS⁻¹ for some invertible S; then K is invertible with

K⁻¹ = (SMS⁻¹)⁻¹ = (S⁻¹)⁻¹M⁻¹S⁻¹ = SM⁻¹S⁻¹,

so K⁻¹ is similar to M⁻¹. Equivalently, M = S⁻¹KS gives M⁻¹ = S⁻¹K⁻¹S. ∎
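The computation K⁻¹ = SM⁻¹S⁻¹ can be replayed with explicit 2×2 matrices over the rationals (the particular M and S below are hypothetical choices):

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

F = Fraction
M = [[F(2), F(1)], [F(0), F(3)]]   # invertible: det M = 6
S = [[F(1), F(1)], [F(0), F(1)]]   # change of basis, det S = 1

K = matmul(matmul(S, M), inv2(S))  # K = S M S^{-1} is similar to M
# K^{-1} = S M^{-1} S^{-1}, exactly as in the proof:
assert inv2(K) == matmul(matmul(S, inv2(M)), inv2(S))
```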
Exercise 10.7 (3.11). If either A or B in L(X) is invertible, then AB and BA are similar.

Proof. Suppose B is invertible. Then

B(AB)B⁻¹ = BA,

i.e., AB is similar to BA. The case where A is invertible is symmetric: A(BA)A⁻¹ = AB. ∎
Exercise 10.8 (3.14). Suppose T is a linear map of rank 1 of a finite-dimensional vector space into itself.

a. Show there exists a unique number c such that T² = cT.

b. Show that if c ≠ 1 then Id − T has an inverse.

Proof. Let T ∈ L(X). By definition, rank(T) = 1 means that dim R_T = 1.

a. Let (v), where v ≠ 0, be a basis of R_T. Since Tv ∈ R_T = span(v), there exists a scalar c such that Tv = cv. For any x ∈ X we have Tx ∈ R_T, so Tx = bv for some scalar b; then

T²x = T(Tx) = T(bv) = bTv = bcv = c(bv) = cTx.

Since this holds for every x ∈ X, we have T² = cT. The number c is unique: since rank(T) = 1, there is an x with Tx ≠ 0, and cTx = T²x then determines c.

b. Suppose c ≠ 1. Using T² = cT, one checks directly that Id + (1 − c)⁻¹T is a two-sided inverse of Id − T:

(Id − T)(Id + (1 − c)⁻¹T) = Id + (1 − c)⁻¹T − T − (1 − c)⁻¹cT = Id + (1 − c)⁻¹(1 − (1 − c) − c)T = Id,

and similarly for the product in the other order. ∎
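Both parts can be checked on a concrete rank-1 matrix T = uvᵗ, for which c = vᵗu. The vectors u and v below are illustrative choices giving c = 5 ≠ 1, so part (b) applies as well:

```python
from fractions import Fraction

F = Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def add(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

# Illustrative rank-1 map T = u v^t on Q^3; then T v' = (v^t u) ... and c = v^t u.
u = [F(1), F(2), F(0)]
v = [F(3), F(1), F(1)]
T = [[ui * vj for vj in v] for ui in u]
c = sum(vi * ui for vi, ui in zip(v, u))   # c = 3 + 2 + 0 = 5

assert matmul(T, T) == scale(c, T)         # part (a): T^2 = cT

# Part (b): since c != 1, (Id - T)^{-1} = Id + (1 - c)^{-1} T.
Id = [[F(1 if i == j else 0) for j in range(3)] for i in range(3)]
IminusT = add(Id, scale(F(-1), T))
Inv = add(Id, scale(F(1) / (1 - c), T))
assert matmul(IminusT, Inv) == Id and matmul(Inv, IminusT) == Id
```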
Exercise 10.9 (3.15²·³). Suppose T and S are linear maps of a finite-dimensional vector space into itself. Show that rank(ST) ≤ rank(S). Show that dim N_{ST} ≤ dim N_S + dim N_T.

Proof. Consider X →T X →S X. By definition, rank(ST) ≤ rank(S) if and only if dim R_{ST} ≤ dim R_S; this holds since R_{ST} = S[T[X]] ⊆ S[X] = R_S. As for the second claim, we have

R_{ST} = (ST)[X] = S[T[X]] = S[R_T],

so that

rank(ST) = dim R_{ST} = dim S[R_T].

If M is a subspace of dimension m, say, and if N is any complement of M, so that X = M ⊕ N, then⁴

R_S = S[X] = S[M] + S[N].

It follows that

rank(S) = dim R_S ≤ dim S[M] + dim S[N] ≤ dim S[M] + dim N,

and hence that

dim X − dim N_S ≤ dim S[M] + dim X − m.

If in particular

M = R_T = T[X],

so that m = rank(T) and dim S[M] = rank(ST), then the last inequality implies that

rank(T) − dim N_S ≤ rank(ST),

or, equivalently, that

dim X − dim N_T − dim N_S ≤ dim X − dim N_{ST},

that is,

dim N_{ST} ≤ dim N_S + dim N_T. ∎
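Both inequalities can be spot-checked numerically. The sketch below implements rank by exact row reduction over the rationals (the matrices S and T are arbitrary illustrative choices, not from the text):

```python
from fractions import Fraction

F = Fraction

def rank(A):
    """Row-reduce a matrix of Fractions and count the pivot rows."""
    A = [row[:] for row in A]
    r = 0
    for col in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Illustrative 3x3 maps (third row of S is the sum of the first two):
S = [[F(1), F(2), F(3)], [F(0), F(1), F(1)], [F(1), F(3), F(4)]]
T = [[F(1), F(0), F(0)], [F(0), F(0), F(0)], [F(0), F(0), F(1)]]
n = 3
nullity = lambda A: n - rank(A)   # dim N_A = n - rank(A)

assert rank(matmul(S, T)) <= rank(S)
assert nullity(matmul(S, T)) <= nullity(S) + nullity(T)
```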
² See Exercise 3.18.
³ See Halmos (1995, Problem 95, p. 270).
⁴ Theorem 1.5 (b).
11
MATRICES
12
DETERMINANT AND TRACE
Exercise 12.1 (5.1). Prove the properties of the signature:¹

sign(σ) = ±1, (5-a)
sign(σ ∘ τ) = sign(σ) sign(τ). (5-b)

Proof. The discriminant of (x₁, …, xₙ) is P(x₁, …, xₙ) = ∏_{i<j} (xᵢ − xⱼ). Thus

(σP)(x₁, …, xₙ) = ∏_{i<j} (x_{σ(i)} − x_{σ(j)}).

A typical factor in σP is x_{σ(i)} − x_{σ(j)}. Now if σ(i) < σ(j), this is also a factor of P, while if σ(i) > σ(j), then −(x_{σ(i)} − x_{σ(j)}) is a factor of P. Consequently, σP = P if the number of inversions of the natural order in σ is even, and σP = −P if it is odd. Then (5-a) holds since sign(σ) = σP/P = ±1.

We now prove (5-b). Since ρP = sign(ρ)P for every permutation ρ, we have

((σ ∘ τ)P)(x₁, …, xₙ) = (σ(τP))(x₁, …, xₙ) = sign(τ)(σP)(x₁, …, xₙ) = sign(τ) sign(σ) P(x₁, …, xₙ).

But (σ ∘ τ)P = sign(σ ∘ τ)P. Hence sign(σ ∘ τ) = sign(σ) sign(τ). ∎
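Both (5-a) and (5-b) can be verified exhaustively for small n by computing the signature from the inversion count, exactly as in the proof. A Python sketch over S₄:

```python
from itertools import permutations

def sign(p):
    """Signature via the inversion count of the permutation tuple p."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(p, q):
    """(p o q)(i) = p(q(i)), with permutations as tuples of 0..n-1."""
    return tuple(p[q[i]] for i in range(len(q)))

# (5-a) and (5-b), checked on all of S_4:
for p in permutations(range(4)):
    assert sign(p) in (-1, 1)
    for q in permutations(range(4)):
        assert sign(compose(p, q)) == sign(p) * sign(q)
```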
Exercise 12.2 (5.2). Prove that transposition has the following properties:

a. The signature of a transposition τ is minus one:

sign(τ) = −1. (5-c)

b. Every permutation σ can be written as a composition of transpositions:

σ = τ_k ∘ ⋯ ∘ τ_1. (5-d)

Proof. (5-c) is clear, since a transposition produces an odd number of inversions of the natural order. For (5-d), see Robinson (2003, 3.1.3 & 3.1.4, pp. 34–35). ∎

¹ See Robinson (2003, Sec. 3.1) for a detailed discussion of permutations, the signature function, and so on.
References

[1] Axler, Sheldon (1997) Linear Algebra Done Right, Undergraduate Texts in Mathematics, New York: Springer-Verlag, 2nd edition.

[2] Halmos, Paul R. (1974) Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, New York: Springer-Verlag.

[3] Halmos, Paul R. (1995) Linear Algebra Problem Book, Vol. 16 of The Dolciani Mathematical Expositions, Washington, DC: The Mathematical Association of America.

[4] Lax, Peter D. (2007) Linear Algebra and Its Applications, Pure and Applied Mathematics: A Wiley-Interscience Series of Texts, Monographs and Tracts, New Jersey: Wiley-Interscience, 2nd edition.

[5] Robinson, Derek J. S. (2003) An Introduction to Abstract Algebra, Berlin: Walter de Gruyter.

[6] Rockafellar, R. Tyrrell (1970) Convex Analysis, Princeton Mathematical Series, New Jersey: Princeton University Press.

[7] Roman, Steven (2008) Advanced Linear Algebra, Vol. 135 of Graduate Texts in Mathematics, New York: Springer Science+Business Media, 3rd edition.