
Exercises in Advanced Linear and Multilinear Algebra

John M. Erdman
Portland State University

Version May 25, 2009

© 2009 John M. Erdman

E-mail address: erdman@pdx.edu


Contents

Preface
Some Algebraic Objects
Chapter 1. Abelian Groups
1.1. Background
1.2. Exercises (Due: Wed. Jan. 7)
Chapter 2. Functions and Diagrams
2.1. Background
2.2. Exercises (Due: Fri. Jan. 9)
Chapter 3. Rings
3.1. Background
3.2. Exercises (Due: Mon. Jan. 12)
Chapter 4. Vector Spaces and Subspaces
4.1. Background
4.2. Exercises (Due: Wed. Jan. 14)
Chapter 5. Linear Combinations and Linear Independence
5.1. Background
5.2. Exercises (Due: Fri. Jan. 16)
Chapter 6. Bases for Vector Spaces
6.1. Background
6.2. Exercises (Due: Wed. Jan. 21)
Chapter 7. Linear Transformations
7.1. Background
7.2. Exercises (Due: Fri. Jan. 23)
Chapter 8. Linear Transformations (continued)
8.1. Background
8.2. Exercises (Due: Mon. Jan. 26)
Chapter 9. Duality in Vector Spaces
9.1. Background
9.2. Exercises (Due: Wed. Jan. 28)
Chapter 10. Duality in Vector Spaces (continued)
10.1. Background
10.2. Exercises (Due: Fri. Jan. 30)
Chapter 11. The Language of Categories
11.1. Background
11.2. Exercises (Due: Mon. Feb. 2)
Chapter 12. Direct Sums
12.1. Background
12.2. Exercises (Due: Wed. Feb. 4)
Chapter 13. Products and Quotients
13.1. Background
13.2. Exercises (Due: Mon. Feb. 9)
Chapter 14. Products and Quotients (continued)
14.1. Background
14.2. Exercises (Due: Wed. Feb. 11)
Chapter 15. Projection Operators
15.1. Background
15.2. Exercises (Due: Fri. Feb. 13)
Chapter 16. Algebras
16.1. Background
16.2. Exercises (Due: Mon. Feb. 16)
Chapter 17. Spectra
17.1. Background
17.2. Exercises (Due: Wed. Feb. 18)
Chapter 18. Polynomials
18.1. Background
18.2. Exercises (Due: Fri. Feb. 20)
Chapter 19. Polynomials (continued)
19.1. Background
19.2. Exercises (Due: Mon. Feb. 23)
Chapter 20. Invariant Subspaces
20.1. Background
20.2. Exercises (Due: Wed. Feb. 25)
Chapter 21. The Spectral Theorem for Vector Spaces
21.1. Background
21.2. Exercises (Due: Fri. Feb. 27)
Chapter 22. The Spectral Theorem for Vector Spaces (continued)
22.1. Background
22.2. Exercises (Due: Mon. Mar. 2)
Chapter 23. Diagonalizable Plus Nilpotent Decomposition
23.1. Background
23.2. Exercises (Due: Wed. Mar. 4)
Chapter 24. Inner Product Spaces
24.1. Background
24.2. Exercises (Due: Mon. Mar. 9)
Chapter 25. Orthogonality and Adjoints
25.1. Background
25.2. Exercises (Due: Wed. Mar. 11)
Chapter 26. Orthogonal Projections
26.1. Background
26.2. Exercises (Due: Fri. Mar. 13)
Chapter 27. The Spectral Theorem for Inner Product Spaces
27.1. Background
27.2. Exercises (Due: Mon. Mar. 30)
Chapter 28. Multilinear Maps
28.1. Background
Differential calculus
Permutations
Multilinear maps
28.2. Exercises (Due: Wed. Apr. 1)
Chapter 29. Determinants
29.1. Background
29.2. Exercises (Due: Fri. Apr. 3)
Chapter 30. Free Vector Spaces
30.1. Background
30.2. Exercises (Due: Mon. Apr. 6)
Chapter 31. Tensor Products of Vector Spaces
31.1. Background
31.2. Exercises (Due: Wed. Apr. 8)
Chapter 32. Tensor Products of Vector Spaces (continued)
32.1. Background
32.2. Exercises (Due: Fri. Apr. 10)
Chapter 33. Tensor Products of Linear Maps
33.1. Background
33.2. Exercises (Due: Mon. April 13)
Chapter 34. Grassmann Algebras
34.1. Background
34.2. Exercises (Due: Wed. April 15)
Chapter 35. Graded Algebras
35.1. Background
35.2. Exercises (Due: Fri. April 17)
Chapter 36. Existence of Grassmann Algebras
36.1. Background
36.2. Exercises (Due: Mon. April 20)
Chapter 37. The Hodge ∗-operator
37.1. Background
37.2. Exercises (Due: Wed. April 22)
Chapter 38. Differential Forms
38.1. Background
38.2. Exercises (Due: Fri. April 24)
Chapter 39. The Exterior Differentiation Operator
39.1. Background
39.2. Exercises (Due: Mon. April 27)
Chapter 40. Differential Calculus on R3
40.1. Background
40.2. Exercises (Due: Wed. April 29)
Chapter 41. Closed and Exact Forms
41.1. Background
41.2. Exercises (Due: Mon. May 4)
Chapter 42. The de Rham Cohomology Group
42.1. Background
42.2. Exercises (Due: Wed. May 6)
Chapter 43. Cochain Complexes
43.1. Background
43.2. Exercises (Due: Fri. May 8)
Chapter 44. Simplicial Homology
44.1. Background
44.2. Exercises (Due: Wed. May 13)
Chapter 45. Simplicial Cohomology
45.1. Background
45.2. Exercises (Due: Fri. May 15)
Chapter 46. Integration of Differential Forms
46.1. Background
46.2. Exercises (Due: Mon. May 18)
Chapter 47. Stokes’ Theorem
47.1. Background
47.2. Exercises (Due: Wed. May 20)
Chapter 48. Quadratic Forms
48.1. Background
48.2. Exercises (Due: Fri. May 22)
Chapter 49. Definition of Clifford Algebra
49.1. Background
49.2. Exercises (Due: Wed. May 27)
Chapter 50. Orthogonality with Respect to Bilinear Forms
50.1. Background
50.2. Exercises (Due: Fri. May 29)
Chapter 51. Examples of Clifford Algebras
51.1. Background
51.2. Exercises (Due: Mon. June 1)
Bibliography
Index
Preface

This collection of exercises is designed to provide a framework for discussion in a two-term senior/first year graduate level class in linear and multilinear algebra such as the one I have conducted fairly regularly at Portland State University.
Most recently I have been using Douglas R. Farenick’s Algebras of Linear Transformations
(Springer-Verlag 2001) as a text for parts of the course. (In these Exercises it is referred to as
AOLT.) For the more elementary parts of linear algebra there is certainly no shortage of readily
available texts. In particular there are now a number of excellent online texts which are available
free of charge. Among the best are Linear Algebra [14] by Jim Hefferon,
http://joshua.smcvt.edu/linearalgebra
A First Course in Linear Algebra [2] by Robert A. Beezer,
http://linear.ups.edu/download/fcla-electric-2.00.pdf
and Linear Algebra [9] by Paul Dawkins.
http://tutorial.math.lamar.edu/pdf/LinAlg/LinAlg_Complete.pdf
Another very useful online resource is Przemyslaw Bogacki’s Linear Algebra Toolkit [4].
http://www.math.odu.edu/~bogacki/lat
And, of course, many topics in linear algebra are discussed with varying degrees of thoroughness
in the Wikipedia [28]
http://en.wikipedia.org
and Eric Weisstein’s Mathworld [27].
http://mathworld.wolfram.com
For more advanced topics in linear algebra some references that I particularly like are Paul Halmos's Finite-Dimensional Vector Spaces [13], Hoffman and Kunze's Linear Algebra [15], Charles W. Curtis's Linear Algebra: An Introductory Approach [8], Steven Roman's Advanced Linear Algebra [23], and William C. Brown's A Second Course in Linear Algebra [5]. Readable introductions
to some topics in multilinear algebra (exterior, graded, and Clifford algebras, for example) are a
bit harder to come by. For these I will suggest specific sources later in these Exercises.
The short introductory Background section which precedes each assignment is intended to
fix notation and provide “official” definitions and statements of important theorems for ensuing
discussions.

Some Algebraic Objects

Let S be a nonempty set. Consider the following axioms:


(1) + : S × S → S. ( + is a binary operation, called addition, on S)
(2) (x + y) + z = x + (y + z) for all x, y, z ∈ S. (associativity of addition)
(3) There exists 0S ∈ S such that x + 0S = 0S + x = x for all x ∈ S. (existence of an
additive identity)
(4) For every x ∈ S there exists −x ∈ S such that x + (−x) = (−x) + x = 0S . (existence
of additive inverses)
(5) x + y = y + x for all x, y ∈ S. (commutativity of addition)
(6) m : S × S → S : (x, y) ↦ xy. (the map (x, y) ↦ xy is a binary operation, called
multiplication, on S)
(7) (xy)z = x(yz) for all x, y, z ∈ S. (associativity of multiplication)
(8) (x+y)z = xz +yz and x(y +z) = xy +xz for all x, y, z ∈ S. (multiplication distributes
over addition)
(9) There exists 1S in S such that x 1S = 1S x = x for all x ∈ S. (existence of a
multiplicative identity or unit)
(10) 1S ≠ 0S .
(11) For every x ∈ S such that x ≠ 0S there exists x⁻¹ ∈ S such that xx⁻¹ = x⁻¹x = 1S .
(existence of multiplicative inverses)
(12) xy = yx for all x, y ∈ S. (commutativity of multiplication)
Definitions.
• (S, +) is a semigroup if it satisfies axioms (1)–(2).
• (S, +) is a monoid if it satisfies axioms (1)–(3).
• (S, +) is a group if it satisfies axioms (1)–(4).
• (S, +) is an Abelian group if it satisfies axioms (1)–(5).
• (S, +, m) is a ring if it satisfies axioms (1)–(8).
• (S, +, m) is a commutative ring if it satisfies axioms (1)–(8) and (12).
• (S, +, m) is a unital ring (or ring with identity) if it satisfies axioms (1)–(9).
• (S, +, m) is a division ring (or skew field) if it satisfies axioms (1)–(11).
• (S, +, m) is a field if it satisfies axioms (1)–(12).
Remarks.
• A binary operation is often written additively, (x, y) ↦ x + y, if it is commutative and
multiplicatively, (x, y) ↦ xy, if it is not. This is by no means always the case: in a
commutative ring (the real numbers or the complex numbers, for example), both addition
and multiplication are commutative.
• When no confusion is likely to result we often write 0 for 0S and 1 for 1S .
• Many authors require a ring to satisfy axioms (1)–(9).
• It is easy to see that axiom (10) holds in any unital ring except the trivial ring S = {0}.
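For example, the set Z of integers under ordinary addition and multiplication satisfies axioms (1)–(10) and (12), so (Z, +, ·) is a commutative unital ring; but axiom (11) fails:
$$2x = 1 \text{ has no solution } x \in \mathbb{Z}, \text{ so } 2^{-1} \text{ does not exist in } \mathbb{Z}.$$
By contrast Q, R, and C satisfy all twelve axioms and so are fields.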

CHAPTER 1

Abelian Groups

1.1. Background
Topics: groups, Abelian groups, group homomorphisms.

1.1.1. Definition. Let G and H be Abelian groups. A map f : G → H is a homomorphism if


f (x + y) = f (x) + f (y)
for all x, y ∈ G. We will denote by Hom(G, H) the set of all homomorphisms from G into H and
will abbreviate Hom(G, G) to Hom(G).

1.1.2. Definition. Let G and H be Abelian groups. For f and g in Hom(G, H) we define
f + g : G → H : x ↦ f (x) + g(x).
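For example, if f, g ∈ Hom(Z, Z) are given by f (x) = 2x and g(x) = 3x, then
$$(f + g)(x) = f(x) + g(x) = 5x,$$
and f + g is again a homomorphism; exercise 1.2.10 asks you to show that under this operation Hom(G, H) is always an Abelian group.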


1.2. Exercises (Due: Wed. Jan. 7)


1.2.1. Exercise. In AOLT, page 2, the author says that a vector space is, among other things,
“an additive Abelian group”. What sort of Abelian group is an additive Abelian group? Give an
example of a very common Abelian group which is not written additively.

1.2.2. Exercise. Prove that the identity element in an Abelian group is unique.

1.2.3. Exercise. Let x be an element of an Abelian group. Prove that the inverse of x is
unique.

1.2.4. Exercise. Let x be an element of an Abelian group. Prove that if x + x = x, then x = 0.

1.2.5. Exercise (†). Let x be an element of an Abelian group. Prove that −(−x) = x.

1.2.6. Exercise. Let f : G → H be a homomorphism of Abelian groups. Show that f (0) = 0.

1.2.7. Exercise (†). Let f : G → H be a homomorphism of Abelian groups. Show that f (−x) =
−f (x) for each x ∈ G.

1.2.8. Exercise. Let E2 be the Euclidean plane. It contains points (which do not have coordinates) and lines (which do not have equations). A directed segment is an ordered pair of points. Define two directed segments to be equivalent if they are congruent (have the same length), lie on parallel lines, and have the same direction. This is clearly an equivalence relation on the set DS of directed segments in the plane. We denote by $\overrightarrow{PQ}$ the equivalence class containing the directed segment (P, Q), going from the point P to the point Q. Define an operation on DS by
$$\overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{PR}.$$
Show that this operation is well defined and that under it DS is an Abelian group.

1.2.9. Exercise. Suppose that A, B, C, and D are points in the plane such that $\overrightarrow{AB} = \overrightarrow{CD}$. Show that $\overrightarrow{AC} = \overrightarrow{BD}$.
1.2.10. Exercise. Let G and H be Abelian groups. Show that with addition as defined in 1.1.2
Hom(G, H) is an Abelian group.
CHAPTER 2

Functions and Diagrams

2.1. Background
Topics: functions, commutative diagrams.
2.1.1. Definition. It is frequently useful to think of functions as arrows in diagrams. For example, the situation h : R → S, j : R → T , k : S → U , f : T → U may be represented by the following diagram.
$$\begin{CD}
R @>j>> T \\
@VhVV @VVfV \\
S @>>k> U
\end{CD}$$
The diagram is said to commute if k ◦ h = f ◦ j. Diagrams need not be rectangular. For instance, the triangle with arrows h : R → S, k : S → U , and the diagonal d : R → U is a commutative diagram if d = k ◦ h.
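Here is a concrete commuting square of the sort just described: take R = S = T = U = R, j(x) = 2x, f (x) = 3x, h(x) = 3x, and k(x) = 2x; then
$$(k \circ h)(x) = 6x = (f \circ j)(x),$$
so the diagram commutes.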
2.1.2. Example. Here is one diagrammatic way of stating the associative law for composition of functions: if h : R → S, g : S → T , and f : T → U and we define j := g ◦ h and k := f ◦ g (so that the two triangles cut from the square of 2.1.1 by the diagonal arrow g : S → T commute), then the square also commutes; that is, f ◦ j = k ◦ h.
2.1.3. Convention. If S, T , and U are sets we will not distinguish between (S × T ) × U , S × (T × U ), and S × T × U . That is, the ordered pairs ((s, t), u) and (s, (t, u)) and the ordered triple (s, t, u) will be treated as identical.

2.1.4. Notation. Let S be a set. The map


idS : S → S : x ↦ x
is the identity function on S. When no confusion will result we write id for idS .

2.1.5. Notation. Let f : S → U and g : T → V be functions between sets. Then f × g denotes


the map
f × g : S × T → U × V : (s, t) ↦ (f (s), g(t)).

2.1.6. Convention. We will have use for a standard one-element set, which, if we wish, we
can regard as the Cartesian product of an empty family of sets. We will denote it by 1. For each
set S there is exactly one function from S into 1. We will denote it by εS . If no confusion is likely
to arise we write ε for εS .

2.1.7. Notation. We denote by δ the diagonal mapping of a set S into S × S. That is,
δ : S → S × S : s ↦ (s, s).

2.1.8. Notation. Let S be a set. We denote by σ the interchange (or switching) operation on S × S. That is,
σ : S × S → S × S : (s, t) ↦ (t, s).

2.1.9. Notation. If S and T are sets we denote by F(S, T ) the family of all functions from S
into T . We will use F(S) for F(S, R), the set of all real valued functions on S.

2.2. Exercises (Due: Fri. Jan. 9)


2.2.1. Exercise. Let S be a set and a : S × S → S be a function such that the diagram (D1)
$$\begin{CD}
S\times S\times S @>a\times\mathrm{id}>> S\times S \\
@V\mathrm{id}\times aVV @VVaV \\
S\times S @>>a> S
\end{CD}$$
commutes; that is, a ◦ (a × id) = a ◦ (id × a). What is (S, a)? Hint. Interpret a as, for example, addition (or multiplication).
2.2.2. Exercise. Let S be a set and suppose that a : S × S → S and η : 1 → S are functions such that both diagram (D1) above and the diagram (D2) which follows commute. Diagram (D2) has arrows η × id : 1 × S → S × S and id × η : S × 1 → S × S, the map a : S × S → S, and the obvious bijections f : 1 × S → S and g : S × 1 → S; its commutativity says
$$a \circ (\eta \times \mathrm{id}) = f \qquad\text{and}\qquad a \circ (\mathrm{id} \times \eta) = g. \tag{D2}$$
What is (S, a, η)?


2.2.3. Exercise (†). Let S be a set and suppose that a : S × S → S and η : 1 → S are functions such that the diagrams (D1) and (D2) above commute. Suppose further that there is a function ι : S → S for which the diagram (D3), built from the arrows δ : S → S × S, ι × id and id × ι : S × S → S × S, a : S × S → S, ε : S → 1, and η : 1 → S, commutes; that is,
$$a \circ (\iota \times \mathrm{id}) \circ \delta = \eta \circ \varepsilon = a \circ (\mathrm{id} \times \iota) \circ \delta. \tag{D3}$$
What is (S, a, η, ι)?
2.2.4. Exercise. Let S be a set and suppose that a : S × S → S, η : 1 → S, and ι : S → S are functions such that the diagrams (D1), (D2), and (D3) above commute. Suppose further that the triangle (D4), with top arrow σ : S × S → S × S and both slanted arrows a : S × S → S, commutes; that is,
$$a \circ \sigma = a. \tag{D4}$$
What is (S, a, η, ι, σ)?
2.2.5. Exercise. Let f : G → H be a function between Abelian groups. Suppose that the diagram
$$\begin{CD}
G\times G @>f\times f>> H\times H \\
@V+VV @VV+V \\
G @>>f> H
\end{CD}$$
commutes. What can be said about the function f ?
2.2.6. Exercise (††). Let S be a set with exactly one element. Discuss the cardinality of (that is, the number of elements in) the sets F(∅, ∅), F(∅, S), F(S, ∅), and F(S, S).
CHAPTER 3

Rings

3.1. Background
Topics: rings; zero divisors; cancellation property; ring homomorphisms.

3.1.1. Definition. An element a of a ring is left cancellable if ab = ac implies that b = c.


It is right cancellable if ba = ca implies that b = c. A ring has the cancellation property
if every nonzero element of the ring is both left and right cancellable.

3.1.2. Definition. A nonzero element a of a ring is a zero divisor (or divisor of zero) if
there exists a nonzero element b of the ring such that (i) ab = 0 or (ii) ba = 0.
Most everyone agrees that a nonzero element a of a ring is a left divisor of zero if it satisfies (i) for some nonzero b and a right divisor of zero if it satisfies (ii) for some nonzero b. There, agreement on terminology ceases.
on terminology ceases. Some authors ([7], for example) use the definition above for divisor of zero;
others ([17], for example) require a divisor of zero to be both a left and a right divisor of zero; and
yet others ([18], for example) avoid the issue entirely by defining zero divisors only for commutative
rings. Palmer in [22] makes the most systematic distinctions: a zero divisor is defined as above;
an element which is both a left and a right zero divisor is a two-sided zero divisor ; and if the same
nonzero b makes both (i) and (ii) hold a is a joint zero divisor.
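For example, in the ring $\mathbb{Z}_6$ of integers modulo 6,
$$2 \cdot 3 = 0 \qquad\text{and}\qquad 3 \cdot 2 = 0,$$
so 2 and 3 are zero divisors in every one of the senses above (in Palmer's terminology they are even joint zero divisors).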
3.1.3. Definition. A function f : R → S between rings is a (ring) homomorphism if
f (x + y) = f (x) + f (y)
and
f (xy) = f (x)f (y)
for all x and y in R. If in addition R and S are unital rings and f (1R ) = 1S then f is a unital
(ring) homomorphism.
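For example, for any integer n ≥ 2 the reduction map
$$f : \mathbb{Z} \to \mathbb{Z}_n : x \mapsto x \bmod n$$
preserves sums and products and takes 1 to 1, so it is a unital ring homomorphism.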


3.2. Exercises (Due: Mon. Jan. 12)


3.2.1. Exercise. Show that the additive identity of a ring is an annihilator. That is, show that for every element a of a ring, 0a = a0 = 0.
3.2.2. Exercise. Show that if a and b are elements of a ring, then (−a)b = a(−b) = −ab and
(−a)(−b) = ab.

3.2.3. Exercise. Every division ring has the cancellation property.

3.2.4. Exercise. A division ring has no zero divisors. That is, if ab = 0 in a division ring, then
a = 0 or b = 0.

3.2.5. Exercise (†). A ring has the cancellation property if and only if it has no zero divisors.

3.2.6. Exercise. Let G, H, and J be Abelian groups. If f ∈ Hom(G, H) and g ∈ Hom(H, J),
then the composite of g with f belongs to Hom(G, J). Note: This composite is usually written as
gf rather than as g ◦ f .

3.2.7. Exercise. Let G be an Abelian group. Then Hom(G) is a unital ring (under the opera-
tions of addition and composition).

3.2.8. Exercise. Let (G, +) be an Abelian group, F a field, and M : F → Hom(G) be a unital
ring homomorphism. What is (G, +, M )?
CHAPTER 4

Vector Spaces and Subspaces

4.1. Background
Topics: vector spaces, linear (or vector) subspaces.

4.1.1. Notation.
F = F[a, b] = {f : f is a real valued function on the interval [a, b]}
P = P[a, b] = {p : p is a polynomial function on [a, b]}
P4 = P4 [a, b] = {p ∈ P : the degree of p is three or less}
Q4 = Q4 [a, b] = {p ∈ P : the degree of p is equal to 4}
C = C[a, b] = {f ∈ F : f is continuous}
D = D[a, b] = {f ∈ F : f is differentiable}
K = K[a, b] = {f ∈ F : f is a constant function}
B = B[a, b] = {f ∈ F : f is bounded}
J = J [a, b] = {f ∈ F : f is integrable}
(A function f ∈ F is bounded if there exists a number M ≥ 0 such that |f (x)| ≤ M for all x in [a, b]. It is (Riemann) integrable if it is bounded and the Riemann integral $\int_a^b f(x)\,dx$ exists.)

4.1.2. Notation. For a vector space V we will write U ≼ V to indicate that U is a subspace of V . To distinguish this concept from other uses of the word “subspace” (topological subspace, for example) writers frequently use the expressions linear subspace, vector subspace, or linear manifold.
4.1.3. Definition. If A and B are subsets of a vector space then the sum of A and B, denoted
by A + B, is defined by
A + B := {a + b : a ∈ A and b ∈ B}.
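For example, if A is the x-axis and B is the y-axis in R3, then
$$A + B = \{(a, b, 0) : a, b \in \mathbb{R}\},$$
the xy-plane. (Exercise 4.2.10 asks whether the sum of two subspaces is again a subspace.)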


4.2. Exercises (Due: Wed. Jan. 14)


4.2.1. Exercise (†). Show that condition (b)(ii) in the author’s definition of vector space
(AOLT, page 2) is redundant. Using only the other vector space axioms and exercise 1.2.4 prove
that in fact αv = 0 if and only if α = 0 or v = 0.
4.2.2. Exercise. Let x be an element of a vector space. Using only the vector space axioms
and the preceding exercises prove that (−1)x is the additive inverse of x. That is, show that
(−1)x = −x.

4.2.3. Exercise. Let V be the set of all real numbers. Define an operation of “addition” by
x ⊕ y = the maximum of x and y
for all x, y ∈ V . Define an operation of “scalar multiplication” by
α ⊙ x = αx
for all α ∈ R and x ∈ V . Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector space.
4.2.4. Exercise (†). Let V be the set of all real numbers x such that x > 0. Define an operation of “addition” by
x ⊕ y = xy
for all x, y ∈ V . Define an operation of “scalar multiplication” by
α ⊙ x = x^α
for all α ∈ R and x ∈ V . Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector space.
4.2.5. Exercise. Let V be R2, the set of all ordered pairs (x, y) of real numbers. Define an operation of “addition” by
(u, v) ⊕ (x, y) = (u + x + 1, v + y + 1)
for all (u, v) and (x, y) in V . Define an operation of “scalar multiplication” by
α ⊙ (x, y) = (αx, αy)
for all α ∈ R and (x, y) ∈ V . Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector space.

4.2.6. Exercise. Find the smallest subspace of R3 containing the vectors (2, −3, −3) and
(0, 3, 2).

4.2.7. Exercise. The author's definition of “subspace” (AOLT, page 2) is not quite correct (especially in view of the sentence following the definition). Explain why. Provide a correct definition.

4.2.8. Exercise. Let α, β, and γ be real numbers. Prove that the set of all solutions to the differential equation
αy′′ + βy′ + γy = 0
is a subspace of the vector space of all twice differentiable functions on R.

4.2.9. Exercise. For a fixed interval [a, b], which sets of functions in the notation list 4.1.1 are
vector subspaces of which?

4.2.10. Exercise (††). Let U and V be subspaces of a vector space W , and let {Vλ : λ ∈ Λ} be
a (perhaps infinite) family of subspaces of W . Consider the following subsets of W .

(a) U ∩ V .
(b) U ∪ V .
(c) U + V .
(d) U − V .
(e) ∩λ∈Λ Vλ .
Which of these are subspaces of W ?

4.2.11. Exercise. Let R∞ denote the (real) vector space of all sequences $x = (x_k) = (x_k)_{k=1}^\infty = (x_1, x_2, x_3, \dots)$ of real numbers. Define addition and scalar multiplication pointwise. That is,
x + y = (x1 + y1, x2 + y2, x3 + y3, . . . )
for all x, y ∈ R∞ and
αx = (αx1, αx2, αx3, . . . )
for all α ∈ R and x ∈ R∞. Which of the following are subspaces of R∞?
(a) the sequences which have infinitely many zeros.
(b) the sequences which have only finitely many nonzero terms.
(c) the decreasing sequences.
(d) the convergent sequences.
(e) the sequences which converge to zero.
(f) the arithmetic progressions (that is, sequences such that $x_{k+1} - x_k$ is constant).
(g) the geometric progressions (that is, sequences such that $x_{k+1}/x_k$ is constant).
(h) the bounded sequences (that is, sequences for which there is a number M > 0 such that $|x_k| \le M$ for all k).
(i) the absolutely summable sequences (that is, sequences such that $\sum_{k=1}^\infty |x_k| < \infty$).
CHAPTER 5

Linear Combinations and Linear Independence

5.1. Background
Topics: linear combinations, linear independence, span. (See AOLT, p. 2.)

5.1.1. Definition. Let A be a subset of a vector space V . The span of A is the set of all linear
combinations of elements of A. It is denoted by span(A) (or by spanF (A) if we wish to emphasize
the role of the scalar field F). The subset A spans the space V if V = span(A). In this case we
also say that A is a spanning set for V .
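For example, in R3,
$$\mathrm{span}\{(1, 0, 0), (0, 1, 0)\} = \{(\alpha, \beta, 0) : \alpha, \beta \in \mathbb{R}\},$$
the xy-plane; this two-element set spans the plane but does not span R3.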


5.2. Exercises (Due: Fri. Jan. 16)


5.2.1. Exercise. Prove that if A is a nonempty subset of a vector space V , then span A is a
subspace of V . (This is the claim made in AOLT, page 2, lines 19–20.)

5.2.2. Exercise. Let A be a nonempty subset of a vector space V . What, exactly, do we mean
when we speak of the smallest subspace of V which contains A?

5.2.3. Exercise. Let A be a nonempty subset of a vector space V . Then the following sets are
equal:
(a) span(A);
(b) the intersection of the family of all subspaces of V which contain A; and
(c) the smallest subspace of V which contains A.
5.2.4. Exercise. Verify that supersets of linearly dependent sets are linearly dependent and that subsets of linearly independent sets are linearly independent. That is, show that if V is a vector
space, A is a linearly dependent subset of V , and B is a subset of V which contains A, then B is
linearly dependent. Also show that if B is a linearly independent subset of V and A is contained
in B, then A is linearly independent.
5.2.5. Exercise. Let w = (1, 1, 0, 0), x = (1, 0, 1, 0), y = (0, 0, 1, 1), and z = (0, 1, 0, 1).
(a) Show that {w, x, y, z} does not span (that is, generate) R4 by finding a vector u in R4 such that u ∉ span(w, x, y, z).
(b) Show that {w, x, y, z} is a linearly dependent set of vectors by finding scalars α, β, γ, and
δ—not all zero—such that αw + βx + γy + δz = 0.
(c) Show that {w, x, y, z} is a linearly dependent set by writing z as a linear combination of
w, x, and y.
5.2.6. Exercise. In the vector space C[0, π] of continuous functions on the interval [0, π] define
the vectors f , g, and h by
f (x) = x
g(x) = sin x
h(x) = cos x
for 0 ≤ x ≤ π. Show that f , g, and h are linearly independent.
5.2.7. Exercise. In the vector space C[0, π] of continuous functions on [0, π] let f , g, h, and j be the vectors defined by
f (x) = 1
g(x) = x
h(x) = cos x
j(x) = cos²(x/2)
for 0 ≤ x ≤ π. Show that f , g, h, and j are linearly dependent by writing j as a linear combination of f , g, and h.
5.2.8. Exercise. Let a, b, and c be distinct real numbers. Show that the vectors (1, 1, 1), (a, b, c), and (a², b², c²) form a linearly independent subset of R3.
5.2.9. Exercise (†). In the vector space C[0, 1] define the vectors f , g, and h by
f (x) = x
g(x) = e^x
h(x) = e^{−x}
for 0 ≤ x ≤ 1. Are f , g, and h linearly independent?


5.2.10. Exercise. Let u = (λ, 1, 0), v = (1, λ, 1), and w = (0, 1, λ). Find all values of λ which
make {u, v, w} a linearly dependent subset of R3 .
5.2.11. Exercise (†). Suppose that {u, v, w} is a linearly independent set in a vector space V .
Show that the set {u + v, u + w, v + w} is linearly independent in V .
CHAPTER 6

Bases for Vector Spaces

6.1. Background
Topics: basis, dimension, partial ordering, comparable elements, linear ordering, chain, maximal
element, largest element, axiom of choice, Zorn’s lemma. (See AOLT, page 2 and section 1.6.)

6.1.1. Definition. A relation on a set S is a subset of the Cartesian product S × S. If the


relation is denoted by ≤, then it is conventional to write x ≤ y (or equivalently, y ≥ x) rather than
(x, y) ∈ ≤ .

6.1.2. Definition. A relation ≤ on a set S is reflexive if x ≤ x for all x ∈ S. It is transitive


if x ≤ z whenever x ≤ y and y ≤ z. It is antisymmetric if x = y whenever x ≤ y and y ≤ x.
A relation which is reflexive, transitive, and antisymmetric is a partial ordering. A partially
ordered set is a set on which a partial ordering has been defined.

6.1.3. Example. The set R of real numbers is a partially ordered set under the usual relation ≤.

6.1.4. Example. A family A of subsets of a set S is a partially ordered set under the relation ⊆.
When A is ordered in this fashion it is said to be ordered by inclusion.

6.1.5. Example. Let F(S) be the family of real valued functions defined on a set S. For f ,
g ∈ F(S) write f ≤ g if f (x) ≤ g(x) for every x ∈ S. This is a partial ordering on F(S).

6.1.6. Definition. Let A be a subset of a partially ordered set S. An element u ∈ S is an


upper bound for A if a ≤ u for every a ∈ A. An element m in the partially ordered set S is
maximal if there is no element of the set which is strictly greater than m; that is, m is maximal
if c = m whenever c ∈ S and c ≥ m. An element m in S is the largest element of S if m ≥ s for
every s ∈ S.

6.1.7. Example. Let S = {a, b, c} be a three-element set. The family P(S) of all subsets of S
is partially ordered by inclusion. Then S is the largest element of P(S)—and, of course, it is also a
maximal element of P(S). The family Q(S) of all proper subsets of S has no largest element; but
it has three maximal elements {b, c}, {a, c}, and {a, b}.

6.1.8. Definition. A linearly independent subset of a vector space V which spans V is a


(Hamel) basis for V . We make a special convention that the empty set is a basis for the trivial
vector space {0}.
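For example, the vectors
$$e_1 = (1, 0, \dots, 0),\ \dots,\ e_n = (0, \dots, 0, 1)$$
form a (Hamel) basis for Rⁿ, and the monomials $p_k(t) = t^k$ (k = 0, 1, 2, . . . ) form a basis for the space P of polynomial functions, since every polynomial is a unique finite linear combination of monomials.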

6.1.9. Definition. Let A be a subset of a partially ordered set S. An element l ∈ S is a lower


bound for A if l ≤ a for every a ∈ A. An element l in the partially ordered set S is minimal if
there is no element of the set which is strictly less than l; that is, l is minimal if c = l whenever
c ∈ S and c ≤ l. An element l in S is the smallest element of S if l ≤ s for every s ∈ S.

6.1.10. Definition. Two elements x and y in a partially ordered set are comparable if either
x ≤ y or y ≤ x. A chain (or linearly ordered subset) is a subset of a partially ordered set in
which every pair of elements is comparable.

6.1.11. Axiom (Axiom of Choice). If A = {Aλ : λ ∈ Λ} is a nonempty family of nonempty pairwise disjoint sets, then there exists a function f : Λ → ⋃A such that f (λ) ∈ Aλ for every λ ∈ Λ.

6.1.12. Axiom (Zorn’s Lemma). If each chain in a nonempty partially ordered set S has an
upper bound in S, then S has a maximal element. (See AOLT, page 31, lines 4–7.)

The following theorem is an extremely important item in the set-theoretic foundations of math-
ematics. Its proof is difficult and can be found in any good book on set theory.
6.1.13. Theorem. The axiom of choice and Zorn’s lemma are equivalent.

6.2. Exercises (Due: Wed. Jan. 21)


6.2.1. Exercise. A linearly independent subset of a vector space V is a basis for V if and only
if it is a maximal linearly independent subset. (This is AOLT, proposition 1.35.)

6.2.2. Exercise (††). A spanning subset for a nontrivial vector space V is a basis for V if and
only if it is a minimal spanning set for V .

6.2.3. Exercise. Let {e1 , . . . , en } be a basis for a vector space V . Then for each v ∈ V there exist unique scalars α1 , . . . , αn such that $v = \sum_{k=1}^n \alpha_k e_k$.

6.2.4. Exercise. Let {e1 , . . . , en } be a basis for a vector space V and $v = \sum_{k=1}^n \alpha_k e_k$ be a vector in V . If αp ≠ 0 for some p, then {e1 , . . . , ep−1 , v, ep+1 , . . . , en } is a basis for V .

6.2.5. Exercise. If some basis for a vector space V contains n elements, then every linearly
independent subset of V with n elements is also a basis. Hint. Suppose {e1 , . . . , en } is a basis for V
and {v1 , . . . , vn } is linearly independent in V . Start by using exercise 6.2.4 to show that (after
perhaps renumbering the ek ’s) the set {v1 , e2 , . . . , en } is a basis for V .

6.2.6. Exercise. In a finite dimensional vector space V any two bases have the same number
of elements. (This establishes the claim made in AOLT, page 2, lines 25–27.) This number is the
dimension of V , and is denoted by dim V .

6.2.7. Exercise. Let S3 be the vector space of all symmetric 3 × 3 matrices of real numbers.
(A matrix [aij ] is symmetric if aij = aji for all i and j.)
(a) What is the dimension of S3 ?
(b) Find a basis for S3 .

6.2.8. Exercise. AOLT, section 1.8, exercise 1.


 
6.2.9. Exercise (†). Let U be the set of all matrices of real numbers of the form $\begin{pmatrix} u & -u-x \\ 0 & x \end{pmatrix}$ and V be the set of all real matrices of the form $\begin{pmatrix} v & 0 \\ w & -v \end{pmatrix}$. Find bases for U, V, U + V, and U ∩ V.

6.2.10. Exercise. Let A be a linearly independent subset of a vector space V . Then there exists a basis for V which contains A. (This is AOLT, Theorem 1.36. Can you think of any way to shorten the proof?)

6.2.11. Exercise. Every vector space has a basis. (This is AOLT, Corollary 1.37.)
CHAPTER 7

Linear Transformations

7.1. Background
Topics: linear maps, kernel, range, invertible linear map, isomorphism.

7.1.1. Definition. Let V and W be vector spaces over the same field F. A function T : V → W
is linear if T (x + y) = T x + T y and T (αx) = αT x for all x, y ∈ V and α ∈ F.

7.1.2. Notation. If V and W are vector spaces the family of all linear functions from V into
W is denoted by L(V, W ). Linear functions are usually called linear transformations, linear maps,
or linear mappings. When V = W we condense the notation L(V, V ) to L(V ) and we call the
members of L(V ) operators.

7.1.3. Definition. Let T : V → W be a linear transformation between vector spaces. Then


ker T , the kernel (or nullspace) of T is defined to be the set of all x in V such that T x = 0.
Also, ran T , the range of T (or the image of T ), is the set of all y in W such that y = T x for
some x in V . The rank of T is the dimension of its range and the nullity of T is the dimension
of its kernel.
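For example, for the linear map
$$T : \mathbb{R}^3 \to \mathbb{R}^2 : (x, y, z) \mapsto (x, y)$$
we have ker T = {(0, 0, z) : z ∈ R} (the z-axis) and ran T = R2, so the rank of T is 2 and its nullity is 1.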

7.1.4. Definition. Let T : V → W be a linear transformation between vector spaces and let U
be a subspace of V . Define T → (U ) := {T x : x ∈ U }. This is the (direct) image of U under T .

7.1.5. Definition. Let T : V → W be a linear transformation between vector spaces and let
U be a subspace of W . Define T ← (U ) := {x ∈ V : T x ∈ U }. This is the inverse image of U
under T .

7.1.6. Definition. A linear map T : V → W between vector spaces is left invertible (or
has a left inverse, or is a section) if there exists a linear map L : W → V such that LT = 1V .
(Note: 1V is the identity mapping v 7→ v on V . See AOLT, page 4, lines -13 to -10, convention.
Other notations for the identity map on V are idV and IV .) The map T is right invertible
(or has a right inverse, or is a retraction) if there exists a linear map R : W → V such that
T R = 1W . We say that T is invertible (or has an inverse, or is an isomorphism) if it is both
left invertible and right invertible. If there exists an isomorphism between two vector spaces V and
W , we say that the spaces are isomorphic and we write V ≅ W .
Note: This is not the definition of isomorphism given in AOLT (definition on page 3, lines 17–18).
You will prove in exercise 7.2.11 that the two definitions are equivalent.


7.2. Exercises (Due: Fri. Jan. 23)


7.2.1. Exercise. Use the notation of exercise 3.2.8 and suppose that (V, +, M ) and (W, +, M ) are vector spaces over a common field F and that T ∈ Hom(V, W ) is such that the diagram
$$\begin{CD}
V @>T>> W \\
@VM_\alpha VV @VVM_\alpha V \\
V @>>T> W
\end{CD}$$
commutes for every α ∈ F. What can be said about the homomorphism T ?


7.2.2. Exercise. Let T : R3 → R3 : x ↦ (x1 + 3x2 − 2x3 , x1 − 4x3 , x1 + 6x2 ).
(a) Identify the kernel of T by describing it geometrically and by giving its equation(s).
(b) Identify the range of T by describing it geometrically and by giving its equation(s).
7.2.3. Exercise. Let T be the linear map from R3 to R3 defined by
T (x, y, z) = (2x + 6y − 4z, 3x + 9y − 6z, 4x + 12y − 8z).
Describe the kernel of T geometrically and give its equation(s). Describe the range of T geometri-
cally and give its equation(s).
7.2.4. Exercise. Let C = C[a, b] be the vector space of all continuous real valued functions on the interval [a, b] and C¹ = C¹[a, b] be the vector space of all continuously differentiable real valued functions on [a, b]. (Recall that a function is continuously differentiable if it has a derivative and the derivative is continuous.) Let D : C¹ → C be the linear transformation defined by
Df = f′
and let T : C → C¹ be the linear transformation defined by
$$(Tf)(x) = \int_a^x f(t)\,dt$$
for all f ∈ C and x ∈ [a, b].
(a) Compute (and simplify) (DT f )(x).
(b) Compute (and simplify) (T Df )(x).
(c) Find the kernel of T .
(d) Find the range of T .
7.2.5. Exercise. Let T : V → W be a linear map between vector spaces and U ≼ V . Show
that T → (U ) is a subspace of W . Conclude that the range of a linear map is a subspace of the
codomain of the map. (This establishes the claim made in AOLT, page 3, lines 15–16.)
7.2.6. Exercise. Let T : V → W be a linear map between vector spaces and U ≼ W . Show
that T ← (U ) is a subspace of V . Conclude that the kernel of a linear map is a subspace of the
domain of the map. (This establishes the claim made in AOLT, page 3, lines 13–14.)
7.2.7. Exercise (†). Let T : V → W be a linear map between vector spaces. Show that T is
injective (that is, one-to-one) if and only if ker T = {0}.
7.2.8. Exercise. Show that an operator T ∈ L(V ) is invertible if it satisfies the equation T² − T + 1V = 0.
7.2.9. Exercise. Let s be the vector space of all sequences of real numbers. (Addition and
scalar multiplication are defined pointwise.) Define a linear map T by
T : s → s : (a1 , a2 , a3 , . . . ) ↦ (0, a1 , a2 , . . . ).
Is T left invertible? right invertible? injective? surjective?

7.2.10. Exercise. If a linear map T : V → W between two vector spaces has both a left inverse and a right inverse, then these inverses are equal; so there exists a linear map T⁻¹ : W → V such that T⁻¹T = 1V and T T⁻¹ = 1W .
7.2.11. Exercise. A linear map between vector spaces is invertible if and only if it is bijective
(that is, one-to-one and onto).
CHAPTER 8

Linear Transformations (continued)

8.1. Background
Topics: matrix representations of linear maps.

8.1.1. Convention. The space Pn ([a, b]) of polynomial functions of degree strictly less than n ∈ N on the interval [a, b] (where a < b) is a vector space of dimension n. For each n = 0, 1, 2, . . . let pn (t) = t^n for all t ∈ [a, b]. Then we take {p0 , p1 , p2 , . . . , pn−1 } to be the standard basis for Pn ([a, b]).
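For instance, with the common convention that the kth column of a matrix representation lists the coordinates (relative to the chosen basis) of the image of the kth basis vector, the differentiation operator D : P3([a, b]) → P3([a, b]) : f ↦ f′ is represented relative to the standard basis {p0 , p1 , p2 } by
$$[D] = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix},$$
since Dp0 = 0, Dp1 = p0 , and Dp2 = 2p1 .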


8.2. Exercises (Due: Mon. Jan. 26)


8.2.1. Exercise (†). Let P4 be the vector space of polynomials of degree less than 4. Consider the linear transformation D² : P4 → P4 : f ↦ f′′.
(a) Find the matrix representation of D² (with respect to the usual basis {1, t, t², t³} for P4 ).
(b) Find the kernel of D2 .
(c) Find the range of D2 .
8.2.2. Exercise. Let T : P4 → P5 be the linear transformation defined by (T p)(t) = (2+3t)p(t)
for every p ∈ P4 and t ∈ R. Find the matrix representation of T with respect to the usual bases
for P4 and P5 .
8.2.3. Exercise. Show that an operator T on a vector space V is invertible if it has a unique
right inverse. Hint. Consider ST + S − idV , where S is the unique right inverse for T .
8.2.4. Exercise. Let T : V → W be a linear map between vector spaces. If A is a subset of V
such that T → (A) is linearly independent, then A is linearly independent. Can you think of a way
of saying this without using any symbols?
8.2.5. Exercise (†). Let T : V → W be a linear map between vector spaces. If T is injective
and A is a linearly independent subset of V , then T → (A) is a linearly independent subset of W .
8.2.6. Exercise. Let T : V → W be a linear map between vector spaces and A ⊆ V . Then
T → (span A) = span T → (A).
8.2.7. Exercise. Let V and W be vector spaces over a field F and E be a basis for V . Every
function f : E → W can be extended in a unique fashion to a linear map T : V → W .

8.2.8. Exercise. Let T : V → W be a linear map between vector spaces. If T is injective and
B is a basis for a subspace U of V , then T → (B) is a basis for T → (U ).

8.2.9. Exercise. Let T : V → W be a linear map between vector spaces. If V is spanned by a


set B of vectors and T → (B) is a basis for W , then B is a basis for V and T is an isomorphism.

8.2.10. Exercise. Prove that a linear transformation T : R3 → R2 cannot be one-to-one and


that a linear transformation S : R2 → R3 cannot be onto. What is the most general version of these
assertions that you can invent (and prove)?

8.2.11. Exercise. Suppose that V and W are finite dimensional vector spaces of the same finite
dimension and that T : V → W is a linear map. Then the following are equivalent:
(a) T is injective;
(b) T is surjective; and
(c) T is invertible.
CHAPTER 9

Duality in Vector Spaces

9.1. Background
Topics: linear functionals, dual space, dual basis. (See AOLT, pages 4–6.)

9.1.1. Definition. Let S be a nonempty set, F be a field, and f be an F-valued function on S. The support of f , denoted by supp f , is {x ∈ S : f (x) ≠ 0}.

9.1.2. Notation. Let S be a nonempty set and F be a field. We denote by l(S) (or by l(S, F),
or by FS , or by F(S, F)) the family of all functions α : S → F. For x ∈ l(S) we frequently write the
value of x at s ∈ S as xs rather than x(s). (Sometimes it seems a good idea to reduce the number
of parentheses cluttering a page.) Furthermore we denote by lc (S) (or by lc (S, F), or by Fc (S))
the family of all functions α : S → F with finite support; that is, those functions on S which are
nonzero at only finitely many elements of S.


9.2. Exercises (Due: Wed. Jan. 28)


9.2.1. Exercise (†). Let V be a vector space (over a field F) with basis B. Define Ω : lc (B) → V by
$$\Omega(v) = \sum_{e\in B} v(e)\,e.$$
Prove that l(B) (under the usual pointwise operations) is a vector space, that lc (B) is a subspace of l(B), and that Ω is an isomorphism.
9.2.2. Exercise. A vector v in a vector space V with basis B can be written in one and only
one way as a linear combination of elements in B.
9.2.3. Exercise. Let V and W be vector spaces and B be a basis for V . Every function
T0 : B → W can be extended in one and only one way to a linear transformation T : V → W .
9.2.4. Convention. If in exercise 9.2.1 we write v for Ω(v) then we see that
$$v = \sum_{e\in B} v(e)\,e. \tag{*}$$
The isomorphism Ω creates a one-to-one correspondence between functions v in lc (B) and vectors v in V . This is an extension of the usual notation in Rn where we write a vector v in terms of its components:
$$v = \sum_{k=1}^n v_k e_k.$$
We will go even further and use Ω to identify V with lc (B) and write
$$v = \sum_{e\in B} v(e)\,e$$
instead of (*). That is, in a vector space with basis we will treat a vector as a scalar valued function on its basis. (Of course, if this identification ever threatens to cause confusion we can always go back to the v or Ω(v) notation.)
9.2.5. Exercise. According to convention 9.2.4 above, what is the value of f (e) when e and f
are elements of the basis B?
9.2.6. Exercise. Let V be a vector space with basis B. For every v ∈ V define a function v∗ on V by
$$v^*(x) = \sum_{e\in B} x(e)\,v(e) \qquad\text{for all } x \in V.$$
Then v∗ is a linear functional on V .
9.2.7. Notation. In the preceding exercise 9.2.6 the value v∗(x) of v∗ at x is often written ⟨x, v⟩.

9.2.8. Exercise. Consider the notation 9.2.7 above in the special case that the scalar field F = R. Then ⟨ , ⟩ is an inner product on the vector space V . (See the definition of inner product on page 11 of AOLT.)
9.2.9. Exercise. In the special case that the scalar field F = C, things above are usually done a bit differently. For v ∈ V the function v∗ is defined by
$$v^*(x) = \langle x, v\rangle = \sum_{e\in B} x(e)\,\overline{v(e)}.$$
Why do you think things are done this way?

9.2.10. Exercise. Let v be a nonzero vector in a vector space V and E be a basis for V which
contains the vector v. Then there exists a linear functional φ ∈ V ∗ such that φ(v) = 1 and φ(e) = 0
for every e ∈ E \ {v}.
9.2.11. Corollary. If v is a vector in a vector space V and φ(v) = 0 for every φ ∈ V ∗ , then
v = 0.
9.2.12. Exercise. Let V be a vector space with basis B. The map Φ : V → V ∗ : v ↦ v∗ (see exercise 9.2.6) is linear and injective.
9.2.13. Exercise (Riesz-Fréchet theorem for vector spaces). (†) In the preceding exercise 9.2.12
the map Φ is an isomorphism if V is finite dimensional. Thus for every φ ∈ V ∗ there exists a unique
vector a ∈ V such that a∗ = φ.

9.2.14. Exercise. If V is a finite dimensional vector space with basis {e1 , . . . , en }, then $\{e_1^*, \dots, e_n^*\}$ is the dual basis for V ∗. (See AOLT, page 5, line −7 and Proposition 1.1.)
CHAPTER 10

Duality in Vector Spaces (continued)

10.1. Background
Topics: annihilators, pre-annihilators, trace. (See AOLT, page 7.)

10.1.1. Notation. Let V be a vector space and M ⊆ V . Then


M ⊥ := {f ∈ V ∗ : f (x) = 0 for all x ∈ M }
We say that M ⊥ is the annihilator of M . (The reasons for using the familiar “orthogonal
complement” notation M ⊥ will become apparent when we study inner product spaces, where
“orthogonality” actually makes sense.)
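For example, if M = {(x, 0) : x ∈ R} is the x-axis in R2, then
$$M^\perp = \mathrm{span}\{e_2^*\},$$
since a functional $\alpha e_1^* + \beta e_2^*$ vanishes on all of M if and only if $\alpha = (\alpha e_1^* + \beta e_2^*)(1, 0) = 0$.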

10.1.2. Notation. Let V be a vector space and F ⊆ V ∗ . Then


F⊥ := {x ∈ V : f (x) = 0 for all f ∈ F }
We say that F⊥ is the pre-annihilator of F .


10.2. Exercises (Due: Fri. Jan. 30)


10.2.1. Exercise. In exercise 9.2.12 we showed that the map
Φ : V → V ∗ : v ↦ v∗
is always an injective linear map. In exercise 9.2.13 we showed that if V is finite dimensional, then
so is V ∗ and Φ is an isomorphism between V and V ∗ . Now prove that if V is infinite dimensional,
then Φ is not an isomorphism. Hint. Let B be a basis for V . Is there a functional ψ ∈ V ∗ such
that ψ(e) = 1 for every e ∈ B? Could such a functional be Φ(x) for some x ∈ V ?
10.2.2. Exercise (†). Let V be a vector space over a field F. For every x in V define
$$\hat{x} : V^* \to \mathbb{F} : \phi \mapsto \phi(x).$$
(a) Show that $\hat{x}$ ∈ V ∗∗ for each x ∈ V .
(b) Let ΓV be the map from V to V ∗∗ which takes x to $\hat{x}$. (When no confusion is likely we write Γ for ΓV , so that Γ(x) = $\hat{x}$ for each x ∈ V .) Prove that Γ is linear.
(c) Prove that Γ is injective.
(See AOLT, page 33, exercise 5.)
10.2.3. Exercise. Prove that if V is a finite dimensional vector space, then the map Γ : V → V ∗∗ defined in the preceding exercise 10.2.2 is an isomorphism. Prove also that if V is infinite dimensional, then Γ is not an isomorphism. Hint. Let B be a basis for V and ψ ∈ V ∗ be as in exercise 10.2.1. Show that if we let C0 be {e∗ : e ∈ B}, then the set C0 ∪ {ψ} is linearly independent and can therefore be extended to a basis C for V ∗. Find an element τ in V ∗∗ such that τ (ψ) = 1 and τ (φ) = 0 for every other φ ∈ C. Can τ be Γx for some x ∈ V ? (If x ≠ 0 in V = lc (B), then there is a basis vector f in B such that x(f ) ≠ 0.)
10.2.4. Exercise (†). Find the annihilator in (R2)∗ of the vector (1, 1) in R2. (Express your answer in terms of the standard dual basis for (R2)∗.)
10.2.5. Exercise. Let M and N be subsets of a vector space V . Then
(a) M ⊥ is a subspace of V ∗.
(b) If M ⊆ N , then N ⊥ ≼ M ⊥.
(c) (span M )⊥ = M ⊥.
(d) (M ∪ N )⊥ = M ⊥ ∩ N ⊥.
10.2.6. Exercise. If M and N are subspaces of a vector space V , then
(M + N )⊥ = M ⊥ ∩ N ⊥.
Explain why it is necessary in this exercise to assume that M and N are subspaces of V and not just subsets of V . Hint. Is it always true that span(M + N ) = span M + span N ?
10.2.7. Exercise. In exercises 10.2.5 and 10.2.6 you established some properties of the annihilator mapping M ↦ M ⊥. See to what extent you can prove similar results about the pre-annihilator mapping F ↦ F⊥. What can you say about the sets $(M^\perp)_\perp$ and $(F_\perp)^\perp$?
CHAPTER 11

The Language of Categories

11.1. Background
Topics: categories, objects, morphisms, functors, vector space adjoint of a linear map.

11.1.1. Definition. Let A be a class, whose members we call objects. For every pair (S, T ) of
objects we associate a set Mor(S, T ), whose members we call morphisms from S to T . We assume
that Mor(S, T ) and Mor(U, V ) are disjoint unless S = U and T = V .
We suppose further that there is an operation ◦ (called composition) that associates with
every α ∈ Mor(S, T ) and every β ∈ Mor(T, U ) a morphism β ◦ α ∈ Mor(S, U ) in such a way that:
(1) γ ◦ (β ◦ α) = (γ ◦ β) ◦ α whenever α ∈ Mor(S, T ), β ∈ Mor(T, U ), and γ ∈ Mor(U, V );
(2) for every object S there is a morphism IS ∈ Mor(S, S) satisfying α ◦ IS = α whenever
α ∈ Mor(S, T ) and IS ◦ β = β whenever β ∈ Mor(R, S).
Under these circumstances the class A, together with the associated families of morphisms, is
a category. For a few more remarks on categories and some examples see section 8.1 of my
notes [11].

11.1.2. Definition. If A and B are categories a (covariant) functor F from A to B (written F : A → B) is a pair of maps: an object map F which associates with each object S in A an object F (S) in B and a morphism map (also denoted by F ) which associates with each morphism f ∈ Mor(S, T ) in A a morphism F (f ) ∈ Mor(F (S), F (T )) in B, in such a way that
(1) F (g ◦ f ) = F (g) ◦ F (f ) whenever g ◦ f is defined in A; and
(2) F (idS ) = idF (S) for every object S in A.
The definition of a contravariant functor F : A → B differs from the preceding definition only in that, first, the morphism map associates with each morphism f ∈ Mor(S, T ) in A a morphism F (f ) ∈ Mor(F (T ), F (S)) in B and, second, condition (1) above is replaced by
(1′) F (g ◦ f ) = F (f ) ◦ F (g) whenever g ◦ f is defined in A.
For a bit more on functors and some examples see section 15.4 of my notes [11].

11.1.3. Definition. The terminology for inverses of morphisms in categories is essentially the same as for functions. Let α : S → T and β : T → S be morphisms in a category. If β ◦ α = IS , then β is a left inverse of α and, equivalently, α is a right inverse of β. We say that the morphism α is an isomorphism (or is invertible) if there exists a morphism β : T → S which is both a left and a right inverse for α. Such a morphism is denoted by α⁻¹ and is called the inverse of α.
11.1.4. Remark. All the categories that are of interest in this course are concrete categories. A concrete category is, roughly speaking, one in which the objects are sets with additional structure (algebraic operations, inner products, norms, topologies, and the like) and the morphisms are maps (functions) which preserve, in some sense, the additional structure. If A is an object in some concrete category C, we denote by |A| its underlying set. And if f : A → B is a morphism in C we denote by |f | the map from |A| to |B| regarded simply as a function between sets. It is easy to see that | · |, which takes objects in C to objects in Set (the category of sets and maps) and morphisms in C to morphisms in Set, is a functor. It is referred to as a forgetful functor. In the category Vec of vector spaces and linear maps, for example, | · | causes a vector space V to “forget” about its addition and scalar multiplication (|V | is just a set). And if T : V → W is a linear transformation, then |T | : |V | → |W | is just a map between sets—it has “forgotten” about preserving the operations. For more precise definitions of concrete categories and forgetful functors consult any text on category theory.
11.1.5. Definition. Let T : V → W be a linear map between vector spaces. For every g ∈ W ∗ let T ∗(g) = g ◦ T . Notice that T ∗(g) ∈ V ∗. The map T ∗ from the vector space W ∗ into the vector space V ∗ is the (vector space) adjoint map of T .
WARNING: In inner product spaces we will use the same notation T ∗ for a different map. If T : V → W is a linear map between inner product spaces, then the (inner product space) adjoint transformation T ∗ maps W to V (not W ∗ to V ∗).
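For example, if T : R2 → R3 : (x, y) ↦ (x, y, x + y) and g ∈ (R3)∗ is the coordinate functional g(u, v, w) = w, then
$$T^*(g)(x, y) = g(T(x, y)) = x + y,$$
so $T^*(g) = e_1^* + e_2^*$ in (R2)∗.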

11.1.6. Definition. A partially ordered set is order complete if every nonempty subset has
a supremum (that is, a least upper bound) and an infimum (a greatest lower bound).
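For example, the interval [0, 1] with its usual ordering is order complete: every nonempty A ⊆ [0, 1] has both sup A and inf A in [0, 1]. The set R itself is not order complete, since
$$\sup \mathbb{N} \text{ does not exist in } \mathbb{R}.$$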
11.1.7. Definition. Let S be a set. Then the power set of S, denoted by P(S), is the family
of all subsets of S.
11.1.8. Notation. Let f : S → T be a function between sets. Then we define f → (A) =
{f (x) : x ∈ A} and f ← (B) = {x ∈ S : f (x) ∈ B}. We say that f → (A) is the image of A under
f and that f ← (B) is the preimage of B under f .

11.2. Exercises (Due: Mon. Feb. 2)


11.2.1. Exercise. Prove that if a morphism in some category has both a left and a right inverse,
then it is invertible.

11.2.2. Exercise (The vector space duality functor). Let T ∈ L(V, W ) where V and W are vector spaces. Show that the pair of maps V ↦ V ∗ and T ↦ T ∗ is a contravariant functor from the category of vector spaces and linear maps into itself. Show that (the morphism map of) this functor is linear. (That is, show that (S + T )∗ = S ∗ + T ∗ and (αT )∗ = αT ∗.)
11.2.3. Exercise (†). Let T : V → W be a linear map between vector spaces. Show that
ker T ∗ = (ran T )⊥ .
Is there a relationship between T being surjective and T ∗ being injective?
11.2.4. Exercise. Let T : V → W be a linear map between vector spaces. Show that
$\ker T = (\operatorname{ran} T^*)_\perp$.
Is there a relationship between T being injective and T ∗ being surjective?
11.2.5. Exercise (The power set functors). Let S be a nonempty set.
(a) The power set P(S) of S partially ordered by ⊆ is order complete.
(b) The class of order complete partially ordered sets and order preserving maps is a category.

(c) For each function f between sets let P(f ) = f → . Then P is a covariant functor from the
category of sets and functions to the category of order complete partially ordered sets and
order preserving maps.
(d) For each function f between sets let P(f ) = f ← . Then P is a contravariant functor from
the category of sets and functions to the category of order complete partially ordered sets
and order preserving maps.
CHAPTER 12

Direct Sums

12.1. Background
Topics: internal direct sum, external direct sum. (See AOLT, pages 8–9.)

12.1.1. Definition. Let M and N be subspaces of a vector space V . We say that the space V
is the (internal) direct sum of M and N if M + N = V and M ∩ N = {0}. In this case we
write V = M ⊕ N and say that M and N are complementary subspaces. We also say that M is
a complement of N and that N is a complement of M . Similarly, if M1 , . . . , Mn are subspaces
of a vector space V , if V = M1 + · · · + Mn , and if Mj ∩ Mk = {0} whenever j ≠ k, then we say that V is the (internal) direct sum of M1 , . . . , Mn and we write
$$V = M_1 \oplus \cdots \oplus M_n = \bigoplus_{k=1}^{n} M_k.$$

12.1.2. Notation. If the vector space V is a direct sum of subspaces M and N , then, according
to exercise 12.2.1, every v ∈ V can be written uniquely in the form m + n where m ∈ M and n ∈ N .
We will indicate this by writing v = m ⊕ n. It is important to realize that this notation makes
sense only when we have in mind a particular direct sum decomposition M ⊕ N of V . Thus in this
context when we see v = a ⊕ b we conclude that v = a + b, that a ∈ M , and that b ∈ N .
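For example, R2 = M ⊕ N where M is the x-axis and N is the y-axis; relative to this decomposition
$$(3, 5) = (3, 0) \oplus (0, 5),$$
that is, (3, 5) = (3, 0) + (0, 5) with (3, 0) ∈ M and (0, 5) ∈ N .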

12.1.3. Definition. Let V and W be vector spaces over a field F. To make the Cartesian
product V × W into a vector space we define addition by
(v, w) + (v1 , w1 ) = (v + v1 , w + w1 )
(where v, v1 ∈ V and w, w1 ∈ W ), and we define scalar multiplication by
α(v, w) = (αv, αw)
(where α ∈ F, v ∈ V , and w ∈ W ). The resulting vector space we call the (external) direct
sum of V and W . It is conventional to use the same notation V ⊕ W for external direct sums that
we use for internal direct sums.

12.1.4. Notation. If S, T , and U are nonempty sets and if f : S → T and g : S → U , then we


define the function (f, g) : S → T × U by
(f, g)(s) = (f (s), g(s)).
Suppose, on the other hand, that we are given a function h mapping S into the Cartesian product T × U . Then for each s ∈ S the image h(s) is an ordered pair, which we will write as (h¹(s), h²(s)). (The superscripts have nothing to do with powers.) Notice that we now have functions h¹ : S → T and h² : S → U . These are the components of h. In abbreviated notation h = (h¹, h²).

The (external) direct sum of two vector spaces V1 and V2 is best thought of not just as the vector
space V1 ⊕V2 defined in 12.1.3 but as this vector space together with two distinguished “projection”
mappings defined below. (The reason for this assertion may be deduced from definition 13.1.1 and
exercise 13.2.1.)

12.1.5. Definition. Let V1 and V2 be vector spaces. For k = 1, 2 define the coordinate
projections πk : V1 ⊕ V2 → Vk by πk (v1 , v2 ) = vk . Notice two simple facts:
(i) π1 and π2 are surjective linear maps; and
(ii) idV1 ⊕V2 = (π1 , π2 ).

12.2. Exercises (Due: Wed. Feb. 4)


12.2.1. Exercise. If V is a vector space and V = M ⊕ N , then for every v ∈ V there exist
unique vectors m ∈ M and n ∈ N such that v = m + n.

12.2.2. Exercise. Suppose that a vector space V is the direct sum of subspaces M1 , . . . , Mn .
Show that for every v ∈ V there exist unique vectors mk ∈ Mk (for k = 1, . . . , n) such that
v = m1 + · · · + mn .

12.2.3. Exercise. Let U be the plane x + y + z = 0 and V be the line x = y = z in R3 . The


purpose of this problem is to confirm that R3 = U ⊕ V . This requires establishing three things: (i)
U and V are subspaces of R3 (which is very easy and which we omit); (ii) R3 = U + V ; and (iii)
U ∩ V = {0}.
(a) To show that R3 = U + V we need R3 ⊆ U + V and U + V ⊆ R3 . Since U ⊆ R3 and
V ⊆ R3 , it is clear that U + V ⊆ R3 . So all that is required is to show that R3 ⊆ U + V .
That is, given a vector x = (x1 , x2 , x3 ) in R3 we must find vectors u = (u1 , u2 , u3 ) in U
and v = (v1 , v2 , v3 ) in V such that x = u + v. Find two such vectors.
(b) The last thing to verify is that U ∩ V = {0}; that is, that the only vector U and V have
in common is the zero vector. Suppose that a vector x = (x1 , x2 , x3 ) belongs to both U
and V . Since x ∈ U it must satisfy the equation
x1 + x2 + x3 = 0. (1)
Since x ∈ V it must satisfy the equations
x1 = x2 and x2 = x3 . (2)
Solve the system of equations (1) and (2).
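For readers who want to check such a computation numerically, here is a minimal sketch (assuming numpy is available; the particular basis vectors chosen below for U and V are one convenient choice, not part of the exercise). Writing x in a basis adapted to U and V immediately produces the desired u and v.

```python
import numpy as np

# Columns 0-1: a basis for U (the plane x + y + z = 0).
# Column 2:    a basis for V (the line x = y = z).
B = np.array([[ 1.,  0., 1.],
              [-1.,  1., 1.],
              [ 0., -1., 1.]])

x = np.array([3., 1., 2.])            # an arbitrary test vector
c = np.linalg.solve(B, x)             # coordinates of x in this basis

u = c[0]*B[:, 0] + c[1]*B[:, 1]       # the component of x in U
v = c[2]*B[:, 2]                      # the component of x in V
assert np.allclose(u + v, x)
assert np.isclose(u.sum(), 0.0)       # u does satisfy x1 + x2 + x3 = 0
```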

12.2.4. Exercise. Let U be the plane x + y + z = 0 and V be the line x = −(3/4)y = 3z. The
purpose of this exercise is to see (in two different ways) that R3 is not the direct sum of U and V .
(a) If R3 were equal to U ⊕ V , then U ∩ V would contain only the zero vector. Show that this
is not the case by finding a vector x ≠ 0 in R3 such that x ∈ U ∩ V.
(b) If R3 were equal to U ⊕ V , then, in particular, we would have R3 = U + V . Since both
U and V are subsets of R3 , it is clear that U + V ⊆ R3 . Show that the reverse inclusion
R3 ⊆ U + V is not correct by finding a vector x ∈ R3 which cannot be written in the form
u + v where u ∈ U and v ∈ V .
(c) We have seen in part (b) that U + V ≠ R3. Then what is U + V?

12.2.5. Exercise. Let U be the plane x + y + z = 0 and V be the line x − 1 = (1/2)y = z + 2
in R3. State in one short sentence how you know that R3 is not the direct sum of U and V.

12.2.6. Exercise (††). Let C = C([0, 1]) be the (real) vector space of all continuous real valued
functions on the interval [0, 1] with addition and scalar multiplication defined pointwise.
(a) Let f1(t) = t and f2(t) = t^4 for 0 ≤ t ≤ 1. Let U be the set of all functions of the form
αf1 + βf2 where α, β ∈ R. Show that U is a subspace of C.
(b) Let V be the set of all functions g in C which satisfy
∫₀¹ t g(t) dt = 0    and    ∫₀¹ t^4 g(t) dt = 0.
Show that V is a subspace of C.
(c) Show that C = U ⊕ V .
(d) Let f(t) = t^2 for 0 ≤ t ≤ 1. Find functions u ∈ U and v ∈ V such that f = u + v.
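Part (d) reduces to a 2 × 2 linear system: writing u = αf1 + βf2 and requiring v = f − u to satisfy the two integral conditions of part (b) determines α and β. A hedged computational sketch (assuming sympy is available):

```python
import sympy as sp

t, alpha, beta = sp.symbols('t alpha beta')
f = t**2
u = alpha*t + beta*t**4               # a general element of U
v = f - u                             # the candidate element of V

# v belongs to V exactly when both integrals vanish.
eqs = [sp.integrate(t*v, (t, 0, 1)),
       sp.integrate(t**4*v, (t, 0, 1))]
sol = sp.solve(eqs, [alpha, beta])
print(sol)                            # alpha and beta determine u; then v = f - u
```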

12.2.7. Exercise. Let C = C[−1, 1] be the vector space of all continuous real valued functions
on the interval [−1, 1]. A function f in C is even if f (−x) = f (x) for all x ∈ [−1, 1]; it is odd
if f (−x) = −f (x) for all x ∈ [−1, 1]. Let Co = {f ∈ C : f is odd } and Ce = {f ∈ C : f is even }.
Show that C = Co ⊕ Ce .

12.2.8. Exercise. Prove that the external direct sum of two vector spaces (as defined in 12.1.3)
is indeed a vector space.

12.2.9. Exercise. Let V1 and V2 be vector spaces. Prove the following fact about the direct
sum V1 ⊕ V2: for every vector space W and every pair of linear maps T1 : W → V1 and T2 : W → V2
there exists a unique linear map S : W → V1 ⊕ V2 which makes the following diagram commute
(that is, π1 ◦ S = T1 and π2 ◦ S = T2).

             W
    T1 ↙     ↓ S     ↘ T2
  V1 <--π1-- V1 ⊕ V2 --π2--> V2

12.2.10. Exercise. Every subspace of a vector space has a complement. That is, if M is a
subspace of a vector space V , then there exists a subspace N of V such that V = M ⊕ N .

12.2.11. Exercise. Let V be a vector space and suppose that V = U ⊕ W . Prove that if B
is a basis for U and C is a basis for W , then B ∪ C is a basis for V . From this conclude that
dim V = dim U + dim W .

12.2.12. Exercise (†). Let U, V, and W be vector spaces. Show that if U ≅ W, then
U ⊕ V ≅ W ⊕ V. Show also that the converse of this assertion need not be true.

12.2.13. Exercise. A linear transformation has a left inverse if and only if it is injective (one-
to-one). It has a right inverse if and only if it is surjective (onto).
CHAPTER 13

Products and Quotients

13.1. Background
Topics: products, coproducts, quotient spaces, exact sequences. (See AOLT, pages 9–10.)

13.1.1. Definition. Let A1 and A2 be objects in a category C. We say that a triple (P, π1 , π2 ),
where P is an object and πk : P → Ak (k = 1, 2) are morphisms, is a product of A1 and A2 if
for every object B in C and every pair of morphisms fk : B → Ak (k = 1, 2) there exists a unique
map g : B → P such that fk = πk ◦ g for k = 1, 2.
             B
    f1 ↙     ↓ g     ↘ f2
  A1 <--π1-- P --π2--> A2

Notice that what you showed in exercise 12.2.9 is that the external direct sum is a product in
the category of vector spaces and linear maps.
A triple (P, j1 , j2 ), where P is an object and jk : Ak → P , (k = 1, 2) are morphisms, is a
coproduct of A1 and A2 if for every object B in C and every pair of morphisms Fk : Ak → B
(k = 1, 2) there exists a unique map G : P → B such that Fk = G ◦ jk for k = 1, 2.

             B
    F1 ↗     ↑ G     ↖ F2
  A1 --j1--> P <--j2-- A2

13.1.2. Definition. Let M be a subspace of a vector space V. Define an equivalence relation
∼ on V by
x ∼ y    if and only if    y − x ∈ M.
For each x ∈ V let [x] be the equivalence class containing x. Let V/M be the set of all equivalence
classes of elements of V. For [x] and [y] in V/M define
[x] + [y] := [x + y]
and for each scalar α and [x] ∈ V/M define
α[x] := [αx].
Under these operations V/M becomes a vector space. It is the quotient space of V by M. The
notation V/M is usually read “V mod M”. The linear map
π : V → V/M : x ↦ [x]
is called the quotient map.
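Cosets can be handled concretely when V = R^n and M is the column space of a matrix: two vectors represent the same element of V/M exactly when their difference lies in M. A minimal sketch (assuming numpy; the subspace below is an illustrative choice):

```python
import numpy as np

M = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])     # M = column space = the xy-plane in R^3

def same_coset(x, y):
    """[x] = [y] in V/M  iff  y - x lies in M."""
    r = np.linalg.matrix_rank
    return r(np.column_stack([M, y - x])) == r(M)

x = np.array([1., 2., 5.])
y = np.array([7., -3., 5.])
print(same_coset(x, y))      # True: x and y differ by an element of M
# Here dim V/M = 3 - 2 = 1 (compare exercise 14.2.8).
```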


13.1.3. Definition. A sequence of vector spaces and linear maps

· · · --> Vn−1 --jn--> Vn --jn+1--> Vn+1 --> · · ·

is said to be exact at Vn if ran jn = ker jn+1. A sequence is exact if it is exact at each of its
constituent vector spaces. A sequence of vector spaces and linear maps of the form

0 --> U --j--> V --k--> W --> 0

is a short exact sequence. (Here 0 denotes the trivial 0-dimensional vector space, and the
unlabeled arrows are the obvious linear maps.)

13.2. Exercises (Due: Mon. Feb. 9)


13.2.1. Exercise. Show that in the category of vector spaces and linear maps the external
direct sum is not only a product but also a coproduct.

13.2.2. Exercise (††). Show that in the category of sets and maps (functions) the product and
the coproduct are not the same.

13.2.3. Exercise. Show that in an arbitrary category (or in the category of vector spaces and
linear maps, if you prefer) products (and coproducts) are essentially unique. (Essentially
unique means unique up to isomorphism. That is, if (P, π1, π2) and (Q, ρ1, ρ2) are both products
of two given objects, then P ≅ Q.)

13.2.4. Exercise (†). Verify the assertions made in definition 13.1.2. In particular, show that
∼ is an equivalence relation, that addition and scalar multiplication on the set of equivalence classes
are well defined, that under these operations V/M is a vector space, and that the quotient map is
linear.

The following exercise is called the fundamental quotient theorem or the first isomorphism
theorem for vector spaces. (See AOLT, theorem 1.6, page 10.)
13.2.5. Exercise. Let V and W be vector spaces and M a subspace of V. If T ∈ L(V, W) and ker T ⊇ M,
then there exists a unique T̃ ∈ L(V/M, W) which makes the following diagram commute
(that is, T̃ ◦ π = T).

      V
   π ↓    ↘ T
     V/M --T̃--> W

Furthermore, T̃ is injective if and only if ker T = M; and T̃ is surjective if and only if T is.
Corollary: ran T ≅ V/ker T.

13.2.6. Exercise. The sequence

0 --> U --j--> V --k--> W --> 0

of vector spaces is exact at U if and only if j is injective. It is exact at W if and only if k is
surjective.

13.2.7. Exercise. Let U and V be vector spaces. Then the following sequence is short exact:

0 --> U --ι1--> U ⊕ V --π2--> V --> 0.

The indicated linear maps are the obvious ones:
ι1 : U → U ⊕ V : u ↦ (u, 0)
and
π2 : U ⊕ V → V : (u, v) ↦ v.

13.2.8. Exercise. Suppose a < b. Let K be the family of constant functions on the interval
[a, b], C¹ be the family of all continuously differentiable functions on [a, b], and C be the family of
all continuous functions on [a, b]. (A function f is said to be continuously differentiable if
its derivative f′ exists and is continuous.)
Specify linear maps j and k so that the following sequence is short exact:

0 --> K --j--> C¹ --k--> C --> 0.

13.2.9. Exercise. Let C be the family of all continuous functions on the interval [0, 2]. Let E1
be the mapping from C into R defined by E1(f) = f(1). (The functional E1 is called evaluation
at 1.)
Find a subspace F of C such that the following sequence is short exact:

0 --> F --ι--> C --E1--> R --> 0.

13.2.10. Exercise. If j : U → V is an injective linear map between vector spaces, then the
sequence

0 --> U --j--> V --π--> V/ran j --> 0

is exact.

13.2.11. Exercise. Consider the following diagram in the category of vector spaces and linear
maps.

0 --> U --j--> V --k--> W --> 0
      |f       |g
      v        v
0 --> U′ -j′-> V′ -k′-> W′ --> 0

If the rows are exact and the left square commutes (that is, g ◦ j = j′ ◦ f), then there exists a
unique linear map h : W → W′ which makes the right square commute (h ◦ k = k′ ◦ g).

13.2.12. Exercise. Consider the following diagram of vector spaces and linear maps

0 --> U --j--> V --k--> W --> 0
      |f       |g       |h
      v        v        v
0 --> U′ -j′-> V′ -k′-> W′ --> 0

where the rows are exact and the squares commute. Prove the following.
(a) If g is surjective, so is h.
(b) If f is surjective and g is injective, then h is injective.
(c) If f and h are surjective, so is g.
(d) If f and h are injective, so is g.
CHAPTER 14

Products and Quotients (continued)

14.1. Background
Topics: quotient map in a category, quotient object.

14.1.1. Definition. Let A be an object in a concrete category C. A surjective morphism
π : A → B in C is a quotient map for A if a function g : B → C (in SET) is a morphism (in C)
whenever g ◦ π is a morphism. An object B in C is a quotient object for A if it is the range of
some quotient map for A.


14.2. Exercises (Due: Wed. Feb. 11)


14.2.1. Exercise. Show that if 0 --> U --j--> V --k--> W --> 0 is an exact sequence of vector spaces
and linear maps, then V ≅ U ⊕ W. Hint. Consider the following diagram and use exercise 13.2.12.

0 --> U --j--> V --k--> W --> 0
      |id      |g       |id
      v        v        v
0 --> U -i1--> U ⊕ W --π2--> W --> 0

(Here i1 : u ↦ (u, 0) and π2 : (u, w) ↦ w; the injection i2 : W → U ⊕ W : w ↦ (0, w) is also
available.) The trick is to find the right map g.

14.2.2. Exercise (†). Prove the converse of the preceding exercise. That is, suppose that U,
V, and W are vector spaces and that V ≅ U ⊕ W; prove that there exist linear maps j and k such
that the sequence 0 --> U --j--> V --k--> W --> 0 is exact. Hint. Suppose g : U ⊕ W → V is an
isomorphism. Define j and k in terms of g.

14.2.3. Exercise. Show that if 0 --> U --j--> V --k--> W --> 0 is an exact sequence of vector
spaces and linear maps, then W ≅ V/ran j. Thus, if U is a subspace of V and j is the inclusion map, then
W ≅ V/U. Give two different proofs of this result: one using exercise 13.2.5 and the other using
exercise 13.2.12.

14.2.4. Exercise. Prove the converse of exercise 14.2.3. That is, suppose that j : U → V is an
injective linear map between vector spaces and that W ≅ V/ran j; prove that there exists a linear
map k which makes the sequence 0 --> U --j--> V --k--> W --> 0 exact.

14.2.5. Exercise. Let W be a vector space, V a subspace of W, and M a subspace of V. Then

(W/M)/(V/M) ≅ W/V.

Hint. Exercise 14.2.3.
14.2.6. Exercise. Let M be a subspace of a vector space V . Then the following are equivalent:
(a) dim V /M < ∞ ;
(b) there exists a finite dimensional subspace F of V such that V = M ⊕ F ; and
(c) there exists a finite dimensional subspace F of V such that V = M + F .

14.2.7. Exercise. Suppose that a vector space V is the direct sum of subspaces U and W .
Some authors define the codimension of U to be dim W . Others define it to be dim V /U . Show
that these are equivalent.

14.2.8. Exercise. Prove that if M is a finite dimensional subspace of a vector space V , then
dim V /M = dim V − dim M .

14.2.9. Exercise (††). Prove that in the category of vector spaces and linear maps every
surjective linear map is a quotient map.

14.2.10. Exercise. Let U, V, and W be vector spaces. If S ∈ L(U, V) and T ∈ L(V, W), then the
sequence

0 --> ker S --> ker TS --> ker T --> coker S --> coker TS --> coker T --> 0

is exact.

For obvious reasons the next result is usually called the rank-plus-nullity theorem. (See AOLT,
theorem 1.8, page 10.)
14.2.11. Exercise. Let T : V → W be a linear map between vector spaces. Give a very simple
proof that if V is finite dimensional, then
rank T + nullity T = dim V.
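A quick numerical illustration of the theorem (assuming numpy and scipy are available; the matrix is an arbitrary example, not one from the text):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 5)).astype(float)   # a map T : R^5 -> R^3

rank    = np.linalg.matrix_rank(A)                   # dim ran T
nullity = null_space(A).shape[1]                     # dim ker T, computed independently
assert rank + nullity == A.shape[1]                  # = dim V = 5
```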

14.2.12. Exercise. Show that if V0, V1, . . . , Vn are finite dimensional vector spaces and the
sequence

0 --> Vn --dn--> Vn−1 --> . . . --> V1 --d1--> V0 --> 0

is exact, then ∑_{k=0}^{n} (−1)^k dim Vk = 0.
CHAPTER 15

Projection Operators

15.1. Background
Topics: idempotent maps, projections. (See AOLT, section 3.2, pages 81–86.) If you have not yet
discovered Professor Farenick’s web page where he has a link to a list of corrections to AOLT, this
might be a good time to look at it. See
http://www.math.uregina.ca/~farenick/fixups.pdf

15.1.1. Definition. Let V be a vector space. An operator E ∈ L(V) is a projection
operator if it is idempotent; that is, if E² = E.

15.1.2. Definition. Let V be a vector space and suppose that V = M ⊕ N . We know from
an earlier exercise 12.2.1 that for each v ∈ V there exist unique vectors m ∈ M and n ∈ N such
that v = m + n. Define a function EN M : V → V by EN M v = m. The function EN M is called the
projection of V along N onto M . (This terminology is, of course, optimistic. We must prove
that EN M is in fact a projection operator.)
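In coordinates the definition is easy to implement: choose a basis for M followed by a basis for N, keep the M-coordinates of a vector, and annihilate the N-coordinates. A hedged sketch (assuming numpy; the subspaces below are illustrative choices):

```python
import numpy as np

# Columns 0-1 span M (a plane); column 2 spans N (a line); M + N = R^3.
B = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [1., 1., 1.]])

# E_NM keeps the M-coordinates and kills the N-coordinate.
E = B @ np.diag([1., 1., 0.]) @ np.linalg.inv(B)

assert np.allclose(E @ E, E)               # idempotent, as exercise 15.2.6 predicts
assert np.allclose(E @ B[:, 0], B[:, 0])   # vectors in M are fixed
assert np.allclose(E @ B[:, 2], 0)         # vectors in N are annihilated
```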

15.1.3. Definition. Let M1 ⊕ · · · ⊕ Mn be a direct sum decomposition of a vector space V.
For each k ∈ Nn let Nk be the following subspace of V complementary to Mk:
Nk := M1 ⊕ · · · ⊕ Mk−1 ⊕ Mk+1 ⊕ · · · ⊕ Mn.
Also (for each k) let
Ek := ENk Mk
be the projection onto Mk along the complementary subspace Nk. The projections E1, . . . , En are
the projections associated with the direct sum decomposition V = M1 ⊕ · · · ⊕ Mn.


15.2. Exercises (Due: Fri. Feb. 13)


15.2.1. Exercise (†). If E is a projection operator on a vector space V , then
V = ran E ⊕ ker E.

15.2.2. Exercise. Let V be a vector space and E, F ∈ L(V). If E + F = idV and EF = 0,
then E and F are projection operators and V = ran E ⊕ ran F.

15.2.3. Exercise. Let V be a vector space and E1, . . . , En ∈ L(V). If ∑_{k=1}^{n} Ek = idV and
Ei Ej = 0 whenever i ≠ j, then each Ek is a projection operator and V = ⊕_{k=1}^{n} ran Ek.

15.2.4. Exercise. If E is a projection operator on a vector space V, then
ran E = {x ∈ V : Ex = x}.

15.2.5. Exercise. Let E and F be projection operators on a vector space V . Then E +F = idV
if and only if EF = F E = 0 and ker E = ran F .

15.2.6. Exercise (†). If M ⊕ N is a direct sum decomposition of a vector space V , then the
function EN M defined in 15.1.2 is a projection operator whose range is M and whose kernel is N .

15.2.7. Exercise. If M ⊕ N is a direct sum decomposition of a vector space V, then
EN M + EM N = idV and EN M EM N = 0.

15.2.8. Exercise. If E is a projection operator on a vector space V, then there exist subspaces
M and N of V such that E = EN M.

15.2.9. Exercise. Let M be the line y = 2x and N be the y-axis in R2 . Find [EM N ] and
[EN M ].

15.2.10. Exercise. Let E be the projection of R3 onto the plane 3x − y + 2z = 0 along the
z-axis and let F be the projection of R3 onto the z-axis along the plane 3x − y + 2z = 0.
(a) Find [E].
(b) Where does F take the point (4, 5, 1)?

15.2.11. Exercise. Let P be the plane in R3 whose equation is x − y − 2z = 0 and L be the
line whose equations are x = 0 and y = −z. Let E be the projection of R3 along L onto P and F
be the projection of R3 along P onto L. Then

[E] = [ a   b   b ]              [F] = [ b   b   b ]
      [−a   c   c ]      and           [ a  −a  −c ]
      [ a  −a  −a ]                    [−a   a   c ]

where a = ___ , b = ___ , and c = ___ .

15.2.12. Exercise. Suppose a finite dimensional vector space V has the direct sum decompo-
sition V = M ⊕ N and that E = EM N is the projection along M onto N . Show that E ∗ is the
projection in L(V ∗ ) along N ⊥ onto M ⊥ .
CHAPTER 16

Algebras

16.1. Background
Topics: algebras, matrix algebras, standard matrix units, ideals, center, central algebra, polyno-
mial functional calculus, quaternions, representation, faithful representation, left regular represen-
tation, permutation matrices. (See AOLT, sections 2.1–2.3.)

16.1.1. Definition. Let A be an algebra over a field F. The unitization of A is the unital alge-
bra à = A × F in which addition and scalar multiplication are defined pointwise and multiplication
is defined by
(a, λ) · (b, µ) = (ab + µa + λb, λµ).


16.2. Exercises (Due: Mon. Feb. 16)


16.2.1. Exercise. In AOLT at the top of page 43 the author says that the ideal generated by
a subset S of an algebra A is the intersection of all ideals containing S. For this to be correct, we
must know that the intersection of a family of ideals in A is itself an ideal. Prove this.

16.2.2. Exercise. Also in AOLT at the top of page 43 the author says that the ideal generated
by a subset S of an algebra A can be characterized as

{ ∑_{j=1}^{m} aj sj bj : m = 1, 2, 3, . . . , sj ∈ S, and aj, bj ∈ A }.

Prove that for a unital algebra this is correct.

16.2.3. Exercise. In AOLT at the bottom of page 43 the author constructs the Cartesian
product algebra A1 × · · · × An of algebras A1 , . . . An . Show that this is indeed a product in the
category of algebras and algebra homomorphisms. Under what circumstances will the product
algebra A1 × · · · × An be unital?

16.2.4. Exercise (†). Prove that the product algebra Mn(F) × Mn(F) is neither simple nor central.
(See AOLT, section 2.7, exercise 3.)

16.2.5. Exercise. AOLT, section 2.7, exercise 4.

16.2.6. Exercise. In AOLT at the bottom of page 44 and the top of page 45 the author
constructs the quotient algebra A/J, where A is an algebra and J is an ideal in A. Prove that the
operations on A/J are well-defined and that under these operations A/J is in fact an algebra.

16.2.7. Exercise (†). An element a of a unital algebra A is invertible if there exists an
element a−1 ∈ A such that aa−1 = a−1a = 1A. Explain why no proper ideal in a unital algebra
can contain an invertible element.

16.2.8. Exercise. Let φ : A → B be an algebra homomorphism. Prove that the kernel of φ is
an ideal in A and that the range of φ is a subalgebra of B.

16.2.9. Exercise. Prove that the unitization à of an algebra A is in fact a unital algebra with
(0, 1) as its identity. Prove also that A is (isomorphic to) a subalgebra of à with codimension 1.
(See AOLT, pages 49–50.)

16.2.10. Exercise. Let A and B be algebras and J be an ideal in A. If φ : A → B is an algebra
homomorphism and ker φ ⊇ J, then there exists a unique algebra homomorphism φ̃ : A/J → B
which makes the following diagram commute (that is, φ̃ ◦ π = φ, where π : A → A/J is the
quotient map).

      A
   π ↓    ↘ φ
     A/J --φ̃--> B

Furthermore, φ̃ is injective if and only if ker φ = J; and φ̃ is surjective if and only if φ is.
Corollary: ran φ ≅ A/ker φ. (See AOLT, page 45, Theorem 2.5.)

16.2.11. Exercise. Let V be a vector space, v and w be in V, and φ and ψ be in V ∗. Then
(v ⊗ φ)(w ⊗ ψ) = φ(w)(v ⊗ ψ).

16.2.12. Exercise. Let V be a vector space, v be in V, and φ and ψ be in V ∗. Then
(v ⊗ φ)∗(ψ) = ψ(v)φ.
CHAPTER 17

Spectra

17.1. Background
Topics: eigenvalues, spectrum, annihilating polynomial, minimal polynomial, division algebra,
centralizer, skew-centralizer, spectral mapping theorem. (See AOLT, sections 2.4–2.5.)

17.1.1. Definition. Let a be an element of a unital algebra A over a field F. The spectrum
of a, denoted by σA (a) or just σ(a), is the set of all λ ∈ F such that a − λ1 is not invertible.
NOTE: This definition, which is the “official” one for this course, differs from—and is
not equivalent to—the one given on page 61 of AOLT.

17.1.2. Theorem (Spectral Mapping Theorem). If T is an operator on a finite dimensional
vector space and p is a polynomial, then
σ(p(T)) = p(σ(T)).
That is, if σ(T) = {λ1, . . . , λk}, then σ(p(T)) = {p(λ1), . . . , p(λk)}.


17.2. Exercises (Due: Wed. Feb. 18)


17.2.1. Exercise. Let V be a vector space, T ∈ L(V ), v ∈ V , and φ ∈ V ∗ . Then
T (v ⊗ φ) = (T v) ⊗ φ.

17.2.2. Exercise. Let V be a vector space, T ∈ L(V), v ∈ V, and φ ∈ V ∗. Then
v ⊗ T ∗φ = (v ⊗ φ)T.

17.2.3. Exercise. Prove that every finite rank linear map between vector spaces is a linear
combination of rank 1 linear maps.

17.2.4. Exercise (†). Give an example to show that the result stated in AOLT, Theorem 2.24,
part 1 need not hold in an infinite dimensional unital algebra. Hint. Let l1(N, R) be the family
of all absolutely summable sequences of real numbers, that is, the set of all sequences (an) of real
numbers such that ∑_{n=1}^{∞} |an| < ∞. Consider the operator T in the algebra L(l1(N, R)) which
takes the sequence (an) to the sequence (an/n).

17.2.5. Exercise. Give an example to show that the result stated in AOLT, Theorem 2.24, part
2 need not hold in an infinite dimensional unital algebra. Hint. Consider the unilateral shift op-
erator U in the algebra L(l(N, R)) which takes the sequence (an ) to the sequence (0, a1 , a2 , a3 , . . . ).

17.2.6. Exercise. Give an example to show that the result stated in AOLT, Theorem 2.24,
part 3 need not hold in an infinite dimensional unital algebra.

17.2.7. Exercise. If z is an element of the algebra C of complex numbers, then σ(z) = {z}.

17.2.8. Exercise. Let f be an element of the algebra C([a, b]) of continuous complex valued
functions on the interval [a, b]. Find the spectrum of f .

17.2.9. Exercise. The family M3(C) of 3 × 3 matrices of complex numbers is a unital algebra
under the usual matrix operations. Show that the spectrum of the matrix

[ 5  −6  −6]
[−1   4   2]
[ 3  −6  −4]

is {1, 2}.
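One can sanity-check such spectrum computations numerically; a minimal sketch (assuming numpy is available):

```python
import numpy as np

A = np.array([[ 5., -6., -6.],
              [-1.,  4.,  2.],
              [ 3., -6., -4.]])
print(np.linalg.eigvals(A))   # approximately 2, 1, 2 (in some order)
```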

17.2.10. Exercise. Let a be an element of a unital algebra such that a² = 1. Then either
(i) a = 1, in which case σ(a) = {1}, or
(ii) a = −1, in which case σ(a) = {−1}, or
(iii) σ(a) = {−1, 1}.
Hint. In (iii) to prove σ(a) ⊆ {−1, 1}, consider (1/(1 − λ²))(a + λ1).

17.2.11. Exercise. An element a of an algebra is idempotent if a² = a. Let a be an idempotent element of a unital algebra. Then either
(i) a = 1, in which case σ(a) = {1}, or
(ii) a = 0, in which case σ(a) = {0}, or
(iii) σ(a) = {0, 1}.
Hint. In (iii) to prove σ(a) ⊆ {0, 1}, consider (1/(λ − λ²))(a + (λ − 1)1).
 
17.2.12. Exercise. Let T be the operator on R3 whose matrix representation is

[ 1  −1  0]
[ 0   0  0]
[−2   2  2]

and let p(x) = x³ − x² + x − 3. Verify the spectral mapping theorem 17.1.2 in this special case by
computing separately σ(p(T)) and p(σ(T)).
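A numerical check of this exercise is straightforward (a sketch assuming numpy; it does not replace the exact computation the exercise asks for):

```python
import numpy as np
from numpy.linalg import eigvals, matrix_power as mp

T = np.array([[ 1., -1., 0.],
              [ 0.,  0., 0.],
              [-2.,  2., 2.]])

pT  = mp(T, 3) - mp(T, 2) + T - 3*np.eye(3)   # p(T) for p(x) = x^3 - x^2 + x - 3
lam = eigvals(T).real                         # sigma(T); real in this example
lhs = np.sort(eigvals(pT).real)               # sigma(p(T))
rhs = np.sort(lam**3 - lam**2 + lam - 3)      # p(sigma(T))
print(lhs, rhs)                               # the two lists agree
```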

17.2.13. Exercise. Prove the spectral mapping theorem 17.1.2.


CHAPTER 18

Polynomials

18.1. Background
Topics: formal power series, polynomials, convolution, indeterminant, degree of a polynomial,
polynomial functional calculus, annihilating polynomial, monic polynomial, minimal polynomial.

18.1.1. Notation. We make the convention that the set of natural numbers N does not include
zero but the set Z+ of nonnegative integers does. Thus Z+ = N ∪ {0}.

18.1.2. Notation. If S is a set and A is an algebra, l(S, A) denotes the vector space of all
functions from S into A with pointwise operations of addition and scalar multiplication, and lc (S, A)
denotes the subspace of functions with finite support.

18.1.3. Definition. Let A be a unital commutative algebra. On the vector space l(Z+, A)
define a binary operation ∗ (often called convolution) by

(f ∗ g)n = ∑_{j+k=n} fj gk = ∑_{j=0}^{n} fj gn−j

(where f, g ∈ l(Z+, A) and n ∈ Z+). An element of l(Z+, A) is a formal power series (with
coefficients in A) and an element of lc(Z+, A) is a polynomial (with coefficients in A).
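For sequences with finite support, convolution is exactly the familiar rule for multiplying polynomial coefficient lists. A minimal plain-Python sketch (coefficients listed from degree 0 upward):

```python
def convolve(f, g):
    """(f * g)_n = sum of f_j g_k over j + k = n (finite-support case)."""
    h = [0] * (len(f) + len(g) - 1)
    for j, fj in enumerate(f):
        for k, gk in enumerate(g):
            h[j + k] += fj * gk
    return h

# (1 + x)(1 + x) = 1 + 2x + x^2
print(convolve([1, 1], [1, 1]))   # [1, 2, 1]
```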

18.1.4. Remark. We regard the algebra A as a subset of l(Z+ , A) by identifying the element
a ∈ A with the element (a, 0, 0, 0, . . . ) ∈ l(Z+ , A). Thus the map a 7→ (a, 0, 0, 0, . . . ) becomes an
inclusion map. (Technically speaking, of course, the map ψ : a ↦ (a, 0, 0, 0, . . . ) is an injective
unital homomorphism and A ≅ ran ψ.)

18.1.5. Convention. In the algebra l(Z+ , A) we will henceforth write ab for a ∗ b.

18.1.6. Definition. Let A be a unital commutative algebra. In the algebra l(Z+, A) of formal
power series the special sequence x = (0, 1A, 0, 0, 0, . . . ) is called the indeterminant of l(Z+, A).
Notice that the sequence x^n = x x · · · x (n factors) has the property that (x^n)n = 1 and (x^n)k = 0
whenever k ≠ n. It is conventional to take x^0 to be the identity (1A, 0, 0, 0, . . . ) of l(Z+, A).

18.1.7. Remark. The algebra l(Z+, A) of formal power series with coefficients in a unital
commutative algebra A is frequently denoted by A[[x]] and the subalgebra lc(Z+, A) of polynomials
is denoted by A[x].
For many algebraists scalar multiplication is of little interest, so A is taken to be a unital
commutative ring, so that A[[x]] is the ring of formal power series (with coefficients in A) and A[x] is
the polynomial ring (with coefficients in A). In your text, AOLT, A is always a field F. Since a
field can be regarded as a one-dimensional vector space over itself, it is also an algebra. Thus in the
text F[x] is the polynomial algebra with coefficients in F and has as its basis {x^n : n = 0, 1, 2, . . . }.

18.1.8. Definition. A nonzero polynomial p, being an element of lc(Z+, A), has finite support.
So there exists n0 ∈ Z+ such that pn = 0 whenever n > n0. The smallest such n0 is the degree
of the polynomial. We denote it by deg p. A polynomial of degree 0 is a constant polynomial.
The zero polynomial (the additive identity of l(Z+, A)) is also a constant polynomial and many
authors assign its degree to be −∞.
If p is a polynomial of degree n, then pn is the leading coefficient of p. A polynomial is
monic if its leading coefficient is 1.

18.1.9. Definition. Let A be a unital algebra over a field F. For each polynomial
p = ∑_{k=0}^{n} pk x^k with coefficients in F define

p̃ : A → A : a ↦ ∑_{k=0}^{n} pk a^k.

Then p̃ is the polynomial function on A determined by the polynomial p. Also for fixed a ∈ A
define
Φ : F[x] → A : p ↦ p̃(a).
The mapping Φ is the polynomial functional calculus determined by the element a.
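For A = Mn(F) the map p ↦ p̃(a) is easy to compute; a hedged sketch (assuming numpy; Horner's scheme avoids forming the matrix powers explicitly):

```python
import numpy as np

def poly_apply(coeffs, A):
    """Evaluate p~(A) = sum_k p_k A^k by Horner; coeffs[k] is p_k."""
    n = A.shape[0]
    result = np.zeros_like(A)
    for c in reversed(coeffs):
        result = result @ A + c * np.eye(n)
    return result

A = np.array([[0., 1.], [0., 0.]])
print(poly_apply([1., 2., 3.], A))   # I + 2A + 3A^2 = [[1, 2], [0, 1]]
```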

18.1.10. Definition. Let V be a vector space over a field F and T ∈ L(V). A nonzero polynomial p ∈ F[x] such that p̃(T) = 0 is an annihilating polynomial for T. A monic polynomial of
smallest degree that annihilates T is a minimal polynomial for T.

18.1.11. Notation. If T is an operator on a finite dimensional vector space over a field F, we
denote its minimal polynomial in F[x] by mT.

18.2. Exercises (Due: Fri. Feb. 20)


18.2.1. Exercise. If A is a unital commutative algebra, then under the operations defined
in 18.1.3, l(Z+, A) is a unital commutative algebra (whose multiplicative identity is the sequence
(1A, 0, 0, 0, . . . )) and lc(Z+, A) is a unital subalgebra of l(Z+, A).

18.2.2. Exercise. If φ : A → B is a unital algebra homomorphism between unital commutative
algebras, then the map

l(Z+, φ) : l(Z+, A) → l(Z+, B) : f ↦ (φ(fn))_{n=0}^{∞}

is also a unital homomorphism of unital commutative algebras. The pair of maps A ↦ l(Z+, A) and
φ ↦ l(Z+, φ) is a covariant functor from the category of unital commutative algebras and unital
algebra homomorphisms to itself.

18.2.3. Exercise. Let A be a unital commutative algebra. If p is a nonzero polynomial in
lc(Z+, A), then

p = ∑_{k=0}^{n} pk x^k    where n = deg p.

18.2.4. Exercise. If p and q are polynomials with coefficients in a unital commutative algebra
A, then
(i) deg(p + q) ≤ max{deg p, deg q}, and
(ii) deg(pq) ≤ deg p + deg q.
If A is a field, then equality holds in (ii).

18.2.5. Exercise. Show that if A is a unital commutative algebra, then so is l(A, A) under
pointwise operations of addition, multiplication, and scalar multiplication.

18.2.6. Exercise (†). Prove that for each a ∈ A the polynomial functional calculus Φ : F[x] →
A defined in 18.1.9 is a unital algebra homomorphism. Show also that the map Ψ : F[x] →
l(A, A) : p ↦ p̃ is a unital algebra homomorphism. (Pay especially close attention to the fact
that “multiplication” on F[x] is convolution whereas “multiplication” on l(A, A) is defined pointwise.) What is the image under Φ of the indeterminant x? What is the image under Ψ of the
indeterminant x?

18.2.7. Exercise. AOLT, section 2.7, exercise 2.

18.2.8. Exercise (†). Give an example to show that the polynomial functional calculus Φ may
fail to be injective. Hint. The preceding exercise 18.2.7.

18.2.9. Exercise. Let A be a unital algebra over a field F with finite dimension m. Show that
for every a ∈ A there exists a polynomial p ∈ F[x] such that 1 ≤ deg p ≤ m and p̃(a) = 0.

18.2.10. Exercise. Let V be a finite dimensional vector space. Prove that every T ∈ L(V) has
a minimal polynomial. Hint. Use exercise 18.2.9.

18.2.11. Exercise. Let f and d be polynomials with coefficients in a field F and suppose that
d ≠ 0. Then there exist unique polynomials q and r in F[x] such that
(i) f = dq + r and
(ii) r = 0 or deg r < deg d.
Hint. Let f = ∑_{j=0}^{k} fj x^j and d = ∑_{j=0}^{m} dj x^j be in standard form. The case k < m is
trivial. For k ≥ m suppose the result to be true for all polynomials of degree strictly less than k.
What can you say about f̂ = f − p where p = (fk dm^{−1}) x^{k−m} d?

18.2.12. Exercise. Let V be a finite dimensional vector space and T ∈ L(V ). Prove that the
minimal polynomial for T is unique.
CHAPTER 19

Polynomials (continued)

19.1. Background
Topics: irreducible polynomial, prime polynomial, Lagrange interpolation formula, unique fac-
torization theorem, greatest common divisor, relatively prime.

19.1.1. Definition. A polynomial p ∈ F[x] is irreducible (over F) provided that it is not
constant and whenever p = fg with f, g ∈ F[x], then either f or g is constant.

19.1.2. Definition. Let t0, t1, . . . , tn be distinct elements of a field F. For 0 ≤ k ≤ n define
pk ∈ F[x] by

pk = ∏_{j=0, j≠k}^{n} (x − tj)/(tk − tj).

19.1.3. Definition. If F is a field, a polynomial p ∈ F[x] is reducible over F if there exist
polynomials q and r in F[x] both of degree at least one such that p = qr. A polynomial which
is not reducible over F is irreducible over F. A nonscalar irreducible polynomial is a prime
polynomial over F (or is a prime in F[x]).


19.2. Exercises (Due: Mon. Feb. 23)


19.2.1. Exercise. Let T be an operator on a finite dimensional vector space. Show that T is
invertible if and only if the constant term of its minimal polynomial is not zero. Explain, for an
invertible operator T , how to write its inverse as a polynomial in T .

19.2.2. Exercise (†). Let T be an operator on a finite dimensional vector space over a field F.
If p ∈ F[x] and p̃(T) = 0, then mT divides p. (If p, p1 ∈ F[x], we say that p1 divides p if there
exists q ∈ F[x] such that p = p1 q.)

19.2.3. Exercise. Let T be the operator on the real vector space R2 whose matrix representation (with respect to the standard basis) is

[0  −1]
[1   0].

Find the minimal polynomial mT of T and show that it is irreducible (over R).

19.2.4. Exercise. Let T be the operator on the complex vector space C2 whose matrix representation (with respect to the standard basis) is

[0  −1]
[1   0].

Find the minimal polynomial mT of T and show that it is not irreducible (over C).

19.2.5. Exercise. Let p be a polynomial of degree m ≥ 1 in F[x]. If Jp is the principal ideal
generated by p, that is, if
Jp := {pf : f ∈ F[x]},
then dim F[x]/Jp = m. Hint. Show that B = { [x^k] : k = 0, 1, . . . , m − 1} is a basis for the vector
space F[x]/Jp.

19.2.6. Exercise. Let T be an operator on a finite dimensional vector space V over a field
F and Φ : F[x] → L(V) be the associated polynomial functional calculus. If p is a polynomial of
degree m ≥ 1 in F[x] and Jp is the principal ideal generated by p, then the sequence

0 --> Jp --> F[x] --Φ--> ran Φ --> 0

is exact.

19.2.7. Exercise (Lagrange Interpolation Formula). Prove that the polynomials defined in 19.1.2
form a basis for the vector space V of all polynomials with coefficients in F and degree less than or
equal to n and that for each polynomial q ∈ V

q = ∑_{k=0}^{n} q(tk) pk.

19.2.8. Exercise. Use the Lagrange Interpolation Formula to find the polynomial with coeffi-
cients in R and degree no greater than 3 whose values at −1, 0, 1, and 2 are, respectively, −6, 2,
−2, and 6.
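A computational sketch of exercise 19.2.8 using the basis polynomials of definition 19.1.2 (assuming sympy is available; the print statement reveals the answer, so compute by hand first):

```python
import sympy as sp

x = sp.symbols('x')
ts = [-1, 0, 1, 2]                 # the interpolation nodes t_k
qs = [-6, 2, -2, 6]                # the prescribed values q(t_k)

def p_k(k):
    """The basis polynomial p_k of definition 19.1.2."""
    expr = sp.Integer(1)
    for j, tj in enumerate(ts):
        if j != k:
            expr *= sp.Rational(1, ts[k] - tj) * (x - tj)
    return expr

q = sp.expand(sum(qs[k]*p_k(k) for k in range(len(ts))))
print(q)                           # the interpolating polynomial of 19.2.8
```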

19.2.9. Exercise. Let F be a field and p, q, and r be polynomials in F[x]. If p is a prime in
F[x] and p divides qr, then p divides q or p divides r.

19.2.10. Exercise. Prove the Unique Factorization Theorem: Let F be a field. A nonscalar
monic polynomial in F[x] can be factored in exactly one way (except for the order of the factors)
as a product of monic primes in F[x].

19.2.11. Exercise. Let F be a field. Then every nonzero ideal in F[x] is principal.

19.2.12. Exercise. If p1, . . . , pn are polynomials, not all zero, with coefficients in a field F, then
there exists a unique monic polynomial d in the ideal generated by p1, . . . , pn such that d divides pk
for each k = 1, . . . , n and, furthermore, any polynomial which divides each pk also divides d. This
polynomial d is the greatest common divisor of the pk's. The polynomials pk are relatively
prime if their greatest common divisor is 1.
CHAPTER 20

Invariant Subspaces

20.1. Background
Topics: invariant subspaces, invariant subspace lattice, reducing subspaces, transitive algebra,
Burnside’s theorem, triangulable. (See AOLT sections 3.1 and 3.3.)

20.1.1. Definition. An operator T on a vector space V is reduced by a pair (M, N) of
subspaces M and N of V if
(i) V = M ⊕ N,
(ii) M is invariant under T, and
(iii) N is invariant under T.


20.2. Exercises (Due: Wed. Feb. 25)


 
20.2.1. Exercise. Let S be the operator on R3 whose matrix representation is

[3  4  2]
[0  1  2]
[0  0  0].

Find three one dimensional subspaces U, V, and W of R3 which are invariant under S.
 
20.2.2. Exercise. Let T be the operator on R3 whose matrix representation is

[0  0  2]
[0  2  0]
[2  0  0].

Find a two dimensional subspace U of R3 which is invariant under T.

20.2.3. Exercise. Find infinitely many subspaces of the vector space of polynomial functions
on R which are invariant under the differentiation operator.
 
20.2.4. Exercise. Let T be the operator on R3 whose matrix representation is

[ 2   0  0]
[−1   3  2]
[ 1  −1  0].

Find a plane and a line in R3 which reduce T.

20.2.5. Exercise. AOLT, section 3.7, exercise 1.

20.2.6. Exercise. AOLT, section 3.7, exercise 2.

20.2.7. Exercise (†). Let M be a subspace of a vector space V and T ∈ L(V ). Show that
if M is invariant under T , then ET E = T E for every projection E onto M . Show also that if
ET E = T E for some projection E onto M , then M is invariant under T .

20.2.8. Exercise. Suppose a vector space V has the direct sum decomposition V = M ⊕ N .
An operator T on V is reduced by the pair (M, N ) if and only if ET = T E, where E = EM N is
the projection along M onto N .

20.2.9. Exercise. Let M and N be complementary subspaces of a vector space V (that is, V
is the direct sum of M and N ) and let T be an operator on V . Show that if M is invariant under
T , then M ⊥ is invariant under T ∗ and that if T is reduced by the pair (M, N ), then T ∗ is reduced
by the pair (M ⊥ , N ⊥ ).

20.2.10. Exercise. Let T : V → W be linear and S : W → V a left inverse for T. Then
(a) W = ran T ⊕ ker S, and
(b) TS is the projection along ker S onto ran T.

20.2.11. Exercise. If V is a vector space, V = M ⊕ N = M′ ⊕ N, and M ⊆ M′, then M = M′.


CHAPTER 21

The Spectral Theorem for Vector Spaces

21.1. Background
Topics: similarity of operators, diagonalizable, resolution of the identity, spectral theorem for vector
spaces. (See section 1.1 of my notes on operator algebras [12].)

21.1.1. Definition. Suppose that on a vector space V there exist projection operators E1 , . . . ,
En such that
(i) IV = E1 + E2 + · · · + En and
(ii) Ei Ej = 0 whenever i ≠ j.
Then we say that the family {E1 , E2 , . . . , En } of projections is a resolution of the identity.

21.1.2. Definition. Two operators on a vector space (or two n × n matrices) R and T are
similar if there exists an invertible operator (or matrix) S such that R = S −1 T S.

21.1.3. Notation. Let α1, . . . , αn be elements of a field F. Then diag(α1, . . . , αn) denotes the
n × n matrix whose entries are all zero except on the main diagonal where they are α1, . . . , αn.
That is, if [djk] = diag(α1, . . . , αn), then dkk = αk for each k, and djk = 0 whenever j ≠ k.
Such a matrix is a diagonal matrix.

21.1.4. Definition. Let V be a vector space of finite dimension n. An operator T on V is
diagonalizable if it has n linearly independent eigenvectors (or, equivalently, if V has a basis of
eigenvectors).

21.1.5. Theorem (Spectral Theorem for Vector Spaces). If T is a diagonalizable operator on
a finite dimensional vector space V, then

T = ∑_{k=1}^{n} λk Ek

where λ1, . . . , λn are the (distinct) eigenvalues of T and {E1, . . . , En} is the resolution of the identity
whose projections are associated with the corresponding eigenspaces M1, . . . , Mn.
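Numerically the projections Ek can be manufactured from a basis of eigenvectors: if the columns of S are eigenvectors of T and the eigenvalues are distinct, then Ek is (column k of S) times (row k of S^{−1}). A hedged sketch (assuming numpy; the matrix T is an illustrative choice):

```python
import numpy as np

T = np.array([[2., 0.],
              [1., 3.]])                     # an illustrative diagonalizable T
lam, S = np.linalg.eig(T)                    # columns of S are eigenvectors
Sinv = np.linalg.inv(S)

E = [np.outer(S[:, k], Sinv[k, :]) for k in range(len(lam))]

assert np.allclose(sum(E), np.eye(2))                      # E1 + E2 = I
assert np.allclose(E[0] @ E[1], 0)                         # E1 E2 = 0
assert np.allclose(sum(l*P for l, P in zip(lam, E)), T)    # T = sum lam_k E_k
```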


21.2. Exercises (Due: Fri. Feb. 27)


21.2.1. Exercise. If {E1, E2, . . . , En} is a resolution of the identity on a vector space V, then
V = ⊕_{k=1}^{n} ran Ek.

21.2.2. Exercise. If M1 ⊕ · · · ⊕ Mn is a direct sum decomposition of a vector space V, then
the family {E1, E2, . . . , En} of the associated projections is a resolution of the identity.

21.2.3. Exercise. If v is a nonzero eigenvector associated with an eigenvalue λ of an operator
T ∈ L(V) and p is a polynomial in F[x], then p(T)v = p(λ)v.

21.2.4. Exercise. Let λ1, . . . , λn be the distinct eigenvalues of an operator T on a finite
dimensional vector space V and let M1, . . . , Mn be the corresponding eigenspaces. If U = ∑_{k=1}^{n} Mk,
then dim U = ∑_{k=1}^{n} dim Mk. Furthermore, if Bk is a basis for Mk (k = 1, . . . , n), then ∪_{k=1}^{n} Bk is a
basis for U.

21.2.5. Exercise. If R and T are operators on a vector space, is RT always similar to TR?
What if R is invertible?

21.2.6. Exercise. Let R and T be operators on a vector space. If R is similar to T and p ∈ F[x]
is a polynomial, then p(R) is similar to p(T ).

21.2.7. Exercise. If R and T are operators on a vector space, R is similar to T, and R is
invertible, then T is invertible and T−1 is similar to R−1.

21.2.8. Exercise. Let A be an n × n matrix with entries from a field F. Then A, regarded as
an operator on Fn , is diagonalizable if and only if it is similar to a diagonal matrix.

21.2.9. Exercise (†). Let E1 , . . . , En be the projections associated with a direct sum decom-
position V = M1 ⊕· · ·⊕Mn of a vector space V and let T be an operator on V . Then each subspace
Mk is invariant under T if and only if T commutes with each projection Ek .

21.2.10. Exercise. Prove the spectral theorem for vector spaces 21.1.5.

21.2.11. Exercise. Let T be an operator on a finite dimensional vector space V. If λ1, . . . , λn
are distinct scalars and E1, . . . , En are nonzero operators on V such that
(i) T = ∑_{k=1}^{n} λk Ek,
(ii) I = ∑_{k=1}^{n} Ek, and
(iii) Ej Ek = 0 whenever j ≠ k,
then T is diagonalizable, the scalars λ1, . . . , λn are the eigenvalues of T, and the operators E1, . . . ,
En are projections whose ranges are the eigenspaces of T.

21.2.12. Exercise. If T is an operator on a finite dimensional vector space V over a field F
and p ∈ F[x], then

p(T) = ∑_{k=1}^{n} p(λk) Ek

where λ1, . . . , λn are the (distinct) eigenvalues of T and E1, . . . , En are the projections associated
with the corresponding eigenspaces M1, . . . , Mn.
CHAPTER 22

The Spectral Theorem for Vector Spaces (continued)

22.1. Background
Topics: representation, faithful representation, irreducible representation, Cayley-Hamilton theo-
rem. (See AOLT, section 3.4.)

22.1.1. Definition. If A is an n × n matrix, define the characteristic polynomial cA of
A to be the determinant of xI − A. (Note that xI − A is a matrix with polynomial entries.) As
you would expect, the characteristic polynomial of an operator on a finite dimensional space (with
basis B) is the characteristic polynomial of the matrix representation of that operator (with respect
to B).

22.1.2. Theorem (Cayley-Hamilton Theorem). If T is an operator on a finite dimensional
vector space, then the characteristic polynomial of T annihilates T.


22.2. Exercises (Due: Mon. Mar. 2)


22.2.1. Exercise (†). AOLT, section 3.7, exercise 12.

22.2.2. Exercise. If T is a diagonalizable operator on a finite dimensional vector space V, then
the projections E1, . . . , En associated with the decomposition of V as a direct sum ⊕ Mk of its
eigenspaces can be expressed as polynomials in T. Hint. Apply the Lagrange interpolation formula
with the tk's being the eigenvalues of T.
 
22.2.3. Exercise. Let T be the operator on R3 whose matrix representation is

[0  0  2]
[0  2  0]
[2  0  0].

Use exercise 22.2.2 to write T as a linear combination of projections.
 
22.2.4. Exercise. Let T be the operator on R3 whose matrix representation is

[ 2  −2  1]
[−1   1  1]
[−1   2  0].

Use exercise 22.2.2 to write T as a linear combination of projections.

22.2.5. Exercise. Let T be the operator on R3 whose matrix representation is given in exer-
cise 22.2.2. Write T as a linear combination of projections.

22.2.6. Exercise. Let T be an operator on a finite dimensional vector space over a field F.
Show that λ is an eigenvalue of T if and only if cT (λ) = 0.

22.2.7. Exercise (††). Let T be an operator on a finite dimensional vector space. Show that T
is diagonalizable if and only if its minimal polynomial is of the form ∏_{k=1}^{n} (x − λk) for some distinct
elements λ1, . . . , λn of the scalar field F.

22.2.8. Exercise. If T is an operator on a finite dimensional vector space, then its minimal
polynomial and characteristic polynomial have the same roots.

22.2.9. Exercise. Prove the Cayley-Hamilton theorem 22.1.2. Hint. A proof is in the text (see
AOLT, pages 92–94). The problem here is to fill in missing details and provide in a coherent fashion
any background material necessary to understanding the proof.
CHAPTER 23

Diagonalizable Plus Nilpotent Decomposition

23.1. Background
Topics: primary decomposition theorem, diagonalizable plus nilpotent decomposition, Jordan
normal form.

23.1.1. Theorem (Primary Decomposition Theorem). Let T ∈ L(V) where V is a finite dimensional vector space. Factor the minimal polynomial

mT = ∏_{k=1}^{n} pk^{rk}

into powers of distinct irreducible monic polynomials p1, . . . , pn and let Wk = ker pk(T)^{rk} for
each k. Then
(i) V = ⊕_{k=1}^{n} Wk,
(ii) each Wk is invariant under T, and
(iii) if Tk = T|Wk, then mTk = pk^{rk}.

In the preceding theorem the spaces Wk are the generalized eigenspaces of the operator T .
23.1.2. Theorem. Let T be an operator on a finite dimensional vector space V. Suppose that
the minimal polynomial for T factors completely into linear factors

mT(x) = (x − λ1)^{d1} · · · (x − λr)^{dr}

where λ1, . . . , λr are the (distinct) eigenvalues of T. For each k let Wk be the generalized eigenspace
ker (T − λk I)^{dk} and let E1, . . . , Er be the projections associated with the direct sum decomposition
V = W1 ⊕ W2 ⊕ · · · ⊕ Wr .
Then this family of projections is a resolution of the identity, each Wk is invariant under T , the
operator
D = λ1 E1 + · · · + λr Er
is diagonalizable, the operator
N = T − D
is nilpotent, and N commutes with D.
Furthermore, if D1 is diagonalizable, N1 is nilpotent, D1 + N1 = T , and D1 N1 = N1 D1 , then
D1 = D and N1 = N .
23.1.3. Corollary. Every operator on a finite dimensional complex vector space can be written
as the sum of two commuting operators, one diagonalizable and the other nilpotent.
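For experimenting with these decompositions, sympy's Jordan form does the bookkeeping: if T = P J P^{−1}, then keeping only the diagonal of J gives D, and N = T − D. A hedged sketch (assuming sympy is available; the matrix is an illustrative non-diagonalizable example):

```python
import sympy as sp

T = sp.Matrix([[1, 1],
               [0, 1]])              # not diagonalizable: m_T = (x - 1)^2
P, J = T.jordan_form()               # T = P * J * P**-1

D = P * sp.diag(*[J[i, i] for i in range(J.rows)]) * P.inv()
N = T - D

assert N**2 == sp.zeros(2, 2)        # N is nilpotent
assert D*N == N*D                    # D and N commute
assert D + N == T
```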


23.2. Exercises (Due: Wed. Mar. 4)


23.2.1. Exercise. Prove the primary decomposition theorem 23.1.1.

23.2.2. Exercise. Prove theorem 23.1.2.


 
23.2.3. Exercise. Let T be the operator on R2 whose matrix representation is

[ 2  1]
[−1  4].

(a) Explain briefly why T is not diagonalizable.
(b) Find the diagonalizable and nilpotent parts of T.

Answer: D = [a  b]    and    N = [−c  c]    where a = ___ , b = ___ , and c = ___ .
            [b  a]               [−c  c]
 
23.2.4. Exercise (†). Let T be the operator on R3 whose matrix representation is

[ 0   0  −3]
[−2   1  −2]
[ 2  −1   5].

(a) Find D and N, the diagonalizable and nilpotent parts of T. Express these as polynomials
in T.
(b) Find a matrix S which diagonalizes D.
(c) Let

[D1] = [ 2  −1  −1]              [N1] = [−2   1  −2]
       [−1   2  −1]      and            [−1  −1  −1].
       [−1  −1   2]                     [ 3   0   3]

Show that D1 is diagonalizable, that N1 is nilpotent, and that T = D1 + N1. Why does this
not contradict the uniqueness claim made in theorem 23.1.2?
 
23.2.5. Exercise. Let T be the operator on R4 whose matrix representation is

[ 0   1  0  −1]
[−2   3  0  −1]
[−2   1  2  −1]
[ 2  −1  0   3].

(a) The characteristic polynomial of T is (λ − 2)^p where p = ___ .
(b) The minimal polynomial of T is (λ − 2)^r where r = ___ .
(c) The diagonalizable part of T is

D = [a  b  b  b]
    [b  a  b  b]
    [b  b  a  b]
    [b  b  b  a]

where a = ___ and b = ___ .
(d) The nilpotent part of T is

N = [−a   b  c  −b]
    [−a   b  c  −b]
    [−a   b  c  −b]
    [ a  −b  c   b]

where a = ___ , b = ___ , and c = ___ .
 
23.2.6. Exercise. Let T be the operator on R5 whose matrix representation is

[1   0   0   1  −1]
[0   1  −2   3  −3]
[0   0  −1   2  −2]
[1  −1   1   0   1]
[1  −1   1  −1   2].

(a) Find the characteristic polynomial of T.
Answer: cT(λ) = (λ + 1)^p (λ − 1)^q where p = ___ and q = ___ .
(b) Find the minimal polynomial of T.
Answer: mT(λ) = (λ + 1)^r (λ − 1)^s where r = ___ and s = ___ .
(c) Find the eigenspaces V1 and V2 of T.
Answer: V1 = span{(a, 1, b, a, a)} where a = ___ and b = ___ ; and
V2 = span{(1, a, b, b, b), (b, b, b, 1, a)} where a = ___ and b = ___ .
(d) Find the diagonalizable part of T.
Answer:

D = [a  b   b  b   b]
    [b  a  −c  c  −c]
    [b  b  −a  c  −c]
    [b  b   b  a   b]
    [b  b   b  b   a]

where a = ___ , b = ___ , and c = ___ .
(e) Find the nilpotent part of T.
Answer:

N = [a   a  a   b  −b]
    [a   a  a   b  −b]
    [a   a  a   a   a]
    [b  −b  b  −b   b]
    [b  −b  b  −b   b]

where a = ___ and b = ___ .
(f) Find a matrix S which diagonalizes the diagonalizable part D of T. What is the diagonal
form Λ of D associated with this matrix?
Answer:

S = [a  b  a  a  a]
    [b  a  b  a  a]
    [b  a  a  b  a]
    [a  a  a  b  b]
    [a  a  a  a  b]

where a = ___ and b = ___ , and

Λ = [−a  0  0  0  0]
    [ 0  a  0  0  0]
    [ 0  0  a  0  0]
    [ 0  0  0  a  0]
    [ 0  0  0  0  a]

where a = ___ .

23.2.7. Exercise. Prepare and deliver a (30–45 minute) blackboard presentation on the Jordan
normal form of a matrix.
CHAPTER 24

Inner Product Spaces

24.1. Background
Topics: inner products, norm, unit vector, square summable sequences, the Schwarz (or Cauchy-
Schwarz) inequality. (See AOLT, section 1.3 and also section 1.2 of my lecture notes on operator
algebras [12].)

24.1.1. Definition. If x is a vector in Rn, then the norm (or length) of x is defined by
‖x‖ = √⟨x, x⟩.
A vector of length 1 is a unit vector.
24.1.2. Theorem (Pythagorean theorem). If x and y are perpendicular vectors in an inner
product space, then
‖x + y‖² = ‖x‖² + ‖y‖².

24.1.3. Theorem (Parallelogram law). If x and y are vectors in an inner product space, then
‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².

24.1.4. Theorem (Polarization identity). If x and y are vectors in a complex inner product
space, then
⟨x, y⟩ = (1/4)(‖x + y‖² − ‖x − y‖² + i‖x + iy‖² − i‖x − iy‖²).
What is the correct formula for a real inner product space?


24.2. Exercises (Due: Mon. Mar. 9)


24.2.1. Exercise. Prove the Pythagorean theorem 24.1.2.

24.2.2. Exercise. Prove the parallelogram law 24.1.3.

24.2.3. Exercise. On the vector space C([0, 1]) of continuous real valued functions on the
interval [0, 1] the uniform norm is defined by ‖f‖u = sup{|f(x)| : 0 ≤ x ≤ 1}. Prove that the
uniform norm is not induced by an inner product. That is, prove that there is no inner product on
C([0, 1]) such that ‖f‖u = √⟨f, f⟩ for all f ∈ C([0, 1]). Hint. Use exercise 24.2.2.

24.2.4. Exercise. Prove the polarization identity 24.1.4.

24.2.5. Exercise. If a1, . . . , an > 0, then

(∑_{j=1}^{n} aj)(∑_{k=1}^{n} 1/ak) ≥ n².

The proof of this is obvious from the Schwarz inequality if we choose x and y to be what?

24.2.6. Exercise (†). Notice that part (a) is a special case of part (b).
(a) Show that if a, b, c > 0, then ((1/2)a + (1/3)b + (1/6)c)² ≤ (1/2)a² + (1/3)b² + (1/6)c².
(b) Show that if a1, . . . , an, w1, . . . , wn > 0 and ∑_{k=1}^{n} wk = 1, then

(∑_{k=1}^{n} ak wk)² ≤ ∑_{k=1}^{n} ak² wk.
24.2.7. Exercise. Show that if ∑_{k=1}^{∞} ak² converges, then ∑_{k=1}^{∞} k^{−1} ak converges absolutely.

24.2.8. Exercise. A sequence (ak) of (real or) complex numbers is square summable if
∑_{k=1}^{∞} |ak|² < ∞. The vector space of all square summable sequences of real numbers (respectively,
complex numbers) is denoted by l2(R) (respectively, l2(C)). When no confusion will result, both
are denoted by l2. If a, b ∈ l2, define

⟨a, b⟩ = ∑_{k=1}^{∞} ak b̄k.

Show that this definition makes sense and makes l2 into an inner product space.

24.2.9. Exercise (†). Use vector methods (no coordinates, no major results from Euclidean
geometry) to show that the midpoint of the hypotenuse of a right triangle is equidistant from the
vertices. Hint. Let △ABC be a right triangle and O be the midpoint of the hypotenuse AB. What
can you say about ⟨AO + OC, CO + OB⟩ (the directed segments regarded as vectors)?

24.2.10. Exercise. Use vector methods to show that if a parallelogram has perpendicular
diagonals, then it is a rhombus (that is, all four sides have equal length). Hint. Let ABCD be
a parallelogram. Express the dot (inner) product of the diagonals AC and DB in terms of the
lengths of the sides AB and BC.

24.2.11. Exercise. Use vector methods to show that an angle inscribed in a semicircle is a
right angle.

24.2.12. Exercise. Let a be a vector in an inner product space H. If ⟨x, a⟩ = 0 for every
x ∈ H, then a = 0.

24.2.13. Exercise. Let S, T : H → K be linear maps between inner product spaces H and K.
If ⟨Sx, y⟩ = ⟨T x, y⟩ for every x ∈ H and y ∈ K, then S = T.
CHAPTER 25

Orthogonality and Adjoints

25.1. Background
Topics: orthogonality, orthogonal complement, adjoint, involution, ∗ -algebra, ∗ -homomorphism,
self-adjoint, Hermitian, normal, unitary.

25.1.1. Definition. An involution on an algebra A is a map x ↦ x∗ from A into A which
satisfies
(i) (x + y)∗ = x∗ + y∗,
(ii) (αx)∗ = ᾱ x∗,
(iii) x∗∗ = x, and
(iv) (xy)∗ = y∗x∗
for all x, y ∈ A and α ∈ E. An algebra on which an involution has been defined is a ∗-algebra
(pronounced “star algebra”). An algebra homomorphism φ between ∗-algebras which preserves
involution (that is, such that φ(a∗) = (φ(a))∗) is a ∗-homomorphism (pronounced “star homomorphism”). A ∗-homomorphism φ : A → B between unital algebras is said to be unital if φ(1A) = 1B.

25.1.2. Definition. Let H and K be inner product spaces and T : H → K be a linear map. If
there exists a function T ∗ : K → H which satisfies
⟨T x, y⟩ = ⟨x, T ∗y⟩
for all x ∈ H and y ∈ K, then T ∗ is the adjoint of T .
When H and K are real vector spaces, the adjoint of T is usually called the transpose of T
and the notation T t is used (rather than T ∗ ).

25.1.3. Definition. An element a of a ∗-algebra A is self-adjoint (or Hermitian) if a∗ = a.
It is normal if a∗a = aa∗. And it is unitary if a∗a = aa∗ = 1. The set of all self-adjoint elements
of A is denoted by H(A), the set of all normal elements by N(A), and the set of all unitary elements
by U(A).


25.2. Exercises (Due: Wed. Mar. 11)


25.2.1. Exercise. Let H be an inner product space and a ∈ H. Define ψa : H → E by
ψa(x) = ⟨x, a⟩ for all x ∈ H. Then ψa is a linear functional on H. (See the last sentence of the
proof of Theorem 1.15, AOLT, page 15.)

25.2.2. Exercise (†). Give a proof of the Riesz representation theorem that is much simpler
than the one given in AOLT, page 15, Theorem 1.15. Hint. Use exercises 9.2.9 and 9.2.13.

25.2.3. Exercise. Show by example that the Riesz representation theorem (AOLT, page 15,
Theorem 1.15) does not hold (as stated) in infinite dimensional spaces. Hint. Consider the function
φ : lc(N) → E : x ↦ ∑_{k=1}^{∞} αk where x = ∑_{k=1}^{∞} αk ek, the ek's being the standard basis
vectors for lc(N).

25.2.4. Exercise. Show that if S is a set of mutually perpendicular vectors in an inner product
space and 0 ∉ S, then the set S is linearly independent.

25.2.5. Exercise. Let S and T be subsets of an inner product space H.
(a) S⊥ is a subspace of H.
(b) If S ⊆ T, then T⊥ ⊆ S⊥.
(c) (span S)⊥ = S⊥.

25.2.6. Exercise. Show that if M is a subspace of a finite dimensional inner product space H,
then H = M ⊕ M ⊥ . Show also that this need not be true in an infinite dimensional space.

25.2.7. Exercise. Let M be a subspace of an inner product space H.
(a) Show that M ⊆ M⊥⊥.
(b) Prove that equality need not hold in (a).
(c) Show that if H is finite dimensional, then M = M⊥⊥.

25.2.8. Exercise. Let M and N be subspaces of an inner product space H.
(a) Show that (M + N)⊥ = (M ∪ N)⊥ = M⊥ ∩ N⊥.
(b) Show that if H is finite dimensional, then (M ∩ N)⊥ = M⊥ + N⊥.

25.2.9. Exercise. It seems unlikely that the similarity between the results of the exercises 25.2.5
and 25.2.8 and those you obtained in exercises 10.2.5 and 10.2.6 could be purely coincidental.
Explain carefully what is going on here.

25.2.10. Exercise. Show that if U is the unilateral shift operator on l2
U : l2 → l2 : (x1, x2, x3, . . . ) ↦ (0, x1, x2, . . . ),
then its adjoint is given by
U∗ : l2 → l2 : (x1, x2, x3, . . . ) ↦ (x2, x3, x4, . . . ).

25.2.11. Exercise (Multiplication operators). Let (X, A, µ) be a sigma-finite measure space
and L2(X) be the Hilbert space of all (equivalence classes of) complex valued functions on X
which are square integrable with respect to µ. Let φ be an essentially bounded complex valued
µ-measurable function on X. Define Mφ on L2(X) by Mφ(f) := φf. Then Mφ is an operator on
L2(X), ‖Mφ‖ = ‖φ‖∞, and Mφ∗ = Mφ̄.
25.2. EXERCISES (DUE: WED. MAR. 11) 83

25.2.12. Exercise. Let a and b be elements of a ∗ -algebra. Show that a commutes with b if
and only if a∗ commutes with b∗ .

25.2.13. Exercise. Show that in a unital ∗ -algebra 1∗ = 1.

25.2.14. Exercise. Let a be an element of a unital ∗-algebra. Show that a∗ is invertible if and
only if a is. And when a is invertible we have
(a∗)−1 = (a−1)∗.

25.2.15. Exercise. Let a be an element of a unital ∗-algebra. Show that λ ∈ σ(a) if and only
if λ̄ ∈ σ(a∗).
CHAPTER 26

Orthogonal Projections

26.1. Background
Topics: orthogonal projections on an inner product space, projections in an algebra with involu-
tion, orthogonality of projections in a ∗ -algebra.

26.1.1. Definition. An operator P on an inner product space H is an orthogonal projection if it is self-adjoint and idempotent. (On a real inner product space, of course, the conditions
are symmetric and idempotent.)

26.1.2. Definition. A projection in a ∗-algebra A is an element p of the algebra which is
idempotent (p² = p) and self-adjoint (p∗ = p). The set of all projections in A is denoted by P(A).
Notice that projections in ∗-algebras correspond to orthogonal projections on inner product spaces
and not to (the more general) projections on vector spaces.

26.1.3. Definition. Two projections p and q in a ∗-algebra are orthogonal, written p ⊥ q,
if pq = 0.


26.2. Exercises (Due: Fri. Mar. 13)


26.2.1. Exercise. Let T : H → K be a linear map between inner product spaces. Show that if
the adjoint of T exists, then it is unique. (That is, there is at most one function T ∗ : K → H that
satisfies ⟨T x, y⟩ = ⟨x, T ∗y⟩ for all x ∈ H and y ∈ K.)

26.2.2. Exercise (†). Let T : H → K be a linear map between inner product spaces. Show
that if the adjoint of T exists, then it is linear.

26.2.3. Exercise. Let S and T be operators on an inner product space H. Then (S + T)∗ =
S∗ + T∗ and (αT)∗ = ᾱT∗ for every α ∈ E. (See AOLT, Theorem 1.17.)

26.2.4. Exercise. Let T : H → K be a linear map between inner product spaces. Show that if
the adjoint of T exists, then so does the adjoint of T ∗ and T ∗∗ = T .

26.2.5. Exercise. Let S : H → K and T : K → L be linear maps between complex inner
product spaces. Show that if S and T both have adjoints, then so does their composite TS and
(TS)∗ = S∗T∗.
(See AOLT, Theorem 1.17 and section 1.8, exercise 12.)

26.2.6. Exercise. Let H be an inner product space and M and N be subspaces of H such that
H = M + N and M ∩ N = {0}. (That is, we suppose that H is the vector space direct sum of M
and N.) Also let P = EN M be the projection of H along N onto M. Prove that P is self-adjoint
(P∗ exists and P∗ = P) if and only if M ⊥ N.

26.2.7. Exercise. If P is an orthogonal projection on an inner product space H, then the
space is the orthogonal direct sum of the range of P and its kernel; that is, H = ran P + ker P and
ran P ⊥ ker P.

26.2.8. Exercise. Let p and q be projections in a ∗-algebra. Then pq is a projection if and
only if pq = qp.

26.2.9. Exercise. Let P and Q be orthogonal projections on an inner product space V. If
PQ = QP, then PQ is an orthogonal projection whose kernel is ker P + ker Q and whose range is
ran P ∩ ran Q.

26.2.10. Exercise (†). Let T : H → K be a linear map between inner product spaces. Show
that
ker T ∗ = (ran T )⊥ .
Is there a relationship between T being surjective and T ∗ being injective?

26.2.11. Exercise. Let T : H → K be a linear map between inner product spaces. Show that
ker T = (ran T ∗ )⊥ .
Is there a relationship between T being injective and T ∗ being surjective?

26.2.12. Exercise. It seems unlikely that the similarity between the results of the two preceding
exercises and those you obtained in exercises 11.2.3 and 11.2.4 could be purely coincidental. Explain
carefully what is going on here.

26.2.13. Exercise. A necessary and sufficient condition for two projections p and q in a ∗-algebra to be orthogonal is that pq + qp = 0.

26.2.14. Exercise. Let p and q be projections in a ∗ -algebra. Then p + q is a projection if and


only if p and q are orthogonal.

26.2.15. Exercise. Let P and Q be orthogonal projections on an inner product space V . Then
P ⊥ Q if and only if ran P ⊥ ran Q. In this case P + Q is an orthogonal projection whose kernel is
ker P ∩ ker Q and whose range is ran P + ran Q.
CHAPTER 27

The Spectral Theorem for Inner Product Spaces

27.1. Background
Topics: subprojections, partial isometries, initial and final projections, orthogonal resolutions of
the identity, the spectral theorem for complex inner product spaces.

27.1.1. Definition. If p and q are projections in a ∗ -algebra we write p ≤ q if p = pq. In this


case we say that p is a subprojection of q or that p is smaller than q. (Note: it is easy to see
that the condition p = pq is equivalent to p = qp.)

27.1.2. Definition. An element v of a ∗ -algebra is a partial isometry if vv ∗ v = v. If v is a


partial isometry, then v ∗ v is the initial projection of v and vv ∗ is its final projection.

27.1.3. Definition. Let M1 ⊕ · · · ⊕ Mn be an orthogonal direct sum decomposition of an inner


product space V . For each k let Pk be the orthogonal projection onto Mk . The projections P1 ,
. . . Pn are the orthogonal projections associated with the orthogonal direct sum
decomposition V = M1 ⊕ · · · ⊕ Mn . The family {P1 , . . . , Pn } is an orthogonal resolution
of the identity. (Compare this with definitions 15.1.3 and 21.1.1.)

27.1.4. Definition. Two elements a and b of a ∗ -algebra A are unitarily equivalent if there
exists a unitary element u of A such that b = u∗ au.

27.1.5. Definition. An operator T on a complex inner product space V is unitarily diago-


nalizable if there exists an orthonormal basis for V consisting of eigenvectors of T .

27.1.6. Theorem (Spectral Theorem: Complex Inner Product Space Version). If N is a normal operator on a finite dimensional complex inner product space V, then N is unitarily diagonalizable and can be written as
N = Σ_{k=1}^n λ_k P_k
where λ_1, . . . , λ_n are the (distinct) eigenvalues of N and {P_1, . . . , P_n} is the orthogonal resolution of the identity whose orthogonal projections are associated with the corresponding eigenspaces M_1, . . . , M_n.
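A numerical illustration of the theorem (editorial, not part of the text; numpy is assumed). For a normal matrix with distinct eigenvalues the computed unit eigenvectors are automatically orthogonal, so P_k = u_k u_k^∗ are the orthogonal projections onto the eigenspaces.

    # Check N = sum_k lambda_k P_k for the normal matrix of exercise 27.2.7.
    import numpy as np

    N = np.array([[0.0, 1.0], [-1.0, 0.0]])   # N N* = N* N
    eigvals, U = np.linalg.eig(N)             # columns of U: unit eigenvectors

    P = [np.outer(U[:, k], U[:, k].conj()) for k in range(len(eigvals))]
    recon = sum(lam * Pk for lam, Pk in zip(eigvals, P))

    print(np.allclose(recon, N))              # True
    print([np.allclose(Pk @ Pk, Pk) and np.allclose(Pk, Pk.conj().T) for Pk in P])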


27.2. Exercises (Due: Mon. Mar. 30)


27.2.1. Exercise. If A is a ∗ -algebra, then the relation ≤ defined in 27.1.1 is a partial ordering
on P(A). If A is unital, then 0 ≤ p ≤ 1 for every p ∈ P(A).

27.2.2. Exercise. Let p and q be projections in a ∗ -algebra. Then q − p is a projection if and


only if p ≤ q.

27.2.3. Exercise. Let P and Q be orthogonal projections on an inner product space V . Then
the following are equivalent:
(i) P ≤ Q;
(ii) kP xk ≤ kQxk for all x ∈ V ; and
(iii) ran P ⊆ ran Q.
In this case Q − P is an orthogonal projection whose kernel is ran P + ker Q and whose range is ran Q ⊖ ran P.
Notation: If M and N are subspaces of an inner product space with N ⊆ M, then M ⊖ N denotes the orthogonal complement of N in M (so N ⊥ (M ⊖ N) and M = N ⊕ (M ⊖ N)).

27.2.4. Exercise. If p and q are commuting projections in a ∗ -algebra, then p ∧ q and p ∨ q


exist. In fact, p ∧ q = pq and p ∨ q = p + q − pq.

27.2.5. Exercise. Let v be a partial isometry in a ∗ -algebra, p be its initial projection, and q
be its final projection. Then
(a) v ∗ is a partial isometry,
(b) p is a projection,
(c) p is the smallest projection such that vp = v,
(d) q is a projection, and
(e) q is the smallest projection such that qv = v.

27.2.6. Exercise. Let N be a normal operator on a finite dimensional complex inner product
space H. Show that kN xk = kN ∗ xk for all x ∈ H. (See AOLT, page 21, Proposition 1.22.)

27.2.7. Exercise. Let N be the operator on C^2 whose matrix representation is
[  0  1 ]
[ −1  0 ].
(a) The eigenspace M_1 associated with the eigenvalue −i is the span of (1, ___).
(b) The eigenspace M_2 associated with the eigenvalue i is the span of (1, ___).
(c) The (matrix representations of the) orthogonal projections P_1 and P_2 onto the eigenspaces M_1 and M_2, respectively, are
P_1 = [  a  b ]    and    P_2 = [ a −b ]
      [ −b  a ]                 [ b  a ]
where a = ___ and b = ___.
(d) Write N as a linear combination of the projections found in (c).
Answer: [N] = ___ P_1 + ___ P_2.
(e) A unitary matrix U which diagonalizes [N] is
[  a  a ]
[ −b  b ]
where a = ___ and b = ___. The associated diagonal form Λ = U^∗ [N] U of [N] is ___.
 
27.2.8. Exercise. Let H be the self-adjoint matrix
[ 2      1 + i ]
[ 1 − i  3     ].
(a) Use the spectral theorem to write H as a linear combination of orthogonal projections.
Answer: H = αP_1 + βP_2 where α = ___, β = ___,
P_1 = (1/3) [ 2     −1 − i ]          P_2 = (1/3) [ 1     1 + i ]
            [ ___    ___   ],   and              [ ___    ___  ].
(b) Find a square root of H.
Answer: √H = (1/3) [ 4     1 + i ]
                   [ ___    ___  ].
 
27.2.9. Exercise. Let
N = (1/3) [ 4 + 2i  1 − i   1 − i  ]
          [ 1 − i   4 + 2i  1 − i  ]
          [ 1 − i   1 − i   4 + 2i ].
(a) The matrix N is normal because
N N^∗ = N^∗ N = [ a b b ]
                [ b a b ]
                [ b b a ]
where a = ___ and b = ___.
(b) According to the spectral theorem N can be written as a linear combination of orthogonal projections. Written in this form N = λ_1 P_1 + λ_2 P_2 where λ_1 = ___, λ_2 = ___,
P_1 = [ a a a ]          P_2 = [  b −a −a ]
      [ a a a ],   and         [ −a  b −a ]
      [ a a a ]                [ −a −a  b ]
where a = ___ and b = ___.
(c) A unitary matrix U which diagonalizes N is
[ a −b −c ]
[ a  b −c ]
[ a  d  2c ]
where a = ___, b = ___, c = ___, and d = ___. The associated diagonal form Λ = U^∗ N U of N is ___.
 

27.2.10. Exercise (†). Let A be an n × n matrix of complex numbers. Then A, regarded as an operator on C^n, is unitarily diagonalizable if and only if it is unitarily equivalent to a diagonal matrix.
27.2.11. Exercise. The real and imaginary parts of an element of a ∗ -algebra are self-adjoint.

27.2.12. Exercise. An element of a ∗ -algebra is normal if and only if its real part and its
imaginary part commute. (See AOLT, page 23, proposition 1.25.)

27.2.13. Exercise. AOLT, section 1.8, exercise 19.


CHAPTER 28

Multilinear Maps

28.1. Background
Topics: free objects, free vector space, the “little-oh” functions, tangency, differential, permuta-
tions, cycles, symmetric group, multilinear maps, alternating multilinear maps.

Differential calculus
The following four definitions introduce the concepts necessary for a (civilized) discussion of
differentiability of real valued functions on Rn . In these definitions f and g are functions from Rn
into R and a is a point in Rn .
28.1.1. Definition. For every h ∈ R^n let
∆f_a(h) := f(a + h) − f(a).
28.1.2. Definition. The function f belongs to the family o of “little-oh” functions if for every c > 0 there exists a δ > 0 such that |f(x)| ≤ c‖x‖ whenever ‖x‖ < δ.
28.1.3. Definition. Functions f and g are tangent (at 0) if f − g ∈ o.
28.1.4. Definition. The function f is differentiable at the point a if there exists a linear
map dfa from Rn into R which is tangent to ∆fa . We call dfa the differential (or total
derivative) of f at a.

28.1.5. Remark. If a function f : R^n → R is differentiable, then at each point a in R^n the differential of f at a is a linear map from R^n into R. Thus we regard df : a ↦ df_a (the differential of f) as a map from R^n into L(R^n, R). It is natural to inquire whether the function df is itself differentiable. If it is, its differential at a (which we denote by d^2f_a) is a linear map from R^n into L(R^n, R); that is
d^2f_a ∈ L(R^n, L(R^n, R)).
In the same vein, since d^2f maps R^n into L(R^n, L(R^n, R)), its differential (if it exists) belongs to L(R^n, L(R^n, L(R^n, R))). It is moderately unpleasant to contemplate what an element of L(R^n, L(R^n, R)) or of L(R^n, L(R^n, L(R^n, R))) might “look like”. And clearly as we pass to even higher order differentials things look worse and worse. It is comforting to discover that an element of L(R^n, L(R^n, R)) may be regarded as a map from (R^n)^2 into R which is bilinear (that is, linear in both of its variables), and that an element of L(R^n, L(R^n, L(R^n, R))) may be thought of as a map from (R^n)^3 into R which is linear in each of its three variables. More generally, if V_1, V_2, V_3, and W are arbitrary vector spaces it will be possible to identify the vector space L(V_1, L(V_2, W)) with the space of bilinear maps from V_1 × V_2 to W, the vector space L(V_1, L(V_2, L(V_3, W))) with the trilinear maps from V_1 × V_2 × V_3 to W, and so on (see exercise 28.2.3).

Permutations
A bijective map σ : X → X from a set X onto itself is a permutation of the set. If x1 ,
x2 , . . . , xn are distinct elements of a set X, then the permutation of X that maps x1 7→ x2 ,
x2 7→ x3 , . . . , xn−1 7→ xn , xn 7→ x1 and leaves all other elements of X fixed is a cycle (or cyclic

permutation) of length n. A cycle of length 2 is a transposition. Permutations σ_1, . . . , σ_n of a set X are disjoint if each x ∈ X is moved by at most one σ_j; that is, if σ_j(x) ≠ x for at most one j ∈ N_n := {1, 2, . . . , n}.
28.1.6. Proposition. If X is a nonempty set, the set of permutations of X is a group under
composition.
Notice that if σ and τ are disjoint permutations of a set X, then στ = τ σ. If X is a set
with n elements, then the group of permutations of X (which we may identify with the group of
permutations of the set Nn ) is the symmetric group on n elements (or on n letters); it is
denoted by Sn .
28.1.7. Proposition. Any permutation σ 6= idX of a set X can be written as a product (com-
posite) of cycles of length at least 2. This decomposition is unique up to the order of the factors.
A permutation of a set X is even if it can be written as the product of an even number of
transpositions, and it is odd if it can be written as a product of an odd number of transpositions.
28.1.8. Proposition. Every permutation of a finite set is either even or odd, but not both.
The sign of a permutation σ, denoted by sgn σ, is +1 if σ is even and −1 if σ is odd.
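For concreteness, a small Python sketch (editorial, not part of the text) computing sgn σ as the parity of the number of inversions:

    # Editorial sketch: sgn(sigma) as the parity of the number of inversions.
    # Permutations are represented as tuples of 0, ..., n-1.
    def sgn(sigma):
        inversions = sum(1 for i in range(len(sigma))
                           for j in range(i + 1, len(sigma))
                           if sigma[i] > sigma[j])
        return 1 if inversions % 2 == 0 else -1

    print(sgn((1, 0, 2)))   # -1: a transposition is odd
    print(sgn((1, 2, 0)))   # +1: a 3-cycle is a product of two transpositions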

Multilinear maps
28.1.9. Definition. Let V_1, V_2, . . . , V_n, and W be vector spaces over a field F. We say that a function f : V_1 × · · · × V_n → W is multilinear (or n-linear) if it is linear in each of its n variables. We ordinarily call 2-linear maps bilinear and 3-linear maps trilinear. We denote by L^n(V_1, . . . , V_n; W) the family of all n-linear maps from V_1 × · · · × V_n into W. A multilinear map from the product V_1 × · · · × V_n into the scalar field F is a multilinear form (or a multilinear functional).

28.1.10. Definition. A multilinear map f : V n → W from the n-fold product V × · · · × V of


a vector space V into a vector space W is alternating if f (v1 , . . . , vn ) = 0 whenever vi = vj for
some i 6= j.

28.1.11. Definition. If V and W are vector spaces, a multilinear map f : V^n → W is skew-symmetric if
f(v^1, . . . , v^n) = (sgn σ) f(v^{σ(1)}, . . . , v^{σ(n)})
for all σ ∈ S_n.

28.2. Exercises (Due: Wed. Apr. 1)


28.2.1. Exercise. Let V and W be vector spaces, u, v, x, y ∈ V , and α ∈ R.
(a) Expand T (u + v, x + y) if T is a bilinear map from V × V into W .
(b) Expand T (u + v, x + y) if T is a linear map from V ⊕ V into W .
(c) Write T (αx, αy) in terms of α and T (x, y) if T is a bilinear map from V × V into W .
(d) Write T (αx, αy) in terms of α and T (x, y) if T is a linear map from V ⊕ V into W .

28.2.2. Exercise. Show that composition of operators on a vector space V is a bilinear map
on L(V ).

28.2.3. Exercise. Show that if U, V, and W are vector spaces, then so is L^2(U, V; W). Show also that the spaces L(U, L(V, W)) and L^2(U, V; W) are isomorphic. Hint. The isomorphism is implemented by the map
F : L(U, L(V, W)) → L^2(U, V; W) : φ ↦ φ̂
where φ̂(u, v) := (φ(u))(v) for all u ∈ U and v ∈ V. (Recall remark 28.1.5.)
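The isomorphism in the hint is precisely currying/uncurrying; a Python sketch (editorial, with plain functions standing in for linear maps):

    # Editorial sketch: the map phi -> phi-hat of the hint is uncurrying.
    def uncurry(phi):
        # phi: u -> (v -> w)   becomes   phi_hat: (u, v) -> w
        return lambda u, v: phi(u)(v)

    def curry(b):
        # b: (u, v) -> w   becomes   u -> (v -> w)
        return lambda u: lambda v: b(u, v)

    dot = lambda u, v: sum(x * y for x, y in zip(u, v))  # a bilinear map
    phi = curry(dot)
    print(phi((1, 2))((3, 4)))            # 11
    print(uncurry(phi)((1, 2), (3, 4)))   # 11: the round trip recovers dot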

28.2.4. Exercise. Let V = R2 and f : V 2 → R : (x, y) 7→ x1 y2 . Is f bilinear? Is it alternating?

28.2.5. Exercise. Let V = R2 and g : V 2 → R : (x, y) 7→ x1 +y2 . Is g bilinear? Is it alternating?

28.2.6. Exercise. Let V = R2 and h : V 2 → R : (x, y) 7→ x1 y2 − x2 y1 . Is h bilinear? Is it


alternating? If {e1 , e2 } is the usual basis for R2 , what is h(e1 , e2 )?

28.2.7. Exercise. Let V and W be vector spaces. Then every alternating multilinear map
f : V n → W is skew-symmetric. Hint. Consider f (u + v, u + v) in the bilinear case.
CHAPTER 29

Determinants

29.1. Background
Topics: determinant of a matrix, determinant function on Mn (A).

29.1.1. Definition. A field F is of characteristic zero if n · 1 = 0 for no n ∈ N.

NOTE: In the following material on determinants, we will assume that the scalar fields underlying
all the vector spaces we encounter are of characteristic zero. Thus multilinear functions will be
alternating if and only if they are skew-symmetric. (See exercises 28.2.7 and 29.2.3.)

29.1.2. Remark. Let A be a unital commutative algebra. In the sequel we identify the algebra (A^n)^n = A^n × · · · × A^n (n factors) with the algebra M_n(A) of n × n matrices of elements of A by regarding the term a_k in (a_1, . . . , a_n) ∈ (A^n)^n as the k-th column vector of an n × n matrix of elements of A. There are many standard notations for the same thing: M_n(A), A^n × · · · × A^n (n factors), (A^n)^n, A^{n×n}, and A^{n²}, for example.
The identity matrix, which we usually denote by I, in M_n(A) is (e_1, . . . , e_n), where e_1, . . . , e_n are the standard basis vectors for A^n; that is, e_1 = (1_A, 0, 0, . . . ), e_2 = (0, 1_A, 0, 0, . . . ), and so on.

29.1.3. Definition. Let A be a unital commutative algebra. A determinant function is


an alternating multilinear map D : Mn (A) → A such that D(I) = 1A .


29.2. Exercises (Due: Fri. Apr. 3)


29.2.1. Exercise. Let V = R^n. Define
∆ : V^n → R : (v^1, . . . , v^n) ↦ Σ_{σ∈S_n} (sgn σ) v^1_{σ(1)} · · · v^n_{σ(n)}.
Then ∆ is an alternating multilinear form which satisfies ∆(e_1, . . . , e_n) = 1.


Note: If A is an n × n matrix of real numbers we define det A, the determinant of A, to be
∆(v 1 , . . . , v n ) where v 1 , . . . , v n are the column vectors of the matrix A.
 
29.2.2. Exercise. Let
A = [  1   3  2 ]
    [ −1   0  3 ]
    [ −2  −2  1 ].
Use the definition above to find det A.
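A direct transcription of the formula for ∆ into Python (editorial sketch, standard library only; it can be used to check the hand computation in 29.2.2):

    # Editorial sketch: Delta transcribed directly. rows[i][j] is the entry
    # in row i, column j, so the k-th column vector v^k has components
    # rows[0][k], ..., rows[n-1][k].
    from itertools import permutations

    def sgn(sigma):
        inv = sum(1 for i in range(len(sigma))
                    for j in range(i + 1, len(sigma))
                    if sigma[i] > sigma[j])
        return -1 if inv % 2 else 1

    def det(rows):
        n = len(rows)
        total = 0
        for sigma in permutations(range(n)):
            term = sgn(sigma)
            for k in range(n):
                term *= rows[sigma[k]][k]   # v^k_{sigma(k)}
            total += term
        return total

    A = [[1, 3, 2], [-1, 0, 3], [-2, -2, 1]]
    print(det(A))    # compare with your hand computation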

29.2.3. Exercise. If V and W are vector spaces over a field F of characteristic zero and
f : V n → W is a skew-symmetric multilinear map, then f is alternating.

29.2.4. Exercise. Let ω be an n-linear functional on a vector space V over a field of charac-
teristic zero. If ω(v 1 , . . . , v n ) = 0 whenever v i = v i+1 for some i, then ω is skew-symmetric and
therefore alternating.

29.2.5. Exercise. Let f : V^n → W be an alternating multilinear map, j ≠ k in N_n, and α be a scalar. Then
f(v^1, . . . , v^j + αv^k, . . . , v^n) = f(v^1, . . . , v^j, . . . , v^n),
where on each side the displayed argument (v^j + αv^k on the left, v^j on the right) occupies the j-th slot.

29.2.6. Exercise. Let A be a unital commutative algebra and n ∈ N. A determinant function exists on M_n(A). Hint. Consider
det : M_n(A) → A : (a^1, . . . , a^n) ↦ Σ_{σ∈S_n} (sgn σ) a^1_{σ(1)} · · · a^n_{σ(n)}.

29.2.7. Exercise. Let D be an alternating multilinear map on Mn (A) where A is a unital


commutative algebra and n ∈ N. For every C ∈ Mn (A)
D(C) = D(I) det C.

29.2.8. Exercise. Show that the determinant function on Mn (A) (where A is a unital com-
mutative algebra) is unique.

29.2.9. Exercise. Let A be a unital commutative algebra and B, C ∈ Mn (A). Then


det(BC) = det B det C.
Hint. Consider the function D(C) = D(c1 , . . . , cn ) := det(Bc1 , . . . , Bcn ), where Bck is the product
of the n × n matrix B and the k th column vector of C.

29.2.10. Exercise. For an n × n matrix B let B^t, the transpose of B, be the matrix obtained from B by interchanging its rows and columns; that is, if B = [b^i_j], then B^t = [b^j_i]. Prove that det B^t = det B.
CHAPTER 30

Free Vector Spaces

30.1. Background
Topics: free object, free vector space. (See AOLT, section 6.1.)

30.1.1. Definition. Let F be an object in a (concrete) category C and ι : S → F be a map whose domain is a nonempty set S. We say that the object F is free on the set S (or that F is the free object generated by S) if for every object A in C and every map f : S → A there exists a unique morphism f̃_ι : F → A in C such that f̃_ι ∘ ι = f. (In diagram form: the triangle with sides ι : S → F, f : S → A, and f̃_ι : F → A commutes.)
We will be interested in free vector spaces; that is, free objects in the category VEC of vector
spaces and linear maps. Naturally, merely defining a concept does not guarantee its existence. It
turns out, in fact, that free vector spaces exist on arbitrary sets. (See exercise 30.2.2.)


30.2. Exercises (Due: Mon. Apr. 6)


30.2.1. Exercise. If two objects in some concrete category are free on the same set, then they
are isomorphic.

30.2.2. Exercise. Let S be an arbitrary nonempty set and F be a field. Prove that there exists a vector space V over F which is free on S. Hint. Given the set S let V be the set of all F-valued functions on S which have finite support. Define addition and scalar multiplication pointwise. The map ι : s ↦ χ_{\{s\}} of each element s ∈ S to the characteristic function of {s} is the desired injection. To verify that V is free over S it must be shown that for every vector space W and every function f : S → W there exists a unique linear map f̃ : V → W such that f̃ ∘ ι = f.
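A sketch of the hint in Python (editorial; scalars taken in R, vectors of V represented as finite-support dictionaries):

    # Editorial sketch of the hint: vectors of V are finite-support
    # dictionaries {s: coefficient}.
    def iota(s):
        return {s: 1.0}                    # the characteristic function of {s}

    def f_tilde(f, vec):
        # the unique linear extension of f : S -> W (here W = R)
        return sum(coeff * f(s) for s, coeff in vec.items())

    v = {'a': 3.0, 'b': -0.5}              # the formal combination 3a - 0.5b
    f = {'a': 2.0, 'b': 4.0}.get           # an arbitrary set map S -> R
    print(f_tilde(f, v))                   # 3*2 - 0.5*4 = 4.0
    print(f_tilde(f, iota('a')) == f('a')) # True: f_tilde composed with iota is f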
30.2.3. Exercise. Prove that every vector space is free. Hint. Of course, part of the problem
is to specify a set S on which the given vector space is free.

30.2.4. Exercise. Let S = {a, ∗, #}. Then an expression such as
3a − ½ ∗ + 2 #
is said to be a formal linear combination of elements of S. Make sense of such expressions.
CHAPTER 31

Tensor Products of Vector Spaces

31.1. Background
Topics: vector space tensor products, elementary tensors. (See AOLT, section 6.2. A more
extensive and very careful exposition of tensor products can be found in chapter 14 of [23].)

31.1.1. Definition. Let U and V be vector spaces. A vector space U ⊗ V together with a bilinear map τ : U × V → U ⊗ V is a tensor product of U and V if for every vector space W and every bilinear map B : U × V → W, there exists a unique linear map B̃ : U ⊗ V → W which makes the diagram commute; that is, B̃ ∘ τ = B.


31.2. Exercises (Due: Wed. Apr. 8)


31.2.1. Exercise. Prove that in the category of vector spaces and linear maps if tensor products
exist, then they are unique (up to isomorphism).

31.2.2. Exercise. Show that in the category of vector spaces and linear maps tensor products
exist. Hint. Let U and V be vector spaces over a field F. Consider the free vector space lc (U ×V ) =
lc (U × V , F). Define
∗ : U × V → lc (U × V ) : (u, v) 7→ χ{(u,v)} .
Then let
S1 = {(u1 + u2 ) ∗ v − u1 ∗ v − u2 ∗ v : u1 , u2 ∈ U and v ∈ V },
S2 = {(αu) ∗ v − α(u ∗ v) : α ∈ F, u ∈ U , and v ∈ V },
S3 = {u ∗ (v1 + v2 ) − u ∗ v1 − u ∗ v2 : u ∈ U and v1 , v2 ∈ V },
S4 = {u ∗ (αv) − α(u ∗ v) : α ∈ F, u ∈ U , and v ∈ V },
S = span(S1 ∪ S2 ∪ S3 ∪ S4 ), and
U ⊗ V = lc (U × V )/S .
Also define
τ : U × V → U ⊗ V : (u, v) 7→ [u ∗ v].
Then show that U ⊗ V and τ satisfy the conditions stated in definition 31.1.1.

NOTE: It is conventional to write u ⊗ v for τ (u, v) = [u ∗ v]. Tensors of the form u ⊗ v are
called elementary tensors (or decomposable tensors or homogeneous tensors). Keep
in mind that not every member of U ⊗ V is of the form u ⊗ v.
31.2.3. Exercise. Let u and v be elements of finite dimensional vector spaces U and V , re-
spectively. Show that if u ⊗ v = 0, then either u = 0 or v = 0.

31.2.4. Exercise. Let u_1, . . . , u_n be linearly independent vectors in a vector space U and v_1, . . . , v_n be arbitrary vectors in a vector space V. Prove that if Σ_{k=1}^n u_k ⊗ v_k = 0, then v_k = 0 for each k ∈ N_n.
31.2.5. Exercise. Let {e_i}_{i=1}^m and {f_j}_{j=1}^n be bases for vector spaces U and V, respectively. Show that the family {e_i ⊗ f_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for U ⊗ V. Conclude that if U and V are finite dimensional, then the dimension of U ⊗ V is the product of the dimensions of U and V.
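In coordinates the tensor product is realized by the Kronecker product; a numpy sketch (editorial) of the preceding exercise:

    # Editorial sketch (numpy assumed): the e_i (x) f_j sweep out the
    # standard basis of R^(mn).
    import numpy as np

    m, n = 2, 3
    E, F = np.eye(m), np.eye(n)
    tensors = [np.kron(E[i], F[j]) for i in range(m) for j in range(n)]
    B = np.column_stack(tensors)
    print(B.shape)                      # (6, 6): dim(U tensor V) = m * n
    print(np.linalg.matrix_rank(B))     # 6: the e_i tensor f_j are independent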

31.2.6. Exercise. Let U and V be finite dimensional vector spaces and {f_j}_{j=1}^n be a basis for V. Show that for every element t ∈ U ⊗ V there exist unique vectors u_1, . . . , u_n ∈ U such that
t = Σ_{j=1}^n u_j ⊗ f_j.

31.2.7. Exercise. If U and V are vector spaces, then
U ⊗ V ≅ V ⊗ U.

31.2.8. Exercise. If V is a vector space over a field F, then
V ⊗ F ≅ V ≅ F ⊗ V.
CHAPTER 32

Tensor Products of Vector Spaces (continued)

32.1. Background
Topics: No additional topics.


32.2. Exercises (Due: Fri. Apr. 10)


32.2.1. Exercise. Let U, V, and W be vector spaces. For every vector space X and every trilinear map k : U × V × W → X there exists a unique linear map k̃ : U ⊗ (V ⊗ W) → X such that
k̃(u ⊗ (v ⊗ w)) = k(u, v, w)
for all u ∈ U, v ∈ V, and w ∈ W.

32.2.2. Exercise. If U, V, and W are vector spaces, then
U ⊗ (V ⊗ W) ≅ (U ⊗ V) ⊗ W.

32.2.3. Exercise. If U and V are finite dimensional vector spaces, then
U ⊗ V^∗ ≅ L(V, U).
Hint. Consider the map
T : U × V^∗ → L(V, U) : (u, φ) ↦ T(u, φ)
where (T(u, φ))(v) = φ(v) u.
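A coordinate illustration of the hint (editorial; numpy assumed): u ⊗ φ acts as the rank-one map v ↦ φ(v)u, which is the outer product of the column u with the row φ.

    # Editorial sketch (numpy assumed).
    import numpy as np

    u = np.array([1.0, 2.0])               # u in U = R^2
    phi = np.array([3.0, 0.0, -1.0])       # phi in V* for V = R^3
    T = np.outer(u, phi)                   # matrix of T(u, phi) in L(V, U)

    v = np.array([2.0, 5.0, 1.0])
    print(np.allclose(T @ v, phi.dot(v) * u))   # True: T(u, phi)(v) = phi(v) u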

32.2.4. Exercise. If U, V, and W are vector spaces, then
U ⊗ (V ⊕ W) ≅ (U ⊗ V) ⊕ (U ⊗ W).

32.2.5. Exercise. If U and V are finite dimensional vector spaces, then
(U ⊗ V)^∗ ≅ U^∗ ⊗ V^∗.

32.2.6. Exercise. If U, V, and W are finite dimensional vector spaces, then
L(U ⊗ V, W) ≅ L(U, L(V, W)) ≅ L^2(U, V; W).

32.2.7. Exercise. Let u1 , u2 ∈ U and v1 , v2 ∈ V where U and V are finite dimensional vector
spaces. Show that if u1 ⊗ v1 = u2 ⊗ v2 6= 0, then u2 = αu1 and v2 = βv1 where αβ = 1.
CHAPTER 33

Tensor Products of Linear Maps

33.1. Background
Topics: tensor products of linear maps.

33.1.1. Definition. Let S : U → W and T : V → X be linear maps between vector spaces. We define the tensor product of the linear maps S and T by
S ⊗ T : U ⊗ V → W ⊗ X : u ⊗ v ↦ S(u) ⊗ T(v).
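In matrix form S ⊗ T is the Kronecker product of the matrices of S and T; the defining property above and the composition rule of exercise 33.2.6 below can then be checked numerically (a numpy sketch, editorial only):

    # Editorial sketch (numpy assumed).
    import numpy as np

    rng = np.random.default_rng(0)
    S, T = rng.normal(size=(3, 2)), rng.normal(size=(4, 2))
    u, v = rng.normal(size=2), rng.normal(size=2)

    # defining property: (S tensor T)(u tensor v) = S(u) tensor T(v)
    print(np.allclose(np.kron(S, T) @ np.kron(u, v), np.kron(S @ u, T @ v)))

    # composition rule of 33.2.6: (S tensor T)(Q tensor R) = SQ tensor TR
    Q, R = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
    print(np.allclose(np.kron(S, T) @ np.kron(Q, R), np.kron(S @ Q, T @ R)))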


33.2. Exercises (Due: Mon. April 13)


33.2.1. Exercise. Definition 33.1.1 defines the tensor product S ⊗ T of two maps only for
homogeneous elements of U ⊗ V . Explain exactly what is needed to convince ourselves that S ⊗ T
is well defined on all of U ⊗ V . Then prove that S ⊗ T is a linear map.
33.2.2. Exercise. Some authors hesitate to use the notation S ⊗ T for the mapping defined
in 33.1.1 on the (very reasonable) grounds that S ⊗ T already has a meaning; it is a member of the
vector space L(U, W ) ⊗ L(V, X). Discuss this problem and explain, in particular, why the use of
the notation S ⊗ T in 33.1.1 is not altogether unreasonable.
33.2.3. Exercise. Suppose that R, S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and
X are finite dimensional vector spaces. Then
(R + S) ⊗ T = R ⊗ T + S ⊗ T.

33.2.4. Exercise. Suppose that R ∈ L(U, W ) and that S, T ∈ L(V, X) where U , V , W , and
X are finite dimensional vector spaces. Then
R ⊗ (S + T ) = R ⊗ S + R ⊗ T.

33.2.5. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then for all α, β ∈ R
(αS) ⊗ (βT ) = αβ(S ⊗ T ).

33.2.6. Exercise. Suppose that Q ∈ L(U, W ), R ∈ L(V, X), S ∈ L(W, Y ), and that T ∈
L(X, Z) where U , V , W , X, Y , and Z are finite dimensional vector spaces. Then
(S ⊗ T )(Q ⊗ R) = SQ ⊗ T R.

33.2.7. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Show that if S and T are invertible, then so is S ⊗ T and
(S ⊗ T )−1 = S −1 ⊗ T −1 .

33.2.8. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Show that if S ⊗ T = 0, then either S = 0 or T = 0.

33.2.9. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then
ran(S ⊗ T ) = ran S ⊗ ran T.

33.2.10. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then
ker(S ⊗ T ) = ker S ⊗ V + U ⊗ ker T.

33.2.11. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then
(S ⊗ T)^t = S^t ⊗ T^t.

33.2.12. Exercise. If U and V are finite dimensional vector spaces, then


IU ⊗ IV = IU ⊗V .
CHAPTER 34

Grassmann Algebras

34.1. Background
Topics: group algebras, Grassmann algebras, wedge product.

34.1.1. Definition. Let V be a d-dimensional vector space over a field F. We say that Λ(V) is the Grassmann algebra (or the exterior algebra) over V if
(1) Λ(V) is a unital algebra over F (multiplication is denoted by ∧),
(2) V is “contained in” Λ(V),
(3) v ∧ v = 0 for every v ∈ V,
(4) dim Λ(V) = 2^d, and
(5) Λ(V) is generated by 1_{Λ(V)} and V.
The multiplication ∧ in a Grassmann algebra is called the wedge product (or the exterior product).


34.2. Exercises (Due: Wed. April 15)


34.2.1. Exercise. Near the top of page 48 in AOLT, the author, in the process of defining
the group algebra FG, says that, “We regard the elements of [a group] G as linearly independent
vectors, . . . .” Explain carefully what this means and why we may in fact do such a thing.

34.2.2. Exercise. In the middle of page 48 of AOLT the author gives a “succinct formula” for
the product of two elements in a group algebra. Show that this formula is correct.

34.2.3. Exercise. Check the author’s computation of xy in Example 2.10, page 48, AOLT. Find
the product both by using the formula given in the middle of page 48 and by direct multiplication.

34.2.4. Exercise. AOLT, section 2.7, exercise 7.

34.2.5. Exercise. In definition 34.1.1 why is “contained in” in quotation marks? Give a more precise version of condition (2).

34.2.6. Exercise. In the last condition of definition 34.1.1 explain more precisely what is meant by saying that Λ(V) is generated by 1 and V.

34.2.7. Exercise. Show that if Λ(V) is a Grassmann algebra over a vector space V, then 1_{Λ(V)} ∉ V.

34.2.8. Exercise. Let v and w be elements of a finite dimensional vector space V. Show that in the Grassmann algebra Λ(V) generated by V
v ∧ w = −w ∧ v.

34.2.9. Exercise. Let V be a d-dimensional vector space with basis E = {e_1, . . . , e_d}. For each nonempty subset S = {e_{i_1}, e_{i_2}, . . . , e_{i_p}} of E (listed with i_1 < i_2 < · · · < i_p) let e_S = e_{i_1} ∧ e_{i_2} ∧ · · · ∧ e_{i_p}. Also let e_∅ = 1_{Λ(V)}. Show that {e_S : S ⊆ E} is a basis for the Grassmann algebra Λ(V).
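A sketch (editorial, in Python) of how the wedge product acts on the basis {e_S}: concatenate index lists, with sign the parity of the sorting permutation and with the product 0 whenever an index repeats.

    # Editorial sketch: wedge of basis monomials e_S and e_T. Indices are
    # strictly increasing tuples; the sign is the parity of the merge.
    def wedge_basis(S, T):
        if set(S) & set(T):
            return 0, None                 # a repeated index: e_i wedge e_i = 0
        merged = list(S + T)
        inv = sum(1 for i in range(len(merged))
                    for j in range(i + 1, len(merged))
                    if merged[i] > merged[j])
        return (-1) ** inv, tuple(sorted(merged))

    print(wedge_basis((1,), (2,)))   # (1, (1, 2)):   e1 ^ e2
    print(wedge_basis((2,), (1,)))   # (-1, (1, 2)):  e2 ^ e1 = -e1 ^ e2
    print(wedge_basis((1,), (1,)))   # (0, None):     e1 ^ e1 = 0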
CHAPTER 35

Graded Algebras

35.1. Background
Topics: graded algebras, homogeneous elements, decomposable elements.

35.1.1. Definition. An algebra A is a Z⁺-graded algebra if it is a direct sum A = ⊕_{k≥0} A_k of vector subspaces A_k and its multiplication ∧ takes elements in A_j × A_k to elements in A_{j+k} for all j, k ∈ Z⁺. Elements in A_k are said to be homogeneous of degree k.
The definitions of Z-graded algebras, N-graded algebras and Z₂-graded algebras are similar. (In the case of a Z₂-graded algebra the indices are 0 and 1 and A_1 ∧ A_1 ⊆ A_0.) Usually the unmodified expression “graded algebra” refers to a Z⁺-graded algebra.
Exercise 35.2.1 says that every Grassmann algebra Λ(V) over a vector space V is a graded algebra. The set of elements homogeneous of degree k is denoted by Λ^k(V). An element of Λ^k(V) which can be written in the form v_1 ∧ v_2 ∧ · · · ∧ v_k (where v_1, . . . , v_k all belong to V) is a decomposable element of degree k.


35.2. Exercises (Due: Fri. April 17)


35.2.1. Exercise. Show that every Grassmann algebra is a Z⁺-graded algebra. We denote by Λ^k(V) the subspace of all homogeneous elements of degree k in Λ(V). In particular, Λ⁰(V) = R and Λ¹(V) = V. If the dimension of V is d, take Λ^k(V) = {0} for all k > d. (And if you wish to regard Λ(V) as a Z-graded algebra also take Λ^k(V) = {0} whenever k < 0.)

35.2.2. Exercise. If the dimension of a vector space V is 3 or less, then every homogeneous
element of the corresponding Grassmann algebra is decomposable.
35.2.3. Exercise. If the dimension of a (finite dimensional) vector space V is at least four,
then there exist homogeneous elements in the corresponding Grassmann algebra which are not
decomposable. Hint. Let e1 , e2 , e3 , and e4 be distinct basis elements of V and consider (e1 ∧ e2 ) +
(e3 ∧ e4 ).
35.2.4. Exercise. The elements v_1, v_2, . . . , v_p in a vector space V are linearly independent if and only if v_1 ∧ v_2 ∧ · · · ∧ v_p ≠ 0 in the corresponding Grassmann algebra Λ(V).

35.2.5. Exercise. Let T : V → W be a linear map between finite dimensional vector spaces (over the same field). Then there exists a unique extension of T to a unital algebra homomorphism Λ(T) : Λ(V) → Λ(W). This extension maps Λ^k(V) into Λ^k(W) for each k ∈ N.

35.2.6. Exercise. The pair of maps V ↦ Λ(V) and T ↦ Λ(T) is a covariant functor from the category of vector spaces and linear maps to the category of unital algebras and unital algebra homomorphisms.

35.2.7. Exercise. If V is a vector space of dimension d, then dim Λ^p(V) = (d choose p) for 0 ≤ p ≤ d.

35.2.8. Exercise. If V is a finite dimensional vector space, ω ∈ Λ^p(V), and µ ∈ Λ^q(V), then
ω ∧ µ = (−1)^{pq} µ ∧ ω.
CHAPTER 36

Existence of Grassmann Algebras

36.1. Background
Topics: tensor algebras, shuffle permutations. (A good reference for much of this material, and for the following material on differential forms, is chapter 4 of [3].)

36.1.1. Definition. Let V_0, V_1, V_2, . . . be vector spaces (over the same field). Then their (external) direct sum, which is denoted by ⊕_{k=0}^∞ V_k, is defined to be the set of all functions v : Z⁺ → ∪_{k=0}^∞ V_k with finite support such that v(k) = v_k ∈ V_k for each k ∈ Z⁺. The usual pointwise addition and scalar multiplication make this set into a vector space.

36.1.2. Definition. Let V be a vector space over a field F. Define T⁰(V) = F, T¹(V) = V, T²(V) = V ⊗ V, T³(V) = V ⊗ V ⊗ V, . . . , T^k(V) = V ⊗ · · · ⊗ V (k factors), . . . . Then let
T(V) = ⊕_{k=0}^∞ T^k(V).
Define multiplication on T(V) by using the obvious isomorphism
T^k(V) ⊗ T^m(V) ≅ T^{k+m}(V)
and extending by linearity to all of T(V). The resulting algebra is the tensor algebra of V (or generated by V).

36.1.3. Notation. If V is a vector space over F and k ∈ N we denote by Alt^k(V) the set of all alternating k-linear maps from V^k into F. (The space Alt¹(V) is just V^∗.) Additionally, take Alt⁰(V) = F.

36.1.4. Definition. Let p, q ∈ N. We say that a permutation σ ∈ Sp+q is a (p, q)-shuffle


if σ(1) < · · · < σ(p) and σ(p + 1) < · · · < σ(p + q). The set of all such permutations is denoted
by S(p, q).

36.1.5. Definition. Let V be a vector space. For p, q ∈ N define
∧ : Alt^p(V) × Alt^q(V) → Alt^{p+q}(V) : (ω, µ) ↦ ω ∧ µ
where
(ω ∧ µ)(v^1, . . . , v^{p+q}) = Σ_{σ∈S(p,q)} (sgn σ) ω(v^{σ(1)}, . . . , v^{σ(p)}) µ(v^{σ(p+1)}, . . . , v^{σ(p+q)}).
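Since a (p, q)-shuffle is completely determined by the set {σ(1), . . . , σ(p)}, the shuffles can be enumerated with itertools; a Python sketch (editorial only):

    # Editorial sketch: there are C(p+q, p) shuffles in all.
    from itertools import combinations

    def shuffles(p, q):
        universe = range(1, p + q + 1)
        for first in combinations(universe, p):
            rest = tuple(k for k in universe if k not in first)
            yield first + rest             # values sigma(1), ..., sigma(p+q)

    print(list(shuffles(2, 1)))
    # [(1, 2, 3), (1, 3, 2), (2, 3, 1)]: the 3 = C(3, 2) shuffles in S(2, 1)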


36.2. Exercises (Due: Mon. April 20)


36.2.1. Exercise. Show that T (V ) as defined in 36.1.2 is in fact a unital algebra.

36.2.2. Exercise. Let V be a finite dimensional vector space and J be the ideal in the tensor
algebra T (V ) generated by the set of all elements of the form v ⊗ v where v ∈ V . Show that the
quotient algebra T (V )/J is the Grassmann algebra over V ∗ (or, equivalently, over V ).
36.2.3. Exercise. Show that if V is a finite dimensional vector space and k > dim V, then Alt^k(V) = {0}.

36.2.4. Exercise. Give an example of a (4, 5)-shuffle permutation σ of the set N9 = {1, . . . , 9}
such that σ(7) = 4.

36.2.5. Exercise. Show that definition 36.1.5 is not overly optimistic by verifying that if ω ∈
Altp (V ) and µ ∈ Altq (V ), then ω ∧ µ ∈ Altp+q (V ).

36.2.6. Exercise. Show that the multiplication defined in 36.1.5 is associative. That is if
ω ∈ Altp (V ), µ ∈ Altq (V ), and ν ∈ Altr (V ), then
ω ∧ (µ ∧ ν) = (ω ∧ µ) ∧ ν.

36.2.7. Exercise. Let V be a finite dimensional vector space. Explain in detail how to make
Altk (V ) (or, if you prefer, Altk (V ∗ ) ) into a vector space for each k ∈ Z and how to make the
collection of these into a Z-graded algebra. Show that this algebra is the Grassmann algebra
generated by V . Hint. Take Altk (V ) = {0} for each k < 0 and extend the definition of the wedge
product so that if α ∈ Alt0 (V ) = F and ω ∈ Altp (V ), then α ∧ ω = αω.

36.2.8. Exercise. Let ω_1, . . . , ω_p be members of Alt¹(V) (that is, linear functionals on V). Then
(ω_1 ∧ · · · ∧ ω_p)(v^1, . . . , v^p) = det [ω_j(v^k)]_{j,k=1}^p
for all v^1, . . . , v^p ∈ V.

36.2.9. Exercise. If {e_1, . . . , e_n} is a basis for an n-dimensional vector space V, then
{e^∗_{σ(1)} ∧ · · · ∧ e^∗_{σ(p)} : σ ∈ S(p, n − p)}
is a basis for Alt^p(V).
CHAPTER 37

The Hodge ∗-operator

37.1. Background
Topics: right-handed basis, orientation, opposite orientation, Hodge star operator.

37.1.1. Definition. Let E be a basis for an n-dimensional vector space V . Then the n-tuple
(e1 , . . . , en ) is an ordered basis for V if e1 , . . . , en are distinct elements of E.

37.1.2. Definition. Let E = (e1 , . . . , en ) be an ordered basis for Rn . We say that the basis E
is right-handed if det[e1 , . . . , en ] > 0 and left-handed otherwise.

37.1.3. Definition. Let V be a real n-dimensional vector space and T : Rn → V be an iso-


morphism. Then the set of all n-tuples of the form (T (e1 ), . . . , T (en )) where (e1 , . . . , en ) is a
right-handed basis in Rn is an orientation of V . Another orientation consists of the set of n-
tuples (T (e1 ), . . . , T (en )) where (e1 , . . . , en ) is a left-handed basis in Rn . Each of these orientations
is the opposite (or reverse) of the other. A vector space together with one of these orientations
is an oriented vector space.


37.2. Exercises (Due: Wed. April 22)


37.2.1. Exercise. For T : V → W a linear map between vector spaces define
Alt^p(T) : Alt^p(W) → Alt^p(V) : ω ↦ Alt^p(T)(ω)
where (Alt^p(T)(ω))(v^1, . . . , v^p) = ω(Tv^1, . . . , Tv^p) for all v^1, . . . , v^p ∈ V. Then Alt^p is a contravariant functor from the category of vector spaces and linear maps into itself.

37.2.2. Exercise. Let V be an n-dimensional vector space and T ∈ L(V). If T is diagonalizable, then
c_T(λ) = Σ_{k=0}^n (−1)^k [Alt^{n−k}(T)] λ^k.

37.2.3. Exercise. Let V be an n-dimensional real inner product space. In exercise 9.2.13
we established an isomorphism Φ : v 7→ v ∗ between V and its dual space V ∗ . Show how this
isomorphism can be used to induce an inner product on V ∗ . Then show how this may be used to
create an inner product on Altp (V ) for 2 ≤ p ≤ n. Hint. For v, w ∈ V let hv ∗ , w∗ i = hv, wi. Then
for ω1 , . . . , ωp , µ1 , . . . , µp ∈ Alt1 (V ) let hω1 ∧ · · · ∧ ωp , µ1 ∧ · · · ∧ µp i = det[hωj , µk i].

37.2.4. Exercise. Let V be a d-dimensional oriented real inner product space. Fix a unit vector vol ∈ Alt^d(V). This vector is called a volume element. (In the case where V = R^d, we will always choose vol = e^∗_1 ∧ · · · ∧ e^∗_d where (e_1, . . . , e_d) is the usual ordered basis for R^d.)
Let ω ∈ Alt^p(V) and q = d − p. Show that there exists a vector ∗ω ∈ Alt^q(V) such that
⟨∗ω, µ⟩ vol = ω ∧ µ
for each µ ∈ Alt^q(V). Show that the map ω ↦ ∗ω from Alt^p(V) into Alt^q(V) is a vector space isomorphism. This map is the Hodge star operator.

37.2.5. Exercise. Let V be a finite dimensional oriented real inner product space of dimension n. Suppose that p + q = n. Show that ∗∗ω = (−1)^{pq} ω for every ω ∈ Alt^p(V).
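On the orthonormal basis p-forms of R^d the Hodge star has a simple combinatorial description: ∗(e^∗_I) = sgn(I, I^c) e^∗_{I^c}, where the sign is that of the permutation (I followed by its complement). A Python sketch (editorial; this formula is a consequence of the definition in 37.2.4, not part of the text):

    # Editorial sketch; indices are 1-based increasing tuples.
    def hodge_star_basis(I, d):
        comp = tuple(k for k in range(1, d + 1) if k not in I)
        seq = I + comp
        inv = sum(1 for a in range(len(seq))
                    for b in range(a + 1, len(seq))
                    if seq[a] > seq[b])
        return (-1) ** inv, comp

    print(hodge_star_basis((1,), 3))     # (1, (2, 3)):  *dx = dy ^ dz
    print(hodge_star_basis((2,), 3))     # (-1, (1, 3)): *dy = dz ^ dx
    print(hodge_star_basis((3, 4), 4))   # apply twice to see the (-1)^pq of 37.2.5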
CHAPTER 38

Differential Forms

38.1. Background
Topics: tangents, tangent space, cotangent space, partial derivatives on a manifold, vector fields, differential forms, the differential of a map between manifolds. (One good source for much of this material is chapters 1 and 4 of [3].)

In everything that follows all vector spaces are assumed to be real, finite dimensional, and oriented; and all manifolds are smooth oriented differentiable manifolds.

38.1.1. Notation. Let m be a point on a manifold M. A real valued function f belongs to C^∞_m if it is defined in a neighborhood of m and is smooth. (Recall that a function is smooth if it has derivatives of all orders.)

38.1.2. Definition. Let m be a point on a manifold M. A tangent at m is a linear map t : C^∞_m → R such that
t(fg) = f(m) t(g) + t(f) g(m)
for all f, g ∈ C^∞_m. The set T_m of all tangents at m is called the tangent space at m. The dual T^∗_m of the tangent space at m is called the cotangent space at m.

38.1.3. Definition. Let U ⊆ M where M is a manifold. A function X defined on U is a vector field on U if X_m (also written as X(m)) belongs to the tangent space T_m for each point m ∈ U.
For each f ∈ C^∞_m define the function Xf by
(Xf)(m) = X_m(f)
for all m in the intersection of the domains of X and f. A vector field X is smooth if Xf ∈ C^∞_m whenever f ∈ C^∞_m. Thus a smooth vector field is often regarded as a mapping from C^∞_m into itself. In everything that follows we will assume that all vector fields are smooth.
38.1.4. Definition. Let U ⊆ M where M is a manifold. A function ω defined on U is a differential p-form on U if ω_m (also written as ω(m)) belongs to Λ^p(T^∗_m) for all m ∈ U. A differential p-form is smooth if whenever X^1, . . . , X^p are vector fields on U the function ω(X^1, . . . , X^p) defined on U by
ω(X^1, . . . , X^p)(m) = ω_m(X^1_m, . . . , X^p_m)
is smooth.
The vector space of all smooth differential p-forms on U is denoted by Ω^p(U). If M is a manifold of dimension d, we agree (as in exercise 35.2.1) that Ω^p(U) = 0 whenever p < 0 or p > d. The space of all differential forms on U is denoted by Ω(U). Since Λ⁰(T^∗_m) = R, a member of Ω⁰(U) is just a smooth real valued function on U, so Ω⁰(U) = C^∞(U). We denote the Z-graded algebra generated
by the vector spaces Ωp (U ) and the wedge product by Ω(U ). In everything that follows when
we refer to a p-form or a differential form we will assume that it is smooth.
38.1.5. Definition. Let φ = (x1, . . . , xd) be a chart (or coordinate system) at a point m on a d-dimensional manifold M. We define the partial derivative at m with respect to xk, denoted by Dxk(m), to be the tangent given by the formula
Dxk(m)(f) = ∂(f ∘ φ^{−1})/∂uk evaluated at φ(m),
where ∂/∂uk is the usual partial derivative with respect to the k-th coordinate on R^d. Depending on which variable we are interested in we often write Dxk(f)(m) for Dxk(m)(f). It is common practice to write ∂/∂xk for Dxk. In R^n we often write Dk for Duk, where, as above, u1, . . . , un are the standard coordinates on R^n. Also, if f is a smooth real valued function on a manifold, we may write fxk (or just fk) for Dxk(f).
38.1.6. Theorem. Let φ = (x1, . . . , xd) be a chart (or coordinate system) at a point m on a d-dimensional manifold M. If t is a tangent at m, then
t = Σ_{k=1}^d t(xk) Dxk(m).
For a proof of this theorem consult [3] (section 1.3, theorem 1) or [25] (chapter 1, proposition 3.1).
38.1.7. Definition. Let F : M → N be a smooth function between manifolds and m be a point
in M . Define dFm : Tm → TF (m) , the differential of F at m, by dFm (t)(g) = t(g ◦ F ) for all
t ∈ Tm and all g ∈ CF∞(m) .

38.1.8. Notation. In exercise 37.2.4 we defined a volume element vol in Rd . We will adopt the
same notation vol for the differential form dx1 ∧· · ·∧dxd on a d-manifold M (where φ = (x1 , . . . , xd )
is a chart on M ). This is called the volume form.

38.2. Exercises (Due: Fri. April 24)


38.2.1. Exercise. Let m be a point on a d-dimensional manifold M . Show that the tangent
space Tm is a vector space of dimension d.

38.2.2. Exercise. Let F : M → N be a smooth function between manifolds and m be a point


in M . Show that dFm (t) ∈ TF (m) whenever t ∈ Tm and that dFm ∈ L(Tm , TF (m) ).

38.2.3. Exercise (Chain Rule). Let F : M → N and G : N → P be smooth functions between


manifolds. Prove that
d(G ◦ F )m = dGF (m) ◦ dFm .

38.2.4. Exercise. Help future students. Design some reasonably elementary exercises which
you think would help clarify points in the background material for this assignment that you found
confusing at first reading.
CHAPTER 39

The Exterior Differentiation Operator

39.1. Background
Topics: exterior derivative.

39.1.1. Definition. Let U be an open subset of a manifold M. The exterior differentiation operator (or exterior derivative) d is a mapping which takes k-forms on U to (k + 1)-forms on U. That is, d : Ω^k(U) → Ω^{k+1}(U). It is defined to have the following properties:
(i) if f is a 0-form on U, then d(f) is the usual differential df of f (as defined in 38.1.7);
(ii) d is linear;
(iii) d2 = 0 (that is, d(dω) = 0 for every k-form ω); and
(iv) if ω is a k-form and µ is any differential form, then
d(ω ∧ µ) = (dω) ∧ µ + (−1)k ω ∧ dµ.

Proofs of the existence and uniqueness of such a function can be found in [20] (theorem 12.14),
[25] (chapter 1, theorem 11.1), and [3] (section 4.6).


39.2. Exercises (Due: Mon. April 27)


39.2.1. Exercise. Theorem 38.1.6 tells us that at each point m of a d-dimensional manifold the vectors (∂/∂x1)(m), . . . , (∂/∂xd)(m) constitute a basis for the tangent space at m. Explain why the vectors (d x1)_m, . . . , (d xd)_m make up its dual basis.
39.2.2. Exercise. If f is a smooth real valued function on an open subset U of R^d, then
df = Σ_{k=1}^d f_k dxk.

39.2.3. Exercise. Let φ = (x, y, z) be a chart at a point m in a 3-manifold M and let U be the domain of φ. Explain why Ω⁰(U), Ω¹(U), Ω²(U), Ω³(U), and ⊕_{k∈Z} Ω^k(U) are vector spaces and exhibit a basis for each.

39.2.4. Exercise. In beginning calculus texts some curious arguments are given for replacing the expression dx dy in the integral ∬_R f dx dy by r dr dθ when we change from rectangular to polar coordinates in the plane. Show that if we interpret dx dy as the differential form dx ∧ dy, then this is a correct substitution. (Assume additionally that R is a region in the open first quadrant and that the integral of f over R exists.)
39.2.5. Exercise. Give an explanation similar to the one in the preceding exercise of the change
in triple integrals from rectangular to spherical coordinates.

39.2.6. Exercise. Generalize the two preceding exercises.

39.2.7. Exercise. Let φ = (x1, . . . , xd) be a chart at a point m on a d-dimensional manifold. If ω is a differential form defined on U then it can be written in the form Σ_J ω_J dx^J where J ranges over all subsets J = {j_1, . . . , j_k} of N_d with j_1 < · · · < j_k, dx^J = dx^{j_1} ∧ · · · ∧ dx^{j_k} (letting dx^J = 1 when J = ∅), and ω_J is a smooth real valued function on U. Show that
dω_m = Σ_J dω_J(m) ∧ dx^J(m).
CHAPTER 40

Differential Calculus on R3

40.1. Background
Topics: association of vector fields with 1-forms, gradient, curl, divergence.

40.1.1. Notation. If, on some manifold, f is a 0-form and ω is a differential form (of arbitrary
degree), we write f ω instead of f ∧ ω for the product of the forms.

40.1.2. Definition. Let P , Q, and R be scalar fields on an open subset U of R3 . In 38.1.3


we defined the term vector field. Recall that this term is used in a somewhat different fashion in
beginning calculus. There it refers simply to a mapping from R3 to R3 . So in this more elementary
context a vector field F can be written in the form P i+Q j+R k. We say that ω = P dx+Q dy+R dz
is the differential 1-form associated with the vector field F = P i + Q j + R k. (And, of course,
F is the vector field associated with ω.) Notice that a vector field is identically zero if and only if
its associated 1-form is.


40.2. Exercises (Due: Wed. April 29)


Recall that in beginning calculus the cross product is defined only for vectors in R3 . The
following exercise makes clear the sense in which the wedge product is a generalization of the cross
product.
40.2.1. Exercise. Show that there exists an isomorphism u 7→ u# from R3 to Alt2 (R3 ) such
that
v ∗ ∧ w∗ = (v × w)#
for every v, w ∈ R3 .

40.2.2. Exercise. Let M be a 3-manifold, φ = (x, y, z) : U → R3 be a chart on M , and


f : U → R be a 0-form on U . Compute d(f dy).
40.2.3. Exercise. Let M be a 3-manifold and φ = (x, y, z) : U → R3 be a chart on M . Compute
d(x dy ∧ dz + y dz ∧ dx + z dx ∧ dy).
40.2.4. Exercise. Let M be a 3-manifold and φ = (x, y, z) : U → R3 be a chart on M . Compute
d[(3xz dx + xy 2 dy) ∧ (x2 y dx − 6xy dz)].
40.2.5. Exercise. Let M be a d-manifold, φ = (φ1 , . . . , φd ) : U → Rd be a chart on M , and
f : U → R be a 0-form on U . Show that ∗ (f dφk ) = f ∗ dφk for 1 ≤ k ≤ d.
40.2.6. Exercise. Let M be a 3-manifold, φ = (x, y, z) : U → R3 be a chart on M , and a, b,
c : U → R be 0-forms on U . Compute the following:
(1) ∗ a
(2) ∗ (a dx + b dy + c dz)
(3) ∗ (a dx ∧ dy + b dx ∧ dz + c dy ∧ dz) and
(4) ∗ (a dx ∧ dy ∧ dz)
40.2.7. Exercise. Explain how the Hodge star operator is extended to differential forms on Rn .
For differential forms on an open subset U of R3 prove that ∗ dx = dy ∧ dz, ∗ dy = dz ∧ dx, and
∗ dz = dx ∧ dy.

40.2.8. Exercise. Let f be a scalar field on an open subset U of R3 . Show that grad f (the
gradient of f ) is the vector field associated with the 1-form df .

40.2.9. Exercise. Let F and G be vector fields on a region in R3 . Show that if ω and µ are,
respectively, their associated 1-forms, then ∗ (ω ∧ µ) is the 1-form associated with F × G.

40.2.10. Exercise. Let ω = x²yz dx + yz dy + xy² dz. Find d∗ω and ∗dω.

40.2.11. Exercise. Let f be a scalar field. Write the left side of Laplace’s equation fxx + fyy +
fzz = 0 in terms of d, ∗, and f only.

40.2.12. Exercise. Suppose that F is a smooth vector field in R3 and that ω is its associated
1-form. Show that ∗ dω is the 1-form associated with curl F.

40.2.13. Exercise. Let F be a vector field on R3 and ω be its associated 1-form. Show that
∗ d ∗ ω = div F. (Here div F is the divergence of F .)

40.2.14. Exercise. Let F be a vector field on an open subset of R3 . Use differential forms (but
not partial derivatives) to show that div curl F = 0.
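The last several exercises can be checked symbolically; a Python sketch (editorial, assuming sympy), representing a 1-form P dx + Q dy + R dz by the triple (P, Q, R) and a 2-form a dy∧dz + b dz∧dx + c dx∧dy by (a, b, c):

    # Editorial sketch (sympy assumed): coefficient calculus on R^3.
    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def d1(w):
        # d(P dx + Q dy + R dz) in the basis (dy^dz, dz^dx, dx^dy):
        # these are exactly the components of curl F.
        P, Q, R = w
        return (sp.diff(R, y) - sp.diff(Q, z),
                sp.diff(P, z) - sp.diff(R, x),
                sp.diff(Q, x) - sp.diff(P, y))

    def star1(w):
        # *dx = dy^dz, *dy = dz^dx, *dz = dx^dy: same triple, new basis
        return w

    def d2(w2):
        # d(a dy^dz + b dz^dx + c dx^dy) = (a_x + b_y + c_z) dx^dy^dz
        a, b, c = w2
        return sp.diff(a, x) + sp.diff(b, y) + sp.diff(c, z)

    F = (x**2*y*z, y*z, x*y**2)        # the 1-form of exercise 40.2.10
    print(d1(F))                       # d(omega); *d(omega) has the same triple
    print(d2(star1(F)))                # *d*(omega) = div F
    print(sp.simplify(d2(d1(F))))      # 0: div curl F = 0 (exercise 40.2.14)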

40.2.15. Exercise. Let f be a smooth scalar field (that is, a 0-form) in R3 . Use differential
forms (but not partial derivatives) to show that curl grad f = 0.
CHAPTER 41

Closed and Exact Forms

41.1. Background
Topics: closed differential forms, exact differential forms.

41.1.1. Definition. Let U be an open subset of a manifold and consider the sequence
· · · → Ω^{p−1}(U) --d_{p−1}--> Ω^p(U) --d_p--> Ω^{p+1}(U) → · · ·
where d_{p−1} and d_p are exterior differentiation operators. Elements of ker d_p are called closed p-forms and elements of ran d_{p−1} are exact p-forms.


41.2. Exercises (Due: Mon. May 4)


41.2.1. Exercise. Writers of elementary texts in calculus and physics in their otherwise laudable
efforts to express complicated ideas in a simple fashion will occasionally lose all semblance of self
control and write things like dr = dx i + dy j + dz k and dS = dy ∧ dz i + dz ∧ dx j + dx ∧ dy k. In
the depths of their depravity they may even go so far as to claim for a vector field F = (F 1 , F 2 , F 3 )
in R3 that the derivative of F · dr is curl F · dS. Is it possible to make any sense of such a statement?

41.2.2. Exercise. With the shameless notations suggested in exercise 41.2.1 try to make sense
of the claim that for a vector field F in R3
d(F · dS) = (div F) dV
where dV is the “volume element” dx ∧ dy ∧ dz.

41.2.3. Exercise. Show that every exact differential form is closed.

41.2.4. Exercise. Show that if ω and µ are closed differential forms, then so is ω ∧ µ.

41.2.5. Exercise. Show that if ω is an exact differential form and µ is a closed form, then ω ∧ µ
is exact.

41.2.6. Exercise. Let φ = (x, y, z) : U → R³ be a chart on a manifold and ω = a dx + b dy + c dz be a 1-form on U. Show that if ω is exact, then
∂c/∂y = ∂b/∂z,  ∂a/∂z = ∂c/∂x,  and  ∂b/∂x = ∂a/∂y.

41.2.7. Exercise. Explain why solving the initial value problem
e^x cos y + 2x − e^x(sin y) y′ = 0,  y(0) = π/3
is essentially the same thing as showing that the 1-form (e^x cos y + 2x) dx − e^x(sin y) dy is exact. Do it.
CHAPTER 42

The de Rham Cohomology Group

42.1. Background
Topics: cocycles, coboundaries, the de Rham cohomology group.

42.1.1. Definition. Let M be a d-manifold. For 0 ≤ p ≤ d we denote by Z^p(M) (or just Z^p) the vector space of all closed p-forms on M. For reasons which will become apparent shortly, members of Z^p are sometimes called (de Rham) p-cocycles. Also let B^p(M) (or just B^p) denote the vector space of all exact p-forms on M (sometimes called p-coboundaries). Since there are no differential forms of degree less than 0, we take B⁰ = B⁰(M) = {0}. For convenience we also set Z^p = Z^p(M) = {0} and B^p = B^p(M) = {0} whenever p < 0 or p > d. It is a trivial consequence of exercise 41.2.3 that B^p(M) is a subspace of the vector space Z^p(M). Thus it makes sense to define
H^p = H^p(M) = Z^p(M)/B^p(M).
This is the p-th de Rham cohomology group of M. (Clearly this cohomology “group” is actually a vector space. It is usually called a “group” because most other cohomology theories produce only groups.) The dimension of the vector space H^p(M) is the p-th Betti number of M.
Another (obviously equivalent) way of phrasing the definition of the p-th de Rham cohomology group is in terms of the maps
· · · → Ω^{p−1}(M) --d_{p−1}--> Ω^p(M) --d_p--> Ω^{p+1}(M) → · · ·
where d_{p−1} and d_p are exterior differentiation operators. Define
H^p(M) := ker d_p / ran d_{p−1}
for all p.

42.1.2. Definition. Let F : M → N be a smooth function between smooth manifolds. For p ≥ 1 define
Ω^p F : Ω^p(N) → Ω^p(M) : ω ↦ (Ω^p F)(ω)
where
((Ω^p F)(ω))_m(v^1, . . . , v^p) = ω_{F(m)}(dF_m(v^1), . . . , dF_m(v^p))     (42.1)
for every m ∈ M and v^1, . . . , v^p ∈ T_m. Also define ((Ω⁰F)(ω))_m = ω_{F(m)}. We simplify the notation in (42.1) slightly:
(Ω^p F)(ω)(v^1, . . . , v^p) = ω(dF(v^1), . . . , dF(v^p)).     (42.2)
Denote by F^∗ the map induced by the maps Ω^p F which takes the Z-graded algebra Ω(N) to the Z-graded algebra Ω(M).

42.1.3. Definition. Let F : M → N be a smooth function between smooth manifolds. For


each integer p define
H p (F ) : H p (N ) → H p (M ) : [ω] 7→ [Ωp (F )(ω)].
Denote by H ∗ (F ) the induced map which takes the Z-graded algebra H ∗ (N ) into H ∗ (M ).

42.2. Exercises (Due: Wed. May 6)


42.2.1. Exercise. Show that Ωp (as defined in 38.1.4 and 42.1.2) is a contravariant functor
from the category of smooth manifolds and smooth maps to the category of vector spaces and
linear maps.

42.2.2. Exercise. If F : M → N is a smooth function between smooth manifolds, ω ∈ Ωp (N ),


and µ ∈ Ωq (N ), then
(Ωp+q F )(ω ∧ µ) = (Ωp F )(ω) ∧ (Ωq F )(µ).

42.2.3. Exercise. In exercise 42.2.1 you showed that Ωp was a functor for each p. What about
Ω itself? Is it a functor? Explain.

42.2.4. Exercise. Let F : M → N be a smooth function between smooth manifolds. Prove that
d ∘ F^∗ = F^∗ ∘ d.

42.2.5. Exercise. Calculate the cohomology group H 0 (R).



42.2.6. Exercise. For U ⊆ Rn give a very clear description of H 0 (U ) and explain why its
dimension is the number of connected components of U . Hint. A function is said to be locally
constant if it is constant in some neighborhood of each point in its domain.

42.2.7. Exercise. Let V = {0} be the 0-dimensional Euclidean space. Compute the pth de
Rham cohomology group H p (V ) for all p ∈ Z.

42.2.8. Exercise. Compute H p (R) for all p ∈ Z.

42.2.9. Exercise. Let U be the union of m disjoint open intervals in R. Compute H p (U ) for
all p ∈ Z.


42.2.10. Exercise. Let U ⊆ R^n. For [ω] ∈ H^p(U) and [µ] ∈ H^q(U) define
[ω][µ] = [ω ∧ µ] ∈ H^{p+q}(U).
Explain why exercise 41.2.4 is necessary for this definition to make sense. Prove also that this definition does not depend on the representatives chosen from the equivalence classes. Show that this definition makes H^∗(U) = ⊕_{p∈Z} H^p(U) into a Z-graded algebra. This is the de Rham cohomology algebra of U.

42.2.11. Exercise. Prove that with definition 42.1.3 H ∗ becomes a contravariant functor from
the category of open subsets of Rn and smooth maps to the category of Z-graded algebras and their
homomorphisms.
CHAPTER 43

Cochain Complexes

43.1. Background
Topics: cochain complexes, cochain maps.

43.1.1. Definition. A sequence
· · · → V_{p−1} --d_{p−1}--> V_p --d_p--> V_{p+1} → · · ·
of vector spaces and linear maps is a cochain complex if d_p ∘ d_{p−1} = 0 for all p ∈ Z. Such a sequence may be denoted by (V^∗, d) or just by V^∗.

43.1.2. Definition. We generalize definition 42.1.1 in the obvious fashion. If V ∗ is a cochain


complex, then the pth cohomology group H p (V ∗ ) is defined to be ker dp / ran dp−1 . (As before,
this “group” is actually a vector space.) In this context the elements of Vp are often called p-
cochains, elements of ker dp are p-cocycles, elements of ran dp−1 are p-coboundaries, and d
is the coboundary operator.

43.1.3. Definition. Let (V^∗, d) and (W^∗, δ) be cochain complexes. A cochain map G : V^∗ → W^∗ is a sequence of linear maps G_p : V_p → W_p satisfying
δ_p ∘ G_p = G_{p+1} ∘ d_p
for every p ∈ Z. That is, in the diagram

    · · · → V_p --d_p--> V_{p+1} → · · ·
            |G_p         |G_{p+1}
            v            v
    · · · → W_p --δ_p--> W_{p+1} → · · ·

each square commutes.

43.1.4. Definition. A sequence
0 → U^∗ --F--> V^∗ --G--> W^∗ → 0
of cochain complexes and cochain maps is (short) exact if for every p ∈ Z the sequence
0 → U_p --F_p--> V_p --G_p--> W_p → 0
of vector spaces and linear maps is (short) exact.


43.2. Exercises (Due: Fri. May 8)


43.2.1. Exercise. Let G : V ∗ → W ∗ be a cochain map between cochain complexes. For each
p ∈ Z define
G∗p : H p (V ∗ ) → H p (W ∗ ) : [v] 7→ [Gp (v)]
whenever v is a cocycle in Vp . Show that the maps G∗p are well defined and linear. Hint. To prove
that G∗p is well-defined we need to show two things: that Gp (v) is a cocycle in Wp and that the
definition does not depend on the choice of representative v.

43.2.2. Exercise. If 0 → U^∗ --F--> V^∗ --G--> W^∗ → 0 is a short exact sequence of cochain complexes, then
H^p(U^∗) --F_p^∗--> H^p(V^∗) --G_p^∗--> H^p(W^∗)
is exact at H^p(V^∗) for every p ∈ Z.

43.2.3. Exercise. A short exact sequence
0 → U^∗ --F--> V^∗ --G--> W^∗ → 0
of cochain complexes induces a long exact sequence
· · · → H^{p−1}(W^∗) --η_{p−1}--> H^p(U^∗) --F_p^∗--> H^p(V^∗) --G_p^∗--> H^p(W^∗) --η_p--> H^{p+1}(U^∗) → · · ·
Hint. If w is a cocycle in W_p, then, since G_p is surjective, there exists v ∈ V_p such that w = G_p(v). It follows that dv ∈ ker G_{p+1} = ran F_{p+1} so that dv = F_{p+1}(u) for some u ∈ U_{p+1}. Let η_p([w]) = [u].
CHAPTER 44

Simplicial Homology

44.1. Background
Topics: convex, convex combinations, convex hull, convex independent, closed simplex, open
simplex, oriented simplex, face of a simplex, simplicial complex, r -skeleton of a complex, polyhedron
of a complex, chains, boundary maps, cycles, boundaries, Betti number, Euler characteristic.

44.1.1. Definition. Let V be a vector space. Recall that a linear combination of a finite set {x_1, . . . , x_n} of vectors in V is a vector of the form Σ_{k=1}^n α_k x_k where α_1, . . . , α_n ∈ R. If α_1 = α_2 = · · · = α_n = 0, then the linear combination is trivial; if at least one α_k is different from zero, the linear combination is nontrivial. A linear combination Σ_{k=1}^n α_k x_k of the vectors x_1, . . . , x_n is a convex combination if α_k ≥ 0 for each k (1 ≤ k ≤ n) and if Σ_{k=1}^n α_k = 1.

44.1.2. Definition. If a and b are vectors in the vector space V , then the closed segment
between a and b, denoted by [a, b], is {(1 − t)a + tb : 0 ≤ t ≤ 1}.

44.1.3. CAUTION. Notice that there is a slight conflict between this notation, when applied to the vector space R of real numbers, and the usual notation for closed intervals on the real line. In R the closed segment [a, b] is the same as the closed interval [a, b] provided that a ≤ b. If a > b, however, the closed segment [a, b] is the same as the segment [b, a]; it contains all numbers c such that b ≤ c ≤ a, whereas the closed interval [a, b] is empty.

44.1.4. Definition. A subset C of a vector space V is convex if the closed segment [a, b] is
contained in C whenever a, b ∈ C.

44.1.5. Definition. Let $A$ be a subset of a vector space $V$. The convex hull of $A$ is the smallest convex subset of $V$ which contains $A$.

44.1.6. Definition. A set $S = \{v_0, v_1, \dots, v_p\}$ of $p+1$ vectors in a vector space $V$ is convex independent if the set $\{v_1 - v_0, v_2 - v_0, \dots, v_p - v_0\}$ is linearly independent in $V$.

44.1.7. Definition. An affine subspace of a vector space $V$ is any translate of a linear subspace of $V$.

44.1.8. Definition. Let $p \in \mathbb{Z}^+$. The closed convex hull of a convex independent set $S = \{v_0, \dots, v_p\}$ of $p+1$ vectors in some vector space is a closed p-simplex. It is denoted by $[s]$ or by $[v_0, \dots, v_p]$. The integer $p$ is the dimension of the simplex. The open p-simplex determined by the set $S$ is the set of all convex combinations $\sum_{k=0}^p \alpha_k v_k$ of elements of $S$ where each $\alpha_k > 0$. The open simplex will be denoted by $(s)$ or by $(v_0, \dots, v_p)$. We make the special convention that a single vector $\{v\}$ is both a closed and an open 0-simplex.
If $[s]$ is a simplex in $\mathbb{R}^n$, then the plane of $[s]$ is the affine subspace of $\mathbb{R}^n$ of least dimension containing $[s]$. It turns out that the open simplex $(s)$ is the interior of $[s]$ in the plane of $[s]$.


44.1.9. Definition. Let $[s] = [v_0, \dots, v_p]$ be a closed p-simplex in $\mathbb{R}^n$ and $\{j_0, \dots, j_q\}$ be a nonempty subset of $\{0, 1, \dots, p\}$. Then the closed q-simplex $[t] = [v_{j_0}, \dots, v_{j_q}]$ is a closed q-face of $[s]$. The corresponding open simplex $(t)$ is an open q-face of $[s]$. The 0-faces of a simplex are called the vertices of the simplex.
Note that distinct open faces of a closed simplex $[s]$ are disjoint and that the union of all the open faces of $[s]$ is $[s]$ itself.
44.1.10. Definition. Let $[s] = [v_0, \dots, v_p]$ be a closed p-simplex in $\mathbb{R}^n$. We say that two orderings $(v_{i_0}, \dots, v_{i_p})$ and $(v_{j_0}, \dots, v_{j_p})$ of the vertices are equivalent if $(j_0, \dots, j_p)$ is an even permutation of $(i_0, \dots, i_p)$. (This is an equivalence relation.) For $p \ge 1$ there are exactly two equivalence classes; these are the orientations of $[s]$. An oriented simplex is a simplex together with one of these orientations. The oriented simplex determined by the ordering $(v_0, \dots, v_p)$ will be denoted by $\langle v_0, \dots, v_p \rangle$. If, as above, $[s]$ is written as $[v_0, \dots, v_p]$, then we may shorten $\langle v_0, \dots, v_p \rangle$ to $\langle s \rangle$.
Of course, none of the preceding makes sense for 0-simplexes. We arbitrarily assign them two orientations, which we denote by $+$ and $-$. Thus $\langle s \rangle$ and $-\langle s \rangle$ have opposite orientations.
44.1.11. Definition. A finite collection $K$ of open simplexes in $\mathbb{R}^n$ is a simplicial complex if the following conditions are satisfied:
(1) if $(s) \in K$ and $(t)$ is an open face of $[s]$, then $(t) \in K$; and
(2) if $(s), (t) \in K$ and $(s) \neq (t)$, then $(s) \cap (t) = \emptyset$.
The dimension of a simplicial complex $K$, denoted by $\dim K$, is the maximum dimension of the simplexes constituting $K$. If $r \le \dim K$, then the r-skeleton of $K$, denoted by $K^r$, is the set of all open simplexes in $K$ whose dimensions are no greater than $r$. The polyhedron, $|K|$, of the complex $K$ is the union of all the simplexes in $K$.
44.1.12. Definition. Let $K$ be a simplicial complex in $\mathbb{R}^n$. For $0 \le p \le \dim K$ let $A_p(K)$ (or just $A_p$) denote the free vector space generated by the set of all oriented p-simplexes belonging to $K$. For $1 \le p \le \dim K$ let $W_p(K)$ (or just $W_p$) be the subspace of $A_p$ generated by all elements of the form
$$\langle v_0, v_1, v_2, \dots, v_p \rangle + \langle v_1, v_0, v_2, \dots, v_p \rangle$$
and let $C_p(K)$ (or just $C_p$) be the resulting quotient space $A_p / W_p$. For $p = 0$ let $C_p = A_p$, and for $p < 0$ or $p > \dim K$ let $C_p = \{0\}$. The elements of $C_p$ are the p-chains of $K$.
Notice that for any $p$ we have
$$[\langle v_0, v_1, v_2, \dots, v_p \rangle] = -[\langle v_1, v_0, v_2, \dots, v_p \rangle].$$
To avoid cumbersome notation we will not distinguish between the p-chain $[\langle v_0, v_1, v_2, \dots, v_p \rangle]$ and its representative $\langle v_0, v_1, v_2, \dots, v_p \rangle$.
44.1.13. Definition. Let $\langle s \rangle = \langle v_0, v_1, \dots, v_{p+1} \rangle$ be an oriented (p+1)-simplex. We define the boundary of $\langle s \rangle$, denoted by $\partial \langle s \rangle$, by
$$\partial \langle s \rangle = \sum_{k=0}^{p+1} (-1)^k \langle v_0, \dots, \widehat{v_k}, \dots, v_{p+1} \rangle.$$
(The caret above $v_k$ indicates that that term is missing; so the boundary of a (p+1)-simplex is an alternating sum of p-simplexes.)
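For example, for an oriented 2-simplex the definition yields
$$\partial \langle v_0, v_1, v_2 \rangle = \langle v_1, v_2 \rangle - \langle v_0, v_2 \rangle + \langle v_0, v_1 \rangle,$$
the oriented edges of the triangle taken with alternating signs. Applying $\partial$ once more, each vertex appears twice with opposite signs, so $\partial^2 \langle v_0, v_1, v_2 \rangle = 0$; exercise 44.2.2 asks for the general case.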
44.1.14. Definition. Let $K$ be a simplicial complex in $\mathbb{R}^n$. For $1 \le p \le \dim K$ define
$$\partial_p = \partial \colon C_p(K) \to C_{p-1}(K)$$
as follows. If $\sum a(s) \langle s \rangle$ is a p-chain in $K$, let
$$\partial\Bigl(\sum a(s) \langle s \rangle\Bigr) = \sum a(s)\, \partial \langle s \rangle.$$
For all other $p$ let $\partial_p$ be the zero map. The maps $\partial_p$ are called boundary maps. Notice that each $\partial_p$ is a linear map.
44.1.15. Definition. Let $K$ be a simplicial complex in $\mathbb{R}^n$ and $0 \le p \le \dim K$. Define $Z_p(K) = Z_p$ to be the kernel of $\partial_p \colon C_p \to C_{p-1}$ and $B_p(K) = B_p$ to be the range of $\partial_{p+1} \colon C_{p+1} \to C_p$. The members of $Z_p$ are p-cycles and the members of $B_p$ are p-boundaries.
It is clear from exercise 44.2.2 that $B_p$ is a subspace of the vector space $Z_p$. Thus we may define $H_p(K) = H_p$ to be $Z_p / B_p$. It is the $p$th simplicial homology group of $K$. (And, of course, $Z_p$, $B_p$, and $H_p$ are the trivial vector space whenever $p < 0$ or $p > \dim K$.)
44.1.16. Definition. Let $K$ be a simplicial complex. The number $\beta_p := \dim H_p(K)$ is the $p$th Betti number of the complex $K$. And $\chi(K) := \sum_{p=0}^{\dim K} (-1)^p \beta_p$ is the Euler characteristic of $K$.
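As a concrete illustration (and a preview of exercise 44.2.3), the Betti numbers of a finite complex can be computed from the ranks of the boundary matrices. The sketch below, assuming NumPy is available, does this for the 1-skeleton of a 2-simplex; the edge ordering is an arbitrary choice made here.

    import numpy as np

    # 1-skeleton of a 2-simplex: vertices v0, v1, v2;
    # edges <v0,v1>, <v0,v2>, <v1,v2> are the columns of d1, in that order.
    d1 = np.array([[-1., -1.,  0.],   # coefficient of v0 in each edge boundary
                   [ 1.,  0., -1.],   # coefficient of v1
                   [ 0.,  1.,  1.]])  # coefficient of v2
    rank = np.linalg.matrix_rank
    beta0 = 3 - rank(d1)              # Z0 = C0 (dim 3), B0 = ran d1
    beta1 = (3 - rank(d1)) - 0        # Z1 = ker d1, B1 = {0}: no 2-simplexes
    print(beta0, beta1, beta0 - beta1)   # 1 1 0, so chi(K) = 0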

44.2. Exercises (Due: Wed. May 13)


44.2.1. Exercise. Show that definition 44.1.5 makes sense by showing that the intersection
of a family of convex subsets of a vector space is itself convex. Then show that a “constructive
characterization” is equivalent; that is, prove that the convex hull of A is the set of all convex
combinations of elements of A.
44.2.2. Exercise. Let $K$ be a simplicial complex in $\mathbb{R}^n$. Show that $\partial^2 \colon C_{p+1}(K) \to C_{p-1}(K)$ is identically zero. Hint. It suffices to prove this for generators $\langle v_0, \dots, v_{p+1} \rangle$.

44.2.3. Exercise. Let K be the topological boundary (that is, the 1 -skeleton) of an oriented
2 -simplex in R2 . Compute Cp (K), Zp (K), Bp (K), and Hp (K) for each p.

44.2.4. Exercise. What changes in exercise 44.2.3 if $K$ is taken to be the oriented 2-simplex itself?

44.2.5. Exercise. Let K be the simplicial complex in R2 comprising two triangular regions
similarly oriented with a side in common. For all p compute Cp (K), Zp (K), Bp (K), and Hp (K).

44.2.6. Exercise. Let $K$ be a simplicial complex. For $0 \le p \le \dim K$ let $\alpha_p$ be the number of p-simplexes in $K$. That is, $\alpha_p = \dim C_p(K)$. Show that
$$\chi(K) = \sum_{p=0}^{\dim K} (-1)^p \alpha_p.$$
CHAPTER 45

Simplicial Cohomology

45.1. Background
Topics: simplicial cochain, simplicial cocycles, simplicial coboundaries, simplicial cohomology group, smooth submanifold, smoothly triangulated manifold, de Rham's theorem, pullback of differential forms.

45.1.1. Definition. Let $K$ be a simplicial complex. For each $p \in \mathbb{Z}$ let $C^p(K) = \bigl(C_p(K)\bigr)^*$. The elements of $C^p(K)$ are (simplicial) p-cochains. The adjoint of the boundary map $\partial_{p+1} \colon C_{p+1}(K) \to C_p(K)$ is then a linear map
$$\partial^* \colon C^p(K) \to C^{p+1}(K).$$
(Notice that $\partial^* \circ \partial^* = 0$.)
Also define
(1) $Z^p(K) := \ker\bigl(\partial^* \colon C^p(K) \to C^{p+1}(K)\bigr)$;
(2) $B^p(K) := \operatorname{ran}\bigl(\partial^* \colon C^{p-1}(K) \to C^p(K)\bigr)$; and
(3) $H^p(K) := Z^p(K)/B^p(K)$.
Elements of $Z^p(K)$ are (simplicial) p-cocycles and elements of $B^p(K)$ are (simplicial) p-coboundaries. The vector space $H^p(K)$ is the $p$th simplicial cohomology group of $K$.
45.1.2. Definition. Let $F \colon N \to M$ be a smooth injection between smooth manifolds. The pair $(N, F)$ is a smooth submanifold of $M$ if $dF_n$ is injective for every $n \in N$.
45.1.3. Definition. Let $M$ be a smooth manifold, $K$ be a simplicial complex in $\mathbb{R}^n$, and $h \colon [K] \to M$ be a homeomorphism. The triple $(M, K, h)$ is a smoothly triangulated manifold if for every open simplex $(s)$ in $K$ the restriction $h\big|_{[s]} \colon [s] \to M$ has an extension $h_s \colon U \to M$ to a neighborhood $U$ of $[s]$ lying in the plane of $[s]$ such that $(U, h_s)$ is a smooth submanifold of $M$.
45.1.4. Theorem. A smooth manifold can be triangulated if and only if it is compact.
The proof of this theorem is tedious enough that very few textbook authors choose to include
it in their texts. You can find a “simplified” proof in [6].
45.1.5. Theorem (de Rham's theorem). If $(M, K, \phi)$ is a smoothly triangulated manifold, then
$$H^p(M) \cong H^p(K)$$
for every $p \in \mathbb{Z}$.
You can find proofs of de Rham’s theorem in [16], chapter IV, theorem 3.1; [20], theorem 16.12;
[24], pages 167–173; and [26], theorem 4.17.
45.1.6. Definition (pullbacks of differential forms). Let $F \colon M \to N$ be a smooth mapping between smooth manifolds. Then there exists an algebra homomorphism $F^* \colon \Omega(N) \to \Omega(M)$, called the pullback associated with $F$, which satisfies the following conditions:
(1) $F^*$ maps $\Omega^p(N)$ into $\Omega^p(M)$ for each $p$;
(2) $F^*(g) = g \circ F$ for each 0-form $g$ on $N$; and
(3) $(F^*\mu)_m(v) = \mu_{F(m)}(dF_m(v))$ for every 1-form $\mu$ on $N$, every $m \in M$, and every $v \in T_m$.
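As a simple illustration of these conditions (an example chosen here, not one from the text), let $F \colon \mathbb{R} \to \mathbb{R}^2 \colon t \mapsto (\cos t, \sin t)$. Condition (3), with $v$ the usual basis vector $1$ of $T_t \cong \mathbb{R}$, gives
$$F^*(dx) = -\sin t\, dt, \qquad F^*(dy) = \cos t\, dt,$$
so that, using the homomorphism property together with condition (2),
$$F^*(x\, dy - y\, dx) = (\cos^2 t + \sin^2 t)\, dt = dt.$$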

45.2. Exercises (Due: Fri. May 15)


45.2.1. Exercise. Show that if $K$ is a simplicial complex in $\mathbb{R}^n$, then $H^p(K) \cong \bigl(H_p(K)\bigr)^*$ for every integer $p$.

45.2.2. Exercise. Prove the assertion made in definition 45.1.6.

45.2.3. Exercise. Show that if $F \colon M \to N$ is a smooth map between n-manifolds, then $F^*$ is a cochain map from the cochain complex $(\Omega^*(N), d)$ to the cochain complex $(\Omega^*(M), d)$. That is, show that the diagram
$$\begin{array}{ccc}
\Omega^p(N) & \xrightarrow{\;d\;} & \Omega^{p+1}(N) \\
\downarrow{\scriptstyle F^*} & & \downarrow{\scriptstyle F^*} \\
\Omega^p(M) & \xrightarrow{\;d\;} & \Omega^{p+1}(M)
\end{array}$$
commutes for every $p \in \mathbb{Z}$.
CHAPTER 46

Integration of Differential Forms

46.1. Background
Topics: integral of a differential form over a simplicial complex, integral of a differential form over
a manifold, manifolds with boundary.

46.1.1. Definition. Let $\langle s \rangle$ be an oriented p-simplex in $\mathbb{R}^n$ (where $1 \le p \le n$) and $\mu$ be a p-form defined on a set $U$ which is open in the plane of $\langle s \rangle$ and which contains $[s]$. If $\langle s \rangle = \langle v_0, \dots, v_p \rangle$, take $(v_1 - v_0, \dots, v_p - v_0)$ to be an ordered basis for the plane of $\langle s \rangle$ and let $x^1, \dots, x^p$ be the coordinate projection functions relative to this ordered basis; that is, if $a = \sum_{k=1}^p a_k (v_k - v_0) \in U$, then $x^j(a) = a_j$ for $1 \le j \le p$. Then $\phi = (x^1, \dots, x^p) \colon U \to \mathbb{R}^p$ is a chart on $U$; so there exists a smooth function $g$ on $U$ such that $\mu = g\, dx^1 \wedge \dots \wedge dx^p$. Define
$$\int_{\langle s \rangle} \mu = \int_{[s]} g\, dx^1 \dots dx^p$$
where the right hand side is an ordinary Riemann integral. If $\langle v_0 \rangle$ is a 0-simplex, we make a special definition:
$$\int_{\langle v_0 \rangle} f = f(v_0)$$
for every 0-form $f$.
Extend the preceding definition to p-chains by requiring the integral to be linear as a function of simplexes; that is, if $c = \sum a(s) \langle s \rangle$ is a p-chain (in some simplicial complex) and $\mu$ is a p-form, define
$$\int_c \mu = \sum a(s) \int_{\langle s \rangle} \mu.$$
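For instance, for the oriented 1-simplex $\langle s \rangle = \langle 0, 1 \rangle$ in $\mathbb{R}$ the ordered basis is $v_1 - v_0 = 1$, the chart is the usual coordinate $x$, and for the 1-form $\mu = x\, dx$ the definition gives
$$\int_{\langle 0, 1 \rangle} x\, dx = \int_{[0,1]} x\, dx = \tfrac{1}{2}.$$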

46.1.2. Definition. For a smoothly triangulated manifold $(M, K, h)$ we define a map
$$\textstyle\int_p \colon \Omega^p(M) \to C^p(K)$$
as follows. If $\omega$ is a p-form on $M$, then $\int_p \omega$ is to be a linear functional on $C_p(K)$; that is, a member of $C^p(K) = \bigl(C_p(K)\bigr)^*$. In order to define a linear functional on $C_p(K)$ it suffices to specify its values on the basis vectors of $C_p(K)$; that is, on the oriented p-simplexes $\langle s \rangle$ which constitute $C_p(K)$. Let $h_s \colon U \to M$ be an extension of $h\big|_{[s]}$ to an open set $U$ in the plane of $\langle s \rangle$. Then $h_s^*$ pulls back p-forms on $M$ to p-forms on $U$, so that $h_s^*(\omega) \in \Omega^p(U)$. Define
$$\Bigl(\int_p \omega\Bigr)\langle s \rangle := \int_{\langle s \rangle} h_s^*(\omega).$$


46.1.3. Notation. Let $\mathbb{H}^n = \{x \in \mathbb{R}^n : x_n \ge 0\}$. This is the upper half-space of $\mathbb{R}^n$.

46.1.4. Definition. An n-manifold with boundary is defined in the same way as an n-manifold except that the range of a chart is assumed to be an open subset of $\mathbb{H}^n$.
The interior of $\mathbb{H}^n$, denoted by $\operatorname{int} \mathbb{H}^n$, is defined to be $\{x \in \mathbb{R}^n : x_n > 0\}$. (Notice that this is the interior of $\mathbb{H}^n$ regarded as a subset of $\mathbb{R}^n$, not of $\mathbb{H}^n$.) The boundary of $\mathbb{H}^n$, denoted by $\partial \mathbb{H}^n$, is defined to be $\{x \in \mathbb{R}^n : x_n = 0\}$.
If $M$ is an n-manifold with boundary, a point $m \in M$ belongs to the interior of $M$ (denoted by $\operatorname{int} M$) if $\phi(m) \in \operatorname{int} \mathbb{H}^n$ for some chart $\phi$. And it belongs to the boundary of $M$ (denoted by $\partial M$) if $\phi(m) \in \partial \mathbb{H}^n$ for some chart $\phi$.

46.1.5. Theorem. Let $M$ and $N$ be smooth n-manifolds with boundary and $F \colon M \to N$ be a smooth diffeomorphism. Then both $\operatorname{int} M$ and $\partial M$ are smooth manifolds (without boundary). The interior of $M$ has dimension $n$ and the boundary of $M$ has dimension $n-1$. The mapping $F$ induces smooth diffeomorphisms $\operatorname{int} F \colon \operatorname{int} M \to \operatorname{int} N$ and $\partial F \colon \partial M \to \partial N$.
For a proof of this theorem consult the marvelous text [1], proposition 7.2.6.

46.2. Exercises (Due: Mon. May 18)


46.2.1. Exercise. Let $V$ be an open subset of $\mathbb{R}^n$, $\mathbf{F} \colon V \to \mathbb{R}^n$ be a smooth vector field, and $c \colon [t_0, t_1] \to V$ be a smooth curve in $V$. Let $C = \operatorname{ran} c$. It is conventional to define the "integral of the tangential component of $\mathbf{F}$ over $C$", often denoted by $\int_C \mathbf{F}_T$, by the formula
$$\int_C \mathbf{F}_T = \int_{t_0}^{t_1} \langle \mathbf{F} \circ c, Dc \rangle = \int_{t_0}^{t_1} \langle \mathbf{F}(c(t)), c'(t) \rangle \, dt. \tag{46.1}$$
The "tangential component of $\mathbf{F}$", written $\mathbf{F}_T$, may be regarded as the 1-form $\sum_{k=1}^n F^k\, dx^k$.
Make sense of the preceding definition in terms of the definition of the integral of 1-forms over a smoothly triangulated manifold. For simplicity take $n = 2$. Hint. Suppose we have the following:
(1) $\langle t_0, t_1 \rangle$ (with $t_0 < t_1$) is an oriented 1-simplex in $\mathbb{R}$;
(2) $V$ is an open subset of $\mathbb{R}^2$;
(3) $c \colon J \to V$ is an injective smooth curve in $V$, where $J$ is an open interval containing $[t_0, t_1]$; and
(4) $\omega = a\, dx + b\, dy$ is a smooth 1-form on $V$.
First show that
$$\bigl(c^*(dx)\bigr)(t) = Dc^1(t)$$
for $t_0 \le t \le t_1$. (We drop the notational distinction between $c$ and its extension $c_s$ to $J$. Since the tangent space $T_t$ is one-dimensional for every $t$, we identify $T_t$ with $\mathbb{R}$. Choose $v$ (in (3) of definition 45.1.6) to be the usual basis vector in $\mathbb{R}$, the number 1.)
Show in a similar fashion that
$$\bigl(c^*(dy)\bigr)(t) = Dc^2(t).$$
Then write an expression for $\bigl(c^*(\omega)\bigr)(t)$. Finally conclude that $\bigl(\int_1 \omega\bigr)\langle t_0, t_1 \rangle$ is indeed equal to $\int_{t_0}^{t_1} \langle (a, b) \circ c, Dc \rangle$ as claimed in (46.1).

46.2.2. Exercise. Let $S^1$ be the unit circle in $\mathbb{R}^2$ oriented counterclockwise and let $\mathbf{F}$ be the vector field on $\mathbb{R}^2$ defined by $\mathbf{F}(x, y) = (2x^3 - y^3)\,\mathbf{i} + (x^3 + y^3)\,\mathbf{j}$. Use your work in exercise 46.2.1 to calculate $\int_{S^1} \mathbf{F}_T$. Hint. You may use without proof two facts: (1) the integral does not depend on the parametrization (triangulation) of the curve, and (2) the results of exercise 46.2.1 hold also for simple closed curves in $\mathbb{R}^2$; that is, for curves $c \colon [t_0, t_1] \to \mathbb{R}^2$ which are injective on the open interval $(t_0, t_1)$ but which satisfy $c(t_0) = c(t_1)$.
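A numerical sanity check for this exercise (a sketch assuming SciPy is available; the parametrization $c(t) = (\cos t, \sin t)$ is the obvious one) evaluates the right hand side of (46.1) directly, so a hand computation can be compared against it:

    import numpy as np
    from scipy.integrate import quad

    # <F(c(t)), c'(t)> for F = (2x^3 - y^3, x^3 + y^3), c(t) = (cos t, sin t)
    def integrand(t):
        x, y = np.cos(t), np.sin(t)
        return (2*x**3 - y**3) * (-np.sin(t)) + (x**3 + y**3) * np.cos(t)

    value, _ = quad(integrand, 0.0, 2.0 * np.pi)
    print(value, 3 * np.pi / 2)   # both approximately 4.71239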
46.2.3. Exercise. Let $V$ be an open subset of $\mathbb{R}^3$, $\mathbf{F} \colon V \to \mathbb{R}^3$ be a smooth vector field, and $(S, K, h)$ be a smoothly triangulated 2-manifold such that $S \subseteq V$. It is conventional to define the "integral of the normal component of $\mathbf{F}$ over $S$", often denoted by $\iint_S \mathbf{F}_N$, by the formula
$$\iint_S \mathbf{F}_N = \iint_K \langle \mathbf{F} \circ h, \mathbf{n} \rangle$$
where $\mathbf{n} = h_1 \times h_2$. (Notation: $h_k$ is the $k$th partial derivative of $h$.)
Make sense of the preceding definition in terms of the definition of the integral of 2-forms over a smoothly triangulated manifold (with or without boundary). In particular, suppose that $\mathbf{F} = a\,\mathbf{i} + b\,\mathbf{j} + c\,\mathbf{k}$ (where $a$, $b$, and $c$ are smooth functions) and let $\omega = a\, dy \wedge dz + b\, dz \wedge dx + c\, dx \wedge dy$. This 2-form is conventionally called the "normal component of $\mathbf{F}$" and is denoted by $\mathbf{F}_N$. Notice that $\mathbf{F}_N$ is just $*\mu$ where $\mu$ is the 1-form associated with the vector field $\mathbf{F}$. Hint. Proceed as follows.
(a) Show that the vector $\mathbf{n}(u, v)$ is perpendicular to the surface $S$ at $h(u, v)$ for each $(u, v)$ in $[K]$ by showing that it is perpendicular to $D(h \circ c)(0)$ whenever $c$ is a smooth curve in $[K]$ such that $c(0) = (u, v)$.

(b) Let $u$ and $v$ (in that order) be the coordinates in the plane of $[K]$ and $x$, $y$, and $z$ (in that order) be the coordinates in $\mathbb{R}^3$. Show that $h^*(dx) = h^1_1\, du + h^1_2\, dv$. Also compute $h^*(dy)$ and $h^*(dz)$.
Remark. If at each point in $[K]$ we identify the tangent plane to $\mathbb{R}^2$ with $\mathbb{R}^2$ itself and if we use conventional notation, the "$v$" which appears in (3) of definition 45.1.6 is just not written. One keeps in mind that the components of $h$ and all the differential forms are functions on (a neighborhood of) $[K]$.
(c) Now find $h^*(\omega)$. (Recall that $\omega = \mathbf{F}_N$ is defined above.)
(d) Show for each simplex $(s)$ in $K$ that
$$\Bigl(\int_2 \omega\Bigr)\langle s \rangle = \iint_{[s]} \langle \mathbf{F} \circ h, \mathbf{n} \rangle.$$
(e) Finally show that if $\langle s_1 \rangle, \dots, \langle s_n \rangle$ are the oriented 2-simplexes of $K$ and $c = \sum_{k=1}^n \langle s_k \rangle$, then
$$\Bigl(\int_2 \omega\Bigr)(c) = \iint_{[K]} \langle \mathbf{F} \circ h, \mathbf{n} \rangle.$$

46.2.4. Exercise. Let $\mathbf{F}(x, y, z) = xz\,\mathbf{i} + yz\,\mathbf{j}$ and $H$ be the hemisphere of $x^2 + y^2 + z^2 = 4$ for which $z \ge 0$. Use exercise 46.2.3 to find $\iint_H \mathbf{F}_N$.
CHAPTER 47

Stokes’ Theorem

47.1. Background
Topics: Stokes’ theorem.

47.1.1. Theorem (Stokes' theorem). Suppose that $(M, K, h)$ is an oriented smoothly triangulated manifold with boundary. Then the integration operator $\int = \{\int_p\}_{p \in \mathbb{Z}}$ is a cochain map from the cochain complex $(\Omega^*(M), d)$ to the cochain complex $(C^*(K), \partial^*)$.
This is an important and standard theorem, which appears in many versions and with many different proofs. See, for example, [1], theorem 7.2.6; [19], chapter XVII, theorem 2.1; [21], theorem 10.8; or [26], theorems 4.7 and 4.9.
Recall that when we say in Stokes' theorem that the integration operator is a cochain map, we are saying that the following diagram commutes.
$$\begin{array}{ccccc}
\cdots \xrightarrow{\;d\;} & \Omega^p(M) & \xrightarrow{\;d\;} & \Omega^{p+1}(M) & \xrightarrow{\;d\;} \cdots \\
& \downarrow{\textstyle\int} & & \downarrow{\textstyle\int} & \\
\cdots \xrightarrow{\;\partial^*\;} & C^p(K) & \xrightarrow{\;\partial^*\;} & C^{p+1}(K) & \xrightarrow{\;\partial^*\;} \cdots
\end{array}$$
Thus if $\omega$ is a p-form on $M$ and $\langle s \rangle$ is an oriented (p+1)-simplex belonging to $K$, then we must have
$$\Bigl(\int_{p+1} d\omega\Bigr)\langle s \rangle = \Bigl(\partial^* \int_p \omega\Bigr)\langle s \rangle. \tag{47.1}$$
This last equation (47.1) can be written in terms of integration over oriented simplexes:
$$\int_{\langle s \rangle} d\bigl(h_s^*\, \omega\bigr) = \int_{\partial \langle s \rangle} h_s^*\, \omega. \tag{47.2}$$
In more conventional notation all mention of the triangulating simplicial complex $K$ and of the map $h$ is suppressed. This is justified by the fact that it can be shown that the value of the integral is independent of the particular triangulation used. Then when equations of the form (47.2) are added over all the (p+1)-simplexes comprising $K$ we arrive at a particularly simple formulation of (the conclusion of) Stokes' theorem:
$$\int_M d\omega = \int_{\partial M} \omega. \tag{47.3}$$
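The simplest instance of (47.3) is already familiar: for a 0-form $f$ on $M = [a, b] \subseteq \mathbb{R}$, the boundary consists of the point $b$ with orientation $+$ and the point $a$ with orientation $-$, so (47.3) reads
$$\int_a^b f'(x)\, dx = f(b) - f(a),$$
the fundamental theorem of calculus. (Compare the exercises of the next section, which ask for this and several other classical specializations.)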
One particularly important topic that has been glossed over in the preceding is a discussion of orientable manifolds (those which possess nowhere vanishing volume forms), their orientations, and the manner in which an orientation of a manifold with boundary induces an orientation on its boundary. One of many places where you can find a careful development of this material is in sections 6.5 and 7.2 of [1].
47.1.2. Theorem. Let $\omega$ be a 1-form on a connected open subset $U$ of $\mathbb{R}^2$. Then $\omega$ is exact on $U$ if and only if $\int_C \omega = 0$ for every simple closed curve $C$ in $U$.
For a proof of this result see [10], chapter 2, proposition 1.

47.2. Exercises (Due: Wed. May 20)


47.2.1. Exercise. Let $\omega = -\dfrac{y\, dx}{x^2 + y^2} + \dfrac{x\, dy}{x^2 + y^2}$. Show that on the region $\mathbb{R}^2 \setminus \{(0, 0)\}$ the 1-form $\omega$ is closed but not exact.

47.2.2. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that $\omega$ is a 0-form and $M$ is a 1-manifold (with boundary) in $\mathbb{R}$?

47.2.3. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that $\omega$ is a 0-form and $M$ is a 1-manifold (with boundary) in $\mathbb{R}^3$?

47.2.4. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that $\omega$ is a 1-form and $M$ is a 2-manifold (with boundary) in $\mathbb{R}^2$?

47.2.5. Exercise. Use exercise 47.2.4 to compute $\int_{S^1} (2x^3 - y^3)\, dx + (x^3 + y^3)\, dy$ (where $S^1$ is the unit circle oriented counterclockwise).

47.2.6. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that $\omega$ is a 1-form and $M$ is a 2-manifold (with boundary) in $\mathbb{R}^3$?

47.2.7. Exercise. What classical theorem do we get (for smooth functions) from the version of Stokes' theorem given by equation (47.3) in the special case that $\omega$ is a 2-form and $M$ is a 3-manifold (with boundary) in $\mathbb{R}^3$?

47.2.8. Exercise. Your good friend Fred R. Dimm calls you on his cell phone seeking help
with a math problem. He says that he wants to evaluate the integral of the normal component of
the vector field on R3 whose coordinate functions are x, y, and z (in that order) over the surface
of a cube whose edges have length 4. Fred is concerned that he’s not sure of the coordinates of
the vertices of the cube. How would you explain to Fred (over the phone) that it doesn’t matter
where the cube is located and that it is entirely obvious that the value of the surface integral he is
interested in is 192?
CHAPTER 48

Quadratic Forms

48.1. Background
Topics: quadratic forms.

48.1.1. Definition. Let $V$ be a finite dimensional real vector space. A function $Q \colon V \to \mathbb{R}$ is a quadratic form if
(i) $Q(v) = Q(-v)$ for all $v$, and
(ii) the map $B \colon V \times V \to \mathbb{R} \colon (u, v) \mapsto Q(u+v) - Q(u) - Q(v)$ is a bilinear form.
In this case $B$ is the bilinear form associated with the quadratic form $Q$. It is obviously symmetric. Note: In many texts $B(u, v)$ is defined to be $\frac{1}{2}\bigl[Q(u+v) - Q(u) - Q(v)\bigr]$.
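For example, on $V = \mathbb{R}^2$ the function $Q(v) = v_1^2 - v_2^2$ is a quadratic form: clearly $Q(v) = Q(-v)$, and
$$B(u, v) = Q(u+v) - Q(u) - Q(v) = 2(u_1 v_1 - u_2 v_2),$$
which is bilinear and symmetric. Observe that with the convention adopted here $B(v, v) = 2Q(v)$.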


48.2. Exercises (Due: Fri. May 22)


48.2.1. Exercise. Let B be a symmetric bilinear form on a real finite dimensional vector
space V . Define Q : V → R by Q(v) = B(v, v). Show that Q is a quadratic form on V .

48.2.2. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V . Prove
that
Q(u + v + w) − Q(u + v) − Q(u + w) − Q(v + w) + Q(u) + Q(v) + Q(w) = 0
for all u, v, w ∈ V .

48.2.3. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V .
Prove that Q(αv) = α2 Q(v) for all α ∈ R and v ∈ V . Hint. First use exercise 48.2.2 to show that
Q(2v) = 4Q(v) for every v ∈ V .
CHAPTER 49

Definition of Clifford Algebra

49.1. Background
Topics: an algebra universal over a vector space, Clifford map; Clifford algebra.

49.1.1. Definition. Let $V$ be a vector space. A pair $(U, \iota)$, where $U$ is a unital algebra and $\iota \colon V \to U$ is a linear map, is universal over $V$ if for every unital algebra $A$ and every linear map $f \colon V \to A$ there exists a unique unital algebra homomorphism $\tilde{f} \colon U \to A$ such that $\tilde{f} \circ \iota = f$.

49.1.2. Definition. Let $V$ be a real finite dimensional vector space with a quadratic form $Q$ and $A$ be a real unital algebra. A map $f \colon V \to A$ is a Clifford map if
(i) $f$ is linear, and
(ii) $\bigl(f(v)\bigr)^2 = Q(v)\,\mathbf{1}_A$ for every $v \in V$.

49.1.3. Definition. Let $V$ be a real finite dimensional vector space with a quadratic form $Q$. The Clifford algebra over $V$ is a real unital algebra $\mathrm{Cl}(V, Q)$, together with a Clifford map $j \colon V \to \mathrm{Cl}(V, Q)$, which satisfies the following universal condition: for every real unital algebra $A$ and every Clifford map $f \colon V \to A$, there exists a unique unital algebra homomorphism $\hat{f} \colon \mathrm{Cl}(V, Q) \to A$ such that $\hat{f} \circ j = f$.
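For example, take $V = \mathbb{R}$ and $Q(v) = -v^2$. Then $f \colon \mathbb{R} \to \mathbb{C} \colon v \mapsto iv$ is a Clifford map, since
$$\bigl(f(v)\bigr)^2 = (iv)^2 = -v^2 = Q(v)\,1_{\mathbb{C}},$$
and by the universal condition $f$ factors through $\mathrm{Cl}(V, Q)$ (compare exercise 51.2.4).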


49.2. Exercises (Due: Wed. May 27)


49.2.1. Exercise. Let V be a vector space. Prove that if U and U 0 are unital algebras universal
over V , then they are isomorphic.

49.2.2. Exercise. Let V be a vector space. Prove that there exists a unital algebra which is
universal over V .

49.2.3. Exercise. Show that condition (ii) in definition 49.1.2 is equivalent to
(ii$'$) $f(u)f(v) + f(v)f(u) = B(u, v)\,\mathbf{1}_A$ for all $u, v \in V$,
where $B$ is the bilinear form associated with $Q$.

49.2.4. Exercise. Let V be a real finite dimensional vector space with a quadratic form Q.
Prove that if the Clifford algebra Cl(V, Q) exists, then it is unique up to isomorphism.

49.2.5. Exercise. Let $V$ be a real finite dimensional vector space with a quadratic form $Q$. Prove that the Clifford algebra $\mathrm{Cl}(V, Q)$ exists. Hint. Recall the definition of the tensor algebra $T(V)$ in 36.1.2. Try $T(V)/J$ where $J$ is the ideal in $T(V)$ generated by elements of the form $v \otimes v - Q(v)\,\mathbf{1}_{T(V)}$ where $v \in V$.
CHAPTER 50

Orthogonality with Respect to Bilinear Forms

50.1. Background
Topics: orthogonality with respect to bilinear forms, the kernel of a bilinear form, nondegenerate
bilinear forms.

50.1.1. Definition. Let B be a symmetric bilinear form on a real vector space V . Vectors v
and w in V are orthogonal, in which case we write v ⊥ w, if B(v, w) = 0. The kernel of B is
the set of all k ∈ V such that k ⊥ v for every v ∈ V . The bilinear form is nondegenerate if its
kernel is {0}.

50.1.2. Definition. Let $V$ be a finite dimensional real vector space and $B$ be a symmetric bilinear form on $V$. An ordered basis $E = (e_1, \dots, e_n)$ for $V$ is B-orthonormal if
(a) $B(e_i, e_j) = 0$ whenever $i \neq j$, and
(b) for each $i \in \mathbb{N}_n$ the number $B(e_i, e_i)$ is $-1$ or $+1$ or $0$.
50.1.3. Theorem. If V is a finite dimensional real vector space and B is a symmetric bilinear
form on V , then V has a B-orthonormal basis.
Proof. See [5], chapter 1, theorem 7.6.

50.1.4. Convention. Let $V$ be a finite dimensional real vector space, let $B$ be a symmetric bilinear form on $V$, and let $Q$ be the quadratic form associated with $B$. Let us agree that whenever $E = (e_1, \dots, e_n)$ is an ordered B-orthonormal basis for $V$, we order the basis elements in such a way that for some positive integers $p$ and $q$
$$Q(e_i) = \begin{cases} 1, & \text{if } 1 \le i \le p; \\ -1, & \text{if } p+1 \le i \le p+q; \\ 0, & \text{if } p+q+1 \le i \le n. \end{cases}$$
50.1.5. Theorem. Let $V$ be a finite dimensional real vector space, let $B$ be a symmetric bilinear form on $V$, and let $Q$ be the quadratic form associated with $B$. Then there exist $p, q \in \mathbb{Z}^+$ such that if $E = (e_1, \dots, e_n)$ is a B-orthonormal basis for $V$ and $v = \sum v_k e_k$, then
$$Q(v) = \sum_{k=1}^{p} v_k^2 - \sum_{k=p+1}^{p+q} v_k^2.$$
Proof. See [5], chapter 1, theorem 7.11.
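By Sylvester's law of inertia the integers $p$ and $q$ do not depend on the choice of B-orthonormal basis, and they can be read off from the eigenvalue signs of any Gram matrix of $B$. Below is a minimal sketch, assuming NumPy is available; the Gram matrix is a made-up example.

    import numpy as np

    # Gram matrix G[i, j] = B(e_i, e_j) in some (not B-orthonormal) basis.
    G = np.array([[ 2., 1.,  0.],
                  [ 1., 2.,  0.],
                  [ 0., 0., -1.]])
    eigenvalues = np.linalg.eigvalsh(G)     # real, since G is symmetric
    p = int(np.sum(eigenvalues >  1e-12))   # number of positive eigenvalues
    q = int(np.sum(eigenvalues < -1e-12))   # number of negative eigenvalues
    print(p, q)                             # 2 1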


50.2. Exercises (Due: Fri. May 29)


50.2.1. Exercise. One often sees the claim that “the classification of Clifford algebras amounts
to classifying vector spaces with quadratic forms”. Explain precisely what is meant by this assertion.

50.2.2. Exercise. Let $B$ be a symmetric bilinear form on a real finite dimensional vector space $V$. Suppose that $V$ is an orthogonal direct sum $V_1 \oplus \cdots \oplus V_n$ of subspaces. Then $B$ is nondegenerate if and only if the restriction of $B$ to $V_k$ is nondegenerate for each $k$. In fact, if $V_k^{\,\natural}$ is the kernel of the restriction of $B$ to $V_k$, then the kernel of $B$ is the orthogonal direct sum $V_1^{\,\natural} \oplus \cdots \oplus V_n^{\,\natural}$.

50.2.3. Exercise. Let B be a nondegenerate symmetric bilinear form on a real finite dimen-
sional vector space V . If W is a subspace of V , then the restriction of B to W is nondegenerate if
and only if its restriction to W ⊥ is nondegenerate. Equivalently, B is nondegenerate on W if and
only if V = W + W ⊥ .

50.2.4. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V and
let {e1 , . . . , en } be a basis for V which is orthogonal with respect to the bilinear form B associated
with Q. Show that if Q(ek ) is nonzero for 1 ≤ k ≤ p and Q(ek ) = 0 for p < k ≤ n, then the kernel
of B is the span of {ep+1 , . . . , en }.

50.2.5. Exercise. Let $Q$ be a quadratic form on a real finite dimensional vector space $V$. Show that if $\dim V = n$, then $\dim \mathrm{Cl}(V, Q) = 2^n$.

50.2.6. Exercise. Let V be a real finite dimensional vector space and let Q be the quadratic
form which is identically zero on V . Identify Cl(V, Q).
CHAPTER 51

Examples of Clifford Algebras

51.1. Background
Topics: No additional topics.
51.1.1. Notation. If $(V, Q)$ is a finite dimensional real vector space with a nondegenerate quadratic form $Q$, we often denote the Clifford algebra $\mathrm{Cl}(V, Q)$ by $\mathrm{Cl}(p, q)$, where $p$ and $q$ are as in theorem 50.1.5.


51.2. Exercises (Due: Mon. June 1)


51.2.1. Exercise. Let f : (V, Q) → (W, R) be a linear map between finite dimensional real
vector spaces with quadratic forms. If R(f (v)) = Q(v) for every v ∈ V we say that f is an
isometry.
(a) Show that if f is such a linear isometry, then there exists a unique unital algebra homo-
morphism Cl(f ) : Cl(V, Q) → Cl(W, R) such that Cl(f )(v) = f (v) for every v ∈ V .
(b) Show that Cl is a covariant functor from the category of vector spaces with quadratic
forms and linear isometries to the category of Clifford algebras and unital algebra homo-
morphisms.
(c) Show that if the linear isometry f is an isomorphism, then Cl(f ) is an algebra isomorphism.

51.2.2. Exercise. Let V be a real finite dimensional vector space, Q be a quadratic form on V ,
and A = Cl(V, Q) be the associated Clifford algebra.
(a) Show that the map $f \colon V \to V \colon v \mapsto -v$ is a linear isometry.
(b) Let $\omega = \mathrm{Cl}(f)$. Show that $\omega^2 = \mathrm{id}$.
(c) Let $A_0 = \{a \in A : \omega(a) = a\}$ and $A_1 = \{a \in A : \omega(a) = -a\}$. Show that $A = A_0 \oplus A_1$. Hint. If $a \in A$, let $a_0 = \frac{1}{2}\bigl(a + \omega(a)\bigr)$.
(d) Show that $A_i A_j \subseteq A_{i+j}$ where $i, j \in \{0, 1\}$ and $i + j$ is addition modulo 2. This says that a Clifford algebra is a $\mathbb{Z}_2$-graded (or $\mathbb{Z}/2\mathbb{Z}$-graded) algebra.
51.2.3. Exercise. Let $V = \mathbb{R}$ and $Q(v) = v^2$ for every $v \in V$. Show that the Clifford algebra $\mathrm{Cl}(1, 0)$ associated with $(\mathbb{R}, Q)$ is isomorphic to $\mathbb{R} \oplus \mathbb{R}$. Hint. Writing $e = j(1)$, so that $\{\mathbf{1}, e\}$ is a basis for $\mathrm{Cl}(1, 0)$, consider the map $u\,\mathbf{1} + v\,e \mapsto (u - v, u + v)$.

51.2.4. Exercise. Let $V = \mathbb{R}$ and $Q(v) = -v^2$ for every $v \in V$. The Clifford algebra $\mathrm{Cl}(V, Q)$ is often denoted by $\mathrm{Cl}(0, 1)$.
(a) Show that $\mathrm{Cl}(0, 1) \cong \mathbb{C}$.
(b) Show that $\mathrm{Cl}(0, 1)$ can be represented as a subalgebra of $M_2(\mathbb{R})$. Hint. AOLT, page 50, Example 2.12.

51.2.5. Exercise. Let $V = \mathbb{R}^2$ and $Q(v) = v_1^2 + v_2^2$ for every $v \in V$. The Clifford algebra $\mathrm{Cl}(V, Q)$ is often denoted by $\mathrm{Cl}(2, 0)$. Show that $\mathrm{Cl}(2, 0) \cong M_2(\mathbb{R})$. Hint. Let
$$\varepsilon_1 := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \varepsilon_2 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \varepsilon_{12} := \varepsilon_1 \varepsilon_2.$$
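A quick check of the hint (a sketch assuming NumPy is available) verifies that $\varepsilon_1$ and $\varepsilon_2$ satisfy the Clifford relations for $Q(v) = v_1^2 + v_2^2$ and that $\{I, \varepsilon_1, \varepsilon_2, \varepsilon_{12}\}$ is linearly independent, hence spans $M_2(\mathbb{R})$:

    import numpy as np

    e1 = np.array([[1., 0.], [0., -1.]])
    e2 = np.array([[0., 1.], [1., 0.]])
    I2 = np.eye(2)

    assert np.allclose(e1 @ e1, I2)            # f(e1)^2 = Q(e1) 1
    assert np.allclose(e2 @ e2, I2)            # f(e2)^2 = Q(e2) 1
    assert np.allclose(e1 @ e2 + e2 @ e1, 0)   # anticommutation (exercise 49.2.3)

    # Flatten the four candidate basis matrices; rank 4 means they span M2(R).
    basis = np.stack([I2, e1, e2, e1 @ e2]).reshape(4, 4)
    print(np.linalg.matrix_rank(basis))        # 4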

51.2.6. Exercise. Let $V = \mathbb{R}^2$ and $Q(v) = -v_1^2 - v_2^2$ for every $v \in V$. The Clifford algebra $\mathrm{Cl}(V, Q)$ is often denoted by $\mathrm{Cl}(0, 2)$.
(a) Show that $\mathrm{Cl}(0, 2) \cong \mathbb{H}$.
(b) Show that $\mathrm{Cl}(0, 2)$ can be represented as a subalgebra of $M_4(\mathbb{R})$. Hint. AOLT, pages 50–51, Example 2.13.

51.2.7. Exercise. Take a look at the web page written by Pertti Lounesto:
http://users.tkk.fi/∼ppuska/mirror/Lounesto/counterexamples.htm

51.2.8. Exercise. Show that the Clifford algebra $\mathrm{Cl}(3, 1)$ (Minkowski space-time algebra) is isomorphic to $M_4(\mathbb{R})$. Hint. Exercise 51.2.7.
Bibliography

1. Ralph Abraham, Jerrold E. Marsden, and Tudor Ratiu, Manifolds, Tensor Analysis, and Applications, Addison-
Wesley, Reading, MA, 1983. 140, 143, 144
2. Robert A. Beezer, A First Course in Linear Algebra, 2004,
http://linear.ups.edu/download/fcla-electric-2.00.pdf. vii
3. Richard L. Bishop and Richard J. Crittenden, Geometry of Manifolds, Academic Press, New York, 1964. 111,
115, 116, 119
4. Przemyslaw Bogacki, Linear Algebra Toolkit, 2005, http://www.math.odu.edu/∼bogacki/lat. vii
5. William C. Brown, A Second Course in Linear Algebra, John Wiley, New York, 1988. vii, 151
6. Stewart S. Cairns, A simple triangulation method for smooth manifolds, Bull. Amer. Math. Soc. 67 (1961),
389–390. 135
7. P. M. Cohn, Basic Algebra: Groups, Rings and Fields, Springer, London, 2003. 7
8. Charles W. Curtis, Linear Algebra: An Introductory Approach, Springer, New York, 1984. vii
9. Paul Dawkins, Linear Algebra, 2007, http://tutorial.math.lamar.edu/pdf/LinAlg/LinAlg Complete.pdf. vii
10. Manfredo P. do Carmo, Differential Forms and Applications, Springer-Verlag, Berlin, 1994. 144
11. John M. Erdman, A Companion to Real Analysis, 2007,
http://www.mth.pdx.edu/∼erdman/CTRA/CRAlicensepage.html. 33
12. , Operator Algebras: K-Theory of C ∗ -Algebras in Context, 2008,
http://www.mth.pdx.edu/∼erdman/614/operator algebras pdf.pdf. 69, 77
13. Paul R. Halmos, Finite-Dimensional Vector Spaces, D. Van Nostrand, Princeton, 1958. vii
14. Jim Hefferon, Linear Algebra, 2006, http://joshua.smcvt.edu/linearalgebra. vii
15. Kenneth Hoffman and Ray Kunze, Linear Algebra, second ed., Prentice Hall, Englewood Cliffs, N.J., 1971. vii
16. S. T. Hu, Differentiable Manifolds, Holt, Rinehart, and Winston, New York, 1969. 135
17. Thomas W. Hungerford, Algebra, Springer-Verlag, New York, 1974. 7
18. Saunders Mac Lane and Garrett Birkhoff, Algebra, Macmillan, New York, 1967. 7
19. Serge Lang, Fundamentals of Differential Geometry, Springer Verlag, New York, 1999. 143
20. John M. Lee, Introduction to Smooth Manifolds, Springer, New York, 2003. 119, 135
21. Ib Madsen and Jørgen Tornehave, From Calculus to Cohomology: de Rham cohomology and characteristic classes,
Cambridge University Press, Cambridge, 1997. 143
22. Theodore W. Palmer, Banach Algebras and the General Theory of ∗ -Algebras I–II, Cambridge University Press,
Cambridge, 1994/2001. 7
23. Steven Roman, Advanced Linear Algebra, second ed., Springer-Verlag, New York, 2005. vii, 101
24. I. M. Singer and J. A. Thorpe, Lecture Notes on Elementary Topology and Geometry, Springer Verlag, New York,
1967. 135
25. Gerard Walschap, Metric Structures in Differential Geometry, Springer-Verlag, New York, 2004. 116, 119
26. Frank W. Warner, Foundations of Differentiable Manifolds and Lie Groups, Springer Verlag, New York, 1983.
135, 143
27. Eric W. Weisstein, MathWorld, A Wolfram Web Resource, http://mathworld.wolfram.com. vii
28. Wikipedia, Wikipedia, The Free Encyclopedia, http://en.wikipedia.org. vii
