John M. Erdman
Portland State University
2009
© John M. Erdman
Contents
Preface
Some Algebraic Objects
Chapter 1. Abelian Groups
1.1. Background
1.2. Exercises (Due: Wed. Jan. 7)
Chapter 2. Functions and Diagrams
2.1. Background
2.2. Exercises (Due: Fri. Jan. 9)
Chapter 3. Rings
3.1. Background
3.2. Exercises (Due: Mon. Jan. 12)
Chapter 4. Vector Spaces and Subspaces
4.1. Background
4.2. Exercises (Due: Wed. Jan. 14)
Chapter 5. Linear Combinations and Linear Independence
5.1. Background
5.2. Exercises (Due: Fri. Jan. 16)
Chapter 6. Bases for Vector Spaces
6.1. Background
6.2. Exercises (Due: Wed. Jan. 21)
Chapter 7. Linear Transformations
7.1. Background
7.2. Exercises (Due: Fri. Jan. 23)
Chapter 8. Linear Transformations (continued)
8.1. Background
8.2. Exercises (Due: Mon. Jan. 26)
Chapter 9. Duality in Vector Spaces
9.1. Background
9.2. Exercises (Due: Wed. Jan. 28)
Chapter 10. Duality in Vector Spaces (continued)
10.1. Background
10.2. Exercises (Due: Fri. Jan. 30)
Chapter 11. The Language of Categories
11.1. Background
11.2. Exercises (Due: Mon. Feb. 2)
26.1. Background
26.2. Exercises (Due: Fri. Mar. 13)
Chapter 27. The Spectral Theorem for Inner Product Spaces
27.1. Background
27.2. Exercises (Due: Mon. Mar. 30)
Chapter 28. Multilinear Maps
28.1. Background
Differential calculus
Permutations
Multilinear maps
28.2. Exercises (Due: Wed. Apr. 1)
Chapter 29. Determinants
29.1. Background
29.2. Exercises (Due: Fri. Apr. 3)
Chapter 30. Free Vector Spaces
30.1. Background
30.2. Exercises (Due: Mon. Apr. 6)
Chapter 31. Tensor Products of Vector Spaces
31.1. Background
31.2. Exercises (Due: Wed. Apr. 8)
Chapter 32. Tensor Products of Vector Spaces (continued)
32.1. Background
32.2. Exercises (Due: Fri. Apr. 10)
Chapter 33. Tensor Products of Linear Maps
33.1. Background
33.2. Exercises (Due: Mon. Apr. 13)
Chapter 34. Grassmann Algebras
34.1. Background
34.2. Exercises (Due: Wed. Apr. 15)
Chapter 35. Graded Algebras
35.1. Background
35.2. Exercises (Due: Fri. Apr. 17)
Chapter 36. Existence of Grassmann Algebras
36.1. Background
36.2. Exercises (Due: Mon. Apr. 20)
Chapter 37. The Hodge ∗-operator
37.1. Background
37.2. Exercises (Due: Wed. Apr. 22)
Chapter 38. Differential Forms
38.1. Background
38.2. Exercises (Due: Fri. Apr. 24)
Chapter 39. The Exterior Differentiation Operator
39.1. Background
Preface
This collection of exercises is designed to provide a framework for discussion in a two-term senior/first-year graduate level class in linear and multilinear algebra such as the one I have conducted fairly regularly at Portland State University.
Most recently I have been using Douglas R. Farenick’s Algebras of Linear Transformations
(Springer-Verlag 2001) as a text for parts of the course. (In these Exercises it is referred to as
AOLT.) For the more elementary parts of linear algebra there is certainly no shortage of readily
available texts. In particular there are now a number of excellent online texts which are available
free of charge. Among the best are Linear Algebra [14] by Jim Hefferon,
http://joshua.smcvt.edu/linearalgebra
A First Course in Linear Algebra [2] by Robert A. Beezer,
http://linear.ups.edu/download/fcla-electric-2.00.pdf
and Linear Algebra [9] by Paul Dawkins.
http://tutorial.math.lamar.edu/pdf/LinAlg/LinAlg_Complete.pdf
Another very useful online resource is Przemyslaw Bogacki’s Linear Algebra Toolkit [4].
http://www.math.odu.edu/~bogacki/lat
And, of course, many topics in linear algebra are discussed with varying degrees of thoroughness
in the Wikipedia [28]
http://en.wikipedia.org
and Eric Weisstein’s Mathworld [27].
http://mathworld.wolfram.com
For more advanced topics in linear algebra some references that I particularly like are Paul Halmos's Finite-Dimensional Vector Spaces [13], Hoffman and Kunze's Linear Algebra [15], Charles W. Curtis's Linear Algebra: An Introductory Approach [8], Steven Roman's Advanced Linear Algebra [23], and William C. Brown's A Second Course in Linear Algebra [5]. Readable introductions
to some topics in multilinear algebra (exterior, graded, and Clifford algebras, for example) are a
bit harder to come by. For these I will suggest specific sources later in these Exercises.
The short introductory Background section which precedes each assignment is intended to
fix notation and provide “official” definitions and statements of important theorems for ensuing
discussions.
Some Algebraic Objects
CHAPTER 1
Abelian Groups
1.1. Background
Topics: groups, Abelian groups, group homomorphisms.
1.1.2. Definition. Let G and H be Abelian groups. For f and g in Hom(G, H) we define
f + g : G → H : x ↦ f(x) + g(x).
1.2. Exercises (Due: Wed. Jan. 7)
1.2.2. Exercise. Prove that the identity element in an Abelian group is unique.
1.2.3. Exercise. Let x be an element of an Abelian group. Prove that the inverse of x is
unique.
1.2.5. Exercise (†). Let x be an element of an Abelian group. Prove that −(−x) = x.
1.2.7. Exercise (†). Let f : G → H be a homomorphism of Abelian groups. Show that f (−x) =
−f (x) for each x ∈ G.
1.2.8. Exercise. Let E2 be the Euclidean plane. It contains points (which do not have coordi-
nates) and lines (which do not have equations). A directed segment is an ordered pair of points.
Define two directed segments to be equivalent if they are congruent (have the same length), lie on
parallel lines, and have the same direction. This is clearly an equivalence relation on the set DS
of directed segments in the plane. We denote by $\overrightarrow{PQ}$ the equivalence class containing the directed segment (P, Q), going from the point P to the point Q. Define an operation on DS by
$\overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{PR}.$
Show that this operation is well defined and that under it DS is an Abelian group.
1.2.9. Exercise. Suppose that A, B, C, and D are points in the plane such that $\overrightarrow{AB} = \overrightarrow{CD}$. Show that $\overrightarrow{AC} = \overrightarrow{BD}$.
1.2.10. Exercise. Let G and H be Abelian groups. Show that with addition as defined in 1.1.2
Hom(G, H) is an Abelian group.
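Readers who like to experiment may enjoy checking a small concrete instance of exercise 1.2.10 by machine. The Python sketch below is our own illustration (the choice of Z₄ for both G and H, and all helper names, are ours): it enumerates the homomorphisms from Z₄ to Z₄ and verifies closure, identity, and inverses under the pointwise addition of definition 1.1.2. This is numerical evidence for one example, not a proof.

```python
from itertools import product

n = 4  # use the cyclic group Z_4 as a concrete stand-in for both G and H

def is_hom(f):
    # f is a tuple of length n; f[x] is the image of x under f
    return all(f[(x + y) % n] == (f[x] + f[y]) % n
               for x in range(n) for y in range(n))

# enumerate every function Z_4 -> Z_4 and keep the homomorphisms
homs = [f for f in product(range(n), repeat=n) if is_hom(f)]

def add(f, g):
    # pointwise sum (f + g)(x) = f(x) + g(x), as in definition 1.1.2
    return tuple((f[x] + g[x]) % n for x in range(n))

zero = tuple(0 for _ in range(n))                      # candidate identity
neg = lambda f: tuple((-f[x]) % n for x in range(n))   # candidate inverse of f

assert all(is_hom(add(f, g)) for f in homs for g in homs)             # closure
assert zero in homs and all(add(f, zero) == f for f in homs)          # identity
assert all(neg(f) in homs and add(f, neg(f)) == zero for f in homs)   # inverses
print(len(homs))  # Hom(Z_4, Z_4) has 4 elements
```

Each homomorphism here is determined by the image of 1, which is why exactly four turn up.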
CHAPTER 2
Functions and Diagrams
2.1. Background
Topics: functions, commutative diagrams.
2.1.1. Definition. It is frequently useful to think of functions as arrows in diagrams. For
example, the situation h : R → S, j : R → T , k : S → U , f : T → U may be represented by the
following diagram.
        j
    R ----> T
    |       |
    h       f
    v       v
    S ----> U
        k
The diagram is said to commute if k ◦ h = f ◦ j. Diagrams need not be rectangular. For instance,
    R
    | \
    h   \ d
    v     \
    S ----> U
        k
is a commutative diagram if d = k ◦ h.
2.1.2. Example. Here is one diagrammatic way of stating the associative law for composition
of functions: If h : R → S, g : S → T , and f : T → U and we define j and k so that the triangles in
the diagram
j
R /T
?
g
h f
S /U
k
commute, then the square also commutes.
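Commutativity of such diagrams can be tested mechanically for finite sets. In the Python sketch below (the particular sets and functions are our own, chosen arbitrarily), j and k are defined exactly as in the example, and the outer square then commutes automatically by associativity of composition.

```python
# Functions between finite sets modeled as dicts; compose(g, f) is g ∘ f.
def compose(g, f):
    return {x: g[f[x]] for x in f}

h = {0: 'a', 1: 'b'}                # h : R → S, R = {0,1}, S = {'a','b','c'}
g = {'a': 'p', 'b': 'q', 'c': 'q'}  # g : S → T, T = {'p','q'}
f = {'p': 1, 'q': 3}                # f : T → U, U = {1,2,3}

j = compose(g, h)   # define j so that the upper triangle commutes
k = compose(f, g)   # define k so that the lower triangle commutes

# ...then the outer square commutes: k ∘ h = f ∘ j
assert compose(k, h) == compose(f, j)
```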
2.1.3. Convention. If S, T , and U are sets we will not distinguish between (S × T ) × U , S × (T × U ), and S × T × U . That is, the ordered pairs ((s, t), u) and (s, (t, u)) and the ordered triple (s, t, u) will be treated as identical.
2.1.6. Convention. We will have use for a standard one-element set, which, if we wish, we
can regard as the Cartesian product of an empty family of sets. We will denote it by 1. For each
set S there is exactly one function from S into 1. We will denote it by εS . If no confusion is likely
to arise we write ε for εS .
2.1.7. Notation. We denote by δ the diagonal mapping of a set S into S × S. That is,
δ : S → S × S : s ↦ (s, s).
2.1.8. Notation. Let S be a set. We denote by σ the interchange (or switching) operation on S × S. That is,
σ : S × S → S × S : (s, t) ↦ (t, s).
2.1.9. Notation. If S and T are sets we denote by F(S, T ) the family of all functions from S
into T . We will use F(S) for F(S, R), the set of all real valued functions on S.
2.2. Exercises (Due: Fri. Jan. 9)
2.2.1. Exercise. Let S be a set and suppose that a : S × S → S is a function for which the diagram

    S × S × S --id×a--> S × S
        |                 |
      a×id                a
        v                 v
      S × S -----a------> S          (D1)

commutes. What is (S, a)? Hint. Interpret a as, for example, addition (or multiplication).
2.2.2. Exercise. Let S be a set and suppose that a : S × S → S and η : 1 → S are functions
such that both diagram (D1) above and the diagram (D2) which follows commute.
                 S
             /   ^   \
           f     a     g
          /      |      \
    1 × S ----> S × S <---- S × 1          (D2)
         η×id           id×η
    G × G --f×f--> H × H
      |              |
      +              +
      v              v
      G -----f-----> H

commutes. What can be said about the function f?
2.2.6. Exercise (††). Let S be a set with exactly one element. Discuss the cardinality of (that is, the number of elements in) the sets F(∅, ∅), F(∅, S), F(S, ∅), and F(S, S).
CHAPTER 3
Rings
3.1. Background
Topics: rings; zero divisors; cancellation property; ring homomorphisms.
3.1.2. Definition. A nonzero element a of a ring is a zero divisor (or divisor of zero) if
there exists a nonzero element b of the ring such that (i) ab = 0 or (ii) ba = 0.
Nearly everyone agrees that a nonzero element a of a ring is a left divisor of zero if it satisfies (i) for some nonzero b, and a right divisor of zero if it satisfies (ii) for some nonzero b. There, however, agreement on terminology ceases. Some authors ([7], for example) use the definition above for divisor of zero;
others ([17], for example) require a divisor of zero to be both a left and a right divisor of zero; and
yet others ([18], for example) avoid the issue entirely by defining zero divisors only for commutative
rings. Palmer in [22] makes the most systematic distinctions: a zero divisor is defined as above;
an element which is both a left and a right zero divisor is a two-sided zero divisor ; and if the same
nonzero b makes both (i) and (ii) hold a is a joint zero divisor.
3.1.3. Definition. A function f : R → S between rings is a (ring) homomorphism if
f (x + y) = f (x) + f (y)
and
f (xy) = f (x)f (y)
for all x and y in R. If in addition R and S are unital rings and f (1R ) = 1S then f is a unital
(ring) homomorphism.
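A concrete unital ring homomorphism is the reduction map from Z onto Z₆. The following Python sketch (our own illustration; the modulus 6 and sample range are arbitrary choices) spot-checks both conditions of definition 3.1.3, together with the unital condition, on a range of sample integers.

```python
n = 6

def f(x):
    # the canonical quotient map Z → Z_6, x ↦ x mod 6
    return x % n

xs = range(-10, 11)
# the two conditions of definition 3.1.3, with arithmetic in Z_6 done mod 6
assert all(f(x + y) == (f(x) + f(y)) % n for x in xs for y in xs)
assert all(f(x * y) == (f(x) * f(y)) % n for x in xs for y in xs)
assert f(1) == 1  # f(1_Z) = 1_{Z_6}, so f is unital
```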
3.2. Exercises (Due: Mon. Jan. 12)
3.2.4. Exercise. A division ring has no zero divisors. That is, if ab = 0 in a division ring, then
a = 0 or b = 0.
3.2.5. Exercise (†). A ring has the cancellation property if and only if it has no zero divisors.
3.2.6. Exercise. Let G, H, and J be Abelian groups. If f ∈ Hom(G, H) and g ∈ Hom(H, J),
then the composite of g with f belongs to Hom(G, J). Note: This composite is usually written as
gf rather than as g ◦ f .
3.2.7. Exercise. Let G be an Abelian group. Then Hom(G) is a unital ring (under the opera-
tions of addition and composition).
3.2.8. Exercise. Let (G, +) be an Abelian group, F a field, and M : F → Hom(G) be a unital
ring homomorphism. What is (G, +, M )?
CHAPTER 4
Vector Spaces and Subspaces
4.1. Background
Topics: vector spaces, linear (or vector) subspaces.
4.1.1. Notation.
F = F[a, b] = {f : f is a real valued function on the interval [a, b]}
P = P[a, b] = {p : p is a polynomial function on [a, b]}
P4 = P4 [a, b] = {p ∈ P : the degree of p is three or less}
Q4 = Q4 [a, b] = {p ∈ P : the degree of p is equal to 4}
C = C[a, b] = {f ∈ F : f is continuous}
D = D[a, b] = {f ∈ F : f is differentiable}
K = K[a, b] = {f ∈ F : f is a constant function}
B = B[a, b] = {f ∈ F : f is bounded}
J = J [a, b] = {f ∈ F : f is integrable}
(A function f ∈ F is bounded if there exists a number M ≥ 0 such that |f (x)| ≤ M for all x in [a, b]. It is (Riemann) integrable if it is bounded and the Riemann integral ∫_a^b f (x) dx exists.)
4.1.2. Notation. For a vector space V we will write U V to indicate that U is a subspace
of V . To distinguish this concept from other uses of the word “subspace” (topological subspace, for
example) writers frequently use the expressions linear subspace, vector subspace, or linear manifold.
4.1.3. Definition. If A and B are subsets of a vector space then the sum of A and B, denoted
by A + B, is defined by
A + B := {a + b : a ∈ A and b ∈ B}.
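The sum of two sets is easy to compute in small examples. The sketch below (our own choice of two finite subsets of R², with vectors as coordinate tuples) applies definition 4.1.3 directly.

```python
# Two finite subsets of R^2, with vectors as coordinate tuples
A = {(1, 0), (0, 2)}
B = {(0, 0), (3, 3)}

# A + B = {a + b : a ∈ A and b ∈ B}, addition taken coordinatewise
A_plus_B = {(a1 + b1, a2 + b2) for (a1, a2) in A for (b1, b2) in B}

assert A_plus_B == {(1, 0), (4, 3), (0, 2), (3, 5)}
```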
4.2. Exercises (Due: Wed. Jan. 14)
4.2.3. Exercise. Let V be the set of all real numbers. Define an operation of “addition” by
x ⊞ y = the maximum of x and y
for all x, y ∈ V . Define an operation of “scalar multiplication” by
α ⊡ x = αx
for all α ∈ R and x ∈ V . Prove or disprove: under the operations ⊞ and ⊡ the set V is a vector space.
4.2.4. Exercise (†). Let V be the set of all real numbers x such that x > 0. Define an operation of “addition” by
x ⊞ y = xy
for all x, y ∈ V . Define an operation of “scalar multiplication” by
α ⊡ x = xᵅ
for all α ∈ R and x ∈ V . Prove or disprove: under the operations ⊞ and ⊡ the set V is a vector space.
4.2.5. Exercise. Let V be R2 , the set of all ordered pairs (x, y) of real numbers. Define an operation of “addition” by
(u, v) ⊞ (x, y) = (u + x + 1, v + y + 1)
for all (u, v) and (x, y) in V . Define an operation of “scalar multiplication” by
α ⊡ (x, y) = (αx, αy)
for all α ∈ R and (x, y) ∈ V . Prove or disprove: under the operations ⊞ and ⊡ the set V is a vector space.
4.2.6. Exercise. Find the smallest subspace of R3 containing the vectors (2, −3, −3) and
(0, 3, 2).
4.2.7. Exercise. The author’s definition of “subspace” (AOLT, page 2) is not quite correct
(especially in view of the sentence following the definition). Explain why. Provide a correct defini-
tion.
4.2.8. Exercise. Let α, β, and γ be real numbers. Prove that the set of all solutions to the
differential equation
αy 00 + βy 0 + γy = 0
is a subspace of the vector space of all twice differentiable functions on R.
4.2.9. Exercise. For a fixed interval [a, b], which sets of functions in the notation list 4.1.1 are
vector subspaces of which?
4.2.10. Exercise (††). Let U and V be subspaces of a vector space W , and let {Vλ : λ ∈ Λ} be
a (perhaps infinite) family of subspaces of W . Consider the following subsets of W .
(a) U ∩ V .
(b) U ∪ V .
(c) U + V .
(d) U − V .
(e) ∩λ∈Λ Vλ .
Which of these are subspaces of W ?
4.2.11. Exercise. Let R∞ denote the (real) vector space of all sequences x = (xk ) = (xk )_{k=1}^∞ = (x1 , x2 , x3 , . . . ) of real numbers. Define addition and scalar multiplication pointwise. That is,
x + y = (x1 + y1 , x2 + y2 , x3 + y3 , . . . )
for all x, y ∈ R∞ and
αx = (αx1 , αx2 , αx3 , . . . )
for all α ∈ R and x ∈ R∞ . Which of the following are subspaces of R∞ ?
(a) the sequences which have infinitely many zeros.
(b) the sequences which have only finitely many nonzero terms.
(c) the decreasing sequences.
(d) the convergent sequences.
(e) the sequences which converge to zero.
(f) the arithmetic progressions (that is, sequences such that xk+1 − xk is constant).
(g) the geometric progressions (that is, sequences such that xk+1 /xk is constant).
(h) the bounded sequences (that is, sequences for which there is a number M > 0 such that |xk | ≤ M for all k).
(i) the absolutely summable sequences (that is, sequences such that ∑_{k=1}^∞ |xk | < ∞).
CHAPTER 5
Linear Combinations and Linear Independence
5.1. Background
Topics: linear combinations, linear independence, span. (See AOLT, p. 2.)
5.1.1. Definition. Let A be a subset of a vector space V . The span of A is the set of all linear
combinations of elements of A. It is denoted by span(A) (or by spanF (A) if we wish to emphasize
the role of the scalar field F). The subset A spans the space V if V = span(A). In this case we
also say that A is a spanning set for V .
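Membership in the span of a finite set of vectors in Rⁿ reduces to solvability of a linear system, which can be tested numerically. The NumPy sketch below is our own illustration (the two spanning vectors and the helper name `in_span` are arbitrary choices): it uses a least-squares solve, and the candidate vector lies in the span exactly when the best fit reproduces it.

```python
import numpy as np

# span{(1,0,1), (0,1,1)} in R^3, with the spanning vectors as matrix columns
M = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])

def in_span(v):
    # least-squares solve; v lies in the span iff the best fit is exact
    coeffs = np.linalg.lstsq(M, v, rcond=None)[0]
    return bool(np.allclose(M @ coeffs, v))

assert in_span(np.array([2.0, 3.0, 5.0]))        # 2(1,0,1) + 3(0,1,1)
assert not in_span(np.array([0.0, 0.0, 1.0]))    # third coordinate can't match
```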
5.2. Exercises (Due: Fri. Jan. 16)
5.2.2. Exercise. Let A be a nonempty subset of a vector space V . What, exactly, do we mean
when we speak of the smallest subspace of V which contains A?
5.2.3. Exercise. Let A be a nonempty subset of a vector space V . Then the following sets are
equal:
(a) span(A);
(b) the intersection of the family of all subspaces of V which contain A; and
(c) the smallest subspace of V which contains A.
5.2.4. Exercise. Verify that supersets of linearly dependent sets are linearly dependent and that subsets of linearly independent sets are linearly independent. That is, show that if V is a vector
space, A is a linearly dependent subset of V , and B is a subset of V which contains A, then B is
linearly dependent. Also show that if B is a linearly independent subset of V and A is contained
in B, then A is linearly independent.
5.2.5. Exercise. Let w = (1, 1, 0, 0), x = (1, 0, 1, 0), y = (0, 0, 1, 1), and z = (0, 1, 0, 1).
(a) Show that {w, x, y, z} does not span (that is, generate) R4 by finding a vector u in R4 such that u ∉ span(w, x, y, z).
(b) Show that {w, x, y, z} is a linearly dependent set of vectors by finding scalars α, β, γ, and
δ—not all zero—such that αw + βx + γy + δz = 0.
(c) Show that {w, x, y, z} is a linearly dependent set by writing z as a linear combination of
w, x, and y.
5.2.6. Exercise. In the vector space C[0, π] of continuous functions on the interval [0, π] define
the vectors f , g, and h by
f (x) = x
g(x) = sin x
h(x) = cos x
for 0 ≤ x ≤ π. Show that f , g, and h are linearly independent.
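Although exercise 5.2.6 calls for a proof, sampling supplies the key step in one direction: if a combination αf + βg + γh vanishes on all of [0, π], it vanishes at any chosen sample points, so a sample matrix of full column rank (up to floating-point arithmetic) rules out any nontrivial vanishing combination. A NumPy sketch, with sample points of our own choosing:

```python
import numpy as np

# Sample f, g, h at several points of [0, π]; each column holds one function
ts = np.linspace(0.1, 3.0, 7)
M = np.column_stack([ts, np.sin(ts), np.cos(ts)])

# If αf + βg + γh = 0 on [0, π] it vanishes at the sample points, so the
# sample matrix would have a nontrivial kernel. Full column rank rules that out.
assert np.linalg.matrix_rank(M) == 3
```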
5.2.7. Exercise. In the vector space C[0, π] of continuous functions on [0, π] let f , g, h, and j
be the vectors defined by
f (x) = 1
g(x) = x
h(x) = cos x
j(x) = cos²(x/2)
for 0 ≤ x ≤ π. Show that f , g, h, and j are linearly dependent by writing j as a linear combination
of f , g, and h.
5.2.8. Exercise. Let a, b, and c be distinct real numbers. Show that the vectors (1, 1, 1),
(a, b, c), and (a2 , b2 , c2 ) form a linearly independent subset of R3 .
5.2.9. Exercise (†). In the vector space C[0, 1] define the vectors f , g, and h by
f (x) = x
g(x) = ex
h(x) = e−x
CHAPTER 6
Bases for Vector Spaces
6.1. Background
Topics: basis, dimension, partial ordering, comparable elements, linear ordering, chain, maximal
element, largest element, axiom of choice, Zorn’s lemma. (See AOLT, page 2 and section 1.6.)
6.1.3. Example. The set R of real numbers is a partially ordered set under the usual relation ≤.
6.1.4. Example. A family A of subsets of a set S is a partially ordered set under the relation ⊆.
When A is ordered in this fashion it is said to be ordered by inclusion.
6.1.5. Example. Let F(S) be the family of real valued functions defined on a set S. For f ,
g ∈ F(S) write f ≤ g if f (x) ≤ g(x) for every x ∈ S. This is a partial ordering on F(S).
6.1.7. Example. Let S = {a, b, c} be a three-element set. The family P(S) of all subsets of S
is partially ordered by inclusion. Then S is the largest element of P(S)—and, of course, it is also a
maximal element of P(S). The family Q(S) of all proper subsets of S has no largest element; but
it has three maximal elements {b, c}, {a, c}, and {a, b}.
6.1.10. Definition. Two elements x and y in a partially ordered set are comparable if either
x ≤ y or y ≤ x. A chain (or linearly ordered subset) is a subset of a partially ordered set in
which every pair of elements is comparable.
6.1.12. Axiom (Zorn’s Lemma). If each chain in a nonempty partially ordered set S has an
upper bound in S, then S has a maximal element. (See AOLT, page 31, lines 4–7.)
The following theorem is an extremely important item in the set-theoretic foundations of math-
ematics. Its proof is difficult and can be found in any good book on set theory.
6.1.13. Theorem. The axiom of choice and Zorn’s lemma are equivalent.
6.2. Exercises (Due: Wed. Jan. 21)
6.2.2. Exercise (††). A spanning subset for a nontrivial vector space V is a basis for V if and
only if it is a minimal spanning set for V .
6.2.4. Exercise. Let {e1 , . . . , en } be a basis for a vector space V and v = ∑_{k=1}^n αk ek be a vector in V . If αp ≠ 0 for some p, then {e1 , . . . , ep−1 , v, ep+1 , . . . , en } is a basis for V .
6.2.5. Exercise. If some basis for a vector space V contains n elements, then every linearly
independent subset of V with n elements is also a basis. Hint. Suppose {e1 , . . . , en } is a basis for V
and {v1 , . . . , vn } is linearly independent in V . Start by using exercise 6.2.4 to show that (after
perhaps renumbering the ek ’s) the set {v1 , e2 , . . . , en } is a basis for V .
6.2.6. Exercise. In a finite dimensional vector space V any two bases have the same number
of elements. (This establishes the claim made in AOLT, page 2, lines 25–27.) This number is the
dimension of V , and is denoted by dim V .
6.2.7. Exercise. Let S3 be the vector space of all symmetric 3 × 3 matrices of real numbers.
(A matrix [aij ] is symmetric if aij = aji for all i and j.)
(a) What is the dimension of S3 ?
(b) Find a basis for S3 .
6.2.10. Exercise. Let A be a linearly independent subset of a vector space V . Then there
exists a basis for V which contains A. (This is AOLT, Theorem 1.36. Can you think of any way to shorten the proof?)
6.2.11. Exercise. Every vector space has a basis. (This is AOLT, Corollary 1.37.)
CHAPTER 7
Linear Transformations
7.1. Background
Topics: linear maps, kernel, range, invertible linear map, isomorphism.
7.1.1. Definition. Let V and W be vector spaces over the same field F. A function T : V → W
is linear if T (x + y) = T x + T y and T (αx) = αT x for all x, y ∈ V and α ∈ F.
7.1.2. Notation. If V and W are vector spaces the family of all linear functions from V into
W is denoted by L(V, W ). Linear functions are usually called linear transformations, linear maps,
or linear mappings. When V = W we condense the notation L(V, V ) to L(V ) and we call the
members of L(V ) operators.
7.1.4. Definition. Let T : V → W be a linear transformation between vector spaces and let U
be a subspace of V . Define T → (U ) := {T x : x ∈ U }. This is the (direct) image of U under T .
7.1.5. Definition. Let T : V → W be a linear transformation between vector spaces and let
U be a subspace of W . Define T ← (U ) := {x ∈ V : T x ∈ U }. This is the inverse image of U
under T .
7.1.6. Definition. A linear map T : V → W between vector spaces is left invertible (or
has a left inverse, or is a section) if there exists a linear map L : W → V such that LT = 1V .
(Note: 1V is the identity mapping v 7→ v on V . See AOLT, page 4, lines -13 to -10, convention.
Other notations for the identity map on V are idV and IV .) The map T is right invertible
(or has a right inverse, or is a retraction) if there exists a linear map R : W → V such that
T R = 1W . We say that T is invertible (or has an inverse, or is an isomorphism) if it is both
left invertible and right invertible. If there exists an isomorphism between two vector spaces V and W , we say that the spaces are isomorphic and we write V ≅ W .
Note: This is not the definition of isomorphism given in AOLT (definition on page 3, lines 17–18).
You will prove in exercise 7.2.11 that the two definitions are equivalent.
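In coordinates these notions are easy to illustrate. The NumPy sketch below (our own example, not from AOLT) exhibits a left inverse for an injective map T : R² → R³ via the familiar formula (TᵀT)⁻¹Tᵀ, and records why no right inverse can exist: T is not surjective.

```python
import numpy as np

# T : R^2 → R^3 with linearly independent columns, hence injective
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# a left inverse: L = (TᵀT)⁻¹Tᵀ satisfies L T = identity on R^2
L = np.linalg.inv(T.T @ T) @ T.T
assert np.allclose(L @ T, np.eye(2))

# T is not surjective (its range is a plane in R^3), so it has no right
# inverse: T R = identity on R^3 would force rank(T) = 3, but rank(T) = 2.
assert np.linalg.matrix_rank(T) == 2
```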
7.2. Exercises (Due: Fri. Jan. 23)
         T
    V ------> W
    |         |
    Mα        Mα
    v         v
    V ------> W
         T
7.2.10. Exercise. If a linear map T : V → W between two vector spaces has both a left inverse
and a right inverse, then these inverses are equal; so there exists a linear map T −1 : W → V such
that T −1 T = 1V and T T −1 = 1W .
7.2.11. Exercise. A linear map between vector spaces is invertible if and only if it is bijective
(that is, one-to-one and onto).
CHAPTER 8
Linear Transformations (continued)
8.1. Background
Topics: matrix representations of linear maps.
8.1.1. Convention. The space Pn ([a, b]) of polynomial functions of degree strictly less than n ∈ N on the interval [a, b] (where a < b) is a vector space of dimension n. For each n = 0, 1, 2, . . . let pn (t) = tⁿ for all t ∈ [a, b]. Then we take {p0 , p1 , p2 , . . . , pn−1 } to be the standard basis for Pn ([a, b]).
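With respect to this standard basis a linear map on Pn([a, b]) becomes an n × n matrix. As an illustration (our own, not from AOLT), the sketch below builds the matrix of the differentiation operator on P₄ from the rule that pₖ is sent to k·pₖ₋₁, and applies it to a sample polynomial.

```python
import numpy as np

# Coefficients (a0, a1, a2, a3) represent a0·p0 + a1·p1 + a2·p2 + a3·p3 in P_4.
# Differentiation sends p_k to k·p_{k−1}; column k of D is the image of p_k.
n = 4
D = np.zeros((n, n))
for k in range(1, n):
    D[k - 1, k] = k

p = np.array([5.0, 3.0, 0.0, 2.0])               # the polynomial 5 + 3t + 2t³
assert np.allclose(D @ p, [3.0, 0.0, 6.0, 0.0])  # its derivative 3 + 6t²
```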
8.2. Exercises (Due: Mon. Jan. 26)
8.2.8. Exercise. Let T : V → W be a linear map between vector spaces. If T is injective and
B is a basis for a subspace U of V , then T → (B) is a basis for T → (U ).
8.2.11. Exercise. Suppose that V and W are vector spaces of the same finite dimension and that T : V → W is a linear map. Then the following are equivalent:
(a) T is injective;
(b) T is surjective; and
(c) T is invertible.
CHAPTER 9
Duality in Vector Spaces
9.1. Background
Topics: linear functionals, dual space, dual basis. (See AOLT, pages 4–6.)
9.1.2. Notation. Let S be a nonempty set and F be a field. We denote by l(S) (or by l(S, F),
or by FS , or by F(S, F)) the family of all functions α : S → F. For x ∈ l(S) we frequently write the
value of x at s ∈ S as xs rather than x(s). (Sometimes it seems a good idea to reduce the number
of parentheses cluttering a page.) Furthermore we denote by lc (S) (or by lc (S, F), or by Fc (S))
the family of all functions α : S → F with finite support; that is, those functions on S which are
nonzero at only finitely many elements of S.
The isomorphism Ω creates a one-to-one correspondence between functions v in lc (B) and vectors
v in V . This is an extension of the usual notation in Rn where we write a vector v in terms of its
components:
v = ∑_{k=1}^n vk ek .    (*)
We will go even further and use Ω to identify V with lc (B) and write
v = ∑_{e∈B} v(e) e
instead of (*). That is, in a vector space with basis we will treat a vector as a scalar valued function
on its basis. (Of course, if this identification ever threatens to cause confusion we can always go
back to the v or Ω(v) notation.)
9.2.5. Exercise. According to convention 9.2.4 above, what is the value of f (e) when e and f
are elements of the basis B?
9.2.6. Exercise. Let V be a vector space with basis B. For every v ∈ V define a function v∗ on V by
v∗(x) = ∑_{e∈B} x(e) v(e)    for all x ∈ V .
Then v∗ is a linear functional on V .
9.2.7. Notation. In the preceding exercise 9.2.6 the value v∗(x) of v∗ at x is often written ⟨x, v⟩.
9.2.8. Exercise. Consider the notation 9.2.7 above in the special case that the scalar field F = R. Then ⟨ , ⟩ is an inner product on the vector space V . (See the definition of inner product on page 11 of AOLT.)
9.2.9. Exercise. In the special case that the scalar field F = C, things above are usually done a bit differently. For v ∈ V the function v∗ is defined by
v∗(x) = ⟨x, v⟩ = ∑_{e∈B} x(e) $\overline{v(e)}$.
Why do you think things are done this way?
9.2.10. Exercise. Let v be a nonzero vector in a vector space V and E be a basis for V which
contains the vector v. Then there exists a linear functional φ ∈ V ∗ such that φ(v) = 1 and φ(e) = 0
for every e ∈ E \ {v}.
9.2.11. Corollary. If v is a vector in a vector space V and φ(v) = 0 for every φ ∈ V ∗ , then
v = 0.
9.2.12. Exercise. Let V be a vector space with basis B. The map Φ : V → V ∗ : v ↦ v∗ (see exercise 9.2.6) is linear and injective.
9.2.13. Exercise (Riesz-Fréchet theorem for vector spaces). (†) In the preceding exercise 9.2.12
the map Φ is an isomorphism if V is finite dimensional. Thus for every φ ∈ V ∗ there exists a unique
vector a ∈ V such that a∗ = φ.
9.2.14. Exercise. If V is a finite dimensional vector space with basis {e1 , . . . , en }, then {e1∗ , . . . , en∗ } is the dual basis for V ∗ . (See AOLT, page 5, line -7 and Proposition 1.1.)
CHAPTER 10
Duality in Vector Spaces (continued)
10.1. Background
Topics: annihilators, pre-annihilators, trace. (See AOLT, page 7.)
CHAPTER 11
The Language of Categories
11.1. Background
Topics: categories, objects, morphisms, functors, vector space adjoint of a linear map.
11.1.1. Definition. Let A be a class whose members we call objects. With every pair (S, T ) of objects we associate a set Mor(S, T ), whose members we call morphisms from S to T . We assume
that Mor(S, T ) and Mor(U, V ) are disjoint unless S = U and T = V .
We suppose further that there is an operation ◦ (called composition) that associates with
every α ∈ Mor(S, T ) and every β ∈ Mor(T, U ) a morphism β ◦ α ∈ Mor(S, U ) in such a way that:
(1) γ ◦ (β ◦ α) = (γ ◦ β) ◦ α whenever α ∈ Mor(S, T ), β ∈ Mor(T, U ), and γ ∈ Mor(U, V );
(2) for every object S there is a morphism IS ∈ Mor(S, S) satisfying α ◦ IS = α whenever
α ∈ Mor(S, T ) and IS ◦ β = β whenever β ∈ Mor(R, S).
Under these circumstances the class A, together with the associated families of morphisms, is
a category. For a few more remarks on categories and some examples see section 8.1 of my
notes [11].
11.1.3. Definition. The terminology for inverses of morphisms in categories is essentially the same as for functions. Let α : S → T and β : T → S be morphisms in a category. If β ◦ α = IS , then β is a left inverse of α and, equivalently, α is a right inverse of β. We say that the morphism α is an isomorphism (or is invertible) if there exists a morphism β : T → S which is both a left and a right inverse for α. Such a morphism is denoted by α−1 and is called the inverse of α.
11.1.4. Remark. All the categories that are of interest in this course are concrete categories.
A concrete category is, roughly speaking, one in which the objects are sets with additional
structure (algebraic operations, inner products, norms, topologies, and the like) and the morphisms
are maps (functions) which preserve, in some sense, the additional structure. If A is an object in some concrete category C, we denote by A its underlying set. And if f : A → B is a morphism
11.1.6. Definition. A partially ordered set is order complete if every nonempty subset has
a supremum (that is, a least upper bound) and an infimum (a greatest lower bound).
11.1.7. Definition. Let S be a set. Then the power set of S, denoted by P(S), is the family
of all subsets of S.
11.1.8. Notation. Let f : S → T be a function between sets. Then we define f → (A) =
{f (x) : x ∈ A} and f ← (B) = {x ∈ S : f (x) ∈ B}. We say that f → (A) is the image of A under
f and that f ← (B) is the preimage of B under f .
11.2. Exercises (Due: Mon. Feb. 2)
11.2.2. Exercise (The vector space duality functor). Let T ∈ L(V, W ) where V and W are
vector spaces. Show that the pair of maps V 7→ V ∗ and T 7→ T ∗ is a contravariant functor from
the category of vector spaces and linear maps into itself. Show that (the morphism map of) this
functor is linear. (That is, show that (S + T )∗ = S ∗ + T ∗ and (αT )∗ = αT ∗ .)
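In coordinates the dual map T∗ is represented by the transpose of the matrix of T, so the claims in exercise 11.2.2 can be spot-checked numerically. A NumPy sketch (the random matrices and the helper name `dual` are our own choices; this is evidence, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 2))   # S, T : R^2 → R^3, as matrices
T = rng.standard_normal((3, 2))
U = rng.standard_normal((4, 3))   # U : R^3 → R^4

def dual(A):
    # in coordinates the dual (adjoint) map is represented by the transpose
    return A.T

assert np.allclose(dual(S + T), dual(S) + dual(T))    # (S + T)* = S* + T*
assert np.allclose(dual(2.5 * T), 2.5 * dual(T))      # (αT)* = αT*
assert np.allclose(dual(U @ T), dual(T) @ dual(U))    # contravariance
```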
11.2.3. Exercise (†). Let T : V → W be a linear map between vector spaces. Show that
ker T ∗ = (ran T )⊥ .
Is there a relationship between T being surjective and T ∗ being injective?
11.2.4. Exercise. Let T : V → W be a linear map between vector spaces. Show that
ker T = (ran T ∗ ) ⊥ .
Is there a relationship between T being injective and T ∗ being surjective?
11.2.5. Exercise (The power set functors). Let S be a nonempty set.
(a) The power set P(S) of S partially ordered by ⊆ is order complete.
(b) The class of order complete partially ordered sets and order preserving maps is a category.
(c) For each function f between sets let P(f ) = f → . Then P is a covariant functor from the
category of sets and functions to the category of order complete partially ordered sets and
order preserving maps.
(d) For each function f between sets let P(f ) = f ← . Then P is a contravariant functor from
the category of sets and functions to the category of order complete partially ordered sets
and order preserving maps.
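The functorial identities behind parts (c) and (d) can be spot-checked on small finite sets: the direct-image map respects composition covariantly and preserves inclusions. A Python sketch (the particular sets and functions are ours, chosen arbitrarily):

```python
# f→(A) = {f(x) : x ∈ A}; functions between finite sets modeled as dicts
def image(f):
    return lambda A: {f[x] for x in A}

f = {0: 'a', 1: 'a', 2: 'b'}    # f : {0,1,2} → {'a','b'}
g = {'a': 10, 'b': 20}          # g : {'a','b'} → {10,20}
gf = {x: g[f[x]] for x in f}    # the composite g ∘ f

subsets = [set(), {0}, {1, 2}, {0, 1, 2}]
# covariance: P(g ∘ f) = P(g) ∘ P(f) on every subset
assert all(image(gf)(A) == image(g)(image(f)(A)) for A in subsets)
# P(f) preserves order: A ⊆ B implies f→(A) ⊆ f→(B)
assert all(image(f)(A) <= image(f)(B)
           for A in subsets for B in subsets if A <= B)
```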
CHAPTER 12
Direct Sums
12.1. Background
Topics: internal direct sum, external direct sum. (See AOLT, pages 8–9.)
12.1.1. Definition. Let M and N be subspaces of a vector space V . We say that the space V
is the (internal) direct sum of M and N if M + N = V and M ∩ N = {0}. In this case we
write V = M ⊕ N and say that M and N are complementary subspaces. We also say that M is
a complement of N and that N is a complement of M . Similarly, if M1 , . . . , Mn are subspaces
of a vector space V , if V = M1 + · · · + Mn , and if Mj ∩ Mk = {0} whenever j 6= k, then we say
that V is the (internal) direct sum of M1 , . . . , Mn and we write
V = M1 ⊕ · · · ⊕ Mn = ⨁_{k=1}^n Mk .
12.1.2. Notation. If the vector space V is a direct sum of subspaces M and N , then, according
to exercise 12.2.1, every v ∈ V can be written uniquely in the form m + n where m ∈ M and n ∈ N .
We will indicate this by writing v = m ⊕ n. It is important to realize that this notation makes
sense only when we have in mind a particular direct sum decomposition M ⊕ N of V . Thus in this
context when we see v = a ⊕ b we conclude that v = a + b, that a ∈ M , and that b ∈ N .
12.1.3. Definition. Let V and W be vector spaces over a field F. To make the Cartesian
product V × W into a vector space we define addition by
(v, w) + (v1 , w1 ) = (v + v1 , w + w1 )
(where v, v1 ∈ V and w, w1 ∈ W ), and we define scalar multiplication by
α(v, w) = (αv, αw)
(where α ∈ F, v ∈ V , and w ∈ W ). The resulting vector space we call the (external) direct
sum of V and W . It is conventional to use the same notation V ⊕ W for external direct sums that
we use for internal direct sums.
(The superscripts have nothing to do with powers.) Notice that we now have functions h¹ : S → T
and h² : S → U . These are the components of h. In abbreviated notation h = (h¹, h²).
The (external) direct sum of two vector spaces V1 and V2 is best thought of not just as the vector
space V1 ⊕V2 defined in 12.1.3 but as this vector space together with two distinguished “projection”
mappings defined below. (The reason for this assertion may be deduced from definition 13.1.1 and
exercise 13.2.1.)
12.1.5. Definition. Let V1 and V2 be vector spaces. For k = 1, 2 define the coordinate
projections πk : V1 ⊕ V2 → Vk by πk (v1 , v2 ) = vk . Notice two simple facts:
(i) π1 and π2 are surjective linear maps; and
(ii) idV1 ⊕V2 = (π1 , π2 ).
12.2. Exercises (Due: Wed. Feb. 4)
12.2.2. Exercise. Suppose that a vector space V is the direct sum of subspaces M1 , . . . , Mn .
Show that for every v ∈ V there exist unique vectors mk ∈ Mk (for k = 1, . . . , n) such that
v = m1 + · · · + mn .
12.2.4. Exercise. Let U be the plane x + y + z = 0 and V be the line x = −(3/4)y = 3z. The
purpose of this exercise is to see (in two different ways) that R3 is not the direct sum of U and V .
(a) If R3 were equal to U ⊕ V , then U ∩ V would contain only the zero vector. Show that this
is not the case by finding a vector x ≠ 0 in R3 such that x ∈ U ∩ V .
(b) If R3 were equal to U ⊕ V , then, in particular, we would have R3 = U + V . Since both
U and V are subsets of R3 , it is clear that U + V ⊆ R3 . Show that the reverse inclusion
R3 ⊆ U + V is not correct by finding a vector x ∈ R3 which cannot be written in the form
u + v where u ∈ U and v ∈ V .
(c) We have seen in part (b) that U + V ≠ R3 . Then what is U + V ?
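The failure in part (a) can be checked numerically. The sketch below assumes the parametrization x = t of the line V (so that y = −(4/3)t and z = t/3); it is an illustration only, not a substitute for the proof:

```python
from fractions import Fraction

def line_point(t):
    # a point of the line V: x = -(3/4)y = 3z, parametrized by x = t
    return (t, Fraction(-4, 3) * t, Fraction(1, 3) * t)

def on_plane(p):
    # the plane U: x + y + z = 0
    x, y, z = p
    return x + y + z == 0

# every point of V lies on U, so U ∩ V is the whole line, not just {0}
assert all(on_plane(line_point(Fraction(t))) for t in range(-5, 6))
print(line_point(Fraction(3)))  # a nonzero vector in U ∩ V
```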
12.2.6. Exercise (††). Let C = C([0, 1]) be the (real) vector space of all continuous real valued
functions on the interval [0, 1] with addition and scalar multiplication defined pointwise.
(a) Let f1 (t) = t and f2 (t) = t⁴ for 0 ≤ t ≤ 1. Let U be the set of all functions of the form
αf1 + βf2 where α, β ∈ R. Show that U is a subspace of C.
(b) Let V be the set of all functions g in C which satisfy
∫_0^1 t g(t) dt = 0 and ∫_0^1 t⁴ g(t) dt = 0.
Show that V is a subspace of C.
(c) Show that C = U ⊕ V .
(d) Let f (t) = t2 for 0 ≤ t ≤ 1. Find functions u ∈ U and v ∈ V such that f = u + v.
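Part (d) reduces to a 2 × 2 linear system: writing u = α f1 + β f2 and requiring f − u to satisfy the two integrals defining V gives α ∫t² + β ∫t⁵ = ∫t³ and α ∫t⁵ + β ∫t⁸ = ∫t⁶. A sketch of the computation with exact rational arithmetic (the names are ad hoc):

```python
from fractions import Fraction as F

def mono_int(n):
    # moment of a monomial: the integral of t^n over [0, 1] is 1/(n + 1)
    return F(1, n + 1)

# Seek u = α·f1 + β·f2 (f1(t) = t, f2(t) = t⁴) so that v = f - u, with
# f(t) = t², satisfies both integral conditions defining V.  This gives
#   α·∫t² + β·∫t⁵ = ∫t³
#   α·∫t⁵ + β·∫t⁸ = ∫t⁶
a11, a12, b1 = mono_int(2), mono_int(5), mono_int(3)
a21, a22, b2 = mono_int(5), mono_int(8), mono_int(6)

det = a11 * a22 - a12 * a21          # solve by Cramer's rule
alpha = (b1 * a22 - b2 * a12) / det
beta = (a11 * b2 - a21 * b1) / det

# check the defining integrals of V for v = f - u directly
assert b1 - alpha * a11 - beta * a12 == 0
assert b2 - alpha * a21 - beta * a22 == 0
print(alpha, beta)
```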
12.2.7. Exercise. Let C = C[−1, 1] be the vector space of all continuous real valued functions
on the interval [−1, 1]. A function f in C is even if f (−x) = f (x) for all x ∈ [−1, 1]; it is odd
if f (−x) = −f (x) for all x ∈ [−1, 1]. Let Co = {f ∈ C : f is odd } and Ce = {f ∈ C : f is even }.
Show that C = Co ⊕ Ce .
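The decomposition behind exercise 12.2.7 is explicit: fe(x) = (f(x) + f(−x))/2 is even, fo(x) = (f(x) − f(−x))/2 is odd, and f = fe + fo. A quick numeric sanity check of this splitting (the polynomial chosen is arbitrary):

```python
def even_odd_parts(f):
    # f = fe + fo with fe even and fo odd
    fe = lambda x: (f(x) + f(-x)) / 2
    fo = lambda x: (f(x) - f(-x)) / 2
    return fe, fo

f = lambda x: x ** 3 + 2 * x ** 2 - 5 * x + 1
fe, fo = even_odd_parts(f)
for x in [-1.0, -0.5, 0.0, 0.25, 1.0]:
    assert fe(x) == fe(-x)            # fe is even
    assert fo(x) == -fo(-x)           # fo is odd
    assert fe(x) + fo(x) == f(x)      # f = fe + fo
```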
12.2.8. Exercise. Prove that the external direct sum of two vector spaces (as defined in 12.1.3)
is indeed a vector space.
12.2.9. Exercise. Let V1 and V2 be vector spaces. Prove the following fact about the direct
sum V1 ⊕ V2 : for every vector space W and every pair of linear maps T1 : W → V1 and T2 : W → V2
there exists a unique linear map S : W → V1 ⊕ V2 such that T1 = π1 ◦ S and T2 = π2 ◦ S (where
π1 and π2 are the coordinate projections of definition 12.1.5).
12.2.10. Exercise. Every subspace of a vector space has a complement. That is, if M is a
subspace of a vector space V , then there exists a subspace N of V such that V = M ⊕ N .
12.2.11. Exercise. Let V be a vector space and suppose that V = U ⊕ W . Prove that if B
is a basis for U and C is a basis for W , then B ∪ C is a basis for V . From this conclude that
dim V = dim U + dim W .
12.2.13. Exercise. A linear transformation has a left inverse if and only if it is injective
(one-to-one). It has a right inverse if and only if it is surjective (onto).
CHAPTER 13
Products and Quotients
13.1. Background
Topics: products, coproducts, quotient spaces, exact sequences. (See AOLT, pages 9–10.)
13.1.1. Definition. Let A1 and A2 be objects in a category C. We say that a triple (P, π1 , π2 ),
where P is an object and πk : P → Ak (k = 1, 2) are morphisms, is a product of A1 and A2 if
for every object B in C and every pair of morphisms fk : B → Ak (k = 1, 2) there exists a unique
map g : B → P such that fk = πk ◦ g for k = 1, 2.
Notice that what you showed in exercise 12.2.9 is that the external direct sum is a product in
the category of vector spaces and linear maps.
A triple (P, j1 , j2 ), where P is an object and jk : Ak → P , (k = 1, 2) are morphisms, is a
coproduct of A1 and A2 if for every object B in C and every pair of morphisms Fk : Ak → B
(k = 1, 2) there exists a unique map G : P → B such that Fk = G ◦ jk for k = 1, 2.
A sequence of vector spaces and linear maps
· · · → Vn−1 → Vn → Vn+1 → · · · (with maps jn : Vn−1 → Vn )
is said to be exact at Vn if ran jn = ker jn+1 . A sequence is exact if it is exact at each of its
constituent vector spaces. A sequence of vector spaces and linear maps of the form
0 → U → V → W → 0 (with maps j : U → V and k : V → W )
is a short exact sequence. (Here 0 denotes the trivial 0-dimensional vector space, and the
unlabeled arrows are the obvious linear maps.)
13.2. Exercises (Due: Mon. Feb. 9)
13.2.2. Exercise (††). Show that in the category of sets and maps (functions) the product and
the coproduct are not the same.
13.2.3. Exercise. Show that in an arbitrary category (or in the category of vector spaces and
linear maps, if you prefer) products (and coproducts) are essentially unique. (Essentially
unique means unique up to isomorphism. That is, if (P, π1 , π2 ) and (Q, ρ1 , ρ2 ) are both products
of two given objects, then P ≅ Q.)
13.2.4. Exercise (†). Verify the assertions made in definition 13.1.2. In particular, show that
∼ is an equivalence relation, that addition and scalar multiplication of the set of equivalence classes
is well defined, that under these operations V /M is a vector space, and that the quotient map is
linear.
The following exercise is called the fundamental quotient theorem or the first isomorphism
theorem for vector spaces. (See AOLT, theorem 1.6, page 10.)
13.2.5. Exercise. Let V and W be vector spaces and let M be a subspace of V . If T ∈ L(V, W )
and ker T ⊇ M , then there exists a unique T̃ ∈ L(V /M , W ) such that T = T̃ ◦ π (where
π : V → V /M is the quotient map).
Furthermore, T̃ is injective if and only if ker T = M ; and T̃ is surjective if and only if T is.
Corollary: ran T ≅ V / ker T .
13.2.7. Exercise. Let U and V be vector spaces. Then the following sequence is short exact:
0 → U → U ⊕ V → V → 0,
where the maps are the inclusion ι1 : u ↦ (u, 0) and the coordinate projection π2 .
13.2.8. Exercise. Suppose a < b. Let K be the family of constant functions on the interval
[a, b], C¹ be the family of all continuously differentiable functions on [a, b], and C be the family of
all continuous functions on [a, b]. (A function f is said to be continuously differentiable if
its derivative f ′ exists and is continuous.)
Specify linear maps j and k so that the following sequence is short exact:
0 → K → C¹ → C → 0 (with maps j : K → C¹ and k : C¹ → C).
13.2.9. Exercise. Let C be the family of all continuous functions on the interval [0, 2]. Let E1
be the mapping from C into R defined by E1 (f ) = f (1). (The functional E1 is called evaluation
at 1.)
Find a subspace F of C such that the following sequence is short exact:
0 → F → C → R → 0,
where the maps are the inclusion ι : F → C and the evaluation functional E1 .
13.2.10. Exercise. If j : U → V is an injective linear map between vector spaces, then the
sequence
0 → U → V → V / ran j → 0
is exact, where the maps are j and the quotient map π : V → V / ran j.
13.2.11. Exercise. Consider two short exact sequences of vector spaces and linear maps
0 → U → V → W → 0 (with maps j and k) and
0 → U ′ → V ′ → W ′ → 0 (with maps j ′ and k ′ ),
together with linear maps f : U → U ′ and g : V → V ′ . If the left square of the resulting diagram
commutes (that is, if g ◦ j = j ′ ◦ f ), then there exists a unique linear map h : W → W ′ which
makes the right square commute (that is, h ◦ k = k ′ ◦ g).
13.2.12. Exercise. Consider a diagram of vector spaces and linear maps as in the preceding
exercise: exact rows 0 → U → V → W → 0 (maps j, k) and 0 → U ′ → V ′ → W ′ → 0 (maps j ′ , k ′ ),
with vertical maps f : U → U ′ , g : V → V ′ , and h : W → W ′ making both squares commute.
Prove the following.
(a) If g is surjective, so is h.
(b) If f is surjective and g is injective, then h is injective.
(c) If f and h are surjective, so is g.
(d) If f and h are injective, so is g.
CHAPTER 14
Products and Quotients (continued)
14.1. Background
Topics: quotient map in a category, quotient object.
14.2. Exercises (Due: Wed. Feb. 11)
14.2.2. Exercise (†). Prove the converse of the preceding exercise. That is, suppose that U ,
V , and W are vector spaces and that V ≅ U ⊕ W ; prove that there exist linear maps j and k such
that the sequence 0 → U → V → W → 0 (with maps j and k) is exact. Hint. Suppose
g : U ⊕ W → V is an isomorphism. Define j and k in terms of g.
14.2.3. Exercise. Show that if 0 → U → V → W → 0 (with maps j and k) is an exact sequence
of vector spaces and linear maps, then W ≅ V / ran j. Thus, if U is a subspace of V and j is the
inclusion map, then W ≅ V /U . Give two different proofs of this result: one using exercise 13.2.5
and the other using exercise 13.2.12.
14.2.4. Exercise. Prove the converse of exercise 14.2.3. That is, suppose that j : U → V is an
injective linear map between vector spaces and that W ≅ V / ran j; prove that there exists a linear
map k which makes the sequence 0 → U → V → W → 0 (with maps j and k) exact.
14.2.7. Exercise. Suppose that a vector space V is the direct sum of subspaces U and W .
Some authors define the codimension of U to be dim W . Others define it to be dim V /U . Show
that these are equivalent.
14.2.8. Exercise. Prove that if M is a finite dimensional subspace of a vector space V , then
dim V /M = dim V − dim M .
14.2.9. Exercise (††). Prove that in the category of vector spaces and linear maps every
surjective linear map is a quotient map.
14.2.10. Exercise. Let U , V , and W be vector spaces. If S ∈ L(U, V ), T ∈ L(V, W ), then the
sequence
0 → ker S → ker T S → ker T → coker S → coker T S → coker T → 0
is exact.
For obvious reasons the next result is usually called the rank-plus-nullity theorem. (See AOLT,
theorem 1.8, page 10.)
14.2.11. Exercise. Let T : V → W be a linear map between vector spaces. Give a very simple
proof that if V is finite dimensional, then
rank T + nullity T = dim V.
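The theorem can be checked numerically for any concrete matrix: row reduction over Q gives the rank, and the nullity is dim V minus the rank. A sketch (the 3 × 4 matrix is an arbitrary example, and the helper name is ad hoc):

```python
from fractions import Fraction as F

def rank(rows):
    # row reduce an m×n matrix over Q; the number of pivots found is the rank
    m = [[F(x) for x in row] for row in rows]
    r, n_rows, n_cols = 0, len(rows), len(rows[0])
    for c in range(n_cols):
        piv = next((i for i in range(r, n_rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(n_rows):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# T : R⁴ → R³ given by a 3×4 matrix; dim V is the number of columns
A = [[1, 2, 0, 1],
     [0, 1, 1, 1],
     [1, 3, 1, 2]]   # third row = first + second, so the rank drops to 2
dim_V = 4
r = rank(A)
nullity = dim_V - r
assert r + nullity == dim_V
print(r, nullity)
```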
14.2.12. Exercise. Show that if V0 , V1 , . . . , Vn are finite dimensional vector spaces and the
sequence
0 → Vn → Vn−1 → · · · → V1 → V0 → 0 (with maps dn , . . . , d1 )
is exact, then ∑_{k=0}^{n} (−1)ᵏ dim Vk = 0.
CHAPTER 15
Projection Operators
15.1. Background
Topics: idempotent maps, projections. (See AOLT, section 3.2, pages 81–86.) If you have not yet
discovered Professor Farenick’s web page where he has a link to a list of corrections to AOLT, this
might be a good time to look at it. See
http://www.math.uregina.ca/~farenick/fixups.pdf
15.1.2. Definition. Let V be a vector space and suppose that V = M ⊕ N . We know from
exercise 12.2.1 that for each v ∈ V there exist unique vectors m ∈ M and n ∈ N such
that v = m + n. Define a function EN M : V → V by EN M v = m. The function EN M is called the
projection of V along N onto M . (This terminology is, of course, optimistic. We must prove
that EN M is in fact a projection operator.)
15.2.5. Exercise. Let E and F be projection operators on a vector space V . Then E +F = idV
if and only if EF = F E = 0 and ker E = ran F .
15.2.6. Exercise (†). If M ⊕ N is a direct sum decomposition of a vector space V , then the
function EN M defined in 15.1.2 is a projection operator whose range is M and whose kernel is N .
15.2.9. Exercise. Let M be the line y = 2x and N be the y-axis in R2 . Find [EM N ] and
[EN M ].
15.2.10. Exercise. Let E be the projection of R3 onto the plane 3x − y + 2z = 0 along the
z-axis and let F be the projection of R3 onto the z-axis along the plane 3x − y + 2z = 0.
(a) Find [E].
(b) Where does F take the point (4, 5, 1)?
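For exercise 15.2.10 the matrix of E can be derived directly: projecting (x, y, z) along the z-axis onto the plane 3x − y + 2z = 0 changes only the last coordinate, forcing z ↦ (y − 3x)/2. The sketch below checks that the resulting matrix is idempotent and computes F = I − E; the formula is derived here as an illustration, not quoted from the text:

```python
from fractions import Fraction as F

def matvec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# E projects onto the plane 3x - y + 2z = 0 along the z-axis:
# (x, y, z) ↦ (x, y, (y - 3x)/2), so the image satisfies the plane equation
E = [[F(1), F(0), F(0)],
     [F(0), F(1), F(0)],
     [F(-3, 2), F(1, 2), F(0)]]

assert matmul(E, E) == E                  # E is idempotent
assert matvec(E, (0, 0, 1)) == (0, 0, 0)  # the z-axis is the kernel

# F = I - E projects onto the z-axis along the plane
I3 = [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]
Fmat = [[I3[i][j] - E[i][j] for j in range(3)] for i in range(3)]
print(matvec(Fmat, (4, 5, 1)))   # where F takes the point (4, 5, 1)
```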
15.2.12. Exercise. Suppose a finite dimensional vector space V has the direct sum
decomposition V = M ⊕ N and that E = EM N is the projection along M onto N . Show that E ∗ is the
projection in L(V ∗ ) along N ⊥ onto M ⊥ .
CHAPTER 16
Algebras
16.1. Background
Topics: algebras, matrix algebras, standard matrix units, ideals, center, central algebra,
polynomial functional calculus, quaternions, representation, faithful representation, left regular
representation, permutation matrices. (See AOLT, sections 2.1–2.3.)
16.1.1. Definition. Let A be an algebra over a field F. The unitization of A is the unital
algebra Ã = A × F in which addition and scalar multiplication are defined pointwise and multiplication
is defined by
(a, λ) · (b, µ) = (ab + µa + λb, λµ).
16.2. Exercises (Due: Mon. Feb. 16)
16.2.2. Exercise. Also in AOLT at the top of page 43 the author says that the ideal generated
by a subset S of an algebra A can be characterized as
{ ∑_{j=1}^{m} aj sj bj : m = 1, 2, 3, . . . , sj ∈ S, and aj , bj ∈ A }.
Prove that for a unital algebra this is correct.
16.2.3. Exercise. In AOLT at the bottom of page 43 the author constructs the Cartesian
product algebra A1 × · · · × An of algebras A1 , . . . An . Show that this is indeed a product in the
category of algebras and algebra homomorphisms. Under what circumstances will the product
algebra A1 × · · · × An be unital?
16.2.4. Exercise (†). Prove that the product algebra Mn (F) × Mn (F) is neither simple nor central.
(See AOLT, section 2.7, exercise 3.)
16.2.6. Exercise. In AOLT at the bottom of page 44 and the top of page 45 the author
constructs the quotient algebra A/J, where A is an algebra and J is an ideal in A. Prove that the
operations on A/J are well-defined and that under these operations A/J is in fact an algebra.
16.2.9. Exercise. Prove that the unitization à of an algebra A is in fact a unital algebra with
(0, 1) as its identity. Prove also that A is (isomorphic to) a subalgebra of à with codimension 1.
(See AOLT, pages 49–50.)
Furthermore, φ̃ is injective if and only if ker φ = J; and φ̃ is surjective if and only if φ is.
Corollary: ran φ ≅ A/ ker φ. (See AOLT, page 45, Theorem 2.5.)
CHAPTER 17
Spectra
17.1. Background
Topics: eigenvalues, spectrum, annihilating polynomial, minimal polynomial, division algebra,
centralizer, skew-centralizer, spectral mapping theorem. (See AOLT, sections 2.4–2.5.)
17.1.1. Definition. Let a be an element of a unital algebra A over a field F. The spectrum
of a, denoted by σA (a) or just σ(a), is the set of all λ ∈ F such that a − λ1 is not invertible.
NOTE: This definition, which is the “official” one for this course, differs from—and is
not equivalent to—the one given on page 61 of AOLT.
17.2.3. Exercise. Prove that every finite rank linear map between vector spaces is a linear
combination of rank 1 linear maps.
17.2.4. Exercise (†). Give an example to show that the result stated in AOLT, Theorem 2.24,
part 1 need not hold in an infinite dimensional unital algebra. Hint. Let l1 (N, R) be the family
of all absolutely summable sequences of real numbers, that is, the set of all sequences (an ) of real
numbers such that ∑_{n=1}^{∞} |an | < ∞. Consider the operator T in the algebra L(l1 (N, R))
which takes the sequence (an ) to the sequence (an /n).
17.2.5. Exercise. Give an example to show that the result stated in AOLT, Theorem 2.24, part
2 need not hold in an infinite dimensional unital algebra. Hint. Consider the unilateral shift op-
erator U in the algebra L(l(N, R)) which takes the sequence (an ) to the sequence (0, a1 , a2 , a3 , . . . ).
17.2.6. Exercise. Give an example to show that the result stated in AOLT, Theorem 2.24,
part 3 need not hold in an infinite dimensional unital algebra.
17.2.7. Exercise. If z is an element of the algebra C of complex numbers, then σ(z) = {z}.
17.2.8. Exercise. Let f be an element of the algebra C([a, b]) of continuous complex valued
functions on the interval [a, b]. Find the spectrum of f .
17.2.10. Exercise. Let a be an element of a unital algebra such that a² = 1. Then either
(i) a = 1, in which case σ(a) = {1}, or
(ii) a = −1, in which case σ(a) = {−1}, or
(iii) σ(a) = {−1, 1}.
Hint. In (iii), to prove σ(a) ⊆ {−1, 1}, consider (1 − λ²)⁻¹ (a + λ1).
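The hint can be tested in a concrete algebra, say 2 × 2 real matrices, where a = [[0, 1], [1, 0]] satisfies a² = 1: for any λ outside {−1, 1}, the element (1 − λ²)⁻¹(a + λ1) really is an inverse of a − λ1, because (a − λ1)(a + λ1) = a² − λ²1 = (1 − λ²)1. A sketch with exact arithmetic (names are ad hoc):

```python
from fractions import Fraction as F

def mm(A, B):
    # product of two 2×2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[F(0), F(1)], [F(1), F(0)]]         # satisfies a² = 1
I = [[F(1), F(0)], [F(0), F(1)]]
assert mm(a, a) == I

lam = F(2)                               # any λ outside {-1, 1}
a_minus = [[a[i][j] - lam * I[i][j] for j in range(2)] for i in range(2)]
c = 1 / (1 - lam ** 2)                   # the scalar from the hint
candidate = [[c * (a[i][j] + lam * I[i][j]) for j in range(2)] for i in range(2)]

# (a - λ1) · (1 - λ²)⁻¹(a + λ1) = (a² - λ²1)/(1 - λ²) = 1
assert mm(a_minus, candidate) == I
```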
CHAPTER 18
Polynomials
18.1. Background
Topics: formal power series, polynomials, convolution, indeterminant, degree of a polynomial,
polynomial functional calculus, annihilating polynomial, monic polynomial, minimal polynomial.
18.1.1. Notation. We make the convention that the set of natural numbers N does not include
zero but the set Z+ of nonnegative integers does. Thus Z+ = N ∪ {0}.
18.1.2. Notation. If S is a set and A is an algebra, l(S, A) denotes the vector space of all
functions from S into A with pointwise operations of addition and scalar multiplication, and lc (S, A)
denotes the subspace of functions with finite support.
18.1.3. Definition. Let A be a unital commutative algebra. On the vector space l(Z+ , A)
define a binary operation ∗ (often called convolution) by
(f ∗ g)n = ∑_{j+k=n} fj gk = ∑_{j=0}^{n} fj gn−j
(where f , g ∈ l(Z+ , A) and n ∈ Z+ ). An element of l(Z+ , A) is a formal power series (with
coefficients in A) and an element of lc (Z+ , A) is a polynomial (with coefficients in A).
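For finitely supported sequences (polynomials) the convolution of definition 18.1.3 is exactly the coefficient-list multiplication of polynomials. A minimal sketch (coefficients stored lowest degree first; the function name is ad hoc):

```python
def convolve(f, g):
    # (f * g)_n = sum over j + k = n of f_j g_k, for finitely supported f, g
    n = len(f) + len(g) - 1
    return [sum(f[j] * g[k - j] for j in range(len(f)) if 0 <= k - j < len(g))
            for k in range(n)]

# (1 + x)(1 - x) = 1 - x²: coefficient sequences (1, 1) and (1, -1)
assert convolve([1, 1], [1, -1]) == [1, 0, -1]
```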
18.1.4. Remark. We regard the algebra A as a subset of l(Z+ , A) by identifying the element
a ∈ A with the element (a, 0, 0, 0, . . . ) ∈ l(Z+ , A). Thus the map a ↦ (a, 0, 0, 0, . . . ) becomes an
inclusion map. (Technically speaking, of course, the map ψ : a ↦ (a, 0, 0, 0, . . . ) is an injective
unital homomorphism and A ≅ ran ψ.)
18.1.6. Definition. Let A be a unital commutative algebra. In the algebra l(Z+ , A) of formal
power series the special sequence x = (0, 1A , 0, 0, 0, . . . ) is called the indeterminant of l(Z+ , A).
Notice that the sequence xⁿ = x ∗ x ∗ · · · ∗ x (n factors) has the property that (xⁿ)n = 1 and
(xⁿ)k = 0 whenever k ≠ n. It is conventional to take x⁰ to be the identity (1A , 0, 0, 0, . . . ) of l(Z+ , A).
18.1.8. Definition. A nonzero polynomial p, being an element of lc (Z+ , A), has finite support.
So there exists n0 ∈ Z+ such that pn = 0 whenever n > n0 . The smallest such n0 is the degree
of the polynomial p, denoted by deg p.
18.1.9. Definition. Let A be a unital algebra over a field F. For each polynomial
p = ∑_{k=0}^{n} pk xᵏ with coefficients in F define
p̃ : A → A : a ↦ ∑_{k=0}^{n} pk aᵏ.
Then p̃ is the polynomial function on A determined by the polynomial p. Also for fixed a ∈ A
define
Φ : F[x] → A : p ↦ p̃(a).
The mapping Φ is the polynomial functional calculus determined by the element a.
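For A the algebra of 2 × 2 matrices, p̃(a) = Σ pk aᵏ can be computed by accumulating powers of a, with a⁰ taken to be the identity matrix as in the definition. A sketch (the names and the particular matrix are ad hoc):

```python
def mm(A, B):
    # product of two 2×2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def poly_at(p, a):
    # p̃(a) = Σ p_k a^k for a coefficient list p (constant term first)
    I = [[1, 0], [0, 1]]
    result = [[0, 0], [0, 0]]
    power = I
    for coeff in p:
        result = [[result[i][j] + coeff * power[i][j] for j in range(2)]
                  for i in range(2)]
        power = mm(power, a)
    return result

a = [[0, 1], [0, 0]]          # a nilpotent matrix: a² = 0
p = [1, 2, 3]                 # p(x) = 1 + 2x + 3x²
assert poly_at(p, a) == [[1, 2], [0, 1]]   # I + 2a, since a² = 0
```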
18.1.10. Definition. Let V be a vector space over a field F and T ∈ L(V ). A nonzero
polynomial p ∈ F[x] such that p̃(T ) = 0 is an annihilating polynomial for T . A monic polynomial of
smallest degree that annihilates T is a minimal polynomial for T .
18.2.4. Exercise. If p and q are polynomials with coefficients in a unital commutative algebra
A, then
(i) deg(p + q) ≤ max{deg p, deg q}, and
(ii) deg(pq) ≤ deg p + deg q.
If A is a field, then equality holds in (ii).
18.2.5. Exercise. Show that if A is a unital commutative algebra, then so is l(A, A) under
pointwise operations of addition, multiplication, and scalar multiplication.
18.2.6. Exercise (†). Prove that for each a ∈ A the polynomial functional calculus Φ : F[x] →
A defined in 18.1.9 is a unital algebra homomorphism. Show also that the map Ψ : F[x] →
l(A, A) : p ↦ p̃ is a unital algebra homomorphism. (Pay especially close attention to the fact
that “multiplication” on F[x] is convolution whereas “multiplication” on l(A, A) is defined
pointwise.) What is the image under Φ of the indeterminant x? What is the image under Ψ of the
indeterminant x?
18.2.8. Exercise (†). Give an example to show that the polynomial functional calculus Φ may
fail to be injective. Hint. The preceding exercise 18.2.7.
18.2.9. Exercise. Let A be a unital algebra over a field F with finite dimension m. Show that
for every a ∈ A there exists a polynomial p ∈ F[x] such that 1 ≤ deg p ≤ m and p̃(a) = 0.
18.2.10. Exercise. Let V be a finite dimensional vector space. Prove that every T ∈ L(V ) has
a minimal polynomial. Hint. Use exercise 18.2.9.
18.2.11. Exercise. Let f and d be polynomials with coefficients in a field F and suppose that
d ≠ 0. Then there exist unique polynomials q and r in F[x] such that
(i) f = dq + r and
(ii) either r = 0 or deg r < deg d.
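The division algorithm of exercise 18.2.11 is constructive; polynomial long division produces q and r. A sketch over Q (coefficient lists, constant term first; it assumes d ≠ 0, and the names are ad hoc):

```python
from fractions import Fraction as F

def poly_divmod(f, d):
    # divide f by d, returning (q, r) with f = d·q + r and
    # either r = 0 (empty list) or deg r < deg d
    f = [F(c) for c in f]
    q = [F(0)] * max(len(f) - len(d) + 1, 1)
    while len(f) >= len(d) and any(f):
        shift = len(f) - len(d)
        c = f[-1] / F(d[-1])          # eliminate the leading term of f
        q[shift] = c
        f = [a - c * b for a, b in zip(f, [0] * shift + list(d))]
        while f and f[-1] == 0:       # drop trailing zero coefficients
            f.pop()
    return q, f

# f(x) = x³ + 2x + 1 divided by d(x) = x² + 1
q, r = poly_divmod([1, 2, 0, 1], [1, 0, 1])
assert q == [0, 1] and r == [1, 1]    # q = x, r = 1 + x
```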
18.2.12. Exercise. Let V be a finite dimensional vector space and T ∈ L(V ). Prove that the
minimal polynomial for T is unique.
CHAPTER 19
Polynomials (continued)
19.1. Background
Topics: irreducible polynomial, prime polynomial, Lagrange interpolation formula, unique
factorization theorem, greatest common divisor, relatively prime.
19.2. Exercises (Due: Mon. Feb. 23)
19.2.2. Exercise (†). Let T be an operator on a finite dimensional vector space over a field F.
If p ∈ F[x] and p̃(T ) = 0, then mT divides p. (If p, p1 ∈ F[x], we say that p1 divides p if there
exists q ∈ F[x] such that p = p1 q.)
19.2.3. Exercise. Let T be the operator on the real vector space R2 whose matrix representation
(with respect to the standard basis) is
[ 0 −1 ]
[ 1  0 ].
Find the minimal polynomial mT of T and show that it is irreducible (over R).
19.2.6. Exercise. Let T be an operator on a finite dimensional vector space V over a field
F and Φ : F[x] → L(V ) be the associated polynomial functional calculus. If p is a polynomial of
degree m ≥ 1 in F[x] and Jp is the principal ideal generated by p, then the sequence
0 → Jp → F[x] → ran Φ → 0 (the maps being the inclusion of Jp and Φ)
is exact.
19.2.7. Exercise (Lagrange Interpolation Formula). Prove that the polynomials defined in 19.1.2
form a basis for the vector space V of all polynomials with coefficients in F and degree less than or
equal to n and that for each polynomial q ∈ V
q = ∑_{k=0}^{n} q(tk ) pk .
19.2.8. Exercise. Use the Lagrange Interpolation Formula to find the polynomial with coeffi-
cients in R and degree no greater than 3 whose values at −1, 0, 1, and 2 are, respectively, −6, 2,
−2, and 6.
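Exercise 19.2.8 can be checked by carrying out the Lagrange construction with exact arithmetic: each basis polynomial pk(t) = ∏_{j≠k} (t − tj)/(tk − tj) is built up by repeated multiplication by linear factors. A sketch (coefficient lists, constant term first; the helper names are ad hoc):

```python
from fractions import Fraction as F

def mul_linear(p, t0):
    # multiply the polynomial p by the linear factor (t - t0)
    return [(p[i - 1] if i > 0 else F(0)) - t0 * (p[i] if i < len(p) else F(0))
            for i in range(len(p) + 1)]

def lagrange(points):
    # coefficients of the interpolating polynomial through the given points
    n = len(points)
    coeffs = [F(0)] * n
    for k, (tk, yk) in enumerate(points):
        basis, denom = [F(1)], F(1)
        for j, (tj, _) in enumerate(points):
            if j != k:
                basis = mul_linear(basis, F(tj))
                denom *= F(tk) - F(tj)
        coeffs = [c + F(yk) * b / denom for c, b in zip(coeffs, basis)]
    return coeffs

pts = [(-1, -6), (0, 2), (1, -2), (2, 6)]
c = lagrange(pts)
# the result interpolates every prescribed value
assert all(sum(ci * t ** i for i, ci in enumerate(c)) == y for t, y in pts)
print(c)
```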
19.2.10. Exercise. Prove the Unique Factorization Theorem: Let F be a field. A nonscalar
monic polynomial in F[x] can be factored in exactly one way (except for the order of the factors)
as a product of monic primes in F[x].
19.2.11. Exercise. Let F be a field. Then every nonzero ideal in F[x] is principal.
19.2.12. Exercise. If p1 , . . . , pn are polynomials, not all zero, with coefficients in a field F, then
there exists a unique monic polynomial d in the ideal generated by p1 , . . . , pn such that d divides pk
for each k = 1, . . . , n and, furthermore, any polynomial which divides each pk also divides d. This
polynomial d is the greatest common divisor of the pk ’s. The polynomials pk are relatively
prime if their greatest common divisor is 1.
CHAPTER 20
Invariant Subspaces
20.1. Background
Topics: invariant subspaces, invariant subspace lattice, reducing subspaces, transitive algebra,
Burnside’s theorem, triangulable. (See AOLT sections 3.1 and 3.3.)
20.2.3. Exercise. Find infinitely many subspaces of the vector space of polynomial functions
on R which are invariant under the differentiation operator.
20.2.4. Exercise. Let T be the operator on R3 whose matrix representation is
[ 2  0  0 ]
[ −1 3  2 ]
[ 1 −1  0 ].
Find a plane and a line in R3 which reduce T .
20.2.7. Exercise (†). Let M be a subspace of a vector space V and T ∈ L(V ). Show that
if M is invariant under T , then ET E = T E for every projection E onto M . Show also that if
ET E = T E for some projection E onto M , then M is invariant under T .
20.2.8. Exercise. Suppose a vector space V has the direct sum decomposition V = M ⊕ N .
An operator T on V is reduced by the pair (M, N ) if and only if ET = T E, where E = EM N is
the projection along M onto N .
20.2.9. Exercise. Let M and N be complementary subspaces of a vector space V (that is, V
is the direct sum of M and N ) and let T be an operator on V . Show that if M is invariant under
T , then M ⊥ is invariant under T ∗ and that if T is reduced by the pair (M, N ), then T ∗ is reduced
by the pair (M ⊥ , N ⊥ ).
CHAPTER 21
The Spectral Theorem for Vector Spaces
21.1. Background
Topics: similarity of operators, diagonalizable, resolution of the identity, spectral theorem for vector
spaces. (See section 1.1 of my notes on operator algebras [12].)
21.1.1. Definition. Suppose that on a vector space V there exist projection operators E1 , . . . ,
En such that
(i) IV = E1 + E2 + · · · + En and
(ii) Ei Ej = 0 whenever i ≠ j.
Then we say that the family {E1 , E2 , . . . , En } of projections is a resolution of the identity.
21.1.2. Definition. Two operators on a vector space (or two n × n matrices) R and T are
similar if there exists an invertible operator (or matrix) S such that R = S −1 T S.
21.2.6. Exercise. Let R and T be operators on a vector space. If R is similar to T and p ∈ F[x]
is a polynomial, then p(R) is similar to p(T ).
21.2.8. Exercise. Let A be an n × n matrix with entries from a field F. Then A, regarded as
an operator on Fn , is diagonalizable if and only if it is similar to a diagonal matrix.
21.2.9. Exercise (†). Let E1 , . . . , En be the projections associated with a direct sum decom-
position V = M1 ⊕· · ·⊕Mn of a vector space V and let T be an operator on V . Then each subspace
Mk is invariant under T if and only if T commutes with each projection Ek .
21.2.10. Exercise. Prove the spectral theorem for vector spaces 21.1.5.
CHAPTER 22
The Spectral Theorem for Vector Spaces (continued)
22.1. Background
Topics: representation, faithful representation, irreducible representation, Cayley-Hamilton
theorem. (See AOLT, section 3.4.)
22.2.5. Exercise. Let T be the operator on R3 whose matrix representation is given in
exercise 22.2.2. Write T as a linear combination of projections.
22.2.6. Exercise. Let T be an operator on a finite dimensional vector space over a field F.
Show that λ is an eigenvalue of T if and only if cT (λ) = 0.
22.2.8. Exercise. If T is an operator on a finite dimensional vector space, then its minimal
polynomial and characteristic polynomial have the same roots.
22.2.9. Exercise. Prove the Cayley-Hamilton theorem 22.1.2. Hint. A proof is in the text (see
AOLT, pages 92–94). The problem here is to fill in missing details and provide in a coherent fashion
any background material necessary to understanding the proof.
CHAPTER 23
Diagonalizable Plus Nilpotent Decomposition
23.1. Background
Topics: primary decomposition theorem, diagonalizable plus nilpotent decomposition, Jordan
normal form.
23.1.1. Theorem (Primary Decomposition Theorem). Let T ∈ L(V ) where V is a finite
dimensional vector space. Factor the minimal polynomial
mT = ∏_{k=1}^{n} pk^{rk}
into powers of distinct irreducible monic polynomials p1 , . . . , pn and let Wk = ker (pk (T ))^{rk} for
each k. Then
(i) V = ⊕_{k=1}^{n} Wk ,
(ii) each Wk is invariant under T , and
(iii) if Tk = T |Wk , then mTk = pk^{rk} .
In the preceding theorem the spaces Wk are the generalized eigenspaces of the operator T .
23.1.2. Theorem. Let T be an operator on a finite dimensional vector space V . Suppose that
the minimal polynomial for T factors completely into linear factors
mT (x) = (x − λ1 )^{d1} · · · (x − λr )^{dr}
where λ1 , . . . , λr are the (distinct) eigenvalues of T . For each k let Wk be the generalized eigenspace
ker (T − λk I)^{dk} and let E1 , . . . , Er be the projections associated with the direct sum decomposition
V = W1 ⊕ W2 ⊕ · · · ⊕ Wr .
Then this family of projections is a resolution of the identity, each Wk is invariant under T , the
operator
D = λ 1 E1 + · · · + λ r Er
is diagonalizable, the operator
N =T −D
is nilpotent, and N commutes with D.
Furthermore, if D1 is diagonalizable, N1 is nilpotent, D1 + N1 = T , and D1 N1 = N1 D1 , then
D1 = D and N1 = N .
23.1.3. Corollary. Every operator on a finite dimensional complex vector space can be written
as the sum of two commuting operators, one diagonalizable and the other nilpotent.
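The corollary can be seen in the smallest nontrivial example: for T = [[2, 1], [0, 2]] the minimal polynomial is (x − 2)², the only generalized eigenspace is all of R², D = 2I, and N = T − D. A sketch:

```python
def mm(A, B):
    # product of two 2×2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[2, 1], [0, 2]]
D = [[2, 0], [0, 2]]                   # the diagonalizable part
N = [[T[i][j] - D[i][j] for j in range(2)] for i in range(2)]

assert N == [[0, 1], [0, 0]]
assert mm(N, N) == [[0, 0], [0, 0]]    # N is nilpotent
assert mm(D, N) == mm(N, D)            # D and N commute
```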
23.2. Exercises (Due: Wed. Mar. 4)
(a) The characteristic polynomial of T is (λ − 2)^p where p = ______ .
(b) The minimal polynomial of T is (λ − 2)^r where r = ______ .
(c) The diagonalizable part of T is
D = [ a b b b ]
    [ b a b b ]
    [ b b a b ]
    [ b b b a ]
where a = ______ and b = ______ .
(d) The nilpotent part of T is
N = [ −a  b  c −b ]
    [ −a  b  c −b ]
    [ −a  b  c −b ]
    [  a −b  c  b ]
where a = ______ , b = ______ , and c = ______ .
23.2.6. Exercise. Let T be the operator on R5 whose matrix representation is
[ 1  0  0  1 −1 ]
[ 0  1 −2  3 −3 ]
[ 0  0 −1  2 −2 ]
[ 1 −1  1  0  1 ]
[ 1 −1  1 −1  2 ].
(a) Find the characteristic polynomial of T .
Answer: cT (λ) = (λ + 1)^p (λ − 1)^q where p = ______ and q = ______ .
(b) Find the minimal polynomial of T .
Answer: mT (λ) = (λ + 1)^r (λ − 1)^s where r = ______ and s = ______ .
23.2.7. Exercise. Prepare and deliver a (30–45 minute) blackboard presentation on the Jordan
normal form of a matrix.
CHAPTER 24
Inner Product Spaces
24.1. Background
Topics: inner products, norm, unit vector, square summable sequences, the Schwarz (or
Cauchy-Schwarz) inequality. (See AOLT, section 1.3 and also section 1.2 of my lecture notes on
operator algebras [12].)
24.1.3. Theorem (Parallelogram law). If x and y are vectors in an inner product space, then
‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².
24.1.4. Theorem (Polarization identity). If x and y are vectors in a complex inner product
space, then
⟨x, y⟩ = (1/4)(‖x + y‖² − ‖x − y‖² + i‖x + iy‖² − i‖x − iy‖²).
What is the correct formula for a real inner product space?
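In a real inner product space the polarization identity reduces to ⟨x, y⟩ = (1/4)(‖x + y‖² − ‖x − y‖²). Both identities are easy to sanity-check numerically for the dot product on R² (a sketch with an arbitrary pair of vectors):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def nsq(x):
    # squared norm ‖x‖² induced by the dot product
    return dot(x, x)

x, y = (3, -1), (2, 5)
add = tuple(a + b for a, b in zip(x, y))
sub = tuple(a - b for a, b in zip(x, y))

assert nsq(add) + nsq(sub) == 2 * nsq(x) + 2 * nsq(y)   # parallelogram law
assert dot(x, y) == (nsq(add) - nsq(sub)) / 4           # real polarization
```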
24.2. Exercises (Due: Mon. Mar. 9)
24.2.3. Exercise. On the vector space C([0, 1]) of continuous real valued functions on the
interval [0, 1] the uniform norm is defined by ‖f ‖u = sup{|f (x)| : 0 ≤ x ≤ 1}. Prove that the
uniform norm is not induced by an inner product. That is, prove that there is no inner product on
C([0, 1]) such that ‖f ‖u = √⟨f, f ⟩ for all f ∈ C([0, 1]). Hint. Use exercise 24.2.2.
24.2.6. Exercise (†). Notice that part (a) is a special case of part (b).
(a) Show that if a, b, c > 0, then ((1/2)a + (1/3)b + (1/6)c)² ≤ (1/2)a² + (1/3)b² + (1/6)c².
(b) Show that if a1 , . . . , an , w1 , . . . , wn > 0 and ∑_{k=1}^{n} wk = 1, then
(∑_{k=1}^{n} ak wk )² ≤ ∑_{k=1}^{n} ak² wk .
24.2.7. Exercise. Show that if ∑_{k=1}^{∞} ak² converges, then ∑_{k=1}^{∞} k⁻¹ ak converges
absolutely.
24.2.8. Exercise. A sequence (ak ) of (real or) complex numbers is square summable if
∑_{k=1}^{∞} |ak |² < ∞. The vector space of all square summable sequences of real numbers
(respectively, complex numbers) is denoted by l2 (R) (respectively, l2 (C)). When no confusion will
result, both are denoted by l2 . If a, b ∈ l2 , define
⟨a, b⟩ = ∑_{k=1}^{∞} ak bk .
Show that this definition makes sense and makes l2 into an inner product space.
24.2.9. Exercise (†). Use vector methods (no coordinates, no major results from Euclidean
geometry) to show that the midpoint of the hypotenuse of a right triangle is equidistant from the
vertices. Hint. Let △ABC be a right triangle and O be the midpoint of the hypotenuse AB. What
can you say about the inner product ⟨AO + OC, CO + OB⟩ of the indicated vectors?
24.2.10. Exercise. Use vector methods to show that if a parallelogram has perpendicular
diagonals, then it is a rhombus (that is, all four sides have equal length). Hint. Let ABCD be
a parallelogram. Express the dot (inner) product of the diagonal vectors AC and DB in terms of
the lengths of the sides AB and BC.
24.2.11. Exercise. Use vector methods to show that an angle inscribed in a semicircle is a
right angle.
24.2.12. Exercise. Let a be a vector in an inner product space H. If ⟨x, a⟩ = 0 for every
x ∈ H, then a = 0.
24.2.13. Exercise. Let S, T : H → K be linear maps between inner product spaces H and K.
If ⟨Sx, y⟩ = ⟨T x, y⟩ for every x ∈ H and y ∈ K, then S = T .
CHAPTER 25
Orthogonality and Adjoints
25.1. Background
Topics: orthogonality, orthogonal complement, adjoint, involution, ∗-algebra, ∗-homomorphism,
self-adjoint, Hermitian, normal, unitary.
25.1.2. Definition. Let H and K be inner product spaces and T : H → K be a linear map. If
there exists a function T ∗ : K → H which satisfies
⟨T x, y⟩ = ⟨x, T ∗ y⟩
for all x ∈ H and y ∈ K, then T ∗ is the adjoint of T .
When H and K are real vector spaces, the adjoint of T is usually called the transpose of T
and the notation T t is used (rather than T ∗ ).
25.2.2. Exercise (†). Give a proof of the Riesz representation theorem that is much simpler
than the one given in AOLT, page 15, Theorem 1.15. Hint. Use exercises 9.2.9 and 9.2.13.
25.2.3. Exercise. Show by example that the Riesz representation theorem (AOLT, page 15, Theorem 1.15) does not hold (as stated) in infinite dimensional spaces. Hint. Consider the function φ : lc(N) → E : x ↦ ∑_{k=1}^∞ α_k where x = ∑_{k=1}^∞ α_k e_k, the e_k's being the standard basis vectors for lc(N).
25.2.4. Exercise. Show that if S is a set of mutually perpendicular vectors in an inner product space and 0 ∉ S, then the set S is linearly independent.
25.2.6. Exercise. Show that if M is a subspace of a finite dimensional inner product space H,
then H = M ⊕ M ⊥ . Show also that this need not be true in an infinite dimensional space.
25.2.9. Exercise. It seems unlikely that the similarity between the results of the exercises 25.2.5
and 25.2.8 and those you obtained in exercises 10.2.5 and 10.2.6 could be purely coincidental.
Explain carefully what is going on here.
25.2.12. Exercise. Let a and b be elements of a ∗-algebra. Show that a commutes with b if and only if a∗ commutes with b∗.
25.2.14. Exercise. Let a be an element of a unital ∗-algebra. Show that a∗ is invertible if and only if a is. And when a is invertible we have
(a∗)⁻¹ = (a⁻¹)∗.
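A concrete sketch in the ∗-algebra of 2 × 2 complex matrices, where the involution is the conjugate transpose (the particular matrix is an arbitrary invertible choice):

```python
# 2x2 complex matrices form a unital *-algebra under the conjugate
# transpose; check (a*)^{-1} = (a^{-1})* on one invertible element
def star(m):
    (a, b), (c, d) = m
    return [[a.conjugate(), c.conjugate()], [b.conjugate(), d.conjugate()]]

def inv(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

a = [[1 + 0j, 1j], [0j, 2 + 0j]]      # an arbitrary invertible element
assert inv(star(a)) == star(inv(a))
```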
25.2.15. Exercise. Let a be an element of a unital ∗-algebra. Show that λ ∈ σ(a) if and only if λ̄ ∈ σ(a∗).
CHAPTER 26
Orthogonal Projections
26.1. Background
Topics: orthogonal projections on an inner product space, projections in an algebra with involution, orthogonality of projections in a ∗-algebra.
26.2.2. Exercise (†). Let T : H → K be a linear map between inner product spaces. Show
that if the adjoint of T exists, then it is linear.
26.2.4. Exercise. Let T : H → K be a linear map between inner product spaces. Show that if
the adjoint of T exists, then so does the adjoint of T ∗ and T ∗∗ = T .
26.2.6. Exercise. Let H be an inner product space and M and N be subspaces of H such that H = M + N and M ∩ N = {0}. (That is, we suppose that H is the vector space direct sum of M and N.) Also let P = E_{NM} be the projection of H along N onto M. Prove that P is self-adjoint (P∗ exists and P∗ = P) if and only if M ⊥ N.
26.2.10. Exercise (†). Let T : H → K be a linear map between inner product spaces. Show
that
ker T ∗ = (ran T )⊥ .
Is there a relationship between T being surjective and T ∗ being injective?
26.2.11. Exercise. Let T : H → K be a linear map between inner product spaces. Show that
ker T = (ran T ∗ )⊥ .
Is there a relationship between T being injective and T ∗ being surjective?
26.2.12. Exercise. It seems unlikely that the similarity between the results of the two preceding
exercises and those you obtained in exercises 11.2.3 and 11.2.4 could be purely coincidental. Explain
carefully what is going on here.
26.2. Exercises (Due: Fri. Mar. 13)
26.2.13. Exercise. A necessary and sufficient condition for two projections p and q in a ∗-algebra to be orthogonal is that pq + qp = 0.
26.2.15. Exercise. Let P and Q be orthogonal projections on an inner product space V . Then
P ⊥ Q if and only if ran P ⊥ ran Q. In this case P + Q is an orthogonal projection whose kernel is
ker P ∩ ker Q and whose range is ran P + ran Q.
CHAPTER 27
The Spectral Theorem for Inner Product Spaces
27.1. Background
Topics: subprojections, partial isometries, initial and final projections, orthogonal resolutions of
the identity, the spectral theorem for complex inner product spaces.
27.1.4. Definition. Two elements a and b of a ∗-algebra A are unitarily equivalent if there exists a unitary element u of A such that b = u∗au.
27.1.6. Theorem (Spectral Theorem: Complex Inner Product Space Version). If N is a normal operator on a finite dimensional complex inner product space V, then N is unitarily diagonalizable and can be written as
N = ∑_{k=1}^n λ_k P_k
where λ_1, . . . , λ_n are the (distinct) eigenvalues of N and {P_1, . . . , P_n} is the orthogonal resolution of the identity whose orthogonal projections are associated with the corresponding eigenspaces M_1, . . . , M_n.
27.2.3. Exercise. Let P and Q be orthogonal projections on an inner product space V. Then the following are equivalent:
(i) P ≤ Q;
(ii) ‖P x‖ ≤ ‖Qx‖ for all x ∈ V; and
(iii) ran P ⊆ ran Q.
In this case Q − P is an orthogonal projection whose kernel is ran P + ker Q and whose range is ran Q ⊖ ran P.
Notation: If M and N are subspaces of an inner product space with N ⊆ M, then M ⊖ N denotes the orthogonal complement of N in M (so N ⊥ (M ⊖ N) and M = N ⊕ (M ⊖ N)).
27.2.5. Exercise. Let v be a partial isometry in a ∗-algebra, p be its initial projection, and q be its final projection. Then
(a) v ∗ is a partial isometry,
(b) p is a projection,
(c) p is the smallest projection such that vp = v,
(d) q is a projection, and
(e) q is the smallest projection such that qv = v.
27.2.6. Exercise. Let N be a normal operator on a finite dimensional complex inner product
space H. Show that kN xk = kN ∗ xk for all x ∈ H. (See AOLT, page 21, Proposition 1.22.)
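For a real matrix the adjoint is just the transpose, so the claim can be sanity-checked numerically (the matrices below are illustrative choices; N is normal because N Nᵗ = Nᵗ N = 5I):

```python
import math

def apply(m, x):
    # apply a 2x2 matrix to a vector
    return [sum(m[i][j] * x[j] for j in range(2)) for i in range(2)]

def transpose(m):
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def norm(x):
    return math.sqrt(sum(c * c for c in x))

N = [[1.0, 2.0], [-2.0, 1.0]]      # normal: N N^t = N^t N = 5I
B = [[0.0, 1.0], [0.0, 0.0]]       # not normal

x = [0.3, -1.7]
# for the normal operator the norms agree (here N* is the transpose)
assert abs(norm(apply(N, x)) - norm(apply(transpose(N), x))) < 1e-12

# for the non-normal B they can differ
y = [0.0, 1.0]
assert norm(apply(B, y)) != norm(apply(transpose(B), y))
```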
27.2.12. Exercise. An element of a ∗-algebra is normal if and only if its real part and its imaginary part commute. (See AOLT, page 23, Proposition 1.25.)
CHAPTER 28
Multilinear Maps
28.1. Background
Topics: free objects, free vector space, the “little-oh” functions, tangency, differential, permuta-
tions, cycles, symmetric group, multilinear maps, alternating multilinear maps.
Differential calculus
The following four definitions introduce the concepts necessary for a (civilized) discussion of
differentiability of real valued functions on Rn . In these definitions f and g are functions from Rn
into R and a is a point in Rn .
28.1.1. Definition. For every h ∈ Rn let
∆f_a(h) := f(a + h) − f(a).
28.1.2. Definition. The function f belongs to the family o of “little-oh” functions if for every c > 0 there exists a δ > 0 such that ‖f(x)‖ ≤ c‖x‖ whenever ‖x‖ < δ.
28.1.3. Definition. Functions f and g are tangent (at 0) if f − g ∈ o.
28.1.4. Definition. The function f is differentiable at the point a if there exists a linear
map dfa from Rn into R which is tangent to ∆fa . We call dfa the differential (or total
derivative) of f at a.
L(Rn, L(Rn, L(Rn, R))). It is moderately unpleasant to contemplate what an element of L(Rn, L(Rn, R)) or of L(Rn, L(Rn, L(Rn, R))) might “look like”. And clearly as we pass to even higher order differentials things look worse and worse. It is comforting to discover that an element of L(Rn, L(Rn, R)) may be regarded as a map from (Rn)² into R which is bilinear (that is, linear in both of its variables), and that an element of L(Rn, L(Rn, L(Rn, R))) may be thought of as a map from (Rn)³ into R which is linear in each of its three variables. More generally, if V1, V2, V3, and W are arbitrary vector spaces it will be possible to identify the vector space L(V1, L(V2, W)) with the space of bilinear maps from V1 × V2 to W, the vector space L(V1, L(V2, L(V3, W))) with the trilinear maps from V1 × V2 × V3 to W, and so on (see exercise 28.2.3).
Permutations
A bijective map σ : X → X from a set X onto itself is a permutation of the set. If x1, x2, . . . , xn are distinct elements of a set X, then the permutation of X that maps x1 ↦ x2, x2 ↦ x3, . . . , xn−1 ↦ xn, xn ↦ x1 and leaves all other elements of X fixed is a cycle (or cyclic permutation) of length n.
Multilinear maps
28.1.9. Definition. Let V1, V2, . . . , Vn, and W be vector spaces over a field F. We say that a function f : V1 × · · · × Vn → W is multilinear (or n-linear) if it is linear in each of its n variables. We ordinarily call 2-linear maps bilinear and 3-linear maps trilinear. We denote by Ln(V1, . . . , Vn; W) the family of all n-linear maps from V1 × · · · × Vn into W. A multilinear map from the product V1 × · · · × Vn into the scalar field F is a multilinear form (or a multilinear functional).
28.2.2. Exercise. Show that composition of operators on a vector space V is a bilinear map
on L(V ).
28.2.3. Exercise. Show that if U, V, and W are vector spaces, then so is L2(U, V; W). Show also that the spaces L(U, L(V, W)) and L2(U, V; W) are isomorphic. Hint. The isomorphism is implemented by the map
F : L(U, L(V, W)) → L2(U, V; W) : φ ↦ φ̂
where φ̂(u, v) := (φ(u))(v) for all u ∈ U and v ∈ V. (Recall remark 28.1.5.)
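The hint's map φ ↦ φ̂ is ordinary currying, which can be illustrated with plain functions (a sketch; the names curry and uncurry are illustrative, not from the text):

```python
def curry(B):
    # a bilinear B : U x V -> W  becomes  phi : U -> L(V, W)
    return lambda u: (lambda v: B(u, v))

def uncurry(phi):
    # phi : U -> L(V, W)  becomes  phi_hat with phi_hat(u, v) = (phi(u))(v)
    return lambda u, v: phi(u)(v)

# the dot product is a bilinear map R^n x R^n -> R
dot = lambda u, v: sum(x * y for x, y in zip(u, v))

phi = curry(dot)
assert phi((1, 2))((3, 4)) == 11                        # (phi(u))(v) = <u, v>
assert uncurry(phi)((1, 2), (5, 6)) == dot((1, 2), (5, 6))
```

The two directions are mutually inverse, which is the content of the claimed isomorphism.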
28.2.7. Exercise. Let V and W be vector spaces. Then every alternating multilinear map
f : V n → W is skew-symmetric. Hint. Consider f (u + v, u + v) in the bilinear case.
CHAPTER 29
Determinants
29.1. Background
Topics: determinant of a matrix, determinant function on Mn (A).
NOTE: In the following material on determinants, we will assume that the scalar fields underlying
all the vector spaces we encounter are of characteristic zero. Thus multilinear functions will be
alternating if and only if they are skew-symmetric. (See exercises 28.2.7 and 29.2.3.)
29.1.2. Remark. Let A be a unital commutative algebra. In the sequel we identify the algebra (A^n)^n = A^n × · · · × A^n (n factors) with the algebra Mn(A) of n × n matrices of elements of A by regarding the term a_k in (a_1, . . . , a_n) ∈ (A^n)^n as the k-th column vector of an n × n matrix of elements of A. There are many standard notations for the same thing: Mn(A), A^n × · · · × A^n (n factors), (A^n)^n, A^{n×n}, and A^{n²}, for example.
The identity matrix, which we usually denote by I, in Mn(A) is (e_1, . . . , e_n), where e_1, . . . , e_n are the standard basis vectors for A^n; that is, e_1 = (1_A, 0, 0, . . . ), e_2 = (0, 1_A, 0, 0, . . . ), and so on.
29.2.3. Exercise. If V and W are vector spaces over a field F of characteristic zero and
f : V n → W is a skew-symmetric multilinear map, then f is alternating.
29.2.4. Exercise. Let ω be an n-linear functional on a vector space V over a field of characteristic zero. If ω(v_1, . . . , v_n) = 0 whenever v_i = v_{i+1} for some i, then ω is skew-symmetric and therefore alternating.
29.2.8. Exercise. Show that the determinant function on Mn (A) (where A is a unital com-
mutative algebra) is unique.
CHAPTER 30
Free Vector Spaces
30.1. Background
Topics: free object, free vector space. (See AOLT, section 6.1.)
30.2.2. Exercise. Let S be an arbitrary nonempty set and F be a field. Prove that there exists
a vector space V over F which is free on S. Hint. Given the set S let V be the set of all F-valued
functions on S which have finite support. Define addition and scalar multiplication pointwise.
The map ι : s 7→ χ{s} of each element s ∈ S to the characteristic function of {s} is the desired
injection. To verify that V is free over S it must be shown that for every vector space W and every function f : S → W there exists a unique linear map f̃ : V → W which makes the following diagram commute (that is, f̃ ∘ ι = f).

        ι
    S -----> V
     \      /
    f \    / f̃
       v  v
        W
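The hint's construction can be modeled concretely: finitely supported F-valued functions on S stored as dictionaries (a sketch with F = R and S a set of strings; all names here are illustrative):

```python
# free vector space on a set S, modeled as finitely supported
# functions S -> R, stored as {element: coefficient} dicts
def iota(s):
    # the injection s |-> characteristic function of {s}
    return {s: 1.0}

def add(v, w):
    out = dict(v)
    for s, c in w.items():
        out[s] = out.get(s, 0.0) + c
    return {s: c for s, c in out.items() if c != 0.0}

def scale(alpha, v):
    return {s: alpha * c for s, c in v.items()}

def extend(f):
    # the unique linear map f~ with f~(iota(s)) = f(s), here for W = R
    return lambda v: sum(c * f(s) for s, c in v.items())

f = lambda s: len(s)            # any function S -> R (here S = strings)
ft = extend(f)
v = add(scale(3.0, iota("ab")), iota("xyz"))   # 3*chi_{ab} + chi_{xyz}
assert ft(v) == 3.0 * 2 + 3     # linearity: 3*f("ab") + f("xyz")
assert ft(iota("ab")) == f("ab")               # the diagram commutes
```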
30.2.3. Exercise. Prove that every vector space is free. Hint. Of course, part of the problem
is to specify a set S on which the given vector space is free.
CHAPTER 31
Tensor Products of Vector Spaces
31.1. Background
Topics: vector space tensor products, elementary tensors. (See AOLT, section 6.2. A more
extensive and very careful exposition of tensor products can be found in chapter 14 of [23].)
31.1.1. Definition. Let U and V be vector spaces. A vector space U ⊗ V together with a bilinear map τ : U × V → U ⊗ V is a tensor product of U and V if for every vector space W and every bilinear map B : U × V → W, there exists a unique linear map B̃ : U ⊗ V → W which makes the following diagram commute (that is, B̃ ∘ τ = B).

           τ
    U × V -----> U ⊗ V
        \        /
       B \      / B̃
          v    v
            W
31.2.2. Exercise. Show that in the category of vector spaces and linear maps tensor products
exist. Hint. Let U and V be vector spaces over a field F. Consider the free vector space lc (U ×V ) =
lc (U × V , F). Define
∗ : U × V → lc(U × V) : (u, v) ↦ χ_{(u,v)}.
Then let
S1 = {(u1 + u2 ) ∗ v − u1 ∗ v − u2 ∗ v : u1 , u2 ∈ U and v ∈ V },
S2 = {(αu) ∗ v − α(u ∗ v) : α ∈ F, u ∈ U , and v ∈ V },
S3 = {u ∗ (v1 + v2 ) − u ∗ v1 − u ∗ v2 : u ∈ U and v1 , v2 ∈ V },
S4 = {u ∗ (αv) − α(u ∗ v) : α ∈ F, u ∈ U , and v ∈ V },
S = span(S1 ∪ S2 ∪ S3 ∪ S4 ), and
U ⊗ V = lc (U × V )/S .
Also define
τ : U × V → U ⊗ V : (u, v) ↦ [u ∗ v].
Then show that U ⊗ V and τ satisfy the conditions stated in definition 31.1.1.
NOTE: It is conventional to write u ⊗ v for τ (u, v) = [u ∗ v]. Tensors of the form u ⊗ v are
called elementary tensors (or decomposable tensors or homogeneous tensors). Keep
in mind that not every member of U ⊗ V is of the form u ⊗ v.
31.2.3. Exercise. Let u and v be elements of finite dimensional vector spaces U and V , re-
spectively. Show that if u ⊗ v = 0, then either u = 0 or v = 0.
31.2.6. Exercise. Let U and V be finite dimensional vector spaces and {f_j}_{j=1}^n be a basis for V. Show that for every element t ∈ U ⊗ V there exist unique vectors u_1, . . . , u_n ∈ U such that
t = ∑_{j=1}^n u_j ⊗ f_j.
CHAPTER 32
Tensor Products of Vector Spaces (continued)
32.1. Background
Topics: No additional topics.
32.2.7. Exercise. Let u_1, u_2 ∈ U and v_1, v_2 ∈ V where U and V are finite dimensional vector spaces. Show that if u_1 ⊗ v_1 = u_2 ⊗ v_2 ≠ 0, then u_2 = αu_1 and v_2 = βv_1 where αβ = 1.
CHAPTER 33
Tensor Products of Linear Maps
33.1. Background
Topics: tensor products of linear maps.
33.2.4. Exercise. Suppose that R ∈ L(U, W ) and that S, T ∈ L(V, X) where U , V , W , and
X are finite dimensional vector spaces. Then
R ⊗ (S + T ) = R ⊗ S + R ⊗ T.
33.2.5. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then for all α, β ∈ R
(αS) ⊗ (βT ) = αβ(S ⊗ T ).
33.2.6. Exercise. Suppose that Q ∈ L(U, W ), R ∈ L(V, X), S ∈ L(W, Y ), and that T ∈
L(X, Z) where U , V , W , X, Y , and Z are finite dimensional vector spaces. Then
(S ⊗ T )(Q ⊗ R) = SQ ⊗ T R.
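With respect to the product bases, S ⊗ T is represented by the Kronecker product of the representing matrices, so the identity in 33.2.6 can be checked numerically. A self-contained sketch (matrix sizes and entries are arbitrary choices):

```python
def matmul(A, B):
    # ordinary matrix product of two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    # Kronecker product: the matrix of A (x) B with respect to the
    # lexicographically ordered product bases
    m, n = len(B), len(B[0])
    return [[A[i // m][j // n] * B[i % m][j % n]
             for j in range(len(A[0]) * n)] for i in range(len(A) * m)]

S = [[1, 2], [3, 4]]            # S in L(U, W)
Q = [[0, 1], [1, 1]]            # Q in L(U', U), composable with S
T = [[2, 0, 1], [1, 1, 0]]      # T in L(V, X)
R = [[1, 0], [0, 2], [3, 1]]    # R in L(V', V), composable with T

# the "mixed product" identity (S (x) T)(Q (x) R) = SQ (x) TR
lhs = matmul(kron(S, T), kron(Q, R))
rhs = kron(matmul(S, Q), matmul(T, R))
assert lhs == rhs
```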
33.2.7. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Show that if S and T are invertible, then so is S ⊗ T and
(S ⊗ T )−1 = S −1 ⊗ T −1 .
33.2.8. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Show that if S ⊗ T = 0, then either S = 0 or T = 0.
33.2.9. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then
ran(S ⊗ T ) = ran S ⊗ ran T.
33.2.10. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then
ker(S ⊗ T ) = ker S ⊗ V + U ⊗ ker T.
33.2.11. Exercise. Suppose that S ∈ L(U, W ) and that T ∈ L(V, X) where U , V , W , and X
are finite dimensional vector spaces. Then
(S ⊗ T)^t = S^t ⊗ T^t.
CHAPTER 34
Grassmann Algebras
34.1. Background
Topics: group algebras, Grassmann algebras, wedge product.
34.1.1. Definition. Let V be a d-dimensional vector space over a field F. We say that ⋀(V) is the Grassmann algebra (or the exterior algebra) over V if
(1) ⋀(V) is a unital algebra over F (multiplication is denoted by ∧),
(2) V is “contained in” ⋀(V),
(3) v ∧ v = 0 for every v ∈ V,
(4) dim(⋀(V)) = 2^d, and
(5) ⋀(V) is generated by 1_{⋀(V)} and V.
The multiplication ∧ in a Grassmann algebra is called the wedge product (or the exterior product).
34.2.2. Exercise. In the middle of page 48 of AOLT the author gives a “succinct formula” for
the product of two elements in a group algebra. Show that this formula is correct.
34.2.3. Exercise. Check the author’s computation of xy in Example 2.10, page 48, AOLT. Find
the product both by using the formula given in the middle of page 48 and by direct multiplication.
34.2.5. Exercise. In definition 34.1.1 why is “contained in” in quotation marks? Give a more precise version of condition (2).
34.2.6. Exercise. In the last condition of definition 34.1.1 explain more precisely what is meant by saying that ⋀(V) is generated by 1 and V.
34.2.7. Exercise. Show that if ⋀(V) is a Grassmann algebra over a vector space V, then 1_{⋀(V)} ∉ V.
34.2.9. Exercise. Let V be a d-dimensional vector space with basis E = {e_1, . . . , e_d}. For each nonempty subset S = {e_{i1}, e_{i2}, . . . , e_{ip}} of E let e_S = e_{i1} ∧ e_{i2} ∧ · · · ∧ e_{ip}. Also let e_∅ = 1_{⋀(V)}. Show that {e_S : S ⊆ E} is a basis for the Grassmann algebra ⋀(V).
CHAPTER 35
Graded Algebras
35.1. Background
Topics: graded algebras, homogeneous elements, decomposable elements.
and ⋀^1(V) = V. If the dimension of V is d, take ⋀^k(V) = {0} for all k > d. (And if you wish to
35.2.2. Exercise. If the dimension of a vector space V is 3 or less, then every homogeneous
element of the corresponding Grassmann algebra is decomposable.
35.2.3. Exercise. If the dimension of a (finite dimensional) vector space V is at least four,
then there exist homogeneous elements in the corresponding Grassmann algebra which are not
decomposable. Hint. Let e1 , e2 , e3 , and e4 be distinct basis elements of V and consider (e1 ∧ e2 ) +
(e3 ∧ e4 ).
35.2.4. Exercise. The elements v_1, v_2, . . . , v_p in a vector space V are linearly independent if and only if v_1 ∧ v_2 ∧ · · · ∧ v_p ≠ 0 in the corresponding Grassmann algebra ⋀(V).
35.2.5. Exercise. Let T : V → W be a linear map between finite dimensional vector spaces (over the same field). Then there exists a unique extension of T to a unital algebra homomorphism ⋀(T) : ⋀(V) → ⋀(W). This extension maps ⋀^k(V) into ⋀^k(W) for each k ∈ N.
35.2.6. Exercise. The pair of maps V ↦ ⋀(V) and T ↦ ⋀(T) is a covariant functor from the category of vector spaces and linear maps to the category of unital algebras and unital algebra homomorphisms.
35.2.7. Exercise. If V is a vector space of dimension d, then dim ⋀^p(V) = (d choose p) for 0 ≤ p ≤ d.
35.2.8. Exercise. If V is a finite dimensional vector space, ω ∈ ⋀^p(V), and µ ∈ ⋀^q(V), then
ω ∧ µ = (−1)^{pq} µ ∧ ω.
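The dimension counts in exercises 35.2.7 and 34.2.9 can be illustrated by enumerating basis subsets (a sketch; d = 5 is an arbitrary choice):

```python
import itertools
import math

d = 5
basis = ["e1", "e2", "e3", "e4", "e5"]

# the basis vectors e_S of the Grassmann algebra with |S| = p span
# the degree-p part, so its dimension is the binomial coefficient
for p in range(d + 1):
    degree_p = list(itertools.combinations(basis, p))
    assert len(degree_p) == math.comb(d, p)

# summing over p recovers the total dimension 2^d
assert sum(math.comb(d, p) for p in range(d + 1)) == 2 ** d
```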
CHAPTER 36
Existence of Grassmann Algebras
36.1. Background
Topics: tensor algebras, shuffle permutations. (A good reference for much of this material—and the following material on differential forms—is chapter 4 of [3].)
36.1.1. Definition. Let V_0, V_1, V_2, . . . be vector spaces (over the same field). Then their (external) direct sum, which is denoted by ⊕_{k=0}^∞ V_k, is defined to be the set of all functions v : Z⁺ → ⋃_{k=0}^∞ V_k with finite support such that v(k) = v_k ∈ V_k for each k ∈ Z⁺. The usual pointwise addition and scalar multiplication make this set into a vector space.
T^k(V) ⊗ T^m(V) ≅ T^{k+m}(V)
and extending by linearity to all of T(V). The resulting algebra is the tensor algebra of V (or generated by V).
36.1.3. Notation. If V is a vector space over F and k ∈ N we denote by Alt^k(V) the set of all alternating k-linear maps from V^k into F. (The space Alt^1(V) is just V∗.) Additionally, take Alt^0(V) = F.
36.2.2. Exercise. Let V be a finite dimensional vector space and J be the ideal in the tensor
algebra T (V ) generated by the set of all elements of the form v ⊗ v where v ∈ V . Show that the
quotient algebra T (V )/J is the Grassmann algebra over V ∗ (or, equivalently, over V ).
36.2.3. Exercise. Show that if V is a finite dimensional vector space and k > dim V , then
Altk (V ) = {0}.
36.2.4. Exercise. Give an example of a (4, 5)-shuffle permutation σ of the set N9 = {1, . . . , 9}
such that σ(7) = 4.
36.2.5. Exercise. Show that definition 36.1.5 is not overly optimistic by verifying that if ω ∈ Alt^p(V) and µ ∈ Alt^q(V), then ω ∧ µ ∈ Alt^{p+q}(V).
36.2.6. Exercise. Show that the multiplication defined in 36.1.5 is associative. That is, if ω ∈ Alt^p(V), µ ∈ Alt^q(V), and ν ∈ Alt^r(V), then
ω ∧ (µ ∧ ν) = (ω ∧ µ) ∧ ν.
36.2.7. Exercise. Let V be a finite dimensional vector space. Explain in detail how to make Alt^k(V) (or, if you prefer, Alt^k(V∗)) into a vector space for each k ∈ Z and how to make the collection of these into a Z-graded algebra. Show that this algebra is the Grassmann algebra generated by V. Hint. Take Alt^k(V) = {0} for each k < 0 and extend the definition of the wedge product so that if α ∈ Alt^0(V) = F and ω ∈ Alt^p(V), then α ∧ ω = αω.
CHAPTER 37
The Hodge ∗-Operator
37.1. Background
Topics: right-handed basis, orientation, opposite orientation, Hodge star operator.
37.1.1. Definition. Let E be a basis for an n-dimensional vector space V . Then the n-tuple
(e1 , . . . , en ) is an ordered basis for V if e1 , . . . , en are distinct elements of E.
37.1.2. Definition. Let E = (e1 , . . . , en ) be an ordered basis for Rn . We say that the basis E
is right-handed if det[e1 , . . . , en ] > 0 and left-handed otherwise.
37.2.3. Exercise. Let V be an n-dimensional real inner product space. In exercise 9.2.13 we established an isomorphism Φ : v ↦ v∗ between V and its dual space V∗. Show how this isomorphism can be used to induce an inner product on V∗. Then show how this may be used to create an inner product on Alt^p(V) for 2 ≤ p ≤ n. Hint. For v, w ∈ V let ⟨v∗, w∗⟩ = ⟨v, w⟩. Then for ω_1, . . . , ω_p, µ_1, . . . , µ_p ∈ Alt^1(V) let ⟨ω_1 ∧ · · · ∧ ω_p, µ_1 ∧ · · · ∧ µ_p⟩ = det[⟨ω_j, µ_k⟩].
37.2.4. Exercise. Let V be a d-dimensional oriented real inner product space. Fix a unit vector vol ∈ Alt^d(V). This vector is called a volume element. (In the case where V = R^d, we will always choose vol = e_1^∗ ∧ · · · ∧ e_d^∗ where (e_1, . . . , e_d) is the usual ordered basis for R^d.)
Let ω ∈ Alt^p(V) and q = d − p. Show that there exists a vector ∗ω ∈ Alt^q(V) such that
⟨∗ω, µ⟩ vol = ω ∧ µ
for each µ ∈ Alt^q(V). Show that the map ω ↦ ∗ω from Alt^p(V) into Alt^q(V) is a vector space isomorphism.
37.2.5. Exercise. Let V be a finite dimensional oriented real inner product space of dimension n. Suppose that p + q = n. Show that ∗∗ω = (−1)^{pq} ω for every ω ∈ Alt^p(V).
CHAPTER 38
Differential Forms
38.1. Background
Topics: tangents, tangent space, cotangent space, partial derivatives on a manifold, vector fields, differential forms, the differential of a map between manifolds. (One good source for much of this material is chapters 1 and 4 of [3].)
38.1.1. Notation. Let m be a point on a manifold M. A real valued function f belongs to C_m^∞
by the vector spaces Ω^p(U) and the wedge product by Ω(U). In everything that follows when we refer to a p-form or a differential form we will assume that it is smooth.
38.1.5. Definition. Let φ = (x_1, . . . , x_d) be a chart (or coordinate system) at a point m on a d-dimensional manifold M. We define the partial derivative at m with respect to x_k, denoted by D_{x_k}(m), to be the tangent given by the formula
D_{x_k}(m)(f) = (∂(f ∘ φ⁻¹)/∂u_k)(φ(m))
where ∂/∂u_k is the usual partial derivative with respect to the k-th coordinate on R^d. Depending on which variable we are interested in we often write D_{x_k}(f)(m) for D_{x_k}(m)(f). It is common practice to write ∂/∂x_k for D_{x_k}. In R^n we often write D_k for D_{u_k}, where, as above, u_1, . . . , u_n are the standard coordinates on R^n. Also, if f is a smooth real valued function on a manifold, we may write f_{x_k} (or just f_k) for D_{x_k}(f).
38.1.6. Theorem. Let φ = (x_1, . . . , x_d) be a chart (or coordinate system) at a point m on a d-dimensional manifold M. If t is a tangent at m, then
t = ∑_{k=1}^d t(x_k) D_{x_k}.
For a proof of this theorem consult [3] (section 1.3, theorem 1) or [25] (chapter 1, proposition
3.1).
38.1.7. Definition. Let F : M → N be a smooth function between manifolds and m be a point in M. Define dF_m : T_m → T_{F(m)}, the differential of F at m, by dF_m(t)(g) = t(g ∘ F) for all t ∈ T_m and all g ∈ C_{F(m)}^∞.
38.1.8. Notation. In exercise 37.2.4 we defined a volume element vol in Rd . We will adopt the
same notation vol for the differential form dx1 ∧· · ·∧dxd on a d-manifold M (where φ = (x1 , . . . , xd )
is a chart on M ). This is called the volume form.
38.2. Exercises (Due: Fri. April 24)
38.2.4. Exercise. Help future students. Design some reasonably elementary exercises which
you think would help clarify points in the background material for this assignment that you found
confusing at first reading.
CHAPTER 39
The Exterior Differentiation Operator
39.1. Background
Topics: exterior derivative.
Proofs of the existence and uniqueness of such a function can be found in [20] (theorem 12.14),
[25] (chapter 1, theorem 11.1), and [3] (section 4.6).
39.2.4. Exercise. In beginning calculus texts some curious arguments are given for replacing the expression dx dy in the integral ∬_R f dx dy by r dr dθ when we change from rectangular to polar coordinates in the plane. Show that if we interpret dx dy as the differential form dx ∧ dy, then this is a correct substitution. (Assume additionally that R is a region in the open first quadrant and that the integral of f over R exists.)
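The claim can be sanity-checked numerically: with x = r cos θ and y = r sin θ, the coefficient relating dx ∧ dy to dr ∧ dθ is the Jacobian determinant ∂(x, y)/∂(r, θ), which should equal r (a finite-difference sketch, not a proof):

```python
import math

def jacobian_det(x, y, r, t, h=1e-6):
    # central-difference approximation to det d(x, y)/d(r, theta)
    dx_dr = (x(r + h, t) - x(r - h, t)) / (2 * h)
    dx_dt = (x(r, t + h) - x(r, t - h)) / (2 * h)
    dy_dr = (y(r + h, t) - y(r - h, t)) / (2 * h)
    dy_dt = (y(r, t + h) - y(r, t - h)) / (2 * h)
    return dx_dr * dy_dt - dx_dt * dy_dr

x = lambda r, t: r * math.cos(t)
y = lambda r, t: r * math.sin(t)

# dx ^ dy = (det J) dr ^ dtheta, and det J should equal r
for r, t in [(1.0, 0.3), (2.5, 1.2)]:
    assert abs(jacobian_det(x, y, r, t) - r) < 1e-4
```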
39.2.5. Exercise. Give an explanation similar to the one in the preceding exercise of the change
in triple integrals from rectangular to spherical coordinates.
CHAPTER 40
Differential Calculus on R3
40.1. Background
Topics: association of vector fields with 1-forms, gradient, curl, divergence.
40.1.1. Notation. If, on some manifold, f is a 0-form and ω is a differential form (of arbitrary
degree), we write f ω instead of f ∧ ω for the product of the forms.
40.2.8. Exercise. Let f be a scalar field on an open subset U of R3 . Show that grad f (the
gradient of f ) is the vector field associated with the 1-form df .
40.2.9. Exercise. Let F and G be vector fields on a region in R3 . Show that if ω and µ are,
respectively, their associated 1-forms, then ∗ (ω ∧ µ) is the 1-form associated with F × G.
40.2.11. Exercise. Let f be a scalar field. Write the left side of Laplace’s equation fxx + fyy +
fzz = 0 in terms of d, ∗, and f only.
40.2.12. Exercise. Suppose that F is a smooth vector field in R3 and that ω is its associated
1-form. Show that ∗ dω is the 1-form associated with curl F.
40.2.13. Exercise. Let F be a vector field on R3 and ω be its associated 1-form. Show that
∗ d ∗ ω = div F. (Here div F is the divergence of F .)
40.2.14. Exercise. Let F be a vector field on an open subset of R3 . Use differential forms (but
not partial derivatives) to show that div curl F = 0.
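A numerical sanity check of div curl F = 0 (not the requested differential-forms proof, and it uses the partial derivatives the exercise forbids; the field F is an arbitrary smooth choice):

```python
import math

h = 1e-3

def d2(F, i, j, p):
    # symmetric central-difference stencil for the mixed partial
    # d^2 F / dx_i dx_j at the point p
    def shift(q, k, s):
        out = list(q); out[k] += s * h; return tuple(out)
    return (F(shift(shift(p, i, 1), j, 1)) - F(shift(shift(p, i, 1), j, -1))
            - F(shift(shift(p, i, -1), j, 1)) + F(shift(shift(p, i, -1), j, -1))) / (4 * h * h)

# an arbitrary smooth vector field F = (F0, F1, F2)
F0 = lambda p: math.sin(p[0] * p[1])
F1 = lambda p: p[2] ** 3 + p[0]
F2 = lambda p: math.exp(p[1]) * p[0]

p = (0.4, -0.7, 1.3)
# div curl F, written out component by component; it vanishes because
# the mixed partials commute (equality of the symmetric stencils)
div_curl = (d2(F2, 1, 0, p) - d2(F1, 2, 0, p)
            + d2(F0, 2, 1, p) - d2(F2, 0, 1, p)
            + d2(F1, 0, 2, p) - d2(F0, 1, 2, p))
assert abs(div_curl) < 1e-6
```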
40.2. Exercises (Due: Wed. April 29)
40.2.15. Exercise. Let f be a smooth scalar field (that is, a 0-form) in R3 . Use differential
forms (but not partial derivatives) to show that curl grad f = 0.
CHAPTER 41
Closed and Exact Forms
41.1. Background
Topics: closed differential forms, exact differential forms.
where d_{p−1} and d_p are exterior differentiation operators. Elements of ker d_p are called closed p-forms and elements of ran d_{p−1} are exact p-forms.
41.2.2. Exercise. With the shameless notations suggested in exercise 41.2.1 try to make sense
of the claim that for a vector field F in R3
d(F · dS) = (div F) dV
where dV is the “volume element” dx ∧ dy ∧ dz.
41.2.4. Exercise. Show that if ω and µ are closed differential forms, then so is ω ∧ µ.
41.2.5. Exercise. Show that if ω is an exact differential form and µ is a closed form, then ω ∧ µ
is exact.
CHAPTER 42
42.1. Background
Topics: cocycles, coboundaries, the de Rham cohomology group.
42.2.3. Exercise. In exercise 42.2.1 you showed that Ωp was a functor for each p. What about
Ω itself? Is it a functor? Explain.
42.2.7. Exercise. Let V = {0} be the 0-dimensional Euclidean space. Compute the pth de
Rham cohomology group H p (V ) for all p ∈ Z.
42.2.9. Exercise. Let U be the union of m disjoint open intervals in R. Compute H p (U ) for
all p ∈ Z.
42.2.10. Exercise. Let U be an open subset of R^n. For [ω] ∈ H^p(U) and [µ] ∈ H^q(U) define
[ω][µ] = [ω ∧ µ] ∈ H^{p+q}(U).
Explain why exercise 41.2.4 is necessary for this definition to make sense. Prove also that this definition does not depend on the representatives chosen from the equivalence classes. Show that this definition makes H∗(U) = ⊕_{p∈Z} H^p(U) into a Z-graded algebra. This is the de Rham cohomology algebra of U.
42.2.11. Exercise. Prove that with definition 42.1.3 H ∗ becomes a contravariant functor from
the category of open subsets of Rn and smooth maps to the category of Z-graded algebras and their
homomorphisms.
CHAPTER 43
Cochain Complexes
43.1. Background
Topics: cochain complexes, cochain maps.
of vector spaces and linear maps is a cochain complex if d^p ∘ d^{p−1} = 0 for all p ∈ Z. Such a sequence may be denoted by (V∗, d) or just by V∗.
· · · --> W^p --G^p--> W^{p+1} --> · · ·    (together with the vertical maps δ^p)
commutes.
0 --> U∗ --F--> V∗ --G--> W∗ --> 0
of cochain complexes and cochain maps is (short) exact if for every p ∈ Z the sequence
0 --> U^p --F^p--> V^p --G^p--> W^p --> 0
0 --> U∗ --F--> V∗ --G--> W∗ --> 0
Hint. If w is a cocycle in Wp , then, since Gp is surjective, there exists v ∈ Vp such that w = Gp (v).
It follows that dv ∈ ker Gp+1 = ran Fp+1 so that dv = Fp+1 (u) for some u ∈ Up+1 . Let ηp ([w]) = [u].
CHAPTER 44
Simplicial Homology
44.1. Background
Topics: convex, convex combinations, convex hull, convex independent, closed simplex, open
simplex, oriented simplex, face of a simplex, simplicial complex, r -skeleton of a complex, polyhedron
of a complex, chains, boundary maps, cycles, boundaries, Betti number, Euler characteristic.
44.1.2. Definition. If a and b are vectors in the vector space V , then the closed segment
between a and b, denoted by [a, b], is {(1 − t)a + tb : 0 ≤ t ≤ 1}.
44.1.3. CAUTION. Notice that there is a slight conflict between this notation, when applied
to the vector space R of real numbers, and the usual notation for closed intervals on the real line.
In R the closed segment [a, b] is the same as the closed interval [a, b] provided that a ≤ b. If a > b, however, the closed segment [a, b] is the same as the segment [b, a]: it contains all numbers c such that b ≤ c ≤ a, whereas the closed interval [a, b] is empty.
44.1.4. Definition. A subset C of a vector space V is convex if the closed segment [a, b] is
contained in C whenever a, b ∈ C.
44.1.5. Definition. Let A be a subset of a vector space V . The convex hull of A is the
smallest convex subset of V which contains A.
44.1.8. Definition. Let p ∈ Z⁺. The closed convex hull of a convex independent set S = {v_0, . . . , v_p} of p + 1 vectors in some vector space is a closed p-simplex. It is denoted by [s] or by [v_0, . . . , v_p]. The integer p is the dimension of the simplex. The open p-simplex determined by the set S is the set of all convex combinations ∑_{k=0}^p α_k v_k of elements of S where each α_k > 0. The open simplex will be denoted by (s) or by (v_0, . . . , v_p). We make the special convention that a single vector {v} is both a closed and an open 0-simplex.
If [s] is a simplex in Rn then the plane of [s] is the affine subspace of Rn having the least
dimension which contains [s]. It turns out that the open simplex (s) is the interior of [s] in the
plane of [s].
For all other p let ∂p be the zero map. The maps ∂p are called boundary maps. Notice that each
∂p is a linear map.
44.1.15. Definition. Let K be a simplicial complex in Rn and 0 ≤ p ≤ dim K. Define Zp (K) =
Zp to be the kernel of ∂p : Cp → Cp−1 and Bp (K) = Bp to be the range of ∂p+1 : Cp+1 → Cp . The
members of Zp are p -cycles and the members of Bp are p -boundaries.
It is clear from exercise 44.2.2 that Bp is a subspace of the vector space Zp . Thus we may define
Hp (K) = Hp to be Zp /Bp . It is the pth simplicial homology group of K. (And, of course, Zp ,
Bp , and Hp are the trivial vector space whenever p < 0 or p > dim K.)
44.1.16. Definition. Let K be a simplicial complex. The number β_p := dim H_p(K) is the pth Betti number of the complex K. And χ(K) := ∑_{p=0}^{dim K} (−1)^p β_p is the Euler characteristic of K.
44.2.3. Exercise. Let K be the topological boundary (that is, the 1 -skeleton) of an oriented
2 -simplex in R2 . Compute Cp (K), Zp (K), Bp (K), and Hp (K) for each p.
44.2.4. Exercise. What changes in exercise 44.2.3 if K is taken to be the oriented 2 -simplex
itself.
44.2.5. Exercise. Let K be the simplicial complex in R2 comprising two triangular regions
similarly oriented with a side in common. For all p compute Cp (K), Zp (K), Bp (K), and Hp (K).
44.2.6. Exercise. Let K be a simplicial complex. For 0 ≤ p ≤ dim K let α_p be the number of
p-simplexes in K. That is, α_p = dim C_p(K). Show that

    χ(K) = Σ_{p=0}^{dim K} (−1)^p α_p .
CHAPTER 45
Simplicial Cohomology
45.1. Background
Topics: simplicial cochain, simplicial cocycles, simplicial coboundaries, simplicial cohomology
group, smooth submanifold, smoothly triangulated manifold, de Rham’s theorem, pullback of dif-
ferential forms.
45.1.1. Definition. Let K be a simplicial complex. For each p ∈ Z let C^p(K) = (C_p(K))^*.
The elements of C^p(K) are (simplicial) p-cochains. Then the adjoint ∂_p^* of the boundary map
∂_p : C_{p+1}(K) → C_p(K) is the linear map

    ∂_p^* = ∂^* : C^p(K) → C^{p+1}(K) .

(Notice that ∂^* ∘ ∂^* = 0.) Also define
(1) Z^p(K) := ker ∂_p^*;
(2) B^p(K) := ran ∂_{p−1}^*; and
(3) H^p(K) := Z^p(K)/B^p(K).
Elements of Z^p(K) are (simplicial) p-cocycles and elements of B^p(K) are (simplicial) p-
coboundaries. The vector space H^p(K) is the pth simplicial cohomology group of K.
45.1.2. Definition. Let F : N → M be a smooth injection between smooth manifolds. The
pair (N, F ) is a smooth submanifold of M if dFn is injective for every n ∈ N .
45.1.3. Definition. Let M be a smooth manifold, K be a simplicial complex in R^n, and
h : [K] → M be a homeomorphism. The triple (M, K, h) is a smoothly triangulated manifold
if for every open simplex (s) in K the map h|_{[s]} : [s] → M has an extension h_s : U → M to a
neighborhood U of [s] lying in the plane of [s] such that (U, h_s) is a smooth submanifold of M .
45.1.4. Theorem. A smooth manifold can be triangulated if and only if it is compact.
The proof of this theorem is tedious enough that very few textbook authors choose to include
it in their texts. You can find a “simplified” proof in [6].
45.1.5. Theorem (de Rham’s theorem). If (M, K, φ) is a smoothly triangulated manifold, then

    H^p(M) ≅ H^p(K)

for every p ∈ Z.
You can find proofs of de Rham’s theorem in [16], chapter IV, theorem 3.1; [20], theorem 16.12;
[24], pages 167–173; and [26], theorem 4.17.
45.1.6. Definition (pullbacks of differential forms). Let F : M → N be a smooth mapping
between smooth manifolds. Then there exists an algebra homomorphism F ∗ : Ω(N ) → Ω(M ),
called the pullback associated with F which satisfies the following conditions:
the square

    Ω^p(N)   ──F^*──>  Ω^p(M)
      │ d                │ d
      ∨                  ∨
    Ω^{p+1}(N) ──F^*──> Ω^{p+1}(M)

commutes for every p ∈ Z.
CHAPTER 46

Integration of Differential Forms

46.1. Background
Topics: integral of a differential form over a simplicial complex, integral of a differential form over
a manifold, manifolds with boundary.
where the right hand side is an ordinary Riemann integral. If ⟨v_0⟩ is a 0-simplex, we make a special
definition

    ∫_{⟨v_0⟩} f = f(v_0)

for every 0-form f .
Extend the preceding definition to p-chains by requiring the integral to be linear as a function
of simplexes; that is, if c = Σ a_s ⟨s⟩ is a p-chain (in some simplicial complex) and μ is a p-form,
define

    ∫_c μ = Σ a_s ∫_{⟨s⟩} μ .
The “tangential component of F ”, written F_T, may be regarded as the 1-form Σ_{k=1}^{n} F^k dx^k.
Make sense of the preceding definition in terms of the definition of the integral of 1 -forms over
a smoothly triangulated manifold. For simplicity take n = 2. Hint. Suppose we have the following:
(1) ht0 , t1 i (with t0 < t1 ) is an oriented 1 -simplex in R;
(2) V is an open subset of R2 ;
(3) c : J → V is an injective smooth curve in V , where J is an open interval containing [t0 , t1 ];
and
(4) ω = a dx + b dy is a smooth 1 -form on V .
First show that

    c^*(dx)(t) = Dc^1(t)

for t_0 ≤ t ≤ t_1. (We drop the notational distinction between c and its extension c_s to J. Since
the tangent space T_t is one-dimensional for every t, we identify T_t with R. Choose v (in (3) of
definition 45.1.6) to be the usual basis vector in R, the number 1.)
Show in a similar fashion that

    c^*(dy)(t) = Dc^2(t) .

Then write an expression for c^*(ω)(t). Finally conclude that ∫_1 ω(⟨t_0, t_1⟩) is indeed equal to
∫_{t_0}^{t_1} ⟨(a, b) ∘ c, Dc⟩ as claimed in 46.1.
46.2.2. Exercise. Let S¹ be the unit circle in R² oriented counterclockwise and let F be the
vector field on R² defined by F(x, y) = (2x³ − y³) i + (x³ + y³) j. Use your work in exercise 46.2.1 to
calculate ∫_{S¹} F_T. Hint. You may use without proof two facts: (1) the integral does not depend
on the parametrization (triangulation) of the curve, and (2) the results of exercise 46.2.1 hold also
for simple closed curves in R²; that is, for curves c : [t_0, t_1] → R² which are injective on the open
interval (t_0, t_1) but which satisfy c(t_0) = c(t_1).
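As a sanity check on your answer (not a substitute for the exercise), the integral can be evaluated numerically with the standard parametrization c(t) = (cos t, sin t); the uniform Riemann sum below is our choice of quadrature, and it is spectrally accurate here because the integrand is smooth and periodic:

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 200_001)
x, y = np.cos(t), np.sin(t)
# <F(c(t)), c'(t)> with F = (2x^3 - y^3, x^3 + y^3) and c'(t) = (-sin t, cos t)
integrand = (2 * x**3 - y**3) * (-y) + (x**3 + y**3) * x
val = float(np.sum(integrand[:-1] * np.diff(t)))
print(val)   # approximately 3*pi/2
```

The odd-power cross terms integrate to zero over a full period, leaving ∫ sin⁴ + cos⁴ = 3π/4 + 3π/4 = 3π/2.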
46.2.3. Exercise. Let V be an open subset of R³, F : V → R³ be a smooth vector field, and
(S, K, h) be a smoothly triangulated 2-manifold such that S ⊆ V . It is conventional to define the
“normal component of F over S ”, often denoted by ∬_S F_N, by the formula

    ∬_S F_N = ∬_K ⟨F ∘ h, n⟩
(b) Let u and v (in that order) be the coordinates in the plane of [K] and x, y, and z (in that
order) be the coordinates in R³. Show that h^*(dx) = h^1_1 du + h^1_2 dv. Also compute h^*(dy)
and h^*(dz).
Remark. If at each point in [K] we identify the tangent plane to R2 with R2 itself and if
we use conventional notation, the “v” which appears in (3) of definition 45.1.6 is just not
written. One keeps in mind that the components of h and all the differential forms are
functions on (a neighborhood of) [K].
(c) Now find h∗ (ω). (Recall that ω = FN is defined above.)
(d) Show for each simplex (s) in K that

    ∫_2 ω(⟨s⟩) = ∬_{[s]} ⟨F ∘ h, n⟩ .
(e) Finally show that if ⟨s_1⟩, . . . , ⟨s_n⟩ are the oriented 2-simplexes of K and c = Σ_{k=1}^{n} ⟨s_k⟩,
then

    ∫_2 ω(c) = ∬_{[K]} ⟨F ∘ h, n⟩ .
CHAPTER 47

Stokes’ Theorem
47.1. Background
Topics: Stokes’ theorem.
47.1.1. Theorem (Stokes’ theorem). Suppose that (M, K, h) is an oriented smoothly triangu-
lated manifold with boundary. Then the integration operator ∫ = ⊕_{p∈Z} ∫_p is a cochain map from
the cochain complex (Ω^*(M), d ) to the cochain complex (C^*(K), ∂^*).
This is an important and standard theorem, which appears in many versions and with many
different proofs. See, for example, [1], theorem 7.2.6; [19], chapter XVII, theorem 2.1; [21], theorem
10.8; or [26], theorems 4.7 and 4.9.
Recall that when we say in Stokes’ theorem that the integration operator is a cochain map, we
are saying that the following diagram commutes.

    ⋯ ──d──>  Ω^p(M)  ──d──>  Ω^{p+1}(M)  ──d──> ⋯
                │ ∫_p            │ ∫_{p+1}
                ∨                ∨
    ⋯ ──∂^*──> C^p(K) ──∂^*──> C^{p+1}(K) ──∂^*──> ⋯

That is, ∫_{p+1} ∘ d = ∂^* ∘ ∫_p for every p ∈ Z.
In more conventional notation all mention of the triangulating simplicial complex K and of the
map h is suppressed. This is justified by the fact that it can be shown that the value of the integral
is independent of the particular triangulation used. Then when the equations of the form (47.2) are
added over all the (p + 1) -simplexes comprising K we arrive at a particularly simple formulation
of (the conclusion of) Stokes’ theorem
    ∫_M dω = ∫_{∂M} ω .    (47.3)
One particularly important topic that has been glossed over in the preceding is a discussion of
orientable manifolds (those which possess nowhere vanishing volume forms), their orientations, and
the manner in which an orientation of a manifold with boundary induces an orientation on its
boundary. One of many places where you can find a careful development of this material is in
sections 6.5 and 7.2 of [1].
47.1.2. Theorem. Let ω be a 1-form on a connected open subset U of R². Then ω is exact on
U if and only if ∫_C ω = 0 for every simple closed curve C in U .
For a proof of this result see [10], chapter 2, proposition 1.
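Theorem 47.1.2 is easy to probe numerically. The standard example (our choice here) is the winding form ω = (−y dx + x dy)/(x² + y²) on U = R² \ {0}: its integral over the unit circle is 2π ≠ 0, so ω cannot be exact on U, even though it is closed there.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 100_001)
x, y = np.cos(t), np.sin(t)
# Pull omega back along c(t) = (cos t, sin t); on the unit circle the
# integrand is identically 1, so the integral is the circumference 2*pi.
integrand = (-y * (-np.sin(t)) + x * np.cos(t)) / (x**2 + y**2)
winding = float(np.sum(integrand[:-1] * np.diff(t)))
print(winding)   # approximately 2*pi, hence omega is not exact on U
```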
47.2. Exercises (Due: Wed. May 20)
47.2.2. Exercise. What classical theorem do we get (for smooth functions) from the version
of Stokes’ theorem given by equation (47.3) in the special case that ω is a 0-form and M is a
1-manifold (with boundary) in R?
47.2.3. Exercise. What classical theorem do we get (for smooth functions) from the version
of Stokes’ theorem given by equation (47.3) in the special case that ω is a 0-form and M is a
1-manifold (with boundary) in R³?
47.2.4. Exercise. What classical theorem do we get (for smooth functions) from the version
of Stokes’ theorem given by equation (47.3) in the special case that ω is a 1-form and M is a
2-manifold (with boundary) in R²?
47.2.5. Exercise. Use exercise 47.2.4 to compute ∫_{S¹} (2x³ − y³) dx + (x³ + y³) dy (where S¹ is
the unit circle oriented counterclockwise).
47.2.6. Exercise. What classical theorem do we get (for smooth functions) from the version
of Stokes’ theorem given by equation (47.3) in the special case that ω is a 1-form and M is a
2-manifold (with boundary) in R³?
47.2.7. Exercise. What classical theorem do we get (for smooth functions) from the version
of Stokes’ theorem given by equation (47.3) in the special case that ω is a 2-form and M is a
3-manifold (with boundary) in R³?
47.2.8. Exercise. Your good friend Fred R. Dimm calls you on his cell phone seeking help
with a math problem. He says that he wants to evaluate the integral of the normal component of
the vector field on R3 whose coordinate functions are x, y, and z (in that order) over the surface
of a cube whose edges have length 4. Fred is concerned that he’s not sure of the coordinates of
the vertices of the cube. How would you explain to Fred (over the phone) that it doesn’t matter
where the cube is located and that it is entirely obvious that the value of the surface integral he is
interested in is 192?
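The location-independence Fred needs can be made explicit by computing the flux face by face: on each pair of opposite faces only one coordinate of F matters, and it is constant on each face. A sketch (the helper name `cube_flux` is ours):

```python
def cube_flux(a, b, c, s=4.0):
    """Flux of F(x,y,z) = (x, y, z) out of the cube [a,a+s] x [b,b+s] x [c,c+s].

    For the pair of faces perpendicular to the x-axis the outward flux is
    (a+s)*s^2 - a*s^2 = s^3, independent of a; similarly for y and z.
    """
    area = s * s
    return sum(((lo + s) - lo) * area for lo in (a, b, c))

print(cube_flux(0, 0, 0), cube_flux(-7, 3.5, 100))   # both 192.0
```

Each pair of faces contributes s³ = 64 regardless of where the cube sits, and 3 · 64 = 192, which is of course just the divergence theorem with div F = 3.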
CHAPTER 48
Quadratic Forms
48.1. Background
Topics: quadratic forms.
48.2.2. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V . Prove
that
Q(u + v + w) − Q(u + v) − Q(u + w) − Q(v + w) + Q(u) + Q(v) + Q(w) = 0
for all u, v, w ∈ V .
48.2.3. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V .
Prove that Q(αv) = α²Q(v) for all α ∈ R and v ∈ V . Hint. First use exercise 48.2.2 to show that
Q(2v) = 4Q(v) for every v ∈ V .
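Both exercises can be spot-checked numerically for the prototypical case Q(v) = B(v, v) with B a symmetric matrix (our choice of example; the exercises of course ask for proofs valid for an arbitrary quadratic form):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
B = (B + B.T) / 2                      # a random symmetric bilinear form on R^4
Q = lambda v: float(v @ B @ v)         # its associated quadratic form

u, v, w = rng.normal(size=(3, 4))
lhs = (Q(u + v + w) - Q(u + v) - Q(u + w) - Q(v + w)
       + Q(u) + Q(v) + Q(w))
print(abs(lhs) < 1e-10, abs(Q(2 * v) - 4 * Q(v)) < 1e-10)   # True True
```

Expanding Q(x + y) = Q(x) + Q(y) + 2B(x, y) shows why the seven-term sum telescopes to zero.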
CHAPTER 49

Definition of Clifford Algebra
49.1. Background
Topics: an algebra universal over a vector space, Clifford map; Clifford algebra.
49.1.1. Definition. Let V be a vector space. A pair (U, ι), where U is a unital algebra and
ι : V → U is a linear map, is universal over V if for every unital algebra A and every linear map
f : V → A there exists a unique unital algebra homomorphism fe: U → A such that fe ◦ ι = f .
49.1.2. Definition. Let V be a real finite dimensional vector space with a quadratic form Q
and A be a real unital algebra. A map f : V → A is a Clifford map if
(i) f is linear, and
(ii) (f(v))² = Q(v)1_A for every v ∈ V .
49.1.3. Definition. Let V be a real finite dimensional vector space with a quadratic form Q.
The Clifford algebra over V is a real unital algebra Cl(V, Q), together with a Clifford map
j : V → Cl(V, Q), which satisfies the following universal condition: for every real unital alge-
bra A and every Clifford map f : V → A, there exists a unique unital algebra homomorphism
fb: Cl(V, Q) → A such that fb ◦ j = f .
49.2.2. Exercise. Let V be a vector space. Prove that there exists a unital algebra which is
universal over V .
49.2.4. Exercise. Let V be a real finite dimensional vector space with a quadratic form Q.
Prove that if the Clifford algebra Cl(V, Q) exists, then it is unique up to isomorphism.
49.2.5. Exercise. Let V be a real finite dimensional vector space with a quadratic form Q.
Prove that the Clifford algebra Cl(V, Q) exists. Hint. Recall the definition of the tensor algebra
T (V ) in 36.1.2. Try T (V )/J where J is the ideal in T (V ) generated by elements of the form
v ⊗ v − Q(v)1T (V ) where v ∈ V .
CHAPTER 50

Orthogonality with Respect to Bilinear Forms
50.1. Background
Topics: orthogonality with respect to bilinear forms, the kernel of a bilinear form, nondegenerate
bilinear forms.
50.1.1. Definition. Let B be a symmetric bilinear form on a real vector space V . Vectors v
and w in V are orthogonal, in which case we write v ⊥ w, if B(v, w) = 0. The kernel of B is
the set of all k ∈ V such that k ⊥ v for every v ∈ V . The bilinear form is nondegenerate if its
kernel is {0}.
50.1.2. Definition. Let V be a finite dimensional real vector space and B be a symmetric
bilinear form on V . An ordered basis E = (e1 , . . . , en ) for V is B-orthonormal if
(a) B(e_i, e_j) = 0 whenever i ≠ j and
(b) for each i ∈ N_n the number B(e_i, e_i) is −1 or +1 or 0.
50.1.3. Theorem. If V is a finite dimensional real vector space and B is a symmetric bilinear
form on V , then V has a B-orthonormal basis.
Proof. See [5], chapter 1, theorem 7.6.
50.1.4. Convention. Let V be a finite dimensional real vector space, let B be a symmetric
bilinear form on V , and let Q be the quadratic form associated with B. Let us agree that whenever
E = (e1 , . . . , en ) is an ordered B-orthonormal basis for V , we order the basis elements in such a
way that for some positive integers p and q
1, if 1 ≤ i ≤ p;
Q(ei ) = −1, if p + 1 ≤ i ≤ p + q;
0, if p + q + 1 ≤ i ≤ n.
50.1.5. Theorem. Let V be a finite dimensional real vector space, let B be a symmetric bilinear
form on V , and let Q be the quadratic form associated with B. Then there exist p, q ∈ Z+ such
that if E = (e_1, . . . , e_n) is a B-orthonormal basis for V and v = Σ v_k e_k, then

    Q(v) = Σ_{k=1}^{p} v_k² − Σ_{k=p+1}^{p+q} v_k² .
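Theorem 50.1.5 is Sylvester's law of inertia in basis-free form; numerically, p and q can be read off as the numbers of positive and negative eigenvalues of the matrix of B. A sketch using one degenerate example of our choosing:

```python
import numpy as np

# Matrix of B(v, w) = v1*w2 + v2*w1 on R^3 (degenerate: e3 lies in the kernel).
B = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
eig = np.linalg.eigvalsh(B)
p = int((eig > 1e-12).sum())   # number of +1's on the diagonal of Q(e_i)
q = int((eig < -1e-12).sum())  # number of -1's
print(p, q)   # 1 1: in suitable coordinates Q = u1^2 - u2^2
```

The tolerance 1e-12 guards against roundoff in the zero eigenvalue; `eigvalsh` is appropriate because the matrix of a symmetric bilinear form is symmetric.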
50.2.2. Exercise. Let B be a symmetric bilinear form on a real finite dimensional vector
space V . Suppose that V is an orthogonal direct sum V1 ⊕ · · · ⊕ Vn of subspaces. Then B is
nondegenerate if and only if the restriction of B to Vk is nondegenerate for each k. In fact, if
Vk\ is the kernel of the restriction of B to Vk , then the kernel of B is the orthogonal direct sum
V1\ ⊕ · · · ⊕ Vn\ .
50.2.3. Exercise. Let B be a nondegenerate symmetric bilinear form on a real finite dimen-
sional vector space V . If W is a subspace of V , then the restriction of B to W is nondegenerate if
and only if its restriction to W ⊥ is nondegenerate. Equivalently, B is nondegenerate on W if and
only if V = W + W ⊥ .
50.2.4. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V and
let {e1 , . . . , en } be a basis for V which is orthogonal with respect to the bilinear form B associated
with Q. Show that if Q(ek ) is nonzero for 1 ≤ k ≤ p and Q(ek ) = 0 for p < k ≤ n, then the kernel
of B is the span of {ep+1 , . . . , en }.
50.2.5. Exercise. Let Q be a quadratic form on a real finite dimensional vector space V . Show
that if dim V = n, then dim Cl(V, Q) = 2^n.
50.2.6. Exercise. Let V be a real finite dimensional vector space and let Q be the quadratic
form which is identically zero on V . Identify Cl(V, Q).
CHAPTER 51

Examples of Clifford Algebras
51.1. Background
Topics: No additional topics.
51.1.1. Notation. If V is a finite dimensional real vector space with a nondegenerate
quadratic form Q, we often denote the Clifford algebra Cl(V, Q) by Cl(p, q) where p and q are as in
theorem 50.1.5.
51.2.2. Exercise. Let V be a real finite dimensional vector space, Q be a quadratic form on V ,
and A = Cl(V, Q) be the associated Clifford algebra.
(a) Show that the map f : V → V : v 7→ −v is a linear isometry.
(b) Let ω = Cl(f). Show that ω² = id.
(c) Let A0 = {a ∈ A : ω(a) = a} and A1 = {a ∈ A : ω(a) = −a}. Show that A = A0 ⊕ A1.
Hint. If a ∈ A, let a0 = ½(a + ω(a)).
(d) Show that Ai Aj ⊆ Ai+j where i, j ∈ {0, 1} and i + j is addition modulo 2. This says that
a Clifford algebra is a Z2 -graded (or Z/2Z -graded) algebra.
51.2.3. Exercise. Let V = R and Q(v) = v² for every v ∈ V . Show that the Clifford algebra Cl(1, 0)
associated with (R, Q) is isomorphic to R ⊕ R. Hint. Consider the map u1 + ve ↦ (u − v, u + v).
51.2.4. Exercise. Let V = R and Q(v) = −v² for every v ∈ V . The Clifford algebra Cl(V, Q)
is often denoted by Cl(0, 1).
(a) Show that Cl(0, 1) ≅ C.
(b) Show that Cl(0, 1) can be represented as a subalgebra of M₂(R). Hint. AOLT, page 50,
Example 2.12.
51.2.5. Exercise. Let V = R² and Q(v) = v₁² + v₂² for every v ∈ V . The Clifford algebra
Cl(V, Q) is often denoted by Cl(2, 0). Show that Cl(2, 0) ≅ M₂(R). Hint. Let
ε₁ := ( 1 0 ; 0 −1 ), ε₂ := ( 0 1 ; 1 0 ) (rows listed between semicolons), and ε₁₂ := ε₁ε₂.
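The hint can be checked by verifying the Clifford relations for these two matrices directly; a quick numerical sketch:

```python
import numpy as np

e1 = np.array([[1, 0], [0, -1]])
e2 = np.array([[0, 1], [1, 0]])
I = np.eye(2, dtype=int)

# Clifford relations for Cl(2,0): e_i^2 = Q(e_i) I = I, and e1, e2 anticommute.
ok_squares = np.array_equal(e1 @ e1, I) and np.array_equal(e2 @ e2, I)
ok_anticommute = np.array_equal(e1 @ e2, -(e2 @ e1))

# {I, e1, e2, e1e2} is linearly independent, hence a basis of M2(R)
# (consistent with dim Cl(V,Q) = 2^n from exercise 50.2.5, here 2^2 = 4).
span = np.stack([I, e1, e2, e1 @ e2]).reshape(4, 4)
print(ok_squares, ok_anticommute, np.linalg.matrix_rank(span))   # True True 4
```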
51.2.6. Exercise. Let V = R² and Q(v) = −v₁² − v₂² for every v ∈ V . The Clifford algebra
Cl(V, Q) is often denoted by Cl(0, 2).
(a) Show that Cl(0, 2) ≅ H.
(b) Show that Cl(0, 2) can be represented as a subalgebra of M₄(R). Hint. AOLT, pages
50–51, Example 2.13.
51.2.7. Exercise. Take a look at the web page written by Pertti Lounesto:
http://users.tkk.fi/~ppuska/mirror/Lounesto/counterexamples.htm
51.2.8. Exercise. Show that the Clifford algebra Cl(3, 1) (Minkowski space-time algebra) is
isomorphic to M4 (R). Hint. Exercise 51.2.7.
Bibliography

1. Ralph Abraham, Jerrold E. Marsden, and Tudor Ratiu, Manifolds, Tensor Analysis, and Applications, Addison-Wesley, Reading, MA, 1983.
2. Robert A. Beezer, A First Course in Linear Algebra, 2004, http://linear.ups.edu/download/fcla-electric-2.00.pdf.
3. Richard L. Bishop and Richard J. Crittenden, Geometry of Manifolds, Academic Press, New York, 1964.
4. Przemyslaw Bogacki, Linear Algebra Toolkit, 2005, http://www.math.odu.edu/~bogacki/lat.
5. William C. Brown, A Second Course in Linear Algebra, John Wiley, New York, 1988.
6. Stewart S. Cairns, A simple triangulation method for smooth manifolds, Bull. Amer. Math. Soc. 67 (1961), 389–390.
7. P. M. Cohn, Basic Algebra: Groups, Rings and Fields, Springer, London, 2003.
8. Charles W. Curtis, Linear Algebra: An Introductory Approach, Springer, New York, 1984.
9. Paul Dawkins, Linear Algebra, 2007, http://tutorial.math.lamar.edu/pdf/LinAlg/LinAlg Complete.pdf.
10. Manfredo P. do Carmo, Differential Forms and Applications, Springer-Verlag, Berlin, 1994.
11. John M. Erdman, A Companion to Real Analysis, 2007, http://www.mth.pdx.edu/~erdman/CTRA/CRAlicensepage.html.
12. John M. Erdman, Operator Algebras: K-Theory of C*-Algebras in Context, 2008, http://www.mth.pdx.edu/~erdman/614/operator algebras pdf.pdf.
13. Paul R. Halmos, Finite-Dimensional Vector Spaces, D. Van Nostrand, Princeton, 1958.
14. Jim Hefferon, Linear Algebra, 2006, http://joshua.smcvt.edu/linearalgebra.
15. Kenneth Hoffman and Ray Kunze, Linear Algebra, second ed., Prentice Hall, Englewood Cliffs, N.J., 1971.
16. S. T. Hu, Differentiable Manifolds, Holt, Rinehart, and Winston, New York, 1969.
17. Thomas W. Hungerford, Algebra, Springer-Verlag, New York, 1974.
18. Saunders Mac Lane and Garrett Birkhoff, Algebra, Macmillan, New York, 1967.
19. Serge Lang, Fundamentals of Differential Geometry, Springer-Verlag, New York, 1999.
20. John M. Lee, Introduction to Smooth Manifolds, Springer, New York, 2003.
21. Ib Madsen and Jørgen Tornehave, From Calculus to Cohomology: de Rham Cohomology and Characteristic Classes, Cambridge University Press, Cambridge, 1997.
22. Theodore W. Palmer, Banach Algebras and the General Theory of *-Algebras I–II, Cambridge University Press, Cambridge, 1994/2001.
23. Steven Roman, Advanced Linear Algebra, second ed., Springer-Verlag, New York, 2005.
24. I. M. Singer and J. A. Thorpe, Lecture Notes on Elementary Topology and Geometry, Springer-Verlag, New York, 1967.
25. Gerard Walschap, Metric Structures in Differential Geometry, Springer-Verlag, New York, 2004.
26. Frank W. Warner, Foundations of Differentiable Manifolds and Lie Groups, Springer-Verlag, New York, 1983.
27. Eric W. Weisstein, MathWorld, A Wolfram Web Resource, http://mathworld.wolfram.com.
28. Wikipedia, The Free Encyclopedia, http://en.wikipedia.org.