Vector Spaces
We saw different types of vectors last session with very similar algebraic properties.
Other mathematical objects share these properties, and we will investigate these:
functions,
finite vector spaces,
polynomials,
matrices.
Because they have very similar structures, techniques useful for dealing with one of these
may be useful for others.
1. Spaces of functions
Let I be an interval, for example, [0, 1], and write C(I, R) for the set of all continuous
real-valued functions on I. We say that functions f and g are equal, and we write f = g,
if and only if f(x) = g(x) for all x ∈ I. Given functions f and g in C(I, R) and λ ∈ R,
we define new functions f + g and λf in C(I, R) as follows: (f + g)(x) = f(x) + g(x)
for all x ∈ I and (λf)(x) = λf(x) for all x ∈ I. We write −f for (−1)f, that is,
(−f)(x) = −f(x) for all x in I, and 0 for the zero function, i.e., 0(x) = 0 for all x in I.
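These pointwise operations are straightforward to model in code (a sketch; the particular functions chosen here are arbitrary examples):

```python
import math

# Pointwise operations on real-valued functions, as defined above.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(lam, f):
    return lambda x: lam * f(x)

h = add(math.sin, scale(2.0, math.cos))   # h(x) = sin x + 2 cos x
print(h(0.0))   # 2.0, since sin 0 + 2 cos 0 = 2
```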
Proposition 1. The set C(I, R) of all continuous real-valued functions on the interval
I has the following properties:
(1) for all f, g ∈ C(I, R),
f + g ∈ C(I, R)
(closure under addition)
(2) for all f ∈ C(I, R) and λ ∈ R,
λf ∈ C(I, R)
(closure under scalar multiplication)
(3) for all f ∈ C(I, R),
f + 0 = 0 + f = f
(existence of zero)
(4) for all f in C(I, R),
f + (−f) = (−f) + f = 0
(existence of additive inverses)
(5) for all f, g, h ∈ C(I, R),
(f + g) + h = f + (g + h)
(addition is associative)
1. VECTOR SPACES
+ | 0 1
0 | 0 1
1 | 1 0
Addition

× | 0 1
0 | 0 0
1 | 0 1
Multiplication
Problem 4. Write Z3 for the integers modulo 3. This is the set {0, 1, 2}, with addition
and multiplication defined thus:

+ | 0 1 2
0 | 0 1 2
1 | 1 2 0
2 | 2 0 1
Addition

× | 0 1 2
0 | 0 0 0
1 | 0 1 2
2 | 0 2 1
Multiplication
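The two tables can be generated mechanically for any modulus n (a small sketch):

```python
def mod_tables(n):
    """Addition and multiplication tables for the integers modulo n."""
    add = [[(i + j) % n for j in range(n)] for i in range(n)]
    mul = [[(i * j) % n for j in range(n)] for i in range(n)]
    return add, mul

add3, mul3 = mod_tables(3)
print(add3)   # [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
print(mul3)   # [[0, 0, 0], [0, 1, 2], [0, 2, 1]]
```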
u + v = v + u
(addition is commutative)
(5) there exists a vector 0 such that for all v ∈ V,
v + 0 = v
(there exists a zero vector)
1v = v
(1 is a multiplicative identity)
(8) for all v ∈ V and all λ, μ ∈ F,
(λμ)v = λ(μv)
(multiplication is associative)
(9) for all u, v ∈ V and all λ, μ ∈ F,
(λ + μ)u = λu + μu
λ(u + v) = λu + λv
(scalar multiplication distributes over addition).
Usually we just write λv, rather than λ · v. From these defining properties of a vector
space, called axioms, we can deduce other properties.
Proposition 5. Suppose that V is a vector space over F. Then
(1) for all u ∈ V, there is only one z ∈ V (the vector 0) such that u + z = u
(uniqueness of the zero vector)
(2) for all u, v, w ∈ V,
if u + v = u + w, then v = w
(cancellation property)
(3) for all u ∈ V, there is only one v ∈ V such that u + v = 0
(uniqueness of inverses)
(4) for all λ ∈ F,
λ0 = 0
(5) for all v ∈ V,
0v = 0
(6) for all v ∈ V,
(−1)v + v = 0 = v + (−1)v
(7) for all v ∈ V and λ ∈ F,
if λv = 0 then λ = 0 or v = 0
(8) for all v, w ∈ V and all λ, μ ∈ F,
if λv = μv and v ≠ 0, then λ = μ
if λv = λw and λ ≠ 0, then v = w.
Proof. These follow from the vector space axioms. We consider only (7) and (8).
To prove (7), note that if λv = 0 and λ ≠ 0, then v = 1v = (λ⁻¹λ)v = λ⁻¹(λv) = λ⁻¹0 = 0.
To prove (8), observe that if λv = μv, then (λ − μ)v = 0, so λ = μ or v = 0 by (7).
Similarly, if λv = λw and λ ≠ 0, then λ⁻¹(λv) = λ⁻¹(λw), so v = w.
Note that (−1)v = −v.
4. Subspaces
A subspace of a vector space V over a eld F is a nonempty subset of V which is a
vector space in its own right; in particular, V is a subspace of itself. We are often asked
to decide when a subset is a subspace, and this might require us to check up to ten items.
This is quite tedious. Consequently, the following theorem is convenient.
Theorem 6 (Subspace theorem). If S is a subset of a vector space V over a field F,
then S is a subspace if and only if S has all three of the following properties:
(1) S is not empty
(2) S is closed under addition
(3) S is closed under scalar multiplication.
Further, S is not a subspace if 0 ∉ S or if any of the above properties fails to hold.
Proof. First, note that vector space axioms (3), (4), (7), (8) and (9) automatically
hold for subsets. Axioms (1) and (2) might not hold, but are ensured by hypotheses (2)
and (3) of this theorem. Finally, 0 = 0v and −v = (−1)v, so axioms (5) and (6) follow
from hypothesis (3). Thus if all the hypotheses hold, so do all the vector space axioms.
Conversely, if hypothesis (1) is false or if 0 ∉ S, then vector space axiom (5) cannot
hold; if hypothesis (2) or (3) fails, then axiom (1) or (2) fails.
Problem 7. Show that the set of vectors (x1 , x2 )T in R2 such that
x2 = 2x1 + c
is a subspace if and only if c = 0.
Answer. Suppose that c = 0. Clearly (0, 0)T lies in S, so that S is not empty. If
x, y ∈ S, then x2 = 2x1 and y2 = 2y1, so x2 + y2 = 2(x1 + y1), and x + y ∈ S. Finally, if
x ∈ S and λ ∈ R, then x2 = 2x1, so λx2 = 2λx1, and λx ∈ S.
By the subspace theorem, S is a subspace. Conversely, if c ≠ 0, then (0, 0)T does not
satisfy x2 = 2x1 + c, so 0 ∉ S and S is not a subspace.
[Figure: the lines x2 = 2x1 and x2 = 2x1 + 4; only the first passes through the origin.]
[Figure: two planes H1 and H2.]
Problem 11. Suppose that A ∈ Mm,n(R), and define S = {x ∈ Rn : Ax = b}. Show
that S is a subspace if and only if b = 0.
Answer. Suppose first that b ≠ 0. Then A0 = 0 ≠ b, so 0 ∉ S. Consequently, S is not
a subspace.
Now suppose that b = 0. Then A0 = 0 = b, so that 0 ∈ S, and S is not empty.
Next, suppose that x, y ∈ S. Then Ax = 0 and Ay = 0, so
A(x + y) = Ax + Ay = 0,
and x + y ∈ S, i.e., S is closed under addition.
Finally, suppose that λ ∈ R and x ∈ S. Then Ax = 0, so λAx = 0, and A(λx) = 0.
Hence λx ∈ S, i.e., S is closed under scalar multiplication.
By the subspace theorem, S is a subspace of Rn.
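The closure properties can also be illustrated numerically with NumPy (a sketch; the matrix A and the solution x below are made-up examples):

```python
import numpy as np

# A hypothetical 2x3 matrix; S = {x in R^3 : Ax = 0} is its solution set.
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0]])

# One particular solution of Ax = 0, found by hand:
# x = (7, -3, 1) satisfies 7 + 2(-3) - 1 = 0 and -3 + 3*1 = 0.
x = np.array([7.0, -3.0, 1.0])
y = 2.0 * x                            # another element of S

assert np.allclose(A @ x, 0)           # x lies in S
assert np.allclose(A @ (x + y), 0)     # closed under addition
assert np.allclose(A @ (5.0 * x), 0)   # closed under scalar multiplication
```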
Problem 12. Suppose A ∈ Mm,n(R). Show that the set of b for which Ax = b has
at least one solution is a subspace.
Problem 13. Let S denote the set {p ∈ P3(C) : p(1) = 0}. Show that S is a subspace
of P3(C).
Answer. First, 0 ∈ S, so S is not empty.
Next, if p, q ∈ S, then p(1) = 0 and q(1) = 0, so
(p + q)(1) = p(1) + q(1) = 0,
and p + q ∈ S, i.e., S is closed under addition.
Further, if λ ∈ C and p ∈ S, then (λp)(1) = λp(1) = 0, so λp ∈ S, i.e., S is closed
under scalar multiplication.
By the subspace theorem, S is a subspace of P3(C).
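The same check can be carried out numerically with polynomials represented by coefficient arrays (a sketch; the two cubics are arbitrary examples, and numpy.polynomial.Polynomial stores coefficients from the constant term up):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Two cubics vanishing at 1 (coefficients: constant term first).
p = Polynomial([-1.0, 0.0, 0.0, 1.0])   # x^3 - 1
q = Polynomial([0.0, -1.0, 0.0, 1.0])   # x^3 - x

assert np.isclose(p(1.0), 0) and np.isclose(q(1.0), 0)
assert np.isclose((p + q)(1.0), 0)      # closed under addition
assert np.isclose((3.5 * p)(1.0), 0)    # closed under scalar multiplication
```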
5. Linear combinations and spans
Definition 14. A linear combination of vectors v1, v2, ..., vn is a vector of the form
α1v1 + α2v2 + ··· + αnvn,
where α1, α2, ..., αn are scalars.
Definition 15. The span of the vectors v1, v2, ..., vn is the set of all linear combinations of v1, v2, ..., vn; it is written span{v1, v2, ..., vn}.
In a vector space, all finite sums of the form
α1v1 + α2v2 + ··· + αnvn
are well-defined, i.e., have an unambiguous meaning. We could put brackets round the
sum in many different ways, but it can be shown that all give the same result.
Challenge Problem. In how many different ways can we bracket the sum of n
vectors? Prove (by induction) that they are all equivalent.
Theorem 16. The smallest subspace of a vector space which contains the vectors v 1 ,
v 2 , . . . , v n is their span.
This theorem says two things: the span is a subspace, and it is the smallest subspace.
Proof. By the subspace theorem, to show that span{v1, v2, ..., vn} is a subspace,
we must show it is nonempty and closed under addition and scalar multiplication.
First, if α1 = α2 = ··· = αn = 0, then
α1v1 + α2v2 + ··· + αnvn = 0,
so that 0 ∈ span{v1, v2, ..., vn}, and the span is not empty.
Next, if w, w′ ∈ span{v1, ..., vn}, then there exist scalars α1, ..., αn and β1, ..., βn
such that
w = α1v1 + α2v2 + ··· + αnvn
w′ = β1v1 + β2v2 + ··· + βnvn.
[ 0  2 −1 | c1 ]
[ 1  1 −2 | c2 ]
[−1 −3  3 | c3 ]
Note that the columns of the augmented matrix are the vectors in question.
We solve this by row-reduction:
R1 ↔ R2, R3 = R3 + R1
[ 1  1 −2 | c2 ]
[ 0  2 −1 | c1 ]
[ 0 −2  1 | c2 + c3 ]
R3 = R2 + R3
[ 1  1 −2 | c2 ]
[ 0  2 −1 | c1 ]
[ 0  0  0 | c1 + c2 + c3 ].
Thus the system has a solution if and only if c1 + c2 + c3 = 0.
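Whether a vector lies in the span of given vectors can be checked mechanically by comparing matrix ranks (a sketch with made-up example vectors; the columns of A are the spanning vectors):

```python
import numpy as np

def in_span(vectors, b):
    """True if b is a linear combination of the given vectors."""
    A = np.column_stack(vectors)
    # b is in the column span iff appending it does not raise the rank.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

v1 = np.array([1.0, 0.0, -1.0])
v2 = np.array([2.0, 1.0, 3.0])

print(in_span([v1, v2], v1 + 4 * v2))                 # True: a combination by construction
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))   # False: not in the plane spanned by v1, v2
```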
i.e.,
t + 1 = α1(t^3 − 3t) + α2(t^3 − t^2 − t) + α3(t^2 − t − 1)
= (α1 + α2)t^3 + (−α2 + α3)t^2 − (3α1 + α2 + α3)t − α3.
So, we would have
α1 + α2 = 0
−α2 + α3 = 0
3α1 + α2 + α3 = −1
α3 = −1.
We define the column space of a matrix A to be the subspace spanned by its columns.
Problem 24. Is (−1, 0, 5)T in the column space of the matrix
[ 1  1 ]
[ 2  1 ] ?
[ 3 −1 ]
Answer. It is equivalent to ask whether the system represented by the augmented
matrix
[ 1  1 | −1 ]
[ 2  1 |  0 ]
[ 3 −1 |  5 ]
has a solution. We proceed to row-reduce the augmented matrix.
R2 = R2 − 2R1, R3 = R3 − 3R1
[ 1  1 | −1 ]
[ 0 −1 |  2 ]
[ 0 −4 |  8 ]
R3 = R3 − 4R2
[ 1  1 | −1 ]
[ 0 −1 |  2 ]
[ 0  0 |  0 ].
The augmented column is not a leading column, so the system is consistent, and the
vector is in the column space.
Problem 25. Let S denote span{x^3 − x + 1, x^3 − x^2 − 1, x^2 − x + 2} in P3(R). Which
polynomials q(x) belong to S?
Answer. Suppose that q is a linear combination of these polynomials. Then q is of
degree at most 3. Write q(x) = a3x^3 + a2x^2 + a1x + a0. Then
q(x) = α1(x^3 − x + 1) + α2(x^3 − x^2 − 1) + α3(x^2 − x + 2)
= (α1 + α2)x^3 + (−α2 + α3)x^2 − (α1 + α3)x + (α1 − α2 + 2α3)
precisely when
α1 + α2 = a3
−α2 + α3 = a2
−α1 − α3 = a1
α1 − α2 + 2α3 = a0.
The corresponding augmented matrix is
[  1  1  0 | a3 ]
[  0 −1  1 | a2 ]
[ −1  0 −1 | a1 ]
[  1 −1  2 | a0 ].
We reduce this to row-echelon form:
R3 = R3 + R1, R4 = R4 − R1
[ 1  1  0 | a3 ]
[ 0 −1  1 | a2 ]
[ 0  1 −1 | a1 + a3 ]
[ 0 −2  2 | a0 − a3 ]
R3 = R3 + R2, R4 = R4 − 2R2
[ 1  1  0 | a3 ]
[ 0 −1  1 | a2 ]
[ 0  0  0 | a1 + a2 + a3 ]
[ 0  0  0 | a0 − 2a2 − a3 ].
Therefore
a3x^3 + a2x^2 + a1x + a0 ∈ span{x^3 − x + 1, x^3 − x^2 − 1, x^2 − x + 2}
if and only if a1 + a2 + a3 = 0 and a0 − 2a2 − a3 = 0.
Answer. These are two subspaces of R3. The first subspace clearly contains every
vector in the second subspace. The question is whether the first subspace is larger.
Let b = (b1, b2, b3)T. Then
b ∈ span{(3, 0, −1)T, (1, 1, −1)T, (0, 3, −2)T}
if and only if the system represented by the augmented matrix
[  3  1  0 | b1 ]
[  0  1  3 | b2 ]
[ −1 −1 −2 | b3 ]
has a solution, which is when b1 + 2b2 + 3b3 = 0.
Similarly, b ∈ span{(3, 0, −1)T, (1, 1, −1)T} if and only if b1 + 2b2 + 3b3 = 0. So the
two spans are the same.
The answer to the question why the extra vector does not change the span will be our
next concern.
6. Linear Dependence
Definition 27. Vectors v1, ..., vn are linearly independent if the only possible choice
of scalars α1, ..., αn for which
α1v1 + ··· + αnvn = 0
is α1 = ··· = αn = 0.
For example, to decide whether (1, 0, 1)T, (1, 2, 0)T and (0, 2, −1)T are linearly
independent, we consider the equation
(1)   α1(1, 0, 1)T + α2(1, 2, 0)T + α3(0, 2, −1)T = (0, 0, 0)T.
Equivalently,
1α1 + 1α2 + 0α3 = 0
0α1 + 2α2 + 2α3 = 0
1α1 + 0α2 − 1α3 = 0.
This can be represented by the augmented matrix
[ 1 1  0 | 0 ]
[ 0 2  2 | 0 ]
[ 1 0 −1 | 0 ].
We row-reduce this to
(2)   [ 1 1 0 | 0 ]
      [ 0 2 2 | 0 ]
      [ 0 0 0 | 0 ].
Thus the original system (2) has infinitely many solutions, and in particular has nonzero
solutions, e.g., α1 = 1, α2 = −1, α3 = 1. Since
1(1, 0, 1)T − 1(1, 2, 0)T + 1(0, 2, −1)T = (0, 0, 0)T,
the three vectors are linearly dependent.
p2(t) = 1 − t,  p3(t) = 1 + t,  p4(t) = 1 − t^3.
Setting a linear combination of the polynomials equal to the zero polynomial and
comparing coefficients gives a homogeneous system of four equations, each with
right-hand side 0.
Note that the rows of the system of equations correspond to the x^0 terms, the x^1
terms, the x^2 terms and the x^3 terms. This is not an accident: as far as questions of
spans and linear dependence are concerned, the polynomial a0 + a1x + a2x^2 + ··· + anx^n
behaves like the vector (a0, a1, a2, ..., an)T.
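This observation translates directly into code: stack the coefficient vectors as matrix columns and compare the rank with the number of polynomials (a sketch; the three cubics below are arbitrary examples):

```python
import numpy as np

# Coefficient vectors (a0, a1, a2, a3) for 1 + t, 1 - t, t^3 - t.
P = np.column_stack([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, -1.0, 0.0, 0.0],
    [0.0, -1.0, 0.0, 1.0],
])

# Independent iff the only solution of P @ alpha = 0 is alpha = 0,
# i.e. iff the rank equals the number of columns.
print(np.linalg.matrix_rank(P) == P.shape[1])   # True
```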
Problem 30. Are the functions 1, e^x and e^{2x} linearly independent?
Answer. Suppose that
(3)   α1 + α2e^x + α3e^{2x} = 0
for all x. If this implies that α1 = α2 = α3 = 0, then 1, e^x and e^{2x} are linearly independent.
One way to show this is via limits: if (3) holds, then
lim_{x→−∞} (α1 + α2e^x + α3e^{2x}) = lim_{x→−∞} 0,
so α1 = 0. Now
α2e^x + α3e^{2x} = 0,
so dividing by e^x,
α2 + α3e^x = 0.
Another limit argument shows that α2 = 0, and then α3e^x = 0, so α3 = 0. Thus 1, e^x and
e^{2x} are linearly independent.
Challenge Problem. Suppose that λ1 < λ2 < ··· < λn. Show that e^{λ1x}, e^{λ2x}, ...,
e^{λnx} are linearly independent functions.
Challenge Problem. Suppose that 0 < λ1 < λ2 < ··· < λn. Show that sin λ1x,
sin λ2x, ..., sin λnx are linearly independent functions.
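For functions there is no finite coefficient vector, but sampling at enough points gives a useful one-sided numerical test: if the sampled columns are already independent, so are the functions (a sketch, using 1, e^x and e^{2x} from Problem 30):

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 50)
# Columns: samples of 1, e^x, e^(2x) at 50 points.
F = np.column_stack([np.ones_like(xs), np.exp(xs), np.exp(2 * xs)])

# Full column rank: no nonzero combination vanishes at all 50 sample points.
print(np.linalg.matrix_rank(F))   # 3
```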
Theorem 31. Suppose that v1, ..., vn are vectors, and that v ∈ span{v1, ..., vn}.
If v1, ..., vn are linearly independent, there is only one choice of scalars α1, ..., αn such
that
(4)   v = α1v1 + ··· + αnvn.
If v1, ..., vn are linearly dependent, there is more than one choice of scalars α1, ..., αn
such that (4) holds.
Proof. Suppose that the vectors v1, ..., vn are linearly independent. Because v is
in span{v1, ..., vn}, there exist α1, ..., αn so that
v = α1v1 + ··· + αnvn.
Suppose that β1, ..., βn are scalars (possibly different from α1, ..., αn) so that
v = β1v1 + ··· + βnvn.
Subtracting,
0 = (α1 − β1)v1 + ··· + (αn − βn)vn.
By linear independence, α1 − β1 = ··· = αn − βn = 0, so βi = αi, and there is only one
representation of v in terms of the vi.
Suppose now that v1, ..., vn are linearly dependent. Then there exist β1, ..., βn, not
all zero, so that
β1v1 + ··· + βnvn = 0.
Since v ∈ span{v1, ..., vn}, there exist α1, ..., αn so that
v = α1v1 + ··· + αnvn.
Adding,
v = (α1 + β1)v1 + ··· + (αn + βn)vn.
Since not all βi are zero, this is a different representation of v in terms of the vi.
Theorem 32. The vectors v1, ..., vn are linearly dependent if and only if at least
one of them may be written as a linear combination of the others.
Suppose that some vi is a linear combination of the others:
vi = α1v1 + ··· + αi−1vi−1 + αi+1vi+1 + ··· + αnvn.
Put αi = −1. Then
α1v1 + ··· + αi−1vi−1 + αivi + αi+1vi+1 + ··· + αnvn = 0,
i.e., v1, ..., vn are linearly dependent.
v2 = (2, 1, 0, 3)T,  v3 = (0, 1, 1, 0)T,  v4 = (1, 1, 1, 1)T
[ 1 2 0  4 | 0 ]
[ 2 1 1 −1 | 0 ]
[ 0 0 1  0 | 0 ]
[ 3 3 0  3 | 0 ],
which we row-reduce to
[ 1  2 0  4 | 0 ]
[ 0 −3 1 −9 | 0 ]
[ 0  0 1  0 | 0 ]
[ 0  0 0  0 | 0 ].
The preceding example is a particular case of the following theorem, whose proof
follows from the previous results.
Theorem 35. If S is a linearly independent set of vectors, then for any proper subset
S′ of S,
span(S′) ⊊ span(S).
However, if S is a linearly dependent set of vectors, then there is at least one proper subset
S′ of S so that
span(S′) = span(S).
For example, consider the vectors (0, 0)T and (1, 0)T. These are linearly dependent
because
2(0, 0)T + 0(1, 0)T = (0, 0)T.
Also, span{(0, 0)T, (1, 0)T} = span{(1, 0)T}, but span{(0, 0)T, (1, 0)T} ≠ span{(0, 0)T}.
For later use, we need one more theorem.
Theorem 36. Suppose that v1, ..., vn are linearly independent vectors.
If vn+1 ∈ span{v1, ..., vn}, then v1, ..., vn, vn+1 are linearly dependent.
If vn+1 ∉ span{v1, ..., vn}, then v1, ..., vn, vn+1 are linearly independent.
Proof. The first half is already proved. Suppose that v1, ..., vn are linearly independent and that vn+1 ∉ span{v1, ..., vn}. If α1, ..., αn+1 are scalars so that
α1v1 + ··· + αnvn + αn+1vn+1 = 0,
then αn+1 = 0, for otherwise we could make vn+1 the subject, and this would show that
vn+1 ∈ span{v1, ..., vn}, which is not allowed. But now
α1v1 + ··· + αnvn = 0,
so α1 = ··· = αn = 0 since v1, ..., vn are linearly independent.
Since we have now shown that α1 = ··· = αn = αn+1 = 0, it follows that v1, ..., vn+1
are linearly independent, as required.
7. Bases and dimensions
A basis (plural bases) for a vector space V is a linearly independent spanning set
for V . The number of elements in a basis for V does not depend on the basis. This
number is called the dimension of V , and is written dim(V ). This result is found by
comparing the number of elements in spanning sets and in linearly independent sets.
Theorem 37. If {v1, ..., vm} is a set of linearly independent vectors in the vector
space V, and {w1, ..., wn} is a spanning set for V, then m ≤ n.
Proof. We suppose that {v 1 , . . . , v m } is linearly independent and that {w1 , . . . , w n }
is a spanning set.
(Note that the aij have been transposed, going from the array (5) to the array (6)). Then
(a11α1 + a21α2 + ··· + am1αm)w1
+ (a12α1 + a22α2 + ··· + am2αm)w2
+ ···
+ (a1nα1 + a2nα2 + ··· + amnαm)wn = 0,
or equivalently,
α1(a11w1 + a12w2 + ··· + a1nwn)
+ α2(a21w1 + a22w2 + ··· + a2nwn)
+ ···
+ αm(am1w1 + am2w2 + ··· + amnwn) = 0,
i.e., α1v1 + α2v2 + ··· + αmvm = 0, and so α1 = α2 = ··· = αm = 0, as the vectors v1, v2,
..., vm are linearly independent.
We have just shown that the only solution to the equations (6) is α1 = α2 = ··· =
αm = 0. This implies that, if the augmented matrix corresponding to these equations,
namely
[ a11 a21 ··· am1 | 0 ]
[  ·   ·       ·  | · ]
[ a1n a2n ··· amn | 0 ],
is row-reduced, then all columns are leading columns. This in turn implies that m ≤ n,
as required.
Corollary 38. If {v1, ..., vm} and {w1, ..., wn} are both bases of the vector space
V, then m = n.
Proof. Since {v1, ..., vm} spans V and {w1, ..., wn} is linearly independent, n ≤ m.
Conversely, since {v1, ..., vm} is linearly independent and {w1, ..., wn} spans V, m ≤ n.
Combining these inequalities, m = n.
v1 = (1, 0, 2, 0)T,  v2 = (0, 1, 3, 0)T,  v3 = (0, 0, 0, 1)T.
If α1v1 + α2v2 + α3v3 = 0, then
(α1, α2, 2α1 + 3α2, α3)T = (0, 0, 0, 0)T,
so α1 = α2 = α3 = 0, and the vectors are linearly independent. Further, if x ∈ V, then
x = (x1, x2, 2x1 + 3x2, x4)T = x1(1, 0, 2, 0)T + x2(0, 1, 3, 0)T + x4(0, 0, 0, 1)T,
i.e.,
x = x1v1 + x2v2 + x4v3,
and {v1, v2, v3} spans V. Thus {v1, v2, v3} is a basis, and so dim(V) = 3.
The obvious question to ask about this solution is where the vectors v 1 , v 2 and v 3
came from. We shall see the answer to this later.
Suppose that V is a vector space of dimension n, and that {v1, ..., vm} ⊆ V. If
{v1, ..., vm} spans V, then m ≥ n. If m = n, then {v1, ..., vm} is a basis, while if
m > n, then it is possible to remove m − n of the vectors to get a basis. Similarly, if
{v1, ..., vm} is linearly independent, then m ≤ n. If m = n, then {v1, ..., vm} is a basis,
while if m < n, then it is possible to add n − m vectors to get a basis.
... Sn = Sn−1 if vn ∈ span(Sn−1), and Sn = Sn−1 ∪ {vn} if vn ∉ span(Sn−1).
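This step (keep vn only when it enlarges the span of the vectors kept so far) can be sketched numerically for coordinate vectors, using rank to test span membership; the sample vectors below are arbitrary:

```python
import numpy as np

def extract_basis(vectors):
    """Greedily keep each vector that enlarges the span of those kept so far."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis = candidate          # v was outside the current span
    return basis

vs = [np.array(v, dtype=float)
      for v in [(1, 0, 0), (2, 0, 0), (0, 1, 0), (1, 1, 0)]]
print(len(extract_basis(vs)))   # 2: only (1,0,0) and (0,1,0) survive
```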
v1 = (2, 1, 3, 0)T,  v2 = (2, 2, 2, 1)T,  v3 = (0, 1, 1, 0)T,  v4 = (2, 4, 2, 2)T,  v5 = (0, 1, 2, 2)T
A vector a = (a1, a2, a3, a4)T lies in span{v1, v2, v3, v4, v5} if and only if the system
represented by the augmented matrix
[ 2 2 0 2 0 | a1 ]
[ 1 2 1 4 1 | a2 ]
[ 3 2 1 2 2 | a3 ]
[ 0 1 0 2 2 | a4 ]
has a solution. Reducing gives
[ 1 2 1 4 1 | a2 ]
[ 0 1 0 2 2 | a4 ]
[ 0 0 2 2 2 | a1 − 2a2 + 2a4 ]
[ 0 0 0 0 5 | a2 + 2a4 − a1 + a3 ].
Omitting v4, we consider instead the augmented matrix
[ 2 2 0 0 | a1 ]
[ 1 2 1 1 | a2 ]
[ 3 2 1 2 | a3 ]
[ 0 1 0 2 | a4 ],
which row-reduces to
[ 1 2 1 1 | a2 ]
[ 0 1 0 2 | a4 ]
[ 0 0 2 2 | a1 − 2a2 + 2a4 ]
[ 0 0 0 5 | a1 − a2 + a3 + 2a4 ].
From this, we see that every vector a can be written as a linear combination of v1, v2,
v3 and v5, and uniquely so. The uniqueness implies that these vectors are linearly independent,
by Theorem 31. If we omitted a different vector instead, we could not see immediately what we
would end up with in the row-reduction process.
The method of solution of Problem 43 works for coordinate vectors in general. Given
vectors v 1 , . . . , v n in Rm , to get a linearly independent set, we write the vectors as a
matrix, row-reduce it, and remove the vectors corresponding to nonleading columns.
This method also works for polynomials: as we have seen, they behave like coordinate
vectors. But for other kinds of vectors, like functions, we cannot use this method. The
point of Theorem 42 is that it works for all vector spaces: all we need is a way to decide
whether vectors are in the span of other vectors.
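The row-reduce-and-drop-nonleading-columns recipe can be sketched with SymPy, whose rref reports the pivot columns directly (the vectors here are arbitrary examples):

```python
import sympy as sp

# Columns are the vectors; rref reports the pivot (leading) columns.
M = sp.Matrix([[1, 2, 0],
               [0, 0, 1],
               [1, 2, 1]])
_, pivots = M.rref()
print(pivots)   # (0, 2): columns 0 and 2 form a linearly independent set
```

Column 1 is twice column 0, so it is non-leading and would be discarded.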
How do we go about enlarging sets of linearly independent vectors to get bases?
Theorem 44. Suppose that {w1 , . . . , wm } is a spanning set for a vector space V , and
that {v 1 , . . . , v k } is a linearly independent set. Then we can find vectors v k+1 , . . . , v n so
that {v 1 , . . . , v k , v k+1 , . . . , v n } is a basis of V .
Proof. Consider the set {v 1 , . . . , v k , w1 , . . . , wm }. Since it contains a spanning set,
it is a spanning set. We apply the algorithm of Theorem 42 to it. When we do this, none
of the vectors v 1 , . . . , v k will be discarded, because they are linearly independent. Indeed,
the algorithm adds vectors to the set until it reaches a vector which depends linearly on
the previously chosen vectors, and the first vector which might depend on the previously
chosen vectors is w1.
Problem 45. Show that the set {(1, 2, 0, 1)T, (1, 3, 0, 1)T} is linearly independent in
R4, and that
(1, 2, 0, 1)T, (1, 3, 0, 1)T, (1, 0, 0, 0)T, (0, 1, 0, 0)T, (0, 0, 1, 0)T, (0, 0, 0, 1)T
is a spanning set for R4. Find a basis for R4 containing (1, 2, 0, 1)T and (1, 3, 0, 1)T.
Answer. Consider the augmented matrix
[ 1 1 | 0 ]
[ 2 3 | 0 ]
[ 0 0 | 0 ]
[ 1 1 | 0 ].
This row-reduces to
[ 1 1 | 0 ]
[ 0 1 | 0 ]
[ 0 0 | 0 ]
[ 0 0 | 0 ].
The vectors are linearly independent, because both columns are leading columns, but do
not span, because there are some all-zero rows.
For the second part, we consider the augmented matrix
[ 1 1 1 0 0 0 | a1 ]
[ 2 3 0 1 0 0 | a2 ]
[ 0 0 0 0 1 0 | a3 ]
[ 1 1 0 0 0 1 | a4 ]
and row-reduce it:
[ 1 1  1 0 0 0 | a1 ]
[ 0 1 −2 1 0 0 | a2 − 2a1 ]
[ 0 0  0 0 1 0 | a3 ]
[ 0 0 −1 0 0 1 | a4 − a1 ]
then
[ 1 1  1 0 0 0 | a1 ]
[ 0 1 −2 1 0 0 | a2 − 2a1 ]
[ 0 0 −1 0 0 1 | a4 − a1 ]
[ 0 0  0 0 1 0 | a3 ].
This system is consistent, so has solutions for any (a1 , . . . , a4 )T in R4 , and hence the set of
vectors spans R4 . Alternatively, we could argue that the set of standard basis vectors {e1 ,
. . . , e4 } spans R4 , and so any collection of vectors containing these is certainly spanning.
Finally, we observe that columns 4 and 6 in the reduced matrix from the previous part
are non-leading. We omit the corresponding vectors. Then
(1, 2, 0, 1)T , (1, 3, 0, 1)T , (1, 0, 0, 0)T , (0, 0, 1, 0)T
is a basis for R4 , containing the initial vectors (1, 2, 0, 1)T and (1, 3, 0, 1)T .
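As a quick numerical check of this conclusion, the four retained vectors do have full rank (a sketch):

```python
import numpy as np

# Columns: the four vectors kept after dropping the non-leading columns.
B = np.column_stack([
    [1, 2, 0, 1],
    [1, 3, 0, 1],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
]).astype(float)

print(np.linalg.matrix_rank(B))   # 4: the four vectors form a basis of R^4
```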