Chapter 7
7.1
1. If ~v is an eigenvector of A, then A~v = λ~v. Hence A³~v = A²(A~v) = A²(λ~v) = λA(A~v) = λA(λ~v) = λ²A~v = λ³~v, so ~v is an eigenvector of A³ with eigenvalue λ³.
3. We know A~v = λ~v, so (A + 2In)~v = A~v + 2In~v = λ~v + 2~v = (λ + 2)~v, hence ~v is an eigenvector of A + 2In with eigenvalue λ + 2.
5. Assume A~v = λ~v and B~v = μ~v for some eigenvalues λ, μ. Then (A + B)~v = A~v + B~v = λ~v + μ~v = (λ + μ)~v, so ~v is an eigenvector of A + B with eigenvalue λ + μ.
7. We know A~v = λ~v, so (A − λIn)~v = A~v − λIn~v = λ~v − λ~v = ~0, so the nonzero vector ~v is in the kernel of A − λIn. Thus ker(A − λIn) ≠ {~0} and A − λIn is not invertible.
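This fact is easy to spot-check numerically; the matrix below is an arbitrary example chosen for illustration, not one from the exercises:

```python
import numpy as np

# Every eigenvalue lambda of A makes A - lambda*I singular:
# det(A - lambda*I) = 0, so A - lambda*I is not invertible.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # example matrix with eigenvalues 1 and 3
eigenvalues = np.linalg.eigvals(A)

for lam in eigenvalues:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-10
```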
9. We want [a b; c d][1; 0] = [λ; 0] for any λ. Hence [a; c] = [λ; 0], i.e., the desired matrices must have the form [a b; 0 d]; they must be upper triangular.
11. We want [a b; c d][2; 3] = [2; 3]. So 2a + 3b = 2 and 2c + 3d = 3. Thus b = (2 − 2a)/3 and d = (3 − 2c)/3, so all matrices of the form [a (2 − 2a)/3; c (3 − 2c)/3] will fit.
13. Solving [−6 6; −15 13][v1; v2] = 4[v1; v2], we get [v1; v2] = t[3; 5] (with t ≠ 0).
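A quick numpy check of the eigenvector found in Exercise 13 (taking t = 1):

```python
import numpy as np

# The matrix from Exercise 13 and the claimed eigenvector [3; 5].
A = np.array([[-6.0, 6.0],
              [-15.0, 13.0]])
v = np.array([3.0, 5.0])

assert np.allclose(A @ v, 4 * v)   # A v = 4 v, so v is an eigenvector
```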
15. Any vector on L is unaffected by the reflection, so that a nonzero vector on L is an eigenvector with eigenvalue 1. Any vector on L⊥ is flipped about L, so that a nonzero vector on L⊥ is an eigenvector with eigenvalue −1. Picking a nonzero vector from L and one from L⊥, we obtain a basis consisting of eigenvectors.
17. No (real) eigenvalues
19. Any nonzero vector in L is an eigenvector with eigenvalue 1, and any nonzero vector in the plane L⊥ is an eigenvector with eigenvalue 0. Form a basis consisting of eigenvectors by picking any nonzero vector in L and any two nonparallel vectors in L⊥.
21. Any nonzero vector in R3 is an eigenvector with eigenvalue 5. Any basis for R3 consists
of eigenvectors.
hence S⁻¹AS = [λ1 0 … 0; 0 λ2 … 0; … ; 0 0 … λn].
We want a matrix A such that A[1; 1] = 2[1; 1] and A[1; −1] = 6[1; −1], i.e., A[1 1; 1 −1] = [2 6; 2 −6]. Multiplying on the right by [1 1; 1 −1]⁻¹, we get A = [4 −2; −2 4].
35. Let λ be an eigenvalue of S⁻¹AS. Then S⁻¹AS~v = λ~v for some nonzero vector ~v, i.e., AS~v = Sλ~v = λS~v, so λ is an eigenvalue of A with eigenvector S~v.
Conversely, if λ is an eigenvalue of A with eigenvector ~w, then A~w = λ~w for some nonzero ~w. Therefore, S⁻¹AS(S⁻¹~w) = S⁻¹A~w = S⁻¹λ~w = λS⁻¹~w, so S⁻¹~w is an eigenvector of S⁻¹AS with eigenvalue λ.
37. a. A = 5[0.6 0.8; 0.8 −0.6] is a scalar multiple of an orthogonal matrix. By Fact 7.1.2, the possible eigenvalues of the orthogonal matrix are ±1, so that the possible eigenvalues of A are ±5. In part b we see that both are indeed eigenvalues.
b. Solve A~v = ±5~v to get ~v1 = [2; 1], ~v2 = [1; −2].
39. We want [a b; c d][0; 1] = λ[0; 1] = [0; λ]. So b = 0, and d = λ (for any λ). Thus, we need matrices of the form [a 0; c d] = a[1 0; 0 0] + c[0 0; 1 0] + d[0 0; 0 1]. So [1 0; 0 0], [0 0; 1 0], [0 0; 0 1] is a basis of V, and dim(V) = 3.
41. We want [a b; c d][1; 1] = λ1[1; 1] and [a b; c d][1; 2] = λ2[1; 2]. So a + b = λ1 = c + d, a + 2b = λ2, and c + 2d = 2λ2. Solving, we find [a b; c d] = λ1[2 −1; 2 −1] + λ2[−1 1; −2 2].
So a basis of V is [2 −1; 2 −1], [−1 1; −2 2], and dim(V) = 2.
43. A = AIn = A[~e1 … ~en] = [λ1~e1 … λn~en], where the eigenvalues λ1, …, λn are arbitrary. Thus A can be any diagonal matrix, and dim(V) = n.
45. Consider a vector ~w that is not parallel to ~v. We want A[~v ~w] = [λ~v a~v + b~w], where λ, a and b are arbitrary constants. Thus the matrices A in V are of the form A = [λ~v a~v + b~w][~v ~w]⁻¹. Using Summary 4.1.6, we see that [~v ~0][~v ~w]⁻¹, [~0 ~v][~v ~w]⁻¹, [~0 ~w][~v ~w]⁻¹ is a basis of V, so that dim(V) = 3.
53. Let ~v(t) = [a(t); b(t); c(t)] be the amount of gold each has after t days, with A~v(t) = ~v(t + 1). Then a(t + 1) = (1/2)b(t) + (1/2)c(t), etc., so that A = (1/2)[0 1 1; 1 0 1; 1 1 0].
A[1; 1; 1] = [1; 1; 1], so [1; 1; 1] has eigenvalue λ1 = 1. A[1; −1; 0] = −(1/2)[1; −1; 0], so [1; −1; 0] has eigenvalue λ2 = −1/2. Also, A[1; 0; −1] = −(1/2)[1; 0; −1], so [1; 0; −1] has eigenvalue λ3 = −1/2.
a. ~v(0) = [6; 1; 2] = 3[1; 1; 1] + 2[1; −1; 0] + [1; 0; −1].
So ~v(t) = A^t~v(0) = A^t(3[1; 1; 1] + 2[1; −1; 0] + [1; 0; −1])
= 3A^t[1; 1; 1] + 2A^t[1; −1; 0] + A^t[1; 0; −1] = 3λ1^t[1; 1; 1] + 2λ2^t[1; −1; 0] + λ3^t[1; 0; −1]
= 3[1; 1; 1] + 2(−1/2)^t[1; −1; 0] + (−1/2)^t[1; 0; −1].
b. b(365) = 3 − 2(−1/2)^365 = 3 + 1/2^364 and c(365) = 3 − (−1/2)^365 = 3 + 1/2^365.
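The dynamics of Exercise 53 can be simulated directly; as the eigenvalue analysis predicts, everyone's share converges to the average of the initial amounts:

```python
import numpy as np

# Sharing matrix: each amount becomes the average of the other two.
A = 0.5 * np.array([[0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0]])

# Eigenvector checks from the solution above.
assert np.allclose(A @ np.array([1.0, 1.0, 1.0]), [1.0, 1.0, 1.0])    # lambda = 1
assert np.allclose(A @ np.array([1.0, -1.0, 0.0]), [-0.5, 0.5, 0.0])  # lambda = -1/2

x = np.array([6.0, 1.0, 2.0])       # ~v(0)
for _ in range(100):
    x = A @ x
assert np.allclose(x, 3.0)          # everyone approaches (6 + 1 + 2)/3 = 3
```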
7.2
1. λ1 = 1, λ2 = 3, by Fact 7.2.2.
3. det(A − λI2) = det[5 − λ −4; 2 −1 − λ] = λ² − 4λ + 3 = (λ − 1)(λ − 3) = 0, so λ1 = 1, λ2 = 3.
5. det(A − λI2) = det[11 − λ −15; 6 −7 − λ] = λ² − 4λ + 13, so det(A − λI2) = 0 for no real λ.
7. λ = 1 with algebraic multiplicity 3, by Fact 7.2.2.
9. fA(λ) = (λ − 2)²(λ − 1), so λ1 = 2 (algebraic multiplicity 2) and λ2 = 1.
11. fA(λ) = −λ³ − λ² − λ − 1 = −(λ + 1)(λ² + 1) = 0, so λ = −1 (algebraic multiplicity 1).
13. fA(λ) = −λ³ + 1 = −(λ − 1)(λ² + λ + 1), so λ = 1 (algebraic multiplicity 1).
15. fA(λ) = λ² − 2λ + (1 − k) = 0 if λ1,2 = (2 ± √(4 − 4(1 − k)))/2 = 1 ± √k.
The matrix A has 2 distinct real eigenvalues when k > 0 and no real eigenvalues when k < 0.
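The case analysis in Exercise 15 can be sketched with a small helper; it is just the quadratic formula applied to fA(λ) = λ² − 2λ + (1 − k):

```python
import math

def real_eigenvalues(k):
    """Real roots of lambda^2 - 2*lambda + (1 - k)."""
    disc = 4 - 4 * (1 - k)          # discriminant, equals 4k
    if disc < 0:
        return []                   # no real eigenvalues
    r = math.sqrt(disc) / 2
    return [1 - r, 1 + r]           # 1 +/- sqrt(k)

assert real_eigenvalues(4) == [-1.0, 3.0]   # k > 0: two distinct eigenvalues
assert real_eigenvalues(0) == [1.0, 1.0]    # k = 0: repeated eigenvalue 1
assert real_eigenvalues(-1) == []           # k < 0: no real eigenvalues
```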
17. fA(λ) = λ² − a² − b² = 0, so λ1,2 = ±√(a² + b²).
27. a. We know ~v1 = [1; 2], λ1 = 1 and ~v2 = [1; −1], λ2 = 1/4. If ~x0 = [1; 0], then ~x0 = (1/3)~v1 + (2/3)~v2, so by Fact 7.1.3, x1(t) = 1/3 + (2/3)(1/4)^t and x2(t) = 2/3 − (2/3)(1/4)^t.
If ~x0 = [0; 1], then ~x0 = (1/3)~v1 − (1/3)~v2, so by Fact 7.1.3,
x1(t) = 1/3 − (1/3)(1/4)^t
x2(t) = 2/3 + (1/3)(1/4)^t. See Figure 7.6.
b. A^t approaches [1/3 1/3; 2/3 2/3] as t → ∞. See part c for a justification.
c. Let us think about the first column of A^t, which is A^t~e1. We can use Fact 7.1.3 to compute A^t~e1.
Start by writing ~e1 = c1[b; c] + c2[1; −1]; a straightforward computation shows that c1 = 1/(b + c) and c2 = c/(b + c).
Now A^t~e1 = (1/(b + c))[b; c] + (c/(b + c))(λ2)^t[1; −1], where λ2 = a − b.
Since |λ2| < 1, the second summand goes to zero, so that lim_(t→∞)(A^t~e1) = (1/(b + c))[b; c].
Likewise, lim_(t→∞)(A^t~e2) = (1/(b + c))[b; c], so that lim_(t→∞) A^t = (1/(b + c))[b b; c c].
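The limit in part c can be confirmed numerically for one choice of the matrix [a b; c d] with column sums a + c = 1 and b + d = 1 (the specific numbers below are illustrative, not from the exercise):

```python
import numpy as np

a, b = 0.7, 0.2
c, d = 1 - a, 1 - b                  # columns sum to 1
A = np.array([[a, b],
              [c, d]])

# Predicted limit of A^t: both columns equal (1/(b+c)) * [b; c].
limit = np.array([[b, b], [c, c]]) / (b + c)

assert abs(a - b) < 1                # |lambda_2| = |a - b| < 1, so A^t converges
assert np.allclose(np.linalg.matrix_power(A, 200), limit)
```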
b. By part a, we have c = 17, b = −5 and a = π, so M = [0 1 0; 0 0 1; π −5 17].
35. A = [0 −1 0 0; 1 0 0 0; 0 0 0 −1; 0 0 1 0], with fA(λ) = (λ² + 1)².
37. We can write fA(λ) = (λ − λ0)²g(λ) for some polynomial g. The product rule for derivatives tells us that fA′(λ) = 2(λ − λ0)g(λ) + (λ − λ0)²g′(λ), so that fA′(λ0) = 0, as claimed.
39. tr(AB) = tr([a b; c d][e f; g h]) = tr[ae + bg, af + bh; ce + dg, cf + dh] = ae + bg + cf + dh, and tr(BA) = tr([e f; g h][a b; c d]) = tr[ea + fc, eb + fd; ga + hc, gb + hd] = ea + fc + gb + hd. So they are equal.
41. Since B is similar to A, there exists an invertible S such that B = S⁻¹AS, and tr(B) = tr(S⁻¹AS) = tr((S⁻¹A)S). By Exercise 40, this equals tr(S(S⁻¹A)) = tr(A).
43. tr(AB − BA) = tr(AB) − tr(BA) = tr(AB) − tr(AB) = 0, but tr(In) = n, so no such A, B exist. We have used Exercise 40.
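The identity tr(AB) = tr(BA) used in Exercises 41 and 43 holds for any square matrices, which is easy to spot-check with random ones:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# tr(AB) = tr(BA), hence tr(AB - BA) = 0 ...
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B - B @ A), 0.0)
# ... while tr(I_4) = 4, so AB - BA = I_4 is impossible.
assert np.trace(np.eye(4)) == 4.0
```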
45. fA(λ) = λ² − tr(A)λ + det(A) = λ² − 2λ − (3 + 4k). We want fA(5) = 25 − 10 − 3 − 4k = 0, or 12 − 4k = 0, or k = 3.
47. Let M = [~v1 ~v2]. We want A[~v1 ~v2] = [~v1 ~v2][2 0; 0 3], or [A~v1 A~v2] = [2~v1 3~v2].
Since ~v1 or ~v2 must be nonzero, 2 or 3 must be an eigenvalue of A.
49. As in problem 47, such an M will exist if A has an eigenvalue 2, 3 or 4.
7.3
1. λ1 = 7, λ2 = 9, E7 = ker[0 8; 0 2] = span([1; 0]), E9 = ker[−2 8; 0 0] = span([4; 1]).
Eigenbasis: [1; 0], [4; 1]
3. λ1 = 4, λ2 = 9, E4 = span([1; 3]), E9 = span([1; 2]).
Eigenbasis: [1; 3], [1; 2]
5. No real eigenvalues, as fA(λ) = λ² − 2λ + 2.
7. λ1 = 1, λ2 = 2, λ3 = 3, eigenbasis: ~e1, ~e2, ~e3
9. λ1 = λ2 = 1, λ3 = 0, eigenbasis: [1; 0; 0], [0; 1; 0], [1; 0; 1]
11. λ1 = λ2 = 0, λ3 = 3, eigenbasis: [1; −1; 0], [1; 0; −1], [1; 1; 1]
13. λ1 = 0, λ2 = 1, λ3 = −1, eigenbasis: [0; 1; 0], [1; 3; 1], [1; 1; 2]
15. λ1 = 0, λ2 = λ3 = 1, E0 = span([0; 1; 0]). We can use Kyle Numbers to see that E1 = ker(A − I3) = span([1; 1; 2]).
There is no eigenbasis, since the eigenvalue 1 has algebraic multiplicity 2, but the geometric multiplicity is only 1.
17. λ1 = λ2 = 0, λ3 = λ4 = 1, with eigenbasis [1; 0; 0; 0], [0; 1; 1; 0], [0; 1; 0; 0], [0; 0; 0; 1]
19. Since 1 is the only eigenvalue, with algebraic multiplicity 3, there exists an eigenbasis for A if (and only if) the geometric multiplicity of the eigenvalue 1 is 3 as well, that is, if E1 = R³. Now E1 = ker[0 a b; 0 0 c; 0 0 0] is R³ if (and only if) a = b = c = 0.
If a = b = c = 0, then E1 is 3-dimensional, with eigenbasis ~e1, ~e2, ~e3.
If a ≠ 0 and c ≠ 0, then E1 is 1-dimensional; otherwise E1 is 2-dimensional. The geometric multiplicity of the eigenvalue 1 is dim(E1).
21. We want A such that A[2; 3] = 2[2; 3] = [4; 6] and A[1; 2] = [1; 2], i.e., A[2 1; 3 2] = [4 1; 6 2]. Multiplying on the right by [2 1; 3 2]⁻¹, we get A = [4 1; 6 2][2 −1; −3 2] = [5 −2; 6 −2].
The answer is unique.
23. λ1 = λ2 = 1 and E1 = span(~e1), hence there is no eigenbasis. The matrix represents a shear parallel to the x-axis.
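The shear in Exercise 23 is the standard example of a defective matrix; numerically, the eigenvalue 1 repeats but ker(A − I2) is only one-dimensional:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # shear parallel to the x-axis

assert np.allclose(np.linalg.eigvals(A), [1.0, 1.0])   # algebraic multiplicity 2

# Geometric multiplicity = dim ker(A - I) = 2 - rank(A - I) = 1.
assert 2 - np.linalg.matrix_rank(A - np.eye(2)) == 1
```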
25. If λ is an eigenvalue of A, then Eλ = ker(A − λI3) = ker[−λ 1 0; 0 −λ 1; a b c − λ].
The second and third columns of the above matrix aren't parallel, hence Eλ is always 1-dimensional, i.e., the geometric multiplicity of λ is 1.
27. By Fact 7.2.4, we have fA(λ) = λ² − 5λ + 6 = (λ − 3)(λ − 2), so λ1 = 2, λ2 = 3.
29. Note that r is the number of nonzero diagonal entries of A, since the nonzero columns of A form a basis of im(A). Therefore, there are n − r zeros on the diagonal, so that the algebraic multiplicity of the eigenvalue 0 is n − r. It is true for any n × n matrix A that the geometric multiplicity of the eigenvalue 0 is dim(ker(A)) = n − rank(A) = n − r.
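Exercise 29's counting argument, illustrated on a concrete diagonal matrix (n = 4, r = 2):

```python
import numpy as np

A = np.diag([5.0, 3.0, 0.0, 0.0])    # r = 2 nonzero diagonal entries, n = 4
n, r = 4, 2

# Algebraic multiplicity of 0 = number of zero diagonal entries = n - r.
algebraic = sum(1 for lam in np.diag(A) if lam == 0)
# Geometric multiplicity of 0 = dim ker(A) = n - rank(A) = n - r.
geometric = n - np.linalg.matrix_rank(A)

assert algebraic == geometric == n - r == 2
```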
31. They must be the same. For if they are not, by Fact 7.3.7, the geometric multiplicities
would not add up to n.
33. If S⁻¹AS = B, then
S⁻¹(A − λIn)S = S⁻¹(AS − λS) = S⁻¹AS − λS⁻¹S = B − λIn.
35. No, since the two matrices have different eigenvalues (see Fact 7.3.6c).
37. a. A~v · ~w = (A~v)ᵀ~w = (~vᵀAᵀ)~w = (~vᵀA)~w = ~vᵀ(A~w) = ~v · A~w, since A is symmetric.
b. Assume A~v = λ~v and A~w = μ~w for λ ≠ μ. Then (A~v) · ~w = (λ~v) · ~w = λ(~v · ~w), and ~v · (A~w) = ~v · (μ~w) = μ(~v · ~w).
By part a, λ(~v · ~w) = μ(~v · ~w), i.e., (λ − μ)(~v · ~w) = 0.
Since λ ≠ μ, it must be that ~v · ~w = 0, i.e., ~v and ~w are perpendicular.
39. a. There are two eigenvalues, λ1 = 1 (with E1 = V) and λ2 = 0 (with E0 = V⊥).
Now geometric multiplicity(1) = dim(E1) = dim(V) = m, and geometric multiplicity(0) = dim(E0) = dim(V⊥) = n − dim(V) = n − m.
Since geometric multiplicity(λ) ≤ algebraic multiplicity(λ), by Fact 7.3.7, and the algebraic multiplicities cannot add up to more than n, the geometric and algebraic multiplicities of the eigenvalues are the same here.
b. Analogous to part a: E1 = V and E−1 = V⊥.
41. The eigenvalues of A are 1.2, 0.8, 0.4, with eigenvectors [9; 6; 2], [2; 2; 1], [1; 2; 2].
Since ~x0 = 50[9; 6; 2] + 50[2; 2; 1] + 50[1; 2; 2], we have ~x(t) = 50(1.2)^t[9; 6; 2] + 50(0.8)^t[2; 2; 1] + 50(0.4)^t[1; 2; 2], so, as t goes to infinity, j(t) : n(t) : a(t) approaches the proportion 9 : 6 : 2.
43. a. A = (1/2)[0 1 1; 1 0 1; 1 1 0]
b. After 10 rounds, we have A^10[7; 11; 5] ≈ [7.6660156; 7.6699219; 7.6640625].
After 50 rounds, we have A^50[7; 11; 5] ≈ [7.66666666667; 7.66666666667; 7.66666666667].
c. The eigenvalues of A are 1 and −1/2, with E1 = span([1; 1; 1]) and E−1/2 two-dimensional, so ~x(t) = (1 + c0/3)[1; 1; 1] + (−1/2)^t ~w for some vector ~w in E−1/2.
After 1001 rounds, Alberich will be ahead of Brunnhilde by (1/2)^1001, so that Carl needs to beat Alberich to win the game. A straightforward computation shows that c(1001) − a(1001) = (1/2)^1001 (1 − c0); Carl wins if this quantity is positive, which is the case if c0 is less than 1.
Alternatively, observe that the ranking of the players is reversed in each round: whoever is first will be last after the next round. Since the total number of rounds is odd (1001), Carl wants to be last initially to win the game; he wants to choose a smaller number than both Alberich and Brunnhilde.
45. a. A = [0.1 0.2; 0.4 0.3] and ~b = [1; 2], so B = [A ~b; 0 1].
b. The eigenvalues of A are 0.5 and −0.1, with associated eigenvectors [1; 2] and [1; −1].
The eigenvalues of B are 0.5, −0.1, and 1. If A~v = λ~v, then B[~v; 0] = [A~v; 0] = λ[~v; 0], so [~v; 0] is an eigenvector of B.
Furthermore, [2; 4; 1] is an eigenvector of B corresponding to the eigenvalue 1. Note that this vector is [−(A − I2)⁻¹~b; 1].
c. Write ~y(0) = [x1(0); x2(0); 1] = c1[1; 2; 0] + c2[1; −1; 0] + c3[2; 4; 1]. Note that c3 = 1.
d. Now ~y(t) = c1(0.5)^t[1; 2; 0] + c2(−0.1)^t[1; −1; 0] + [2; 4; 1], so that ~x(t) approaches [2; 4] as t → ∞.
The eigenvalues of A are 0, 1/2, 1, with eigenvectors [1; −2; 1], [1; 0; −1], [1; 2; 1].
Since ~x(0) = [1; 0; 0] = (1/4)[1; −2; 1] + (1/2)[1; 0; −1] + (1/4)[1; 2; 1], we have ~x(t) = (1/2)(1/2)^t[1; 0; −1] + (1/4)[1; 2; 1] for t > 0.
7.4
1. Matrix A is diagonal already, so it is certainly diagonalizable. Let S = I2.
3. Diagonalizable. The eigenvalues are 0, 3, with associated eigenvectors [1; 1] and [1; 2]. If we let S = [1 1; 1 2], then S⁻¹AS = D = [0 0; 0 3].
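The diagonalization pattern S⁻¹AS = D can be verified mechanically. The matrix A below is reconstructed from the eigenpairs in this solution via A = SDS⁻¹; any matrix with these eigenpairs behaves the same way:

```python
import numpy as np

S = np.array([[1.0, 1.0],
              [1.0, 2.0]])            # eigenvectors as columns
D = np.diag([0.0, 3.0])               # eigenvalues on the diagonal
A = S @ D @ np.linalg.inv(S)          # a matrix with exactly these eigenpairs

assert np.allclose(np.linalg.inv(S) @ A @ S, D)
assert np.allclose(A @ S[:, 1], 3 * S[:, 1])    # second column: eigenvalue 3
```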
5. Fails to be diagonalizable. There is only one eigenvalue, 1, with a one-dimensional
eigenspace.
7. Diagonalizable. The eigenvalues are 2, 3, with associated eigenvectors [4; 1] and [1; 1]. If we let S = [4 1; 1 1], then S⁻¹AS = D = [2 0; 0 3].
9. Fails to be diagonalizable. There is only one eigenvalue, 1, with a one-dimensional
eigenspace.
11. Fails to be diagonalizable. The eigenvalues are 1, 2, 1, and the eigenspace E1 = ker(A − I3) = span(~e1) is only one-dimensional.
13. Diagonalizable. The eigenvalues are 1, 2, 3, with associated eigenvectors [1; 0; 0], [1; 1; 0], [1; 2; 1]. If we let S = [1 1 1; 0 1 2; 0 0 1], then S⁻¹AS = D = [1 0 0; 0 2 0; 0 0 3].
15. Diagonalizable. The eigenvalues are −1, 1, 1, with associated eigenvectors [2; 1; 0], [1; 1; 0], [0; 0; 1]. If we let S = [2 1 0; 1 1 0; 0 0 1], then S⁻¹AS = D = [−1 0 0; 0 1 0; 0 0 1].
17. Diagonalizable. The eigenvalues are 0, 3, 0, with associated eigenvectors [1; −1; 0], [1; 1; 1], [1; 0; −1]. If we let S = [1 1 1; −1 1 0; 0 1 −1], then S⁻¹AS = D = [0 0 0; 0 3 0; 0 0 0].
19. Fails to be diagonalizable. The eigenvalues are 1, 0, 1, and the eigenspace E1 = ker(A − I3) = span(~e1) is only one-dimensional.
21. Diagonalizable for all values of a, since there are always two distinct eigenvalues, 1 and
2. See Fact 7.4.3.
23. Diagonalizable for positive a. The characteristic polynomial is (λ − 1)² − a, so that the eigenvalues are λ = 1 ± √a. If a is positive, then we have two distinct real eigenvalues, so that the matrix is diagonalizable. If a is negative, then there are no real eigenvalues. If a is 0, then 1 is the only eigenvalue, with a one-dimensional eigenspace.
25. Diagonalizable for all values of a, b, and c, since we have three distinct eigenvalues, 1, 2,
and 3.
27. Diagonalizable only if a = b = c = 0. Since 1 is the only eigenvalue, it is required that
E1 = R3 , that is, the matrix must be the identity matrix.
29. Not diagonalizable for any a. The characteristic polynomial is −λ³ + a, so that there is only one real eigenvalue, λ = ∛a, for all a. Since the corresponding eigenspace isn't all of R³, the matrix fails to be diagonalizable.
31. In Example 2 of Section 7.3 we see that the eigenvalues of A = [1 2; 4 3] are −1 and 5, with associated eigenvectors [1; −1] and [1; 2]. If we let S = [1 1; −1 2], then S⁻¹AS = D = [−1 0; 0 5].
Thus A = SDS⁻¹ and A^t = SD^tS⁻¹ = (1/3)[1 1; −1 2][(−1)^t 0; 0 5^t][2 −1; 1 1]
= (1/3)[2(−1)^t + 5^t, (−1)^(t+1) + 5^t; 2(5^t) − 2(−1)^t, 2(5^t) + (−1)^t].
33. The eigenvalues of A = [1 2; 3 6] are 0 and 7, with associated eigenvectors [2; −1] and [1; 3]. If we let S = [2 1; −1 3], then S⁻¹AS = D = [0 0; 0 7]. Thus A = SDS⁻¹ and
A^t = SD^tS⁻¹ = (1/7)[2 1; −1 3][0 0; 0 7^t][3 −1; 1 2] = (1/7)[7^t, 2(7^t); 3(7^t), 6(7^t)] = 7^(t−1)A.
We can find the same result more directly by observing that A² = 7A.
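The shortcut at the end of Exercise 33 is easy to confirm:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])            # eigenvalues 0 and 7

assert np.allclose(A @ A, 7 * A)      # A^2 = 7A
for t in range(1, 6):                 # hence A^t = 7^(t-1) A for t >= 1
    assert np.allclose(np.linalg.matrix_power(A, t), 7.0 ** (t - 1) * A)
```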
35. Matrix [−1 6; −2 6] has the eigenvalues 3 and 2. If ~v and ~w are associated eigenvectors, and if we let S = [~v ~w], then S⁻¹[−1 6; −2 6]S = [3 0; 0 2], so that matrix [−1 6; −2 6] is indeed similar to [3 0; 0 2].
37. Matrices A and B have the same two distinct real eigenvalues, λ1,2 = (7 ± √21)/2. Thus both A and B are similar to the diagonal matrix [λ1 0; 0 λ2], by Algorithm 7.4.4. Therefore A is similar to B, by parts b and c of Fact 3.4.6.
39. The eigenfunctions with eigenvalue λ are the nonzero functions f(x) such that T(f(x)) = f′(x) − f(x) = λf(x), or f′(x) = (λ + 1)f(x). From calculus we recall that those are the exponential functions of the form f(x) = Ce^((λ+1)x), where C is a nonzero constant. Thus all real numbers are eigenvalues of T, and the eigenspace Eλ is one-dimensional, spanned by e^((λ+1)x).
41. The nonzero symmetric matrices are eigenmatrices with eigenvalue 2, since L(A) = A + Aᵀ = 2A in this case. The nonzero skew-symmetric matrices have eigenvalue 0, since L(A) = A + Aᵀ = A − A = 0. Yes, L is diagonalizable, since we have the eigenbasis [1 0; 0 0], [0 1; 1 0], [0 0; 0 1], [0 1; −1 0] (three symmetric matrices, and one skew-symmetric one).
43. The nonzero real numbers are eigenvectors with eigenvalue 1, and the nonzero imaginary numbers (of the form iy) are eigenvectors with eigenvalue −1. Yes, T is diagonalizable, since we have the eigenbasis 1, i.
45. The nonzero sequence (x0, x1, x2, …) is an eigensequence with eigenvalue λ if T(x0, x1, x2, …) = (0, x0, x1, x2, …) = λ(x0, x1, x2, …) = (λx0, λx1, λx2, …). This means that 0 = λx0, x0 = λx1, x1 = λx2, …, xn = λx(n+1), …. If λ is nonzero, then these equations imply that x0 = (1/λ)0 = 0, x1 = (1/λ)x0 = 0, x2 = (1/λ)x1 = 0, …, so that there are no eigensequences in this case. If λ = 0, then we have x0 = λx1 = 0, x1 = λx2 = 0, x2 = λx3 = 0, …, so that there aren't any eigensequences either. In summary: there are no eigenvalues and eigensequences for T.
47. The nonzero even functions, of the form f(x) = a + cx², are eigenfunctions with eigenvalue 1, and the nonzero odd functions, of the form f(x) = bx, have eigenvalue −1. Yes, T is diagonalizable, since the standard basis, 1, x, x², is an eigenbasis for T.
49. The matrix of T with respect to the standard basis 1, x, x² is B = [1 −1 1; 0 3 −6; 0 0 9]. The eigenvalues of B are 1, 3, 9, with corresponding eigenvectors [1; 0; 0], [−1; 2; 0], [1; −4; 4]. The eigenvalues of T are 1, 3, 9, with corresponding eigenfunctions 1, 2x − 1, 4x² − 4x + 1 = (2x − 1)². Yes, T is diagonalizable, since the functions 1, 2x − 1, (2x − 1)² form an eigenbasis.
51. The nonzero constant functions f(x) = b are the eigenfunctions with eigenvalue 0. If f(x) is a polynomial of degree ≥ 1, then the degree of f(x) exceeds the degree of f′(x) by 1 (by the power rule of calculus), so that f′(x) cannot be a scalar multiple of f(x). Thus 0 is the only eigenvalue of T, and the eigenspace E0 consists of the constant functions.
53. Suppose basis D consists of f1, …, fn. We are told that the D-matrix D of T is diagonal; let λ1, λ2, …, λn be the diagonal entries of D. By Fact 4.3.3, we know that [T(fi)]D = (ith column of D) = λi~ei, for i = 1, 2, …, n, so that T(fi) = λifi, by definition of coordinates. Thus f1, …, fn is an eigenbasis for T, as claimed.
55. Let A = [1 0; 0 0] and B = [0 1; 0 0], for example.
57. Modifying the hint in Exercise 56 slightly, we can write [AB 0; B 0][Im A; 0 In] = [Im A; 0 In][0 0; B BA]. Thus matrix M = [AB 0; B 0] is similar to N = [0 0; B BA]. By Fact 7.3.6a, matrices M and N have the same characteristic polynomial.
Now fM(λ) = det[AB − λIm, 0; B, −λIn] = (−λ)ⁿ det(AB − λIm) = (−λ)ⁿ fAB(λ). To understand the second equality, consider Fact 6.1.8. Likewise, fN(λ) = det[−λIm, 0; B, BA − λIn] = (−λ)^m fBA(λ).
It follows that (−λ)ⁿ fAB(λ) = (−λ)^m fBA(λ). Thus matrices AB and BA have the same nonzero eigenvalues, with the same algebraic multiplicities.
If mult(AB) and mult(BA) are the algebraic multiplicities of 0 as an eigenvalue of AB and BA, respectively, then the equation (−λ)ⁿ fAB(λ) = (−λ)^m fBA(λ) implies that n + mult(AB) = m + mult(BA).
59. If ~v is an eigenvector with eigenvalue λ, then
63. Recall from Exercise 62 that all the eigenspaces are two-dimensional.
a. We need to solve the differential equation f″(x) = f(x). As in Example 18 of Section 4.1, we will look for exponential solutions. The function f(x) = e^(kx) is a solution if k² = 1, or k = ±1. Thus the eigenspace E1 is the span of the functions e^x and e^(−x).
b. We need to solve the differential equation f″(x) = 0. Integration gives f′(x) = C, a constant. If we integrate again, we find f(x) = Cx + c, where c is another arbitrary constant. Thus E0 = span(1, x).
c. The solutions of the differential equation f″(x) = −f(x) are the functions f(x) = a cos(x) + b sin(x), so that E−1 = span(cos x, sin x). See the introductory example of Section 4.1 and Exercise 4.1.58.
d. Modifying part c, we see that the solutions of the differential equation f″(x) = −4f(x) are the functions f(x) = a cos(2x) + b sin(2x), so that E−4 = span(cos(2x), sin(2x)).
We want AS = S[5 0; 0 1] for S = [~v ~w]; that is, we want ~v to be in the eigenspace E5, and ~w in E1. We find that E5 = span([1; 2]) and E1 = span([1; −1]), so that S must be of the form [a b; 2a −b] = a[1 0; 2 0] + b[0 1; 0 −1].
Thus, a basis of the space V is [1 0; 2 0], [0 1; 0 −1], and dim(V) = 2.
7.5
1. z = 3 − 3i, so |z| = √(3² + (−3)²) = √18 and arg(z) = −π/4, so that z = √18(cos(−π/4) + i sin(−π/4)).
5. Let z = r(cos θ + i sin θ); then w = r^(1/n)(cos((θ + 2πk)/n) + i sin((θ + 2πk)/n)), for k = 0, 1, 2, …, n − 1.
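The formula from Exercise 5 can be implemented directly with Python's cmath; each of the n values really is an nth root:

```python
import cmath

def nth_roots(z, n):
    """The n distinct n-th roots of a nonzero complex number z."""
    r, theta = abs(z), cmath.phase(z)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(8j, 3)              # the three cube roots of 8i
assert len(roots) == 3
for w in roots:
    assert abs(w ** 3 - 8j) < 1e-9    # w^3 = 8i for every root
```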
9. |z| = √(0.8² + 0.7²) = √1.13 ≈ 1.06, arg(z) = arctan(0.7/0.8) ≈ 0.72. See Figure 7.8.
19. a. Since A has eigenvalues 1 and 0 associated with V and V⊥ respectively, and since V is the eigenspace of λ = 1, by Fact 7.5.5, tr(A) = m and det(A) = 0.
b. Since B has eigenvalues 1 and −1 associated with V and V⊥ respectively, and since V is the eigenspace associated with λ = 1, tr(B) = m − (n − m) = 2m − n and det B = (−1)^(n−m).
a matrix with identical columns, by Exercise 30. Therefore, the columns of A^t become more and more alike as t approaches infinity, in the sense that lim_(t→∞) (ij-th entry of A^t)/(ik-th entry of A^t) = 1 for all i, j, k.
33. a. C is obtained from B by dividing each column of B by its first component. Thus, the first row of C will consist of 1's.
b. We observe that the columns of C are almost identical, so that the columns of B are almost parallel (that is, almost scalar multiples of each other).
c. Let λ1, λ2, …, λ5 be the eigenvalues. Assume λ1 is real and positive and λ1 > |λj| for 2 ≤ j ≤ 5. Let ~v1, …, ~v5 be corresponding eigenvectors. For a fixed i, write ~ei = Σ_(j=1)^5 cj~vj; then
39. Figure 7.9 illustrates how Cn acts on the standard basis vectors ~e1, ~e2, …, ~en of Rⁿ.
a. Based on Figure 7.9, we see that Cn^k takes ~ei to ~e(i+k) modulo n, that is, if i + k exceeds n then Cn^k takes ~ei to ~e(i+k−n) (for k = 1, …, n − 1).
To put it differently: Cn^k is the matrix whose ith column is ~e(i+k) if i + k ≤ n and ~e(i+k−n) if i + k > n (for k = 1, …, n − 1).
b. The characteristic polynomial is 1 − λⁿ, so that the eigenvalues are the n distinct solutions of the equation λⁿ = 1 (the so-called nth roots of unity), equally spaced points along the unit circle, λk = cos(2πk/n) + i sin(2πk/n), for k = 0, 1, …, n − 1 (compare with Exercise 5 and Figure 7.7). For each eigenvalue λk, an associated eigenvector is ~vk = [1; λk; λk²; …; λk^(n−1)].
c. The eigenbasis ~v0, ~v1, …, ~v(n−1) for Cn we found in part b is in fact an eigenbasis for all circulant n × n matrices.
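Parts a and b can be checked numerically for, say, n = 5. One caveat: with C5 oriented so that it takes ~ei to ~e(i+1), the eigenvector ~vk built from the root of unity λk pairs with the eigenvalue λk^(n−1) = λk⁻¹, which is again an nth root of unity:

```python
import numpy as np

n = 5
C = np.roll(np.eye(n), 1, axis=0)     # C takes e_i to e_(i+1), indices mod n

# All eigenvalues of C lie on the unit circle (roots of lambda^n = 1).
assert np.allclose(np.abs(np.linalg.eigvals(C)), 1.0)

for k in range(n):
    lam = np.exp(2j * np.pi * k / n)  # an n-th root of unity
    v = lam ** np.arange(n)           # v_k = [1, lam, lam^2, ...]
    assert np.allclose(C @ v, lam ** (n - 1) * v)
```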
41. Substitute λ = 1/x to get 14/x² + 12/x³ − 1 = 0, or 14x + 12 − x³ = 0, or x³ − 14x = 12.
Now use the formula derived in Exercise 40 to find x, with p = 14 and q = 12. There is only one positive solution, x ≈ 4.114, so that λ = 1/x ≈ 0.243.
43. Note that f(z) is not the zero polynomial, since f(i) = det(S1 + iS2) = det(S) ≠ 0, as S is invertible. A nonzero polynomial has only finitely many zeros, so that there is a real number x such that f(x) = det(S1 + xS2) ≠ 0, that is, S1 + xS2 is invertible. Now SB = AS, or (S1 + iS2)B = A(S1 + iS2). Considering the real and the imaginary parts, we can conclude that S1B = AS1 and S2B = AS2, and therefore (S1 + xS2)B = A(S1 + xS2). Since S1 + xS2 is invertible, we have B = (S1 + xS2)⁻¹A(S1 + xS2), as claimed.
7.6
1. λ1 = 0.9, λ2 = 0.8, so, by Fact 7.6.2, ~0 is a stable equilibrium.
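Fact 7.6.2 in action: for a hypothetical matrix built to have eigenvalues 0.9 and 0.8 (the exercise's own matrix may look different), every trajectory decays to ~0:

```python
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A = S @ np.diag([0.9, 0.8]) @ np.linalg.inv(S)   # eigenvalues 0.9 and 0.8

assert max(abs(np.linalg.eigvals(A))) < 1        # spectral radius below 1

x = np.array([10.0, -7.0])
for _ in range(500):
    x = A @ x
assert np.allclose(x, 0.0)                       # trajectory -> zero state
```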
7. λ1,2 = 0.9 ± 0.5i, so |λ1| = |λ2| = √(0.81 + 0.25) > 1 and ~0 is not a stable equilibrium.
9. λ1,2 = 0.8 ± 0.6i, λ3 = 0.7, so |λ1| = |λ2| = 1 and ~0 is not a stable equilibrium.
11. λ1 = k, λ2 = 0.9, so ~0 is a stable equilibrium if |k| < 1.
13. Since λ1 = 0.7 and λ2 = 0.9, ~0 is a stable equilibrium regardless of the value of k.
15. λ1,2 = 1 ± (1/10)√k.
If k ≥ 0, then λ1 = 1 + (1/10)√k ≥ 1. If k < 0, then |λ1| = |λ2| > 1. Thus, the zero state isn't a stable equilibrium for any real k.
19. λ1,2 = 2 ± 3i, r = √13, θ = arctan(3/2) ≈ 0.98, so λ = √13(cos(0.98) + i sin(0.98)), [~w ~v] = [0 1; 1 0], [a; b] = [1; 0], and ~x(t) ≈ (√13)^t [sin(0.98t); cos(0.98t)].
21. λ1,2 = 4 ± i, r = √17, θ = arctan(1/4) ≈ 0.245, so λ = √17(cos(0.245) + i sin(0.245)), [~w ~v] = [0 5; 1 3], [a; b] = [1; 0], and ~x(t) ≈ (√17)^t [5 sin(0.245t); cos(0.245t) + 3 sin(0.245t)].
1/λ is an eigenvalue of A⁻¹, and |1/λ| = 1/|λ| > 1.
~x(t) = Σ_(i=1)^n ci λi^t ~vi, and
‖~x(t)‖ = ‖Σ_(i=1)^n ci λi^t ~vi‖ ≤ Σ_(i=1)^n ‖ci λi^t ~vi‖ = Σ_(i=1)^n |λi|^t ‖ci ~vi‖ ≤ Σ_(i=1)^n ‖ci ~vi‖.
The last quantity, Σ_(i=1)^n ‖ci ~vi‖, does not depend on t.
b. A = [1 1; 0 1] represents a shear parallel to the x-axis, with A[k; 1] = [k + 1; 1], so that ~x(t) = A^t[0; 1] = [t; 1] is not bounded. This does not contradict part a, since there is no eigenbasis for A.
b. y(t) = Y(t) − G0/(1 − γ), c(t) = C(t) − γG0/(1 − γ), i(t) = I(t), and [c(t + 1); i(t + 1)] = A[c(t); i(t)].
c. A = [0.2 0.2; −4 1], with eigenvalues 0.6 ± 0.8i; not stable.
d. A = [γ γ; γ − 1 γ], tr A = 2γ, det A = γ; stable (use Exercise 31).
(I2 − A)⁻¹~b = [0.9 −0.2; −0.4 0.7]⁻¹[1; 2] = [2; 4] is a stable equilibrium, since the eigenvalues of A are 0.5 and −0.1.
We want the matrix A that transforms [8; 6] into [−3; −4] and [3; −4] into [8; −6]:
A[8 3; 6 −4] = [−3 8; −4 −6], so A = [−3 8; −4 −6][8 3; 6 −4]⁻¹ = (1/50)[36 −73; −52 36].
True or False
1. T, by Fact 7.2.2
3. F; If A = [1 1; 0 1], then the eigenvalue 1 has geometric multiplicity 1 and algebraic multiplicity 2.
0 1
5. T; A = AIn = A[~e1 . . . ~en ] = [1~e1 . . . n~en ] is diagonal.
7. T; Consider a diagonal 5 5 matrix with only two distinct diagonal entries.
9. T, by Summary 7.1.5
11. F; Consider A = [1 1; 0 1].
13. T; If A~v = 3~v, then A2~v = 9~v.
15. T, by Fact 7.5.5
17. T, by Example 6 of Section 7.5
19. T; If S⁻¹AS = D, then SᵀAᵀ(Sᵀ)⁻¹ = D.
21. F; Consider A = [0 1; 0 0], with A² = [0 0; 0 0].
23. F; Let A = [1 0; 1 1], for example.
25. T; If S⁻¹AS = D, then S⁻¹A⁻¹S = D⁻¹ is diagonal.
27. T; The sole eigenvalue, 7, must have geometric multiplicity 3.
29. F; Consider the zero matrix.
31. F; Consider the identity matrix.
33. F; Let A = [1 1; 0 1] and ~v = [1; 0], for example.
35. F; Let A = [2 0; 0 3], ~v = [1; 0], and ~w = [0; 1], for example.
57. T; Suppose A~vi = λi~vi and B~vi = μi~vi, and let S = [~v1 … ~vn].
Then ABS = BAS = [λ1μ1~v1 … λnμn~vn], so that AB = BA.