Diagonalizable operators
Math 130 Linear Algebra
D Joyce, Fall 2015

Some linear operators T : V → V have the nice property that there is some basis for V so that the matrix representing T is a diagonal matrix. We'll call those operators diagonalizable operators. We'll call a square matrix A a diagonalizable matrix if it is conjugate to a diagonal matrix, that is, there exists an invertible matrix P so that P⁻¹AP is a diagonal matrix. That's the same as saying that under a change of basis, A becomes a diagonal matrix.

Reflections are examples of diagonalizable operators, as are rotations if C is your field of scalars. Not all linear operators are diagonalizable. The simplest one is R² → R², (x, y) ↦ (y, 0), whose matrix is

    A = [ 0  1 ]
        [ 0  0 ].

No conjugate of it is diagonal. It's an example of a nilpotent matrix, since some power of it, namely A², is the 0-matrix. In general, nilpotent matrices aren't diagonalizable (the only diagonalizable nilpotent matrix is the zero matrix). There are many other matrices that aren't diagonalizable as well.
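The nilpotent example above can be checked numerically. This is a small sketch (assuming NumPy is available): it verifies that A² is the zero matrix and that A does not have two independent eigenvectors, so no basis of eigenvectors exists.

```python
import numpy as np

# The map (x, y) -> (y, 0) from the text.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# A is nilpotent: its square is the zero matrix.
print(np.allclose(A @ A, np.zeros((2, 2))))   # True

# Both eigenvalues are 0, but every eigenvector is a multiple of (1, 0),
# so the matrix of eigenvectors has rank 1 < n = 2: no eigenvector basis.
eigenvalues, eigenvectors = np.linalg.eig(A)
rank = np.linalg.matrix_rank(eigenvectors)
print(eigenvalues)   # both eigenvalues are 0
print(rank)          # 1
```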
Theorem 1. A linear operator on an n-dimensional vector space is diagonalizable if and only if it has a basis of n eigenvectors, in which case the diagonal entries are the eigenvalues for those eigenvectors.

Proof. If it's diagonalizable, then there's a basis for which the matrix representing it is diagonal. The transformation therefore acts on the ith basis vector by multiplying it by the ith diagonal entry, so it's an eigenvector. Thus, all the vectors in that basis are eigenvectors for their associated diagonal entries.

Conversely, if you have a basis of n eigenvectors, then the matrix representing the transformation is diagonal since each eigenvector is multiplied by its associated eigenvalue.                                        q.e.d.
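Theorem 1 can be illustrated numerically. In this sketch (assuming NumPy; the matrix is an illustrative choice, not one from the text), the columns of P form a basis of eigenvectors, and the change of basis P⁻¹AP comes out diagonal with the eigenvalues down the diagonal.

```python
import numpy as np

# A 2x2 matrix with a full basis of eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of P are eigenvectors of A, one per eigenvalue.
eigenvalues, P = np.linalg.eig(A)

# Change of basis: D = P^(-1) A P is diagonal, with the
# eigenvalues appearing in the same order as the eigenvectors.
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigenvalues)))   # True
```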

We'll see soon that if a linear operator on an n-dimensional space has n distinct eigenvalues, then it's diagonalizable. But first, a preliminary theorem.

Theorem 2. Eigenvectors that are associated to distinct eigenvalues are independent. That is, if λ₁, λ₂, …, λₖ are different eigenvalues of an operator T, and an eigenvector vᵢ is associated to each eigenvalue λᵢ, then the set of vectors v₁, v₂, …, vₖ is linearly independent.

Proof. Assume by induction that the first k−1 of the eigenvectors are independent. We'll show all k of them are. Suppose some linear combination of all k of them equals 0:

    c₁v₁ + c₂v₂ + ⋯ + cₖvₖ = 0.

Apply T − λₖI to both sides of that equation. The left side simplifies:

    (T − λₖI)(c₁v₁ + ⋯ + cₖvₖ)
        = c₁T(v₁) − λₖc₁v₁ + ⋯ + cₖT(vₖ) − λₖcₖvₖ
        = c₁(λ₁ − λₖ)v₁ + ⋯ + cₖ(λₖ − λₖ)vₖ
        = c₁(λ₁ − λₖ)v₁ + ⋯ + cₖ₋₁(λₖ₋₁ − λₖ)vₖ₋₁,

and, of course, the right side is 0. That gives us a linear combination of the first k−1 vectors which equals 0, so all their coefficients are 0:

    c₁(λ₁ − λₖ) = ⋯ = cₖ₋₁(λₖ₋₁ − λₖ) = 0.

Since λₖ does not equal any of the other λᵢ's, all the cᵢ's are 0:

    c₁ = ⋯ = cₖ₋₁ = 0.

The original equation now says cₖvₖ = 0, and since the eigenvector vₖ is not 0, cₖ = 0. Thus all k eigenvectors are linearly independent.                 q.e.d.
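The cancellation step in the proof of Theorem 2 can be checked numerically. This sketch (assuming NumPy; the diagonal matrix and coefficients are arbitrary illustrative choices) applies T − λₖI to a linear combination and confirms that the vₖ term vanishes while the others pick up the factors λᵢ − λₖ.

```python
import numpy as np

# Sample operator with distinct eigenvalues 1, 2, 5; the standard
# basis vectors e1, e2, e3 are its eigenvectors.
lam = np.array([1.0, 2.0, 5.0])
A = np.diag(lam)
v = np.eye(3)                     # v[:, i] is an eigenvector for lam[i]

c = np.array([3.0, -1.0, 4.0])    # arbitrary coefficients
w = v @ c                         # w = c1 v1 + c2 v2 + c3 v3

# Apply T - λ3 I: the v3 term is killed, the rest are scaled.
lhs = (A - lam[2] * np.eye(3)) @ w
expected = c[0] * (lam[0] - lam[2]) * v[:, 0] \
         + c[1] * (lam[1] - lam[2]) * v[:, 1]
print(np.allclose(lhs, expected))   # True: no v3 component remains
```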
Corollary 3. If a linear operator on an n-dimensional vector space has n distinct eigenvalues, then it's diagonalizable.

Proof. Take an eigenvector for each eigenvalue. By the preceding theorem, they're independent, and since there are n of them, they form a basis of the n-dimensional vector space. The matrix representing the transformation with respect to this basis is diagonal and has the eigenvalues displayed down the diagonal.                                                          q.e.d.
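Corollary 3 can likewise be sketched numerically (assuming NumPy; the triangular matrix below is an illustrative choice): a 3×3 matrix with 3 distinct eigenvalues has 3 independent eigenvectors, and in that basis the operator is diagonal.

```python
import numpy as np

# A non-symmetric matrix; being triangular, its eigenvalues are the
# diagonal entries 2, 3, 5 -- three distinct values, so Corollary 3
# says it is diagonalizable.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigenvalues, P = np.linalg.eig(A)            # one eigenvector per eigenvalue
print(len(set(eigenvalues.round(8))))        # 3 distinct eigenvalues

# The n eigenvectors form a basis (P is invertible), and in that
# basis the operator is the diagonal matrix of eigenvalues.
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigenvalues)))  # True
```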
Math 130 Home Page at http://math.clarku.edu/~ma130/
