
PHAGWARA (PUNJAB).

SESSION-2010
TERM PAPER
MATHS
TOPIC
For a given matrix A, prove that the trace of the matrix A is equal to the sum of its eigenvalues. What will be the eigenvalues of A − kI? Also give examples.

SUBMITTED TO: SUBMITTED BY:

SWATI AGGARWAL MUNISH GABA

ROLL NO: 24
REG NO: 11000543
SECTION: E1001

ACKNOWLEDGMENTS
This is a humble effort to express our sincere gratitude towards those who have guided and helped us to complete this project.
A project report is a major milestone during the study period of a student. We could have faced many problems, but our teachers' kind response to our needs and requirements, their patient approach, and their positive criticism helped us in completing our project. Very warm thanks to our project in-charge, SWATI AGGARWAL, for her support and constant encouragement, and to the LPU LIBRARY, without whose support it would not have been easy to finish our project.
With the motivation of our parents, we were able to finish our project successfully and satisfactorily in a short span of time.

MUNISH GABA

Table of Contents
1. INTRODUCTION

2. CHARACTERISTIC POLYNOMIAL

3. POWER ITERATION

4. MATRIX EIGENVALUES

5. ABSTRACT OF THE WORK UNDERTAKEN

6. KEYWORDS

7. REFERENCES CITED (BOOKS AND LECTURE NOTES)

INTRODUCTION
In linear algebra, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.
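As a numerical illustration of the two claims in the topic statement (the trace equals the sum of the eigenvalues, and A − kI has eigenvalues λᵢ − k), a quick check with NumPy; the example matrix and the shift k are arbitrary choices for this sketch:

```python
import numpy as np

# Arbitrary example matrix; any square matrix works.
A = np.array([[4.0, 1.0, 2.0],
              [0.0, 3.0, -1.0],
              [2.0, 5.0, 6.0]])
k = 2.0

eigs_A = np.linalg.eigvals(A)
eigs_shifted = np.linalg.eigvals(A - k * np.eye(3))

# Claim 1: tr(A) equals the sum of the eigenvalues.
print(np.isclose(np.trace(A), eigs_A.sum().real))  # True

# Claim 2: the eigenvalues of A - kI are lambda_i - k.
print(np.allclose(np.sort_complex(eigs_shifted),
                  np.sort_complex(eigs_A - k)))    # True
```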

Characteristic polynomial

Given a square matrix A, an eigenvalue λ and its associated eigenvector v are, by definition, a pair obeying the relation

    Av = λv,

where v is nonzero. Equivalently, (A − λI)v = 0 (where I is the identity matrix), implying det(A − λI) = 0. This determinant is a polynomial in λ, known as the characteristic polynomial of A. One common method for determining the eigenvalues of a small matrix is by finding roots of the characteristic polynomial.

Unfortunately, this method has some limitations. A general polynomial of degree n > 4 cannot be solved by a finite sequence of arithmetic operations and radicals. There do exist efficient root-finding algorithms for higher-degree polynomials. However, finding the roots of the characteristic polynomial may be an ill-conditioned problem even when the underlying eigenvalue problem is well-conditioned. For this reason, this method is rarely used.
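This ill-conditioning can be seen numerically. A diagonal matrix with eigenvalues 1, …, 20 is a perfectly conditioned symmetric eigenvalue problem, yet its characteristic polynomial is Wilkinson's famously ill-conditioned polynomial; the matrix here is just an illustrative choice for this sketch:

```python
import numpy as np

# Diagonal matrix with eigenvalues 1..20: a perfectly conditioned
# eigenvalue problem (the matrix is symmetric).
A = np.diag(np.arange(1.0, 21.0))

# Route 1: roots of the characteristic polynomial (Wilkinson's polynomial).
coeffs = np.poly(A)                  # coefficients of det(x I - A)
roots = np.sort(np.roots(coeffs).real)

# Route 2: a backward-stable eigenvalue algorithm for symmetric matrices.
eigs = np.sort(np.linalg.eigvalsh(A))

# The detour through polynomial coefficients loses accuracy badly.
print(np.abs(roots - eigs).max())    # typically far larger than machine epsilon
```

The eigenvalue routine recovers 1, …, 20 essentially exactly, while the round trip through polynomial coefficients perturbs the larger roots visibly.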

The above discussion implies a restriction on all eigenvalue algorithms. It can be shown that for any
polynomial, there exists a matrix having that polynomial as its characteristic polynomial (actually,
there are infinitely many). If there did exist a finite sequence of arithmetic operations for exactly
finding the eigenvalues of a general matrix, this would provide a corresponding finite sequence for
general polynomials, in contradiction of the Abel–Ruffini theorem. Therefore, general eigenvalue
algorithms are expected to be iterative.

Power iteration

The basic idea of this method is to choose an (arbitrary) initial vector b and then repeatedly multiply it
by the matrix, iteratively calculating Ab, A²b, A³b,…. Suppose the eigenvalues are ordered by
magnitude, with λ1 being the largest, and with associated eigenvector v1. Then each iteration scales the
component of b in the v1 direction by λ1, and every other direction by a smaller amount (assuming
|λ2| < |λ1|). For any initial vector outside a set of measure zero, the iterates will converge to an
eigenvector corresponding to the dominant eigenvalue. In practice, the vector should be normalized
after every iteration.
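The loop described above can be sketched in a few lines of NumPy; the example matrix, iteration count, and random starting vector are illustrative choices, not part of the original text:

```python
import numpy as np

def power_iteration(A, num_iters=500, seed=0):
    """Power iteration: approximate dominant eigenpair of A.

    A minimal sketch; assumes |lambda_1| > |lambda_2| so the iteration
    converges, and normalizes b after every step as the text suggests.
    """
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        b = A @ b
        b /= np.linalg.norm(b)        # normalize after every iteration
    # Rayleigh quotient gives the eigenvalue estimate for the converged b.
    return (b @ A @ b) / (b @ b), b

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print(round(lam, 6))  # 3.618034, i.e. (5 + sqrt(5))/2
```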

By itself, power iteration is not very useful. Its convergence is slow except for special classes of matrices, and without modification it can only find the largest (dominant) eigenvalue and the corresponding eigenvector. One modification that recovers the other eigenvectors is to restart the iteration with an initial vector b chosen orthogonal to the eigenvectors found so far: orthogonalizing against the first eigenvector should yield a second, and repeating the procedure, each time taking b orthogonal to all eigenvectors found so far, yields the remaining ones in turn.
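For symmetric matrices, whose eigenvectors are mutually orthogonal, this restart idea can be sketched as follows. Re-projecting inside the loop (not just at the start) is a standard precaution against rounding error reintroducing the discarded directions; it is an assumption of this sketch, not something the text specifies:

```python
import numpy as np

def power_iteration_deflated(A, found, num_iters=500, seed=1):
    """Power iteration with b kept orthogonal to eigenvectors already found.

    A sketch for *symmetric* A, whose eigenvectors are mutually orthogonal;
    re-projecting every iteration keeps rounding error from reintroducing
    components along the eigenvectors in `found`.
    """
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        b = A @ b
        for v in found:               # project out known eigenvectors
            b -= (v @ b) * v
        b /= np.linalg.norm(b)
    return b @ A @ b, b               # b is unit-length: Rayleigh quotient

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam1, v1 = power_iteration_deflated(A, [])
lam2, v2 = power_iteration_deflated(A, [v1])
print(round(lam1, 6), round(lam2, 6))  # 3.618034 1.381966
```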

A few of the more advanced eigenvalue algorithms are variations of power iteration. In addition, some
of the better algorithms for the generalized eigenvalue problem are based on power iteration.

Matrix eigenvalues

In mathematics, and in particular in linear algebra, an important tool for describing eigenvalues of
square matrices is the characteristic polynomial: saying that λ is an eigenvalue of A is equivalent to
stating that the system of linear equations (A - λI) v = 0 (where I is the identity matrix) has a non-zero
solution v (namely an eigenvector), and so it is equivalent to the determinant det (A - λI) being zero.
The function p(λ) = det (A - λI) is a polynomial in λ since determinants are defined as sums of
products. This is the characteristic polynomial of A: the eigenvalues of a matrix are the zeros of its
characteristic polynomial.

It follows that we can compute all the eigenvalues of a matrix A by solving the equation pA(λ) = 0. If A is an n-by-n matrix, then pA has degree n, and A can therefore have at most n eigenvalues. Conversely, the fundamental theorem of algebra says that this equation has exactly n roots (zeroes) over the complex numbers, counted with multiplicity. All real polynomials of odd degree have a real number as a root, so for odd n, every real matrix has at least one real eigenvalue. For a real matrix, whether n is even or odd, the non-real eigenvalues come in conjugate pairs.
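A small numerical check of both facts, on a hand-picked real 3×3 matrix (the matrix is an illustrative choice for this sketch):

```python
import numpy as np

# A real 3x3 matrix (odd n) must have at least one real eigenvalue,
# and its non-real eigenvalues must appear in conjugate pairs.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])   # rotation block plus a real eigenvalue 2

eigs = np.linalg.eigvals(A)         # i, -i, and 2
real_eigs = eigs[np.abs(eigs.imag) < 1e-12]

print(len(real_eigs) >= 1)          # True: odd n forces a real eigenvalue
print(np.allclose(np.sort_complex(eigs),
                  np.sort_complex(eigs.conj())))  # True: conjugate pairs
```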

An example of a matrix with no real eigenvalues is the 90-degree rotation

    A = [ 0  −1 ]
        [ 1   0 ]

whose characteristic polynomial is λ² + 1, so its eigenvalues are the pair of complex conjugates i, −i.

The Cayley–Hamilton theorem states that every square matrix satisfies its own characteristic
polynomial, that is, pA(A) = 0.
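For a 2×2 matrix the theorem can be checked directly, using the 2×2 characteristic polynomial λ² − tr(A)λ + det(A); the matrix here is an arbitrary example:

```python
import numpy as np

# Cayley-Hamilton for a 2x2 matrix: p_A(x) = x^2 - tr(A) x + det(A),
# so A^2 - tr(A) A + det(A) I should be the zero matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
P = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(P, 0))  # True
```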

Types
Eigenvalues of 2×2 matrices

An analytic solution for the eigenvalues of 2×2 matrices can be obtained directly from the quadratic formula: if

    A = [ a  b ]
        [ c  d ]

then the characteristic polynomial is

    det(A − λI) = λ² − (a + d)λ + (ad − bc),

so the solutions are

    λ = ( (a + d) ± √((a + d)² − 4(ad − bc)) ) / 2
      = ( (a + d) ± √((a − d)² + 4bc) ) / 2.

Notice that the characteristic polynomial of a 2×2 matrix can be written in terms of the trace tr(A) = a + d and determinant det(A) = ad − bc as

    det(A − λI₂) = λ² − tr(A)λ + det(A),

where I₂ is the 2×2 identity matrix. The solutions for the eigenvalues of a 2×2 matrix can thus be written as

    λ = ( tr(A) ± √(tr(A)² − 4 det(A)) ) / 2.

Thus, for the very special case where the 2×2 matrix has zero determinant but non-zero trace, the eigenvalues are zero and the trace (corresponding to the negative and positive roots, respectively). For example, the eigenvalues of the following matrix are 0 and a² + b²:

    A = [ a²  ab ]
        [ ab  b² ]

It cannot be stressed enough that this formula holds only for a 2×2 matrix.
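The closed-form solution can be checked against a general-purpose eigenvalue routine; a minimal sketch, with arbitrary example entries:

```python
import numpy as np

# Closed-form 2x2 eigenvalues from trace and determinant,
# checked against NumPy's general eigenvalue routine.
a, b, c, d = 2.0, 1.0, 1.0, 3.0
A = np.array([[a, b], [c, d]])

tr, det = a + d, a * d - b * c
disc = np.sqrt(complex(tr * tr - 4 * det))  # complex sqrt handles tr^2 < 4 det
lam_plus = (tr + disc) / 2
lam_minus = (tr - disc) / 2

print(np.allclose(sorted([lam_minus, lam_plus], key=lambda z: z.real),
                  np.sort_complex(np.linalg.eigvals(A))))  # True
```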

Eigenvalues of 3×3 matrices

If

    A = [ a  b  c ]
        [ d  e  f ]
        [ g  h  i ]

then the characteristic polynomial of A is

    det(A − λI) = −λ³ + (a + e + i)λ² + (bd + cg + fh − ae − ai − ei)λ + (aei + bfg + cdh − ceg − bdi − afh).

Alternatively, the characteristic polynomial of a 3×3 matrix can be written in terms of the trace tr(A) and determinant det(A) as

    det(A − λI₃) = −λ³ + tr(A)λ² − ½(tr(A)² − tr(A²))λ + det(A),

where I₃ is the 3×3 identity matrix.

The eigenvalues of the matrix are the roots of this polynomial, which can be found using the method
for solving cubic equations.
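The trace/determinant form of the cubic can be checked numerically; the example matrix is an arbitrary symmetric one chosen (for this sketch) to have eigenvalues 1, 3, and 5:

```python
import numpy as np

# Eigenvalues of a 3x3 matrix as roots of its characteristic cubic
#   -x^3 + tr(A) x^2 - (tr(A)^2 - tr(A^2))/2 x + det(A),
# compared with NumPy's direct eigenvalue routine.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 5.0, 0.0],
              [1.0, 0.0, 2.0]])

tr, tr2, det = np.trace(A), np.trace(A @ A), np.linalg.det(A)
coeffs = [-1.0, tr, -(tr * tr - tr2) / 2.0, det]
roots = np.sort_complex(np.roots(coeffs))

print(np.allclose(roots, np.sort_complex(np.linalg.eigvals(A))))  # True
```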

A formula for the eigenvalues of a 4×4 matrix could be derived in an analogous way, using the formulae for the solutions of the quartic equation.
