Linear Algebra
Eigenvalues, Eigenvectors
Eigenvalues and Eigenvectors
If A is an n × n matrix and x is a vector in Rⁿ, then there is usually no general geometric relationship between the vector x and the vector Ax. However, there are often certain nonzero vectors x such that x and Ax are scalar multiples of one another. Such vectors arise naturally in the study of vibrations, electrical systems, chemical reactions, quantum mechanics, mechanical stress, economics and geometry.
Eigenvalues, Eigenvectors
If A is an n × n matrix, then a nonzero vector x in Rⁿ is called an eigenvector of A if Ax is a scalar multiple of x; that is,
Ax = λx
for some scalar λ. The scalar λ is called an eigenvalue of A, and x is called an eigenvector of A corresponding to λ.
Eigenvalues are also called proper values or characteristic values; they are also called latent roots.
Eigenvalues, Eigenvectors
The set of the eigenvalues is called the spectrum of A.
The largest of the absolute values of the eigenvalues of A is called the spectral radius of A.
The set of all eigenvectors corresponding to an eigenvalue λ of A, together with 0, forms a vector space called the eigenspace of A corresponding to this eigenvalue.
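For a 2 × 2 matrix the spectrum and spectral radius can be found by hand from the characteristic polynomial; the sketch below (plain Python, using the same matrix as Example 1 later in these slides, and assuming real eigenvalues) illustrates both definitions.

```python
import math

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] as roots of the characteristic
    polynomial t^2 - (a + d)t + (ad - bc) = 0 (real roots assumed)."""
    tr, det = a + d, a * d - b * c
    r = math.sqrt(tr * tr - 4 * det)
    return (tr + r) / 2, (tr - r) / 2

lam1, lam2 = eig2x2(3, 2, -1, 0)            # spectrum {2, 1}
spectral_radius = max(abs(lam1), abs(lam2))
print(lam1, lam2, spectral_radius)          # 2.0 1.0 2.0
```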
Graphical Interpretation
Eigenvalues and eigenvectors also have a useful graphical interpretation in R² and R³. If λ is the eigenvalue of A corresponding to x, then Ax = λx, so multiplication by A dilates x, contracts x, or reverses the direction of x, depending on the value of λ:
Dilation (λ > 1)    Contraction (0 < λ < 1)    Reversal of direction (λ < 0)
(Figure: λx = Ax drawn alongside x for each of the three cases.)
Finding Eigenvalues
To find the eigenvalues of an n × n matrix A, we rewrite Ax = λx as
λx - Ax = 0, i.e. (λI - A)x = 0
For λ to be an eigenvalue, there must be a nonzero solution x of this equation. The equation has a nonzero solution if and only if
det(λI - A) = 0
Characteristic Equation
det(λI - A) = 0 is called the characteristic equation of A; the scalars λ satisfying this equation are the eigenvalues of A. When expanded, the determinant det(λI - A) is a polynomial in λ called the characteristic polynomial of A.
Example: Find the eigenvalues and eigenvectors of the matrix
A = [  3  2 ]
    [ -1  0 ]
Example 1 (Eigenvalues)
Solution:
λI - A = λ [ 1  0 ] - [  3  2 ] = [ λ-3  -2 ]
           [ 0  1 ]   [ -1  0 ]   [  1    λ ]
The characteristic polynomial of A:
det(λI - A) = (λ - 3)λ + 2 = λ² - 3λ + 2
Characteristic equation:
λ² - 3λ + 2 = 0
Example 1 (Eigenvectors)
The solutions of this equation are λ = 1 and λ = 2; these are the eigenvalues of A.
Since Ax = λx, the eigenvectors are the nonzero solutions of (λI - A)x = 0.
Eigenvector corresponding to λ = 1:
Putting λ = 1 gives
[ -2  -2 ] [ x1 ]   [ 0 ]
[  1   1 ] [ x2 ] = [ 0 ]
Example 1 (Eigenvectors)
The above matrix gives two equations,
-2x1 - 2x2 = 0
x1 + x2 = 0
which give the general solution in the form x1 = -x2.
Let x2 = -1. Then x1 = 1.
The eigenvector corresponding to λ = 1 is
[  1 ]
[ -1 ]
Example 1 (Eigenvectors)
Similarly, putting λ = 2 in (λI - A)x = 0 gives
[ -1  -2 ] [ x1 ]   [ 0 ]
[  1   2 ] [ x2 ] = [ 0 ]
which gives the equations
-x1 - 2x2 = 0
x1 + 2x2 = 0
The solution of these equations is x1 = -2x2.
Let x2 = -1; then x1 = 2.
The eigenvector corresponding to λ = 2 is
[  2 ]
[ -1 ]
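Both eigenpairs of Example 1 can be checked directly against the definition Ax = λx; a small plain-Python verification:

```python
def matvec2(A, x):
    """Multiply a 2x2 matrix (list of rows) by a 2-vector."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

A = [[3, 2], [-1, 0]]
for lam, x in [(1, [1, -1]), (2, [2, -1])]:
    Ax = matvec2(A, x)
    assert Ax == [lam * xi for xi in x]    # Ax equals lambda * x
    print(lam, x, "->", Ax)
```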
Multiple Eigenvalues
The order M_λ of an eigenvalue λ as a root of the characteristic polynomial is called the algebraic multiplicity of λ.
The number m_λ of linearly independent eigenvectors corresponding to λ is called the geometric multiplicity of λ.
Thus m_λ is the dimension of the eigenspace E_λ corresponding to this λ.
Multiple Eigenvalues
The characteristic polynomial has degree n, so the sum of all algebraic multiplicities must equal n.
In general, m_λ ≤ M_λ.
The defect Δ_λ of λ is defined as Δ_λ = M_λ - m_λ.
Example: Find the eigenvalues and eigenvectors of the matrix
A = [ -2   2  -3 ]
    [  2   1  -6 ]
    [ -1  -2   0 ]
Example 2 (1/7)
Eigenvalues: setting
det(A - λI) = det [ -2-λ   2    -3 ]
                  [  2    1-λ   -6 ] = 0
                  [ -1   -2    -λ ]
gives the characteristic equation
λ³ + λ² - 21λ - 45 = 0
which has three latent roots (eigenvalues):
λ1 = 5,  λ2 = λ3 = -3
Example 2 (2/7)
The algebraic multiplicity of λ = -3 is M₋₃ = 2, and that of λ = 5 is M₅ = 1.
Eigenvectors: to find the eigenvectors, apply Gauss elimination to the system (A - λI)x = 0, first for λ = 5 and then for λ = -3.
Putting λ = 5 gives
(A - 5I)x = [ -7   2  -3 ] [ x1 ]   [ 0 ]
            [  2  -4  -6 ] [ x2 ] = [ 0 ]
            [ -1  -2  -5 ] [ x3 ]   [ 0 ]
Example 2 (3/7)
Performing Gauss elimination on the matrix above gives
[ 1  2  5 ]
[ 0  1  2 ]
[ 0  0  0 ]
that is,
x1 + 2x2 + 5x3 = 0
x2 + 2x3 = 0
Example 2 (4/7)
The above matrix can be written in equation form, which gives
x1 = -2x2 - 5x3
x2 = -2x3
Let x3 = -1; then x2 = 2 and x1 = 1.
The eigenvector corresponding to λ = 5 is
[  1 ]
[  2 ]
[ -1 ]
Example 2 (5/7)
Putting λ = -3 and performing Gauss elimination on (A + 3I)x = 0 gives the row-echelon form
[ 1  2  -3 ]
[ 0  0   0 ]
[ 0  0   0 ]
that is,
x1 + 2x2 - 3x3 = 0, or x1 = -2x2 + 3x3
Example 2 (6/7)
Or
x = [ x1 ]   [ -2 ]      [ 3 ]
    [ x2 ] = [  1 ] x2 + [ 0 ] x3
    [ x3 ]   [  0 ]      [ 1 ]
So there are two linearly independent eigenvectors corresponding to λ = -3.
Example 2 (7/7)
Since there are two linearly independent eigenvectors corresponding to λ = -3, the geometric multiplicities are
m₋₃ = 2,  m₅ = 1
The defects are
Δ₋₃ = M₋₃ - m₋₃ = 0,  Δ₅ = 0
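The eigenpairs of Example 2, including the two independent eigenvectors for the double eigenvalue λ = -3, can be verified the same way (plain Python, exact integer arithmetic):

```python
def matvec3(A, x):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[-2,  2, -3],
     [ 2,  1, -6],
     [-1, -2,  0]]

pairs = [(5,  [1, 2, -1]),   # simple eigenvalue, m_5 = 1
         (-3, [-2, 1, 0]),   # first basis vector of the lambda = -3 eigenspace
         (-3, [3, 0, 1])]    # second basis vector, so m_-3 = 2
for lam, x in pairs:
    assert matvec3(A, x) == [lam * xi for xi in x]
print("all three eigenpairs verified")
```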
Example:
Find the eigenvalues and eigenvectors of the matrix
A = [ 0  0 ]
    [ 1  0 ]
Solution:
λI - A = λ [ 1  0 ] - [ 0  0 ] = [ λ   0 ]
           [ 0  1 ]   [ 1  0 ]   [ -1  λ ]
det(λI - A) = λ² = 0
λ1 = λ2 = 0
The algebraic multiplicity is M₀ = 2.
Example (Contd)
To find the eigenvectors, put λ = 0 in (λI - A)x = 0:
[  0  0 ] [ x1 ]   [ 0 ]
[ -1  0 ] [ x2 ] = [ 0 ]
The solution is x1 = 0, with x2 arbitrary.
The eigenvector is
x = x2 [ 0 ]
       [ 1 ]
The geometric multiplicity is m₀ = 1.
Thus m₀ < M₀, and the defect is Δ₀ = M₀ - m₀ = 1.
Theorems
If A is an n × n triangular matrix (upper triangular, lower triangular or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.
Example:
A = [ 1/2   0     0  ]
    [ -1   2/3    0  ]
    [  5   -8   -1/4 ]
The eigenvalues are λ1 = 1/2, λ2 = 2/3, λ3 = -1/4.
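The theorem is easy to spot-check: for a triangular matrix, det(A - λI) vanishes exactly at the diagonal entries. A plain-Python sketch using the 3 × 3 matrix of the example above (Fraction keeps the arithmetic exact):

```python
from fractions import Fraction as F

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[F(1, 2), F(0),    F(0)],
     [F(-1),   F(2, 3), F(0)],
     [F(5),    F(-8),   F(-1, 4)]]

for lam in (F(1, 2), F(2, 3), F(-1, 4)):          # the diagonal entries
    shifted = [[A[r][c] - (lam if r == c else 0) for c in range(3)]
               for r in range(3)]
    assert det3(shifted) == 0                     # each one is an eigenvalue
print("diagonal entries are the eigenvalues")
```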
Theorems
If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector, then λᵏ is an eigenvalue of Aᵏ and x is a corresponding eigenvector.
Example:
If
A = [ 0  0  -2 ]
    [ 1  2   1 ]
    [ 1  0   3 ]
then find the eigenvalues of A⁷, given that the eigenvalues of A are λ1 = λ2 = 2 and λ3 = 1.
By the theorem, the eigenvalues of A⁷ are λ = 2⁷ = 128 and λ = 1⁷ = 1.
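This can be verified by brute force: form A⁷ by repeated multiplication and check that det(A⁷ - λI) = 0 for λ = 128 and λ = 1 (plain Python; integer arithmetic keeps the determinants exact):

```python
def matmul(X, Y):
    """Product of two 3x3 integer matrices (lists of rows)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def det(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[0, 0, -2], [1, 2, 1], [1, 0, 3]]
A7 = A
for _ in range(6):                   # A^7 = A times itself six more times
    A7 = matmul(A7, A)

for lam in (128, 1):                 # 2^7 and 1^7
    shifted = [[A7[r][c] - (lam if r == c else 0) for c in range(3)]
               for r in range(3)]
    assert det(shifted) == 0         # lambda^7 is an eigenvalue of A^7
print("128 and 1 are eigenvalues of A^7")
```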
Theorems
If x is an eigenvector of a matrix A corresponding to an eigenvalue λ, then so is kx for any scalar k ≠ 0.
Example
Let
A = [ 3  6  -8 ]        B = [  4  0  0 ]
    [ 0  0   6 ]  and       [ -2  1  0 ]
    [ 0  0   2 ]            [  5  3  4 ]
The eigenvalues of A are 3, 0, and 2. The eigenvalues of B are 4 and 1.
What does it mean for a matrix to have an eigenvalue of 0?
This happens if and only if the equation Ax = 0x has a nontrivial solution. Equivalently, Ax = 0 has a nontrivial solution if and only if A is not invertible. Thus 0 is an eigenvalue of A if and only if A is not invertible.
Diagonalization
The Eigenvector Problem: given an n × n matrix A, does there exist a basis for Rⁿ consisting of eigenvectors of A?
The Diagonalization Problem: given an n × n matrix A, does there exist an invertible matrix P such that D = P⁻¹AP is a diagonal matrix?
Apparently these two problems are different, but they are equivalent; that is why the diagonalization problem is considered in the discussion of eigenvectors.
Diagonalizable Matrix
Definition: A square matrix A is said to be diagonalizable if there exists an invertible matrix P such that D = P⁻¹AP is a diagonal matrix.
Theorem: If A is an n × n matrix, then the following are equivalent:
A is diagonalizable.
A has n linearly independent eigenvectors.
Procedure for Diagonalization
1. Find the eigenvalues of A.
2. Find n linearly independent eigenvectors of A, say p1, p2, p3, ..., pn.
3. Form the matrix P having p1, p2, p3, ..., pn as its columns.
4. Construct D = P⁻¹AP. The matrix D will be diagonal with λ1, λ2, λ3, ..., λn as its successive diagonal entries, where λi is the eigenvalue corresponding to pi, i = 1, 2, ..., n.
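For the 2 × 2 matrix of Example 1 earlier (eigenpairs (1, [1, -1]) and (2, [2, -1])), the four steps can be carried out explicitly in plain Python; the eigenvectors are taken as already known from steps 1 and 2:

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices given as lists of rows."""
    return [[X[i][0] * Y[0][j] + X[i][1] * Y[1][j]
             for j in range(2)] for i in range(2)]

A = [[3, 2], [-1, 0]]
# Steps 2-3: eigenvectors [1, -1] (lambda = 1) and [2, -1] (lambda = 2)
# placed as the columns of P.
P = [[1, 2], [-1, -1]]
# Step 4: invert P by the 2x2 formula (swap the diagonal, negate the
# off-diagonal, divide by the determinant, which is 1 here).
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[ P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP,  P[0][0] / detP]]
D = matmul2(Pinv, matmul2(A, P))
print(D)   # [[1.0, 0.0], [0.0, 2.0]] -- the eigenvalues on the diagonal
```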
Example 1 (1/10)
Find a matrix P that diagonalizes
A = [ 0  0  -2 ]
    [ 1  2   1 ]
    [ 1  0   3 ]
Step 1: Find the eigenvalues of A.
Solution:
λI - A = λ [ 1  0  0 ]   [ 0  0  -2 ]
           [ 0  1  0 ] - [ 1  2   1 ]
           [ 0  0  1 ]   [ 1  0   3 ]
Example 1 (2/10)
λI - A = [ λ    0    2   ]
         [ -1   λ-2  -1  ]
         [ -1   0    λ-3 ]
and we set det(λI - A) = 0.
Example 1 (3/10)
The characteristic equation of A is
λ³ - 5λ² + 8λ - 4 = 0
In factored form:
(λ - 1)(λ - 2)² = 0    (Verify!)
Thus, the eigenvalues of A are
λ1 = 1,  λ2 = λ3 = 2
Example 1 (4/10)
Step 2: Find the corresponding eigenvectors.
By definition,
x = [ x1 ]
    [ x2 ]
    [ x3 ]
is an eigenvector of A corresponding to λ if and only if x is a non-trivial solution of (λI - A)x = 0, that is, of
[ λ    0    2   ] [ x1 ]   [ 0 ]
[ -1   λ-2  -1  ] [ x2 ] = [ 0 ]
[ -1   0    λ-3 ] [ x3 ]   [ 0 ]
Example 1 (5/10)
If λ = 2, then
[  2  0   2 ] [ x1 ]   [ 0 ]
[ -1  0  -1 ] [ x2 ] = [ 0 ]
[ -1  0  -1 ] [ x3 ]   [ 0 ]
Solving this system yields
x1 = -x3   (x2 and x3 are free variables)
The eigenvectors of A corresponding to λ = 2 are the nonzero vectors of the form
x = [ -s ]   [ -1 ]     [ 0 ]
    [  t ] = [  0 ] s + [ 1 ] t
    [  s ]   [  1 ]     [ 0 ]
Example 1 (6/10)
As
[ -1 ]   [ 0 ]
[  0 ] , [ 1 ]
[  1 ]   [ 0 ]
are linearly independent, they form a basis for the eigenspace corresponding to λ = 2.
If λ = 1, then
[  1   0   2 ] [ x1 ]   [ 0 ]
[ -1  -1  -1 ] [ x2 ] = [ 0 ]
[ -1   0  -2 ] [ x3 ]   [ 0 ]
Example 1 (7/10)
Solving this system yields
x1 = -2x3,  x2 = x3   (x3 free variable)
The eigenvectors corresponding to λ = 1 are the nonzero vectors of the form
x = [ -2s ]   [ -2 ]
    [   s ] = [  1 ] s
    [   s ]   [  1 ]
So we may take
p1 = [ -1 ]   p2 = [ 0 ]   p3 = [ -2 ]
     [  0 ] ,      [ 1 ] ,      [  1 ]
     [  1 ]        [ 0 ]        [  1 ]
Example 1 (8/10)
Step 3: It is easy to verify that {p1, p2, p3} is linearly independent, so
P = [ -1  0  -2 ]
    [  0  1   1 ]
    [  1  0   1 ]
diagonalizes A.
Step 4: The resulting diagonal matrix is
D = P⁻¹AP = [ 2  0  0 ]
            [ 0  2  0 ]
            [ 0  0  1 ]
Example 1 (9/10)
To verify that
P⁻¹AP = [ 2  0  0 ]
        [ 0  2  0 ]
        [ 0  0  1 ]
we can check the equivalent equation AP = PD. Verify it for the given case.
Example 1 (10/10)
Remark: there is no preferred order for the columns of P. Since the i-th diagonal entry of P⁻¹AP is the eigenvalue corresponding to the i-th column vector of P, changing the order of the columns of P just changes the order of the eigenvalues on the diagonal of P⁻¹AP.
For example, taking
P = [ -1  -2  0 ]
    [  0   1  1 ]
    [  1   1  0 ]
gives
D = P⁻¹AP = [ 2  0  0 ]
            [ 0  1  0 ]
            [ 0  0  2 ]
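Since D = P⁻¹AP is equivalent to AP = PD, the result of Example 1 can be verified without inverting P (plain Python, using the A, P and D from the slides):

```python
def matmul3(X, Y):
    """Product of two 3x3 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

A = [[0, 0, -2], [1, 2, 1], [1, 0, 3]]
P = [[-1, 0, -2], [0, 1, 1], [1, 0, 1]]   # eigenvector columns
D = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]     # matching eigenvalues

assert matmul3(A, P) == matmul3(P, D)     # AP = PD, hence D = P^(-1)AP
print("AP == PD verified")
```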
Example 2 (1/3)
Diagonalize
A = [ -3  2 ]
    [ -2  1 ]
The characteristic equation of A is
(λ + 1)² = 0.
Thus, λ = -1 is the only eigenvalue of A, and the corresponding eigenvectors are the solutions of (λI - A)x = 0.
Example 2 (2/3)
That is,
2x1 - 2x2 = 0
2x1 - 2x2 = 0
This system has an eigenspace consisting of the vectors of the form
x = [ t ] = t [ 1 ]
    [ t ]     [ 1 ]
Example 2 (3/3)
Since A does not have two linearly independent eigenvectors, i.e. the eigenspace is 1-dimensional, A is not diagonalizable.
Conditions for Diagonalizability
Theorem 1: If v1, v2, v3, ..., vn are eigenvectors of A corresponding to distinct eigenvalues λ1, λ2, λ3, ..., λn, then v1, v2, v3, ..., vn are linearly independent.
Theorem 2: If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.
Example
Is A diagonalizable or not?
A = [ 0    1  0 ]
    [ 0    0  1 ]
    [ 4  -17  8 ]
The eigenvalues of A are
λ1 = 4,  λ2 = 2 + √3,  λ3 = 2 - √3
Since A has three distinct eigenvalues, A can be diagonalized. We can verify that
D = P⁻¹AP = [ 4    0      0   ]
            [ 0  2+√3     0   ]
            [ 0    0    2-√3  ]
LU-Factorization
With Gaussian elimination and Gauss-Jordan elimination, a linear system is solved by operating systematically on the augmented matrix.
Another approach is based on factoring the coefficient matrix into a product of a lower (L) and an upper (U) triangular matrix (LU-decomposition).
LU-decomposition speeds up the solution of Ax = b.
Solving Linear Systems by LU-Factorization
If an n × n matrix A can be factored into a product LU, then the linear system Ax = b can be solved as follows.
Step 1: Rewrite the system Ax = b as
LUx = b    (1)
Step 2: Define a new n × 1 matrix y by
Ux = y    (2)
Solving Linear Systems by LU-Factorization
Step 3: Use (2) to rewrite (1) as
Ly = b
and solve this system for y.
Step 4: Substitute y in (2) and solve Ux = y for x.
(Diagram: multiplication by A sends x to b; multiplication by U sends x to y; multiplication by L sends y to b.)
LU Factorization
The LU factorization is motivated by the fairly common industrial and business problem of solving a sequence of equations, all with the same coefficient matrix A:
Ax = b1,  Ax = b2,  ...,  Ax = bn    (1)
When A is invertible, one could compute A⁻¹ and then compute A⁻¹b1, A⁻¹b2, and so on. However, it is more efficient to solve the first equation in (1) by row reduction and obtain an LU factorization of A at the same time. Thereafter, the remaining equations in (1) are solved with the LU factorization.
Example: Solving Linear Systems by LU-Factorization
Solve the system
[  2   6  2 ] [ x1 ]   [ 2 ]
[ -3  -8  0 ] [ x2 ] = [ 2 ]
[  4   9  2 ] [ x3 ]   [ 3 ]
given
A = [  2   6  2 ]   [  2   0  0 ] [ 1  3  1 ]
    [ -3  -8  0 ] = [ -3   1  0 ] [ 0  1  3 ] = LU
    [  4   9  2 ]   [  4  -3  7 ] [ 0  0  1 ]
Example: Solving Linear Systems by LU-Factorization
Step 1: Rewriting the system as
[  2   0  0 ] [ 1  3  1 ] [ x1 ]   [ 2 ]
[ -3   1  0 ] [ 0  1  3 ] [ x2 ] = [ 2 ]
[  4  -3  7 ] [ 0  0  1 ] [ x3 ]   [ 3 ]
Step 2: Defining the vector y = [y1, y2, y3]ᵀ by
[ 1  3  1 ] [ x1 ]   [ y1 ]
[ 0  1  3 ] [ x2 ] = [ y2 ]
[ 0  0  1 ] [ x3 ]   [ y3 ]
Example: Solving Linear Systems by LU-Factorization
Step 3: The system Ly = b is, equivalently,
2y1 = 2
-3y1 + y2 = 2
4y1 - 3y2 + 7y3 = 3
Solving using forward substitution, we get
y1 = 1, y2 = 5, y3 = 2
so Ux = y becomes
[ 1  3  1 ] [ x1 ]   [ 1 ]
[ 0  1  3 ] [ x2 ] = [ 5 ]
[ 0  0  1 ] [ x3 ]   [ 2 ]
Example: Solving Linear Systems by LU-Factorization
Step 4: Equivalently,
x1 + 3x2 + x3 = 1
x2 + 3x3 = 5
x3 = 2
Solving using back substitution, we get
x1 = 2, x2 = -1, x3 = 2
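The forward and back substitutions used above are easy to code; this plain-Python sketch solves the example system with the L, U and b from the slides:

```python
def forward_sub(L, b):
    """Solve Ly = b for a lower triangular L (top row first)."""
    y = []
    for i, row in enumerate(L):
        s = sum(row[j] * y[j] for j in range(i))
        y.append((b[i] - s) / row[i])
    return y

def back_sub(U, y):
    """Solve Ux = y for an upper triangular U (bottom row first)."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

L = [[2, 0, 0], [-3, 1, 0], [4, -3, 7]]
U = [[1, 3, 1], [0, 1, 3], [0, 0, 1]]
b = [2, 2, 3]

y = forward_sub(L, b)   # [1.0, 5.0, 2.0]
x = back_sub(U, y)      # [2.0, -1.0, 2.0]
print(y, x)
```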
LU Decomposition
Theorem: If A is a square matrix that can be reduced to a row-echelon form U without using row interchanges, then A can be factored as A = LU, where L is a lower triangular matrix.
Procedure for LU Decomposition
Step 1: Reduce A to a row-echelon form U without row interchanges, keeping track of the multipliers used to introduce the leading 1s and the multipliers used to introduce the zeros below the leading 1s.
Step 2: In each position along the main diagonal of L, place the reciprocal of the multiplier that introduced the leading 1 in that position in U.
Step 3: In each position below the main diagonal of L, place the negative of the multiplier used to introduce the zero in that position in U.
Step 4: Form the decomposition A = LU.
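For matrices that need no row interchanges, the four steps can be implemented directly; a plain-Python sketch that records the multipliers as it eliminates, run here on the matrix of the example that follows:

```python
def lu_decompose(A):
    """LU via the 4-step recipe: U becomes the row-echelon form with
    leading 1s; L gets the reciprocal of each scaling multiplier on its
    diagonal and the negative of each elimination multiplier below it."""
    n = len(A)
    U = [list(row) for row in A]
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        piv = U[k][k]
        L[k][k] = piv                     # reciprocal of multiplier 1/piv
        U[k] = [v / piv for v in U[k]]    # introduce the leading 1
        for i in range(k + 1, n):
            m = U[i][k]                   # zero introduced by adding -m * row k
            L[i][k] = m                   # store the negative of that multiplier
            U[i] = [a - m * b for a, b in zip(U[i], U[k])]
    return L, U

L, U = lu_decompose([[6, -2, 0], [9, -1, 1], [3, 7, 5]])
# L is approximately [[6,0,0],[9,2,0],[3,8,1]] and
# U is approximately [[1,-1/3,0],[0,1,1/2],[0,0,1]]
```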
LU Decomposition Example
Find an LU-decomposition of
A = [ 6  -2  0 ]
    [ 9  -1  1 ]
    [ 3   7  5 ]
Solution: We begin by reducing A to row-echelon form, keeping track of all multipliers.
LU Decomposition Example
[ 6  -2  0 ]
[ 9  -1  1 ]
[ 3   7  5 ]
Step 1 (multiplier = 1/6 introduces the leading 1 in row 1):
[ 1  -1/3  0 ]
[ 9  -1    1 ]
[ 3   7    5 ]
Step 2 (multipliers = -9 and -3 introduce the zeros in column 1):
[ 1  -1/3  0 ]
[ 0   2    1 ]
[ 0   8    5 ]
Step 3 (multiplier = 1/2 introduces the leading 1 in row 2):
[ 1  -1/3  0   ]
[ 0   1    1/2 ]
[ 0   8    5   ]
Step 4 (multiplier = -8 introduces the zero in column 2):
[ 1  -1/3  0   ]
[ 0   1    1/2 ]
[ 0   0    1   ]
Step 5 (multiplier = 1 for the last leading 1; no actual change): this is U.
LU Decomposition Example
Constructing L from the multipliers yields the LU-decomposition
A = LU = [ 6  0  0 ] [ 1  -1/3  0   ]
         [ 9  2  0 ] [ 0   1    1/2 ]
         [ 3  8  1 ] [ 0   0    1   ]
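Multiplying the two factors back together confirms the decomposition (plain Python; Fraction keeps 1/3 and 1/2 exact):

```python
from fractions import Fraction as F

def matmul3(X, Y):
    """Product of two 3x3 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

L = [[6, 0, 0], [9, 2, 0], [3, 8, 1]]
U = [[1, F(-1, 3), 0], [0, 1, F(1, 2)], [0, 0, 1]]
A = [[6, -2, 0], [9, -1, 1], [3, 7, 5]]

assert matmul3(L, U) == A    # L times U recovers A exactly
print("A == LU verified")
```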
Iterative Solution of Linear Systems
Although Gaussian elimination or Gauss-
Jordan elimination is generally the method of
choice for solving a linear system of n
equations in n unknowns, there are other
approaches to solving linear systems, called
iterative or indirect methods, that are better in
certain situations.
The Gauss-Seidel method is the most
commonly used iterative method.
Gauss-Seidel Method
Basic Procedure:
Algebraically solve each linear equation for xi.
Assume an initial guess.
Solve for each xi and repeat.
Use the absolute relative approximate error after each iteration to check whether the error is within a pre-specified tolerance.
Gauss-Seidel Method (Algorithm)
A set of n equations in n unknowns:
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
The system Ax = b is reshaped by solving the first equation for x1, the second equation for x2, the third for x3, and the n-th equation for xn.
Gauss-Seidel Method (Algorithm)
Rewriting each equation:
x1 = (b1 - a12 x2 - a13 x3 - ... - a1n xn) / a11
x2 = (b2 - a21 x1 - a23 x3 - ... - a2n xn) / a22
...
x_{n-1} = (b_{n-1} - a_{n-1,1} x1 - a_{n-1,2} x2 - ... - a_{n-1,n-2} x_{n-2} - a_{n-1,n} xn) / a_{n-1,n-1}
xn = (bn - a_{n1} x1 - a_{n2} x2 - ... - a_{n,n-1} x_{n-1}) / a_{nn}
Gauss-Seidel Method (Algorithm)
Solve for the unknowns: assume an initial guess
x = [x1, x2, ..., x_{n-1}, xn]ᵀ
and use the rewritten equations to solve for each value of xi.
Important: remember to use the most recent value of each xi, which means applying the values already calculated in the current iteration to the calculations remaining in that iteration.
Gauss-Seidel Method (Algorithm)
Calculate the absolute relative approximate error after each iteration:
|εa|i = |(xi_new - xi_old) / xi_new| × 100
So when has the answer been found? The iterations are stopped when the absolute relative approximate error is less than a pre-specified tolerance for all unknowns.
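The update rule and the stopping test can be sketched as follows (plain Python; the system is the one used in Example 1 below, whose exact solution is x = (1, 3, 4)):

```python
def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: each x[i] is overwritten in place, so the
    newest values are used immediately within the same sweep."""
    x = list(x0)
    for _ in range(max_iter):
        max_rel_err = 0.0
        for i in range(len(b)):
            s = sum(A[i][j] * x[j] for j in range(len(b)) if j != i)
            new = (b[i] - s) / A[i][i]
            if new != 0:
                max_rel_err = max(max_rel_err, abs((new - x[i]) / new))
            x[i] = new
        if max_rel_err < tol:   # all unknowns within tolerance
            break
    return x

A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
b = [1, 28, 76]
x = gauss_seidel(A, b, [1.0, 0.0, 1.0])
print(x)   # close to [1.0, 3.0, 4.0]
```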
Gauss-Seidel Method: Pitfall
Diagonally dominant: the magnitude of the coefficient on the diagonal must be at least equal to the sum of the magnitudes of the other coefficients in that row, with at least one row in which the diagonal coefficient is strictly greater than that sum.
Which coefficient matrix is diagonally dominant?
A = [   2  5.81  34 ]        B = [ 124  34   56 ]
    [  45    43   1 ]            [  23  53    5 ]
    [ 123    16   1 ]            [  96  34  129 ]
Most physical systems do result in simultaneous linear equations that have diagonally dominant coefficient matrices.
Gauss-Seidel Method: Example 1
Given the system of equations
12x1 + 3x2 - 5x3 = 1
x1 + 5x2 + 3x3 = 28
3x1 + 7x2 + 13x3 = 76
with an initial guess of
[x1, x2, x3]ᵀ = [1, 0, 1]ᵀ
The coefficient matrix is
A = [ 12  3  -5 ]
    [  1  5   3 ]
    [  3  7  13 ]
Will the solution converge using the Gauss-Seidel method?
Gauss-Seidel Method: Example 1
A = [ 12  3  -5 ]
    [  1  5   3 ]
    [  3  7  13 ]
Checking if the coefficient matrix is diagonally dominant:
|a11| = |12| = 12 ≥ |a12| + |a13| = |3| + |-5| = 8
|a22| = |5| = 5 ≥ |a21| + |a23| = |1| + |3| = 4
|a33| = |13| = 13 ≥ |a31| + |a32| = |3| + |7| = 10
The inequalities are all strict, hence the matrix A is strictly diagonally dominant.
Therefore, the solution should converge using the Gauss-Seidel method.
Gauss-Seidel Method: Example 1
[ 12  3  -5 ] [ x1 ]   [  1 ]
[  1  5   3 ] [ x2 ] = [ 28 ]
[  3  7  13 ] [ x3 ]   [ 76 ]
Rewriting each equation:
x1 = (1 - 3x2 + 5x3) / 12
x2 = (28 - x1 - 3x3) / 5
x3 = (76 - 3x1 - 7x2) / 13
With the initial guess [x1, x2, x3]ᵀ = [1, 0, 1]ᵀ:
x1 = (1 - 3(0) + 5(1)) / 12 = 0.50000
x2 = (28 - 0.50000 - 3(1)) / 5 = 4.9000
x3 = (76 - 3(0.50000) - 7(4.9000)) / 13 = 3.0923
Gauss-Seidel Method: Example 1
The absolute relative approximate errors:
|εa|1 = |(0.50000 - 1.0000) / 0.50000| × 100 = 100.00%
|εa|2 = |(4.9000 - 0) / 4.9000| × 100 = 100.00%
|εa|3 = |(3.0923 - 1.0000) / 3.0923| × 100 = 67.662%
The maximum absolute relative approximate error after the first iteration is 100%.
Gauss-Seidel Method: Example 1
After iteration #1:
[x1, x2, x3]ᵀ = [0.50000, 4.9000, 3.0923]ᵀ
Substituting these x values into the equations:
x1 = (1 - 3(4.9000) + 5(3.0923)) / 12 = 0.14679
x2 = (28 - 0.14679 - 3(3.0923)) / 5 = 3.7153
x3 = (76 - 3(0.14679) - 7(3.7153)) / 13 = 3.8118
After iteration #2:
[x1, x2, x3]ᵀ = [0.14679, 3.7153, 3.8118]ᵀ
Gauss-Seidel Method: Example 1
The absolute relative approximate errors for iteration #2:
|εa|1 = |(0.14679 - 0.50000) / 0.14679| × 100 = 240.62%
|εa|2 = |(3.7153 - 4.9000) / 3.7153| × 100 = 31.887%
|εa|3 = |(3.8118 - 3.0923) / 3.8118| × 100 = 18.876%
The maximum absolute relative error after the second iteration is 240.62%.
This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?
Gauss-Seidel Method: Example 1
Repeating more iterations, the following values are obtained:

Iteration   x1        |εa|1 (%)   x2       |εa|2 (%)   x3       |εa|3 (%)
1           0.50000   100.00      4.9000   100.00      3.0923   67.662
2           0.14679   240.62      3.7153   31.887      3.8118   18.876
3           0.74275   80.23       3.1644   17.409      3.9708   4.0042
4           0.94675   21.547      3.0281   4.5012      3.9971   0.65798
5           0.99177   4.5394      3.0034   0.82240     4.0001   0.07499
6           0.99919   0.74260     3.0001   0.11000     4.0001   0.00000

The solution obtained, [x1, x2, x3]ᵀ = [0.99919, 3.0001, 4.0001]ᵀ, is close to the exact solution [1, 3, 4]ᵀ.
Gauss-Seidel Method: Example 2
Solve the following linear system by the Gauss-Seidel method:
x1 + 3x2 - x3 = 2
6x1 + 4x2 + 11x3 = 1
5x1 - 2x2 - 2x3 = 9
The coefficient matrix is
A = [ 1   3  -1 ]
    [ 6   4  11 ]
    [ 5  -2  -2 ]
Solution:
We can verify that the system is not strictly diagonally dominant (in the first row, |1| < |3| + |-1|), so the Gauss-Seidel method is not guaranteed to converge.
Definiteness of Matrices
A matrix A is positive definite iff xᵀAx > 0 for all x ≠ 0.
A matrix A is negative definite iff xᵀAx < 0 for all x ≠ 0.
A matrix A is indefinite iff xᵀAx > 0 for some x and xᵀAx < 0 for some other x.
Checking for Definiteness
There are different methods to check the definiteness of a matrix. One method is based on the eigenvalues of the given matrix: if all eigenvalues of the matrix are positive, the matrix is positive definite; if all the eigenvalues are negative, the matrix is negative definite.
Checking for Definiteness
Another, simpler method is to convert the matrix to U by applying Gauss elimination. If all the diagonal elements (pivots) are +ve, the matrix is +ve definite; if all diagonal elements are -ve, the matrix is -ve definite. Otherwise it is indefinite.
Checking for Definiteness - Example
Check the definiteness of the matrix
A = [ 12  -2 ]
    [ -2  10 ]
Solution: applying R2 → R2 + (1/6)R1 gives
[ 12  -2 ]      [ 12   -2  ]
[ -2  10 ]  →   [  0  29/3 ]
Since the two diagonal elements are +ve, matrix A is +ve definite.
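Both tests agree for this 2 × 2 example: the pivots after elimination and the eigenvalues are all positive (plain-Python sketch):

```python
import math

A = [[12, -2], [-2, 10]]

# Pivot test: one elimination step R2 <- R2 + (1/6) R1 zeroes A[1][0].
m = -A[1][0] / A[0][0]                  # multiplier 1/6
pivot2 = A[1][1] + m * A[0][1]          # 10 - 1/3 = 29/3
pivots_positive = A[0][0] > 0 and pivot2 > 0

# Eigenvalue test via the characteristic polynomial of a 2x2 matrix.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
r = math.sqrt(tr * tr - 4 * det)        # real since A is symmetric
eigs_positive = (tr + r) / 2 > 0 and (tr - r) / 2 > 0

print(pivots_positive, eigs_positive)   # True True -> positive definite
```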