
Numerical Methods (14B11MA312)

System of Linear equations


A system of n linear equations in n unknowns (variables) x1, x2, ..., xn is a set of equations of the form

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

The numbers aij are known coefficients and b1, b2, ..., bn are known constants.

This system of equations can be represented as

sum over j of aij xj = bi,   i = 1 to n        (1)

Matrix representation:

AX = B        (2)

or

[a11 a12 ... a1n] [x1]   [b1]
[a21 a22 ... a2n] [x2] = [b2]
[ ...           ] [..]   [..]
[an1 an2 ... ann] [xn]   [bn]

Here
A = coefficient matrix of the system of linear equations
B = column vector of constants
X = column vector of unknowns

A solution of system (1) is a set of numbers x1, x2, ..., xn that satisfies all the equations.

Geometrical interpretation:
Dr. B.R. Gupta

The system of equations in two variables

a1 x + b1 y = c1
a2 x + b2 y = c2

represents a pair of lines in 2-dimensional space. The intersection point (x, y) = (p, q) lies on both lines, i.e. it satisfies both equations, so x = p, y = q is a solution.

Consider the system of equations in three variables

a1 x + b1 y + c1 z = d1
a2 x + b2 y + c2 z = d2
a3 x + b3 y + c3 z = d3

Each equation represents a plane in 3-dimensional space, and a point of intersection of the three planes is a solution.

A system of n linear equations in n unknowns has a unique solution if A is a non-singular matrix (|A| != 0).
If the coefficient matrix A is singular, the equations may have an infinite number of solutions, or no solution at all, depending on the constant vector.

Definitions: A real matrix A is said to be
Non-singular if |A| != 0
Symmetric if A^T = A
Skew-symmetric if A^T = -A
Orthogonal if A^T = A^(-1)
Diagonal if aij = 0 when i != j
Upper triangular if aij = 0 when i > j
Lower triangular if aij = 0 when i < j

The solution X can be obtained by direct or iterative methods.

(A) Direct methods:

Gauss elimination method
Gauss-Jordan method
Factorization method

(B) Iterative methods:

Jacobi's iteration method
Gauss-Seidel method

Gauss elimination method:

Consider the system of equations

2x + 4y - 6z = -4        ...(1)
x + 5y + 3z = 10         ...(2)
x + 3y + 2z = 5          ...(3)

In this method, the unknowns are eliminated so that the elimination process leads to an upper triangular system, and the unknowns are then obtained by back substitution.

Step 1. Eliminate x from eqs (2) and (3):
Multiply eqs (2) and (3) by -2 and add eq (1) to each.

2x + 4y - 6z = -4        ...(4)
-6y - 12z = -24          ...(5)
-2y - 10z = -14          ...(6)

Step 2. Eliminate y from eq (6):
Multiply eq (6) by -3 and add eq (5).

2x + 4y - 6z = -4        ...(7)
-6y - 12z = -24          ...(8)
18z = 18                 ...(9)

Step 3. Evaluate the unknowns by back substitution:

z = 1
y = 2
x = -3

Hence the solution is (x, y, z) = (-3, 2, 1). It is also the point of intersection of the planes represented by eqs (1), (2) & (3).
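The two-step procedure above (forward elimination to an upper triangular system, then back substitution) can be sketched in a few lines of Python. This is a minimal illustration without row pivoting (the function name is mine, not from the notes), checked against the system of eqs (1)-(3):

```python
import numpy as np

def gauss_eliminate(A, B):
    """Solve AX = B by naive Gauss elimination (no pivoting) + back substitution."""
    A = A.astype(float)
    B = B.astype(float)
    n = len(B)
    # Forward elimination: reduce A to upper triangular form.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            B[i] -= m * B[k]
    # Back substitution, starting from the last equation.
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):
        X[i] = (B[i] - A[i, i + 1:] @ X[i + 1:]) / A[i, i]
    return X

A = np.array([[2, 4, -6], [1, 5, 3], [1, 3, 2]])
B = np.array([-4, 10, 5])
print(gauss_eliminate(A, B))   # -> [-3.  2.  1.]
```

A production solver would add partial pivoting (row swaps) to avoid division by small pivots; the notes' example has well-behaved pivots, so the naive version suffices here.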
Elementary row operations:
(1) Interchange of any two rows (equations)
(2) Multiplication of a row by a non-zero constant
(3) Addition to a row of a constant multiple of another row

The augmented matrix [A|B] of a system of linear equations is obtained by appending the column B to the matrix A, i.e.

[A|B] = [a11 a12 ... a1n | b1]
        [a21 a22 ... a2n | b2]
        [ ...                ]
        [an1 an2 ... ann | bn]

Ex: Solve the following system of equations by the Gauss elimination method:

2x + 4y - 6z = -4        ...(1)
x + 5y + 3z = 10         ...(2)
x + 3y + 2z = 5          ...(3)

Sol: Writing the equations in matrix form

[2 4 -6] [x]   [-4]
[1 5  3] [y] = [10]
[1 3  2] [z]   [ 5]

The augmented matrix of the above system is

[A|B] = [2 4 -6 | -4]
        [1 5  3 | 10]
        [1 3  2 |  5]

We convert the matrix A into an upper triangular matrix by applying elementary row operations.

R2 -> R1 - 2R2, R3 -> R1 - 2R3:

[2  4  -6 |  -4]
[0 -6 -12 | -24]
[0 -2 -10 | -14]

R3 -> R2 - 3R3:

[2  4  -6 |  -4]
[0 -6 -12 | -24]
[0  0  18 |  18]

Thus the equivalent system of equations is

2x + 4y - 6z = -4        ...(4)
-6y - 12z = -24          ...(5)
18z = 18                 ...(6)

By back substitution

z = 1
y = 2
x = -3
Ex: Solve the following system of four equations in four unknowns by the Gauss elimination method.   ...(1)-(4)

Sol: Writing the system in matrix form AX = B, forming the augmented matrix [A|B], and converting A into an upper triangular matrix by elementary row operations, back substitution gives

x4 = 0
x3 = -1
x2 = 2
x1 = 1
Gauss-Jordan Method:
In this method, the system of equations AX = B is converted into a diagonal set IX = D by applying elementary row operations on both A & B, such that A is reduced to the identity matrix I. The solution is then obtained without the necessity of back substitution.
Ex: Solve the following system of equations by the Gauss-Jordan method:

x - 2y = -4          ...(1)
-5y + z = -9         ...(2)
4x - 3z = -10        ...(3)

Sol: Writing the equations in matrix form

[1 -2  0] [x]   [ -4]
[0 -5  1] [y] = [ -9]
[4  0 -3] [z]   [-10]

We shall convert the equation AX = B into IX = D.

The augmented matrix of the above system is

[A|B] = [1 -2  0 |  -4]
        [0 -5  1 |  -9]
        [4  0 -3 | -10]

R3 -> R3 - 4R1:

[1 -2  0 | -4]
[0 -5  1 | -9]
[0  8 -3 |  6]

R2 -> R2/(-5):

[1 -2    0 |  -4]
[0  1 -1/5 | 9/5]
[0  8   -3 |   6]

R1 -> R1 + 2R2, R3 -> R3 - 8R2:

[1 0 -2/5 |  -2/5]
[0 1 -1/5 |   9/5]
[0 0 -7/5 | -42/5]

R3 -> (-5/7)R3:

[1 0 -2/5 | -2/5]
[0 1 -1/5 |  9/5]
[0 0    1 |    6]

R1 -> R1 + (2/5)R3, R2 -> R2 + (1/5)R3:

[1 0 0 | 2]
[0 1 0 | 3]
[0 0 1 | 6]

Thus the equivalent system of equations is

x + 0 + 0 = 2
0 + y + 0 = 3
0 + 0 + z = 6

x = 2, y = 3, z = 6
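Continuing the elimination above the diagonal as well as below it is mechanical, so a short Python sketch captures the whole method (a minimal version without pivoting; the test system is the 3 x 3 example with solution (2, 3, 6)):

```python
import numpy as np

def gauss_jordan(A, B):
    """Reduce [A|B] to [I|X] by Gauss-Jordan elimination (no pivoting)."""
    n = len(B)
    M = np.hstack([A.astype(float), B.reshape(-1, 1).astype(float)])
    for k in range(n):
        M[k] /= M[k, k]                  # scale pivot row so the pivot becomes 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # clear the rest of column k
    return M[:, -1]                      # last column now holds the solution

A = np.array([[1, -2, 0], [0, -5, 1], [4, 0, -3]])
B = np.array([-4, -9, -10])
print(gauss_jordan(A, B))   # -> [2. 3. 6.]
```

Note that Gauss-Jordan does roughly 50% more arithmetic than Gauss elimination with back substitution, which is why the latter is preferred for large systems.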

Ex: Solve the following system of equations by the Gauss-Jordan method:

x + y + z = 9            ...(1)
2x - 3y + 4z = 13        ...(2)
3x + 4y + 5z = 40        ...(3)

Sol: Proceeding as above,

[A|B] -> [1 0 0 | 1]
         [0 1 0 | 3]
         [0 0 1 | 5]

Thus the equivalent system of equations is

x + 0 + 0 = 1
0 + y + 0 = 3
0 + 0 + z = 5

x = 1, y = 3, z = 5

LU Decomposition method:
In this method, the matrix A is decomposed or factorized as the product of a lower triangular matrix L and an upper triangular matrix U, i.e. A = LU, where

    [l11   0    0  ...  0 ]        [u11 u12 u13 ... u1n]
L = [l21  l22   0  ...  0 ],   U = [ 0  u22 u23 ... u2n]
    [ ...                 ]        [ ...               ]
    [ln1  ln2  ln3 ... lnn]        [ 0   0   0  ... unn]

Note that the LU decomposition is not unique: any matrix A with all non-zero diagonal elements can be factored in an infinite number of ways. A unique decomposition can be obtained if
(i) lii = 1, i = 1, 2, ..., n (the method is called Doolittle's method), or
(ii) uii = 1, i = 1, 2, ..., n (the method is called Crout's method), or
(iii) U = L^T, i.e. A = L.L^T, for a symmetric matrix A (the method is called Cholesky's method).

We write the system of equations AX = B as

L U X = B        ...(1)

Let UX = Z       ...(2)

Then LZ = B      ...(3)

First we find Z from eq. (3), then we evaluate X from eq. (2).
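The two triangular solves LZ = B (forward substitution) and UX = Z (back substitution) are what make the factorization useful. A minimal Doolittle-style sketch in Python (function names are mine; no pivoting, so it assumes non-zero pivots), checked on the notes' 3 x 3 Gauss-elimination system:

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle factorization A = LU with unit diagonal in L (no pivoting)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                  # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):              # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def lu_solve(A, B):
    L, U = doolittle_lu(A)
    n = len(B)
    Z = np.zeros(n)
    for i in range(n):                         # forward substitution: LZ = B
        Z[i] = B[i] - L[i, :i] @ Z[:i]
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution: UX = Z
        X[i] = (Z[i] - U[i, i + 1:] @ X[i + 1:]) / U[i, i]
    return X

A = np.array([[2., 4., -6.], [1., 5., 3.], [1., 3., 2.]])
print(lu_solve(A, np.array([-4., 10., 5.])))   # -> [-3.  2.  1.]
```

Once L and U are known, any new right-hand side B costs only the two cheap triangular solves, which is the main advantage over repeating full elimination.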

Ex: Solve the system of equations AX = B by Crout's method, where

    [4  1  1]        [4]
A = [1  4 -2],   B = [4]
    [3  2 -4]        [6]

Sol:
Decomposing the matrix A into L & U:

[4  1  1]   [l11   0    0 ] [1 u12 u13]
[1  4 -2] = [l21  l22   0 ] [0  1  u23]
[3  2 -4]   [l31  l32  l33] [0  0   1 ]

Comparing the elements of the product with the elements of A:

l11 = 4,  l11 u12 = 1,  l11 u13 = 1
l21 = 1,  l21 u12 + l22 = 4,  l21 u13 + l22 u23 = -2
l31 = 3,  l31 u12 + l32 = 2,  l31 u13 + l32 u23 + l33 = -4

Solving, u12 = 1/4, u13 = 1/4, l22 = 15/4, u23 = -3/5, l32 = 5/4, l33 = -4. Thus

    [4    0   0]        [1  1/4  1/4]
L = [1  15/4  0],   U = [0   1  -3/5]
    [3   5/4 -4]        [0   0    1 ]

Now the system of equations LUX = B can be solved in two steps:

Let UX = Z       ...(2)
Then LZ = B      ...(3)

First, we find Z from eq. (3):

[4    0   0] [z1]   [4]
[1  15/4  0] [z2] = [4]
[3   5/4 -4] [z3]   [6]

z1 = 1
z2 = 4/5
z3 = -1/2

Now we find X by solving eq. (2):

[1  1/4  1/4] [x]   [  1 ]
[0   1  -3/5] [y] = [ 4/5]
[0   0    1 ] [z]   [-1/2]

By back substitution

z = -1/2
y = 1/2
x = 1

Ex: Solve the system of equations AX = B by Doolittle's method, where

    [2 4 -6]        [-4]
A = [1 5  3],   B = [10]
    [1 3  2]        [ 5]

Sol:
Decomposing the matrix A into L & U:

[2 4 -6]   [ 1    0   0] [u11 u12 u13]
[1 5  3] = [l21   1   0] [ 0  u22 u23]
[1 3  2]   [l31  l32  1] [ 0   0  u33]

Comparing the corresponding elements of the product with the elements of A,

    [ 1    0   0]        [2 4 -6]
L = [1/2   1   0],   U = [0 3  6]
    [1/2  1/3  1]        [0 0  3]

Solving LZ = B by forward substitution gives Z = (-4, 12, 3)^T, and back substitution in UX = Z gives

z = 1
y = 2
x = -3

Ex: Solve the system of equations AX = B by the Cholesky method, where

    [1  2  3]        [  5]
A = [2  8 22],   B = [  6]
    [3 22 82]        [-10]

Sol:
Decomposing the matrix A into L & L^T:

[1  2  3]   [l11   0   0 ] [l11 l21 l31]
[2  8 22] = [l21  l22  0 ] [ 0  l22 l32]
[3 22 82]   [l31  l32 l33] [ 0   0  l33]

Comparing the corresponding elements of the product with the elements of A:

l11^2 = 1                    -> l11 = 1
l21 l11 = 2                  -> l21 = 2
l31 l11 = 3                  -> l31 = 3
l21^2 + l22^2 = 8            -> l22 = 2
l31 l21 + l32 l22 = 22       -> l32 = 8
l31^2 + l32^2 + l33^2 = 82   -> l33 = 3

Thus

    [1 0 0]
L = [2 2 0]
    [3 8 3]

Now the system of equations L L^T X = B can be solved in two steps:

Let L^T X = Z       ...(2)
Then LZ = B         ...(3)

First, we find Z from eq. (3):

[1 0 0] [z1]   [  5]
[2 2 0] [z2] = [  6]
[3 8 3] [z3]   [-10]

z1 = 5
z2 = -2
z3 = -3

Now we find X by solving eq. (2):

[1 2 3] [x]   [ 5]
[0 2 8] [y] = [-2]
[0 0 3] [z]   [-3]

By back substitution

z = -1
y = 3
x = 2

Ex: Solve the system of equations AX = B by the Cholesky method, where

    [ 4 -1  0]        [1]
A = [-1  4 -1],   B = [0]
    [ 0 -1  4]        [0]

Sol:
Decompose the matrix A into L & L^T and proceed as above. ...
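The Cholesky procedure above can be sketched with NumPy's built-in factorization (np.linalg.cholesky returns the lower triangular L with A = L L^T). The test system is the first worked example; np.linalg.solve is used for the two triangular solves, which a tuned implementation would replace with dedicated forward/back substitution:

```python
import numpy as np

def cholesky_solve(A, B):
    """Solve AX = B for symmetric positive-definite A via A = L.L^T."""
    L = np.linalg.cholesky(A)          # lower triangular factor
    Z = np.linalg.solve(L, B)          # LZ = B   (forward substitution step)
    return np.linalg.solve(L.T, Z)     # L^T X = Z (back substitution step)

A = np.array([[1., 2., 3.], [2., 8., 22.], [3., 22., 82.]])
B = np.array([5., 6., -10.])
print(cholesky_solve(A, B))   # -> [ 2.  3. -1.]
```

Cholesky halves the work of a general LU factorization, but it applies only when A is symmetric and positive definite (which the example matrix is: its factor L = [[1,0,0],[2,2,0],[3,8,3]] has a positive diagonal).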
Iteration method:
A method in which we start from an initial guess of the solution X(0) (which may be poor) and compute step by step (in general better & better) approximations X(1), X(2), ... of the unknown solution of the problem.

Iteration method for a system of linear equations (AX = B):
If the coefficient matrix A of the given equations is diagonally dominant (i.e. |aii| >= sum over j != i of |aij|, for all i), then convergence to the exact solution is guaranteed.


(i) Gauss-Jacobi method:

It is a method of simultaneous corrections, because we use all components of the approximation X(m) in the next iteration X(m+1) simultaneously. We can explain this method simply in terms of an example.
Ex: Perform three iterations of the Gauss-Jacobi method for solving the system of equations

x + 5y + z = 14          ...(1)
4x + y + 3z = 17         ...(2)
2x - y + 8z = 12         ...(3)

Sol:
Rearrange the equations in such a way that all the diagonal terms are dominant:

4x + y + 3z = 17
x + 5y + z = 14
2x - y + 8z = 12

We solve the equations for the unknowns on the diagonal. That is,

x = (1/4)(17 - y - 3z)
y = (1/5)(14 - x - z)
z = (1/8)(12 - 2x + y)

If we assume the initial values of x, y & z to be 0, then the first iteration is

x1 = (1/4)(17 - 0 - 0) = 17/4
y1 = (1/5)(14 - 0 - 0) = 14/5
z1 = (1/8)(12 - 0 + 0) = 3/2

The iterative equations can be written as

x_(m+1) = (1/4)(17 - y_m - 3z_m)        ...(4)
y_(m+1) = (1/5)(14 - x_m - z_m)         ...(5)
z_(m+1) = (1/8)(12 - 2x_m + y_m)        ...(6)

Computing the second iteration by putting in the values of x1, y1 & z1, we get

x2 = (1/4)(17 - 14/5 - 9/2) = 97/40 = 2.425
y2 = (1/5)(14 - 17/4 - 3/2) = 33/20 = 1.65
z2 = (1/8)(12 - 17/2 + 14/5) = 63/80 = 0.7875

Computing the third iteration,

x3 = (1/4)(17 - 33/20 - 189/80) = 3.25
y3 = (1/5)(14 - 97/40 - 63/80) = 2.16
z3 = (1/8)(12 - 97/20 + 33/20) = 1.1

Repeating this process, we get the next iterations.

Writing the iterations in a table:

Iterations | Initial value |  I   |  II  |  III |  IV  | ... | exact solution
x          |       0       | 4.25 | 2.42 | 3.25 | 2.88 | ... | 3
y          |       0       | 2.8  | 1.65 | 2.16 | 1.93 | ... | 2
z          |       0       | 1.5  | 0.78 | 1.1  | 0.96 | ... | 1

Matrix form of the Gauss-Jacobi method:

Given a square system of n linear equations AX = B, the matrix A can be written as the sum of a diagonal component D and the remainder R:

A = D + R

Thus, the equation AX = B can be written as

(D + R)X = B
DX = B - RX
X = D^(-1)(B - RX)

The solution is then obtained iteratively via

X^(k+1) = D^(-1)(B - RX^(k))

where X^(k+1) is the (k+1)-th iteration (approximation) of the solution.
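The matrix form X^(k+1) = D^(-1)(B - RX^(k)) translates almost line-for-line into NumPy. A minimal sketch (function name mine; assumes a non-zero diagonal and, for guaranteed convergence, diagonal dominance), tested on the diagonally dominant system with exact solution (3, 2, 1):

```python
import numpy as np

def jacobi(A, B, x0=None, iters=50):
    """Gauss-Jacobi iteration X^(k+1) = D^{-1}(B - R X^(k))."""
    D = np.diag(A)                    # diagonal entries of A
    R = A - np.diagflat(D)            # remainder R = A - D
    X = np.zeros_like(B, dtype=float) if x0 is None else x0.astype(float)
    for _ in range(iters):
        X = (B - R @ X) / D           # elementwise division by the diagonal
    return X

A = np.array([[4., 1., 3.], [1., 5., 1.], [2., -1., 8.]])
B = np.array([17., 14., 12.])
print(jacobi(A, B))   # approx [3. 2. 1.]
```

Because every component of X^(k+1) depends only on X^(k), all n updates in one sweep can be done in parallel, which is the practical appeal of Jacobi over Gauss-Seidel.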
Ex: Perform four iterations of the Gauss-Jacobi method for solving the system of equations

x - 2y + 8z = 5          ...(1)
2x + 10y + z = 51        ...(2)
15x + 3y - 2z = 85       ...(3)

Sol: Rearranging the equations for diagonal dominance and iterating as above.

Writing the iterations in a table:

Iterations | Initial value |   I   |  II   |  III  |  IV   | ... | exact solution
x          |       0       | 5.667 | 4.73  | 5.045 | 4.994 | ... | 5
y          |       0       | 5.1   | 3.904 | 4.035 | 3.99  | ... | 4
z          |       0       | 0.625 | 1.192 | 1.010 | 1.003 | ... | 1

Note: The choice of initial value will not affect the solution (we get the same solution), but it will affect the number of iterations needed for convergence.
Ex: For the system of equations AX = B, where

    [4  1  3]        [17]
A = [1  5  1],   B = [14],
    [2 -1  8]        [12]

(i) set up the Gauss-Jacobi iteration scheme in matrix form;
(ii) starting with the initial approximation X^(0) = 0, iterate three times.

Sol: The given system of equations is diagonally dominant, hence the sequence of approximate solutions will converge.

Let A = D + R, where

    [4 0 0]        [0  1  3]
D = [0 5 0],   R = [1  0  1]
    [0 0 8]        [2 -1  0]

The Gauss-Jacobi iteration equation in matrix form is given by

X^(k+1) = D^(-1)(B - RX^(k))        ...(1)

with D^(-1) = diag(1/4, 1/5, 1/8).

Putting the initial approximation X^(0) = 0 in eq. (1), we get the first approximation (iteration)

X^(1) = D^(-1)B = (17/4, 14/5, 3/2)^T = (4.25, 2.8, 1.5)^T

Computing the second iteration using X^(1):

X^(2) = D^(-1)(B - RX^(1)) = D^(-1)(9.7, 8.25, 6.3)^T = (2.425, 1.65, 0.7875)^T

Computing the third iteration using X^(2):

X^(3) = D^(-1)(B - RX^(2)) = (3.25, 2.16, 1.10)^T
Ex: For the system of equations

x - 2y + 8z = 5          ...(1)
2x + 10y + z = 51        ...(2)
15x + 3y - 2z = 85       ...(3)

(i) set up the Gauss-Jacobi iteration scheme in matrix form;
(ii) starting with the initial approximation X^(0) = 0, iterate four times.

Sol:
Making the system of equations diagonally dominant:

15x + 3y - 2z = 85
2x + 10y + z = 51
x - 2y + 8z = 5

Writing in matrix form AX = B:

[15  3 -2] [x]   [85]
[ 2 10  1] [y] = [51]
[ 1 -2  8] [z]   [ 5]

Let A = D + R, where

    [15  0 0]        [0  3 -2]
D = [ 0 10 0],   R = [2  0  1]
    [ 0  0 8]        [1 -2  0]

The Gauss-Jacobi iteration equation in matrix form is given by

X^(k+1) = D^(-1)(B - RX^(k))        ...(4)

with D^(-1) = diag(1/15, 1/10, 1/8).

Putting the initial approximation X^(0) = 0 in eq. (4), we get the first approximation (iteration)

X^(1) = D^(-1)B = (85/15, 51/10, 5/8)^T = (5.667, 5.1, 0.625)^T

Computing the second iteration using X^(1):

X^(2) = D^(-1)(B - RX^(1)) = (4.73, 3.904, 1.192)^T

Computing the third iteration using X^(2):

X^(3) = D^(-1)(B - RX^(2)) = (5.045, 4.035, 1.010)^T
Ex: Perform four iterations of the Gauss-Jacobi method for solving the system of equations

3x + 2y + 6z = 7         ...(1)
x + 3y + z = 4           ...(2)
4x + 2y + z = 4          ...(3)

Sol:
Rearrange the equations in such a way that all the diagonal terms are dominant — take eqs (3), (2), (1) in that order — and then iterate as above. ...
Gauss-Seidel Method:
The iterations (approximations) of the Gauss-Seidel method converge at least two times faster than those of the Jacobi method.
Ex: Perform three iterations of the Gauss-Seidel method for solving the system of equations

8x + 2y - 2z = 8
x - 8y + 3z = -4
2x + y + 9z = 12

Sol:
The system of equations is diagonally dominant. Keeping the diagonal terms on the LHS and shifting the remaining terms to the RHS:

x = (1/8)(0 - 2y + 2z + 8)        ...(1)
y = (1/8)(x + 0 + 3z + 4)         ...(2)
z = (1/9)(-2x - y + 0 + 12)       ...(3)

Let the initial approximation be x0 = y0 = z0 = 0.

Iteration 1: The first improvements x1, y1, z1 are obtained as

x1 = (1/8)(0 + 0 + 0 + 8) = 1
y1 = (1/8)(1 + 0 + 0 + 4) = 5/8
z1 = (1/9)(-2 - 5/8 + 0 + 12) = 75/72

x1, y1, z1 are obtained from eqs (1), (2), (3) by substituting on the RHS the most recent approximation of each unknown. Thus the Gauss-Seidel iteration equations can be written as

x_(m+1) = (1/8)(0 - 2y_m + 2z_m + 8)               ...(4)
y_(m+1) = (1/8)(x_(m+1) + 0 + 3z_m + 4)            ...(5)
z_(m+1) = (1/9)(-2x_(m+1) - y_(m+1) + 0 + 12)      ...(6)

Iteration 2: The second improvements x2, y2, z2 are obtained as

x2 = (1/8)(0 - 5/4 + 25/12 + 8) = 53/48 = 1.104
y2 = (1/8)(53/48 + 25/8 + 4) = 395/384 = 1.029
z2 = (1/9)(-53/24 - 395/384 + 12) = 3365/3456 = 0.974

Iteration 3: The third improvements x3, y3, z3 are obtained as

x3 = (1/8)(0 - 2(1.029) + 2(0.974) + 8) = 0.986
y3 = (1/8)(0.986 + 0 + 3(0.974) + 4) = 0.989
z3 = (1/9)(-2(0.986) - 0.989 + 0 + 12) = 1.004
Writing the iterations in a table:

Iterations | Initial value |   I   |  II   |  III  |  IV   | ... | exact solution
x          |       0       | 1     | 1.104 | 0.986 | 1.004 | ... | 1
y          |       0       | 0.625 | 1.029 | 0.989 | 1.002 | ... | 1
z          |       0       | 1.042 | 0.974 | 1.004 | 0.999 | ... | 1

Note: The Gauss-Seidel iteration method is a method of successive corrections, because we replace the approximations by the corresponding new values as soon as the latter have been computed.
Ex: Perform three iterations of the Gauss-Seidel method for solving the system of equations

x - 2y + 4z = 5          ...(1)
x - 4y + 2z = 1          ...(2)
4x - y + z = 12          ...(3)

Sol: Rearranging the equations for diagonal dominance (eqs (3), (2), (1)), the Gauss-Seidel update equations are

x = (1/4)(12 + y - z)
y = (1/4)(x + 2z - 1)
z = (1/4)(5 - x + 2y)

Writing the iterations in a table:

Iterations | Initial value |  I   |  II   |  III  | ... | exact solution
x          |       0       | 3    | 2.937 | 2.978 | ... | 3
y          |       0       | 0.5  | 0.859 | 0.967 | ... | 1
z          |       0       | 0.75 | 0.945 | 0.989 | ... | 1

Matrix form of the Gauss-Seidel Method:

Given a square system of linear equations AX = B, the matrix A can be written as the sum of a diagonal component D, a lower triangular matrix L and an upper triangular matrix U:

A = L + D + U

    [ 0   0  ... 0]        [a11  0  ...  0 ]        [0 a12 ... a1n]
L = [a21  0  ... 0],   D = [ 0  a22 ...  0 ],   U = [0  0  ... a2n]
    [ ...         ]        [ ...           ]        [ ...         ]
    [an1 an2 ... 0]        [ 0   0  ... ann]        [0  0  ...  0 ]

Thus, the equation AX = B can be written as

(D + L + U)X = B
DX = -LX - UX + B
X = D^(-1)(-LX - UX + B)

The Gauss-Seidel iteration equation can be written as

X^(k+1) = D^(-1)(-LX^(k+1) - UX^(k) + B)

where X^(k+1) is the (k+1)-th iteration (approximation) of the solution.
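Although X^(k+1) appears on both sides of the update, sweeping through the equations top to bottom resolves it: by the time row i is updated, rows 0..i-1 already hold iteration k+1 values. A minimal Python sketch (function name mine; assumes a non-zero diagonal), tested on the worked Gauss-Seidel system with exact solution (1, 1, 1):

```python
import numpy as np

def gauss_seidel(A, B, iters=25):
    """Gauss-Seidel: sweep the equations in order, using updated values at once."""
    n = len(B)
    X = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # X[:i] already holds iteration k+1 values, X[i+1:] iteration k values
            X[i] = (B[i] - A[i, :i] @ X[:i] - A[i, i + 1:] @ X[i + 1:]) / A[i, i]
    return X

A = np.array([[8., 2., -2.], [1., -8., 3.], [2., 1., 9.]])
B = np.array([8., -4., 12.])
print(gauss_seidel(A, B))   # approx [1. 1. 1.]
```

The in-place update is exactly the "successive corrections" idea of the note above: each new component is used immediately in the same sweep.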
Eigenvalue problem:
Let A be an n x n matrix. A non-zero vector X (column matrix) is said to be an eigenvector of the matrix A if

AX = lambda X        (1)

That is, X is a solution of the equation

(A - lambda I)X = 0        (2)

where the unknown scalar lambda is called an eigenvalue.

X = 0 is always a solution of the homogeneous equation (2). A non-zero solution (X != 0) can be obtained only if the coefficient matrix (A - lambda I) of X is singular. That is,

|A - lambda I| = 0        ...(3)

Expanding, we get

(-1)^n lambda^n + c1 lambda^(n-1) + ... + cn = 0        ...(4)

The LHS is a polynomial of degree n in lambda, called the characteristic polynomial of A and denoted by p(lambda). The characteristic equation (3) may have at most n distinct roots (eigenvalues) lambda1, lambda2, ..., lambdan. The set of all eigenvalues {lambda1, lambda2, ..., lambdan} is called the spectrum of A. The spectral radius of the matrix A is defined by rho(A) = max{|lambda1|, |lambda2|, ..., |lambdan|}.

Ex: Find the eigenvalues and corresponding eigenvectors of the diagonal matrix

    [4 0 0]
A = [0 1 0]
    [0 0 5]

Sol: The characteristic equation of the matrix A is

|A - lambda I| = 0

|4-lambda    0        0    |
|   0     1-lambda    0    | = 0
|   0        0     5-lambda|

or

(4 - lambda)(1 - lambda)(5 - lambda) = 0   ->   lambda = 4, 1, 5

Spectral radius rho(A) = max{4, 1, 5} = 5.

The eigenvector X1 corresponding to lambda = 4 can be obtained by solving the homogeneous equation (A - 4I)X = 0:

[0  0 0] [x1]   [0]
[0 -3 0] [x2] = [0]
[0  0 1] [x3]   [0]

By back substitution

1.x3 = 0  -> x3 = 0
-3.x2 = 0 -> x2 = 0
0.x1 + 0.x2 + 0.x3 = 0  -> 0.x1 = 0        (1)

For any constant value x1 = c != 0, eq. (1) is satisfied, so x1 = c is a solution. Thus

     [c]     [1]
X1 = [0] = c [0]
     [0]     [0]

is an eigenvector corresponding to lambda = 4.
The eigenvector X2 corresponding to lambda = 1 can be obtained by solving the homogeneous equation (A - I)X = 0:

[3 0 0] [x1]   [0]
[0 0 0] [x2] = [0]
[0 0 4] [x3]   [0]

Solving the matrix equation, we get

3.x1 = 0 -> x1 = 0
4.x3 = 0 -> x3 = 0
0.x1 + 0.x2 + 0.x3 = 0 -> 0.x2 = 0        (2)

For any constant value x2 = c != 0, eq. (2) is satisfied, so x2 = c is a solution. Thus

     [0]     [0]
X2 = [c] = c [1]
     [0]     [0]

is an eigenvector corresponding to lambda = 1.

Similarly,

     [0]     [0]
X3 = [0] = c [0]
     [c]     [1]

is an eigenvector corresponding to lambda = 5.

Observations:
(i) The eigenvalues of a diagonal matrix A are the diagonal elements of A.
(ii) If lambda1, lambda2, lambda3 are distinct, then the corresponding eigenvectors X1, X2, X3 will be linearly independent.
(iii) AX = lambda X implies A(cX) = lambda (cX), i.e. if X is an eigenvector corresponding to lambda, then cX will also be an eigenvector of A corresponding to lambda.
Ex: Find the eigenvalues and corresponding eigenvectors of the triangular matrix

    [3 1 4]
A = [0 2 6]
    [0 0 5]

Sol: The characteristic equation of the matrix A is |A - lambda I| = 0, which gives the eigenvalues lambda1 = 3, lambda2 = 2, lambda3 = 5 (the diagonal elements). The corresponding eigenvectors of A are

     [1]        [ 1]        [3]
X1 = [0],  X2 = [-1],  X3 = [2]
     [0]        [ 0]        [1]
Eigenvalues by iteration (Power method):

The power method provides the absolutely largest eigenvalue lambda and the corresponding eigenvector X of a matrix A. In this method we start from an initial non-zero vector X0 and compute successively

X1 = AX0,  X2 = AX1, ...,  X_(s+1) = AX_s

Let

m0 = X_s^T X_s,   m1 = X_s^T X_(s+1),   m2 = X_(s+1)^T X_(s+1)

Then the largest eigenvalue is approximated by the quotient q = m1/m0, and the error delta in the eigenvalue satisfies

|lambda - q| <= delta = sqrt(m2/m0 - q^2)

The eigenvector corresponding to lambda is approximated by X_(s+1).

Ex: Find the absolutely largest eigenvalue and corresponding eigenvector of the matrix

    [2 0 1]
A = [1 1 1]
    [2 2 0]

Find the error in the eigenvalue at the 9th iteration.

Sol:
Choose a non-zero initial vector

     [1]
X0 = [1]
     [1]

and compute successively X1 = AX0, X2 = AX1, .... At the 8th and 9th iterations one obtains

     [124]               [252]
X8 = [126],   X9 = AX8 = [254]
     [  4]               [  4]

Now we calculate

m0 = X8^T X8 = 124^2 + 126^2 + 4^2 = 31268
m1 = X8^T X9 = 124(252) + 126(254) + 4(4) = 63268
m2 = X9^T X9 = 252^2 + 254^2 + 4^2 = 128036

Then the largest eigenvalue at the 9th iteration is

lambda ~ q = m1/m0 = 63268/31268 = 2.0234

The eigenvector corresponding to lambda is X9 = (252, 254, 4)^T, and the error delta in the eigenvalue is

delta = sqrt(m2/m0 - q^2) = sqrt(128036/31268 - (2.0234)^2) ~ 0.0246
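The quotients m0, m1, m2 of the text can be carried along at every step of the iteration. A minimal sketch (function name mine; no normalisation, so very many iterations would overflow), tested on a symmetric matrix whose dominant eigenvalue is exactly 5 with eigenvector (1, 1, 1)^T, as in the next worked example:

```python
import numpy as np

def power_method(A, x0, iters):
    """Power iteration with the quotient q = m1/m0 as the eigenvalue estimate
    and delta = sqrt(m2/m0 - q^2) as the error bound |lambda - q| <= delta."""
    x = x0.astype(float)
    for _ in range(iters):
        x_next = A @ x
        m0, m1, m2 = x @ x, x @ x_next, x_next @ x_next
        q = m1 / m0                                   # eigenvalue estimate
        delta = np.sqrt(max(m2 / m0 - q * q, 0.0))    # error bound
        x = x_next
    return q, delta, x

A = np.array([[2., 2., 1.], [1., 3., 1.], [1., 2., 2.]])
q, delta, x = power_method(A, np.array([1., 1., 1.]), iters=7)
print(q, delta)   # -> 5.0 0.0
```

Here delta is zero because the start vector is already an exact eigenvector; in general (as in the worked example above) delta shrinks as the iterates align with the dominant eigenvector.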
Ex: Find the absolutely largest eigenvalue and corresponding eigenvector of the matrix

    [2 2 1]
A = [1 3 1]
    [1 2 2]

Find the error in the eigenvalue at the 7th iteration.

Sol:
Choose the non-zero initial vector X0 = (1, 1, 1)^T. Then AX0 = (5, 5, 5)^T = 5 X0, so every iterate stays proportional to (1, 1, 1)^T:

     [15625]        [78125]
X6 = [15625],  X7 = [78125]
     [15625]        [78125]

Hence lambda = 5 with eigenvector (1, 1, 1)^T, and the error |delta| = 0.
Determination of the smallest eigenvalue:
If AX = lambda X, then A^(-1)X = (1/lambda)X. Thus, if lambda is an eigenvalue of A, then the reciprocal 1/lambda is an eigenvalue of A^(-1).

Procedure:
Step 1: Determine A^(-1).
Step 2: Calculate the largest eigenvalue lambda* of A^(-1) (by the power method).
Step 3: 1/lambda* is the smallest eigenvalue of A.

Ex: Find the absolutely smallest eigenvalue of the matrix

    [ 2 -1  0]
A = [-1  2 -1]
    [ 0 -1  2]

by the power method at the 6th iteration.

Sol:
The largest eigenvalue of B = A^(-1) gives the smallest eigenvalue of A. Computing

                   [3 2 1]
B = A^(-1) = (1/4) [2 4 2]
                   [1 2 3]

we shall find the largest eigenvalue of B. Choose the initial vector X0 = (1, 1, 1)^T. Then

X1 = BX0 = (1/4)(6, 8, 6)^T = (3/2, 2, 3/2)^T
X2 = BX1 = (1/4)(10, 14, 10)^T = (5/2, 7/2, 5/2)^T
...
X5 = (1/16)(198, 280, 198)^T,   X6 = BX5 = (1/8)(169, 239, 169)^T

Then the largest eigenvalue of B is approximated by q = m1/m0, where

m0 = X5^T X5 = (1/256)(198^2 + 280^2 + 198^2) = 39202/64
m1 = X5^T X6 = (1/128)(198(169) + 280(239) + 198(169)) = 66922/64

Thus

q = m1/m0 = 66922/39202 = 1.7071

Hence the smallest eigenvalue of A will be

1/q = 1/1.7071 = 0.5858
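The three-step procedure can be sketched directly (function name mine; the iterate is normalised each step, which the hand computation avoids by carrying fractions). The exact smallest eigenvalue of this tridiagonal matrix is 2 - sqrt(2) = 0.5858, matching the result above:

```python
import numpy as np

def smallest_eigenvalue(A, iters=20):
    """Power method applied to B = A^{-1}: if q is the largest eigenvalue of B,
    then 1/q is the absolutely smallest eigenvalue of A."""
    B = np.linalg.inv(A)
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x_next = B @ x
        q = (x @ x_next) / (x @ x)            # quotient estimate for B
        x = x_next / np.linalg.norm(x_next)   # normalise to avoid overflow
    return 1.0 / q

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
print(smallest_eigenvalue(A))   # approx 0.5858 (exact value 2 - sqrt(2))
```

For large matrices one would avoid forming A^{-1} explicitly and instead solve A x_next = x at each step (inverse iteration), which gives the same sequence at lower cost.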

Similar matrices:
A matrix B is said to be similar to A if there exists a non-singular matrix P such that B = P^(-1)AP. This transformation of A to B is known as a similarity transformation.

Invariance of eigenvalues:
1. Similar matrices A and B have the same eigenvalues.
2. Further, if X is an eigenvector of A, then P^(-1)X is an eigenvector of the matrix B.

Proof (1): Suppose B is similar to A, i.e. B = P^(-1)AP. Consider the characteristic polynomial of B:

|B - lambda I| = |P^(-1)AP - lambda I| = |P^(-1)(A - lambda I)P| = |P^(-1)| |A - lambda I| |P| = |A - lambda I|

since |P^(-1)| |P| = |P^(-1)P| = |I| = 1. Thus A and B have the same characteristic polynomial and therefore have the same eigenvalues.

Proof (2): Let X be an eigenvector of A. Then AX = lambda X. Consider B = P^(-1)AP and post-multiply by P^(-1)X:

B(P^(-1)X) = P^(-1)AP(P^(-1)X) = P^(-1)AX = P^(-1)(lambda X) = lambda (P^(-1)X)

Thus P^(-1)X is an eigenvector of B corresponding to the eigenvalue lambda.
Orthogonal matrix:
A matrix is said to be orthogonal if its column vectors are mutually perpendicular unit vectors (i.e. the dot product of any two distinct columns is zero and each column has unit length). If A is orthogonal then

A^T A = I,   or   A^(-1) = A^T

Consider the matrix

    [cos t  0  -sin t]
P = [  0    1     0  ]
    [sin t  0   cos t]

This is an orthogonal matrix.

Jacobi's method:
In this method, a real symmetric matrix A is transformed into a diagonal matrix D by similarity transformations using orthogonal matrices P_r, r = 1, 2, ...:

D = ... P2^(-1) P1^(-1) A P1 P2 ...

This is called the Jacobi diagonalization process.

First we make zero the largest non-diagonal element a_ij of the matrix A by a similarity transformation using an orthogonal rotation matrix P; e.g. for (i, j) = (1, 3),

    [cos t  0  -sin t]
P = [  0    1     0  ]
    [sin t  0   cos t]

where

t = (1/2) tan^(-1)( 2a_ij / (a_ii - a_jj) )   if a_ii != a_jj
t = pi/4                                       if a_ii = a_jj
Ex: Using Jacobi's method, find all the eigenvalues and the eigenvectors of the matrix

    [  1   sqrt2   2  ]
A = [sqrt2   3   sqrt2]
    [  2   sqrt2   1  ]

Sol:
We shall perform Jacobi transformations. First we eliminate the largest non-diagonal element of A, which is a13 = a31 = 2, by constructing the orthogonal matrix P1 as follows.

Since a11 = a33 = 1, we have t = pi/4. Thus

     [1/sqrt2  0  -1/sqrt2]
P1 = [   0     1     0    ]
     [1/sqrt2  0   1/sqrt2]

First similarity transformation of A:

Let B = P1^(-1) A P1 = P1^T A P1. Then

    [3  2  0]
B = [2  3  0]
    [0  0 -1]

Now the largest non-diagonal element of B is b12 = b21 = 2, with b11 = b22 = 3, hence again t = pi/4 and the orthogonal matrix P2 is

     [1/sqrt2  -1/sqrt2  0]
P2 = [1/sqrt2   1/sqrt2  0]
     [   0        0      1]

Then

                [5 0  0]
D = P2^T B P2 = [0 1  0]
                [0 0 -1]

Thus D = P2^(-1) P1^(-1) A P1 P2 = P^(-1) A P, where P = P1 P2.

The eigenvalues of D are the same as those of A, i.e. lambda1 = 5, lambda2 = 1, lambda3 = -1 are the eigenvalues of A.

The column vectors of the matrix P = P1 P2 are eigenvectors of A:

           [ 1/2     -1/2    -1/sqrt2]
P = P1P2 = [1/sqrt2  1/sqrt2     0   ]
           [ 1/2     -1/2     1/sqrt2]

That is,

     [  1/2  ]       [ -1/2  ]       [-1/sqrt2]
X1 = [1/sqrt2],  X2 = [1/sqrt2],  X3 = [   0   ]
     [  1/2  ]       [ -1/2  ]       [1/sqrt2 ]

are eigenvectors of A corresponding to lambda1 = 5, lambda2 = 1, lambda3 = -1, respectively.
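The rotation step can be sketched in Python. This is a minimal illustration (function name and sweep limit are mine): each pass locates the largest off-diagonal element a_pq, builds the plane rotation with tan 2t = 2a_pq/(a_pp - a_qq) (t = pi/4 when the diagonal entries are equal), and applies P^T A P. The test matrix is symmetric with eigenvalues 5, 1, -1:

```python
import numpy as np

def jacobi_eigen(A, sweeps=10):
    """Jacobi diagonalisation of a symmetric matrix by plane rotations.
    Returns (eigenvalues, matrix whose columns are eigenvectors)."""
    A = A.astype(float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps * n * n):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = divmod(np.argmax(off), n)     # largest off-diagonal element a_pq
        if off[p, q] < 1e-12:
            break                            # effectively diagonal: stop
        # rotation angle: tan 2t = 2 a_pq / (a_pp - a_qq); t = pi/4 if equal
        t = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
        P = np.eye(n)
        c, s = np.cos(t), np.sin(t)
        P[p, p] = P[q, q] = c
        P[p, q], P[q, p] = -s, s
        A = P.T @ A @ P                      # similarity transformation
        V = V @ P                            # accumulate the eigenvector matrix
    return np.diag(A), V

A = np.array([[1., np.sqrt(2), 2.],
              [np.sqrt(2), 3., np.sqrt(2)],
              [2., np.sqrt(2), 1.]])
vals, vecs = jacobi_eigen(A)
print(np.sort(vals))   # approx [-1.  1.  5.]
```

Zeroing one element generally disturbs previously zeroed ones, so unlike the textbook 3 x 3 case a general matrix needs repeated sweeps; the off-diagonal mass nevertheless shrinks at every rotation, which is why the process converges.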

Theorem: Let D be a diagonal matrix obtained by a similarity transformation of the matrix A using a non-singular matrix P, that is, D = P^(-1)AP. Then
(1) the eigenvalues of D and A are the same;
(2) the column vectors of the non-singular matrix P are the eigenvectors of the matrix A.

Ex: Using Jacobi's method, find all the eigenvalues and the eigenvectors of the matrix

    [1 2 4]
A = [2 5 2]
    [4 2 1]

Sol: The largest non-diagonal element is a13 = a31 = 4 with a11 = a33 = 1, so t = pi/4 and the first rotation gives

                [   5     2sqrt2  0]
B = P1^T A P1 = [2sqrt2     5     0]
                [   0       0    -3]

A second rotation with t = pi/4 in the (1, 2)-plane then gives

D = diag(5 + 2sqrt2, 5 - 2sqrt2, -3)

Thus the eigenvalues of A are lambda1 = 5 + 2sqrt2, lambda2 = 5 - 2sqrt2, lambda3 = -3, and the corresponding eigenvectors (the columns of P = P1P2) are

     [  1/2  ]       [ -1/2  ]       [-1/sqrt2]
X1 = [1/sqrt2],  X2 = [1/sqrt2],  X3 = [   0   ]
     [  1/2  ]       [ -1/2  ]       [1/sqrt2 ]

Tridiagonal Matrix:

A matrix having all its nonzero entries on the main diagonal and in the positions immediately adjacent to the main diagonal is called a tridiagonal matrix:

    [a11 a12  0   0 ]
A = [a21 a22 a23  0 ]
    [ 0  a32 a33 a34]
    [ 0   0  a43 a44]

Householder's Tridiagonalization method:

This method is used to reduce a symmetric matrix A to a tridiagonal matrix B by similarity transformations using Householder matrices H. A matrix A of order n x n needs (n-2) transformations to become tridiagonal, i.e.

B = H_(n-2) ... H2 H1 A H1 H2 ... H_(n-2)

Formation of H1:
The Householder matrix H is an orthogonal symmetric matrix given by

H = I - 2VV^T

where V is a column matrix of the form

V = (0, v2, v3, ..., vn)^T,   with V^T V = v2^2 + v3^2 + ... + vn^2 = 1

The elements v2, v3, ..., vn of the vector V are given by

v2 = sqrt[ (1/2)(1 + |a21|/S) ]

vi = ( a_i1 . sign(a21) ) / ( 2 v2 S ),   i = 3, 4, ..., n

where

S = sqrt( a21^2 + a31^2 + ... + a_n1^2 )

All elements a_i1 are taken from the 1st column of the matrix A.

Ex: Reduce the following matrix to tridiagonal form using the Householder method:

    [1 4 3]
A = [4 1 2]
    [3 2 1]

Sol: Constructing the Householder matrix H1 = I - 2VV^T.

Here S = sqrt(a21^2 + a31^2) = sqrt(16 + 9) = 5, and

v2 = sqrt[(1/2)(1 + 4/5)] = 3/sqrt10,   v3 = 3(1)/(2(3/sqrt10)(5)) = 1/sqrt10

     [   0    ]
V  = [3/sqrt10]
     [1/sqrt10]

Now

     [1    0     0 ]
H1 = [0  -4/5  -3/5]
     [0  -3/5   4/5]

and

              [ 1    -5      0   ]
B = H1 A H1 = [-5   73/25  -14/25]
              [ 0  -14/25  -23/25]

Note: Formation of H2:

Let A^(1) = H1 A H1. Then we construct H2 = I - 2VV^T by the same procedure, using the entries of the new matrix A^(1), with

V = (0, 0, v3, v4, ..., vn)^T,   V^T V = 1

The elements of the vector V are now given by

v3 = sqrt[ (1/2)(1 + |a32^(1)|/S) ],   vi = ( a_i2^(1) . sign(a32^(1)) ) / ( 2 v3 S ),   i = 4, 5, ..., n

where S = sqrt( (a32^(1))^2 + (a42^(1))^2 + ... + (a_n2^(1))^2 ); all elements a_i2^(1) are taken from the 2nd column of the matrix A^(1).
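The column-by-column construction of V generalises directly: at step k the formulas above are applied to column k of the current matrix. A minimal Python sketch (function name mine; it forms H explicitly, which a tuned implementation would avoid), checked against the 3 x 3 worked example, where B = [[1, -5, 0], [-5, 73/25, -14/25], [0, -14/25, -23/25]]:

```python
import numpy as np

def householder_tridiagonal(A):
    """Reduce a symmetric matrix to tridiagonal form by (n-2) similarity
    transformations B = H A H with H = I - 2 V V^T."""
    A = A.astype(float)
    n = A.shape[0]
    for k in range(n - 2):
        col = A[k + 1:, k]                    # entries below the sub-diagonal
        S = np.linalg.norm(col)
        if S == 0:
            continue                          # column already tridiagonal
        V = np.zeros(n)
        sign = 1.0 if col[0] >= 0 else -1.0
        V[k + 1] = np.sqrt(0.5 * (1 + abs(col[0]) / S))
        V[k + 2:] = sign * A[k + 2:, k] / (2 * V[k + 1] * S)
        H = np.eye(n) - 2 * np.outer(V, V)    # orthogonal and symmetric
        A = H @ A @ H
    return A

A = np.array([[1., 4., 3.], [4., 1., 2.], [3., 2., 1.]])
B = householder_tridiagonal(A)
print(np.round(B, 4))   # entries outside the tridiagonal band vanish
```

Because each H is orthogonal and symmetric, B is similar to A and so has the same eigenvalues; tridiagonalization is therefore the usual preprocessing step before an eigenvalue iteration.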

Ex: Use the Householder method to reduce the given symmetric matrix A into tridiagonal form.

Ex: Use the Householder method to reduce the given matrix A into tridiagonal form:

    [4 1 2 2]
A = [1 4 1 2]
    [2 1 4 1]
    [2 2 1 4]