
Based on Strang's Introduction to Applied Mathematics

Theory of Iterative Methods


The Iterative Idea
To solve Ax = b, write

    M x^(k+1) = (M − A) x^(k) + b,   k = 0, 1, 2, . . .

Then the error e^(k) ≡ x^(k) − x satisfies

    M e^(k+1) = (M − A) e^(k),   i.e.   e^(k+1) = B e^(k),

where the iteration matrix B = M^{-1}(M − A). Now

    e^(k) = B^k e^(0) → 0 as k → ∞   iff   ρ < 1,

where ρ = maximum of |eigenvalues of B| is the spectral radius of B.
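A minimal numerical sketch of this splitting idea, in Python with NumPy (the 3 × 3 matrix, right-hand side, and the diagonal choice of M are illustrative assumptions, not from the notes):

    import numpy as np

    # Illustrative system; any splitting with an easily inverted M works the same way.
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])

    M = np.diag(np.diag(A))                      # a simple (diagonal) choice of M
    B = np.linalg.solve(M, M - A)                # iteration matrix B = M^{-1}(M - A)
    rho = max(abs(np.linalg.eigvals(B)))
    print("spectral radius:", rho)               # here rho < 1, so the iteration converges

    x = np.zeros_like(b)
    for k in range(50):
        x = np.linalg.solve(M, (M - A) @ x + b)  # M x^(k+1) = (M - A) x^(k) + b
    print("error norm:", np.linalg.norm(x - np.linalg.solve(A, b)))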

Convergence of Iterative Methods


Convergence. Does x^(k) → x? If x^(k) → x^∞, then

    M x^∞ = (M − A) x^∞ + b

and A x^∞ = b. The rate of convergence is governed by powers of B =
M^{-1}(M − A). Since e^(k) = B^k e^(0), e^(k) → 0 and x^(k) → x (convergence)
iff B^k → 0 (stability).

Theorem. B^k → 0 iff every eigenvalue of B satisfies |λ_i| < 1. The rate of
convergence is governed by the spectral radius of B:

    ρ = max_i |λ_i|.

Proof. Expand the initial error in terms of the eigenvectors of B (assuming
a complete set of eigenvectors):

    e^(0) = c_1 v_1 + c_2 v_2 + ⋯ + c_n v_n,

where B v_i = λ_i v_i. Then the error at iteration k is

    e^(k) = B^k e^(0) = c_1 λ_1^k v_1 + c_2 λ_2^k v_2 + ⋯ + c_n λ_n^k v_n → 0   iff   |λ_i| < 1, i = 1, . . . , n,

and

    ‖e^(k)‖ ≈ ‖c_j λ_j^k v_j‖   as k → ∞,

where λ_j is the eigenvalue of largest magnitude, so the error eventually decays like ρ^k.

In matrix form, B = S Λ S^{-1} and

    B^k = (S Λ S^{-1})(S Λ S^{-1}) ⋯ (S Λ S^{-1}) = S Λ^k S^{-1} → 0   iff   |λ_i| < 1, i = 1, . . . , n,

where Λ = diag{λ_1, λ_2, . . . , λ_n}. If B does not have a complete set of
eigenvectors, the matrix form of the proof simply involves the Jordan form
J of B instead of Λ.
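A small numerical check of this diagonalization argument (a NumPy sketch; the random 4 × 4 test matrix is an assumption, rescaled so that ρ = 0.9):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    B *= 0.9 / max(abs(np.linalg.eigvals(B)))     # rescale so the spectral radius is 0.9

    lam, S = np.linalg.eig(B)                     # B = S Lambda S^{-1} (complete eigenvector set)
    k = 30
    Bk_eig = (S * lam**k) @ np.linalg.inv(S)      # S Lambda^k S^{-1}
    print(np.allclose(np.linalg.matrix_power(B, k), Bk_eig))   # True

    e = rng.standard_normal(4)
    for _ in range(k):
        e = B @ e                                 # e^(k) = B^k e^(0)
    print(np.linalg.norm(e), 0.9**k)              # the error norm decays roughly like rho^k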

Choice of M
- M x^(k+1) = (M − A) x^(k) + b should be easy to solve. Diagonal or lower
  triangular M's are good choices.
- M should be close to A, so that the eigenvalues of B = M^{-1}(M − A) =
  I − M^{-1}A are as small in magnitude as possible (they must lie inside the
  unit circle in the complex plane).
- For A = L + D + U, Jacobi takes M = D while Gauss-Seidel takes M = D + L.
  SOR (successive over-relaxation) introduces a relaxation factor 1 < ω < 2
  into Gauss-Seidel, which is adjusted to make the spectral radius as small as
  possible.
For a wide class of finite-difference matrices, Young's formula relates the
eigenvalues μ of Jacobi to the eigenvalues λ of SOR:

    (λ + ω − 1)² = λ ω² μ².

Minimizing ρ_SOR = max{|λ|} = ω − 1 (using the quadratic formula) gives

    ω_opt = 2 (1 − √(1 − ρ_J²)) / ρ_J²,   ρ_SOR = ω_opt − 1.

For the model Laplace problem on an (N + 1) × (N + 1) grid with h = 1/(N + 1),

    ρ_J = cos(πh),   ρ_GS = ρ_J²,   ρ_SOR = (1 − sin πh) / (1 + sin πh).
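These formulas are easy to check numerically. A NumPy sketch (the grid size N and the helper name omega_opt are my own choices; the final identity follows from ρ_J = cos(πh)):

    import numpy as np

    def omega_opt(rho_jacobi):
        # Young's formula: optimal relaxation factor from the Jacobi spectral radius.
        return 2.0 * (1.0 - np.sqrt(1.0 - rho_jacobi**2)) / rho_jacobi**2

    N = 31
    h = 1.0 / (N + 1)
    rho_J = np.cos(np.pi * h)
    rho_GS = rho_J**2
    rho_SOR = omega_opt(rho_J) - 1.0
    print(rho_J, rho_GS, rho_SOR)
    print(np.isclose(rho_SOR, (1 - np.sin(np.pi * h)) / (1 + np.sin(np.pi * h))))   # True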

Model Laplace Problem Spectral Radii


To show that the Jacobi spectral radius ρ_J = cos(πh) for Laplace's equation
on the unit square with second-order accurate central differences, first consider
a 1D problem. In 1D, set A = tridiag[−1 2 −1]. Then the iteration matrix is
B = (1/2) tridiag[1 0 1]. Then show that Bv = cos(πh) v, where the 1D
eigenvector is

    v = [sin(πh), sin(2πh), ⋯ , sin(Nπh)].

Note that here h = 1/(N + 1). The other eigenvectors of B replace π with 2π,
3π, . . . , Nπ in v, with eigenvalues cos(2πh), cos(3πh), . . . , cos(Nπh).
In 2D, the eigenvector is v = [sin(iπh) sin(jπh)] =

    [sin(πh) sin(πh), sin(πh) sin(2πh), ⋯ , sin(πh) sin(Nπh),
     sin(2πh) sin(πh), ⋯ , sin(2πh) sin(Nπh), ⋯ , sin(Nπh) sin(Nπh)]

with the same eigenvalue (the 2D mode (k, l) has eigenvalue ½(cos kπh + cos lπh),
largest for k = l = 1), so ρ_J = cos(πh).

Using ρ_J in Young's formula yields the SOR spectral radius

    ρ_SOR = (1 − sin πh) / (1 + sin πh).
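The 1D claim can be verified directly. A NumPy sketch (N is an arbitrary illustrative size):

    import numpy as np

    N = 10
    h = 1.0 / (N + 1)
    B = 0.5 * (np.eye(N, k=1) + np.eye(N, k=-1))   # Jacobi matrix (1/2) tridiag[1 0 1]

    k = np.arange(1, N + 1)
    v = np.sin(np.pi * h * k)                      # eigenvector for the largest eigenvalue
    print(np.allclose(B @ v, np.cos(np.pi * h) * v))           # True

    eigs = np.sort(np.linalg.eigvalsh(B))
    print(np.allclose(eigs, np.sort(np.cos(np.pi * h * k))))   # eigenvalues cos(k*pi*h), k = 1..N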

Iteration Matrices
Solve Ax = b iteratively. Decompose A = L + D + U and define the residual
r^(k) = A x^(k) − b.

Jacobi Iteration: M = D

    D x^(k+1) = −(L + U) x^(k) + b = D x^(k) − r^(k)
    x^(k+1) = x^(k) − D^{-1} r^(k)
    B_J = −D^{-1}(L + U)

Gauss-Seidel Iteration: M = D + L

    (D + L) x^(k+1) = −U x^(k) + b
    D x^(k+1) = D x^(k) − (L x^(k+1) + D x^(k) + U x^(k) − b) ≡ D x^(k) − r^(k)
    x^(k+1) = x^(k) − D^{-1} r^(k)
    B_GS = −(L + D)^{-1} U

(For Gauss-Seidel the residual r^(k) = L x^(k+1) + D x^(k) + U x^(k) − b is
evaluated with the most recently updated components, so each component of
x^(k+1) can be computed in turn.)
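A componentwise sketch of one Jacobi sweep and one Gauss-Seidel sweep in NumPy (the function names and the diagonally dominant test system are illustrative assumptions):

    import numpy as np

    def jacobi_sweep(A, x, b):
        # One Jacobi update: x_new = x - D^{-1}(Ax - b); all components use old values.
        return x - (A @ x - b) / np.diag(A)

    def gauss_seidel_sweep(A, x, b):
        # One Gauss-Seidel update: each component uses the freshest available values.
        x = x.copy()
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = np.zeros(3)
    for _ in range(25):
        x = gauss_seidel_sweep(A, x, b)
    print(np.allclose(A @ x, b))    # True: the iteration has converged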

SOR/SUR Iteration: M = D/ω + L

    x^(k+1) = x^(k) − ω D^{-1} r^(k)
    (D/ω) x^(k+1) = (D/ω) x^(k) − (L x^(k+1) + D x^(k) + U x^(k) − b)
    (D/ω + L) x^(k+1) = (D/ω − D − U) x^(k) + b
    B_SOR = (D/ω + L)^{-1} (D/ω − D − U) = (D + ωL)^{-1} ((1 − ω)D − ωU)

Note that for SOR/SUR (SUR = successive under-relaxation), det{B} = (1 − ω)^n,
0 < ω < 2, and ρ_SOR = ω_opt − 1, ρ_SUR = 1 − ω_opt.
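A sketch of one SOR sweep and of the matrix B_SOR in NumPy (the function names and the test matrix are mine; ω = 1 recovers Gauss-Seidel):

    import numpy as np

    def sor_sweep(A, x, b, omega):
        # One SOR update: compute the Gauss-Seidel value, then over-relax by omega.
        x = x.copy()
        for i in range(len(b)):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            x[i] += omega * (gs - x[i])
        return x

    def sor_iteration_matrix(A, omega):
        # B_SOR = (D + omega L)^{-1} ((1 - omega) D - omega U)
        D = np.diag(np.diag(A))
        L = np.tril(A, -1)
        U = np.triu(A, 1)
        return np.linalg.solve(D + omega * L, (1 - omega) * D - omega * U)

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    w = 1.2
    print(np.isclose(np.linalg.det(sor_iteration_matrix(A, w)), (1 - w)**3))   # det B = (1 - omega)^n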
2 × 2 Example

Recall det{A} = λ_1 λ_2 ⋯ λ_n and Tr{A} = λ_1 + λ_2 + ⋯ + λ_n, which give the
eigenvalues of the 2 × 2 iteration matrices below.

    A = [ 2  −1 ;  −1  2 ]

    M_J = [ 2  0 ;  0  2 ],   B_J = [ 0  1/2 ;  1/2  0 ],   λ = ±1/2,   ρ_J = 1/2

    M_GS = [ 2  0 ;  −1  2 ],   B_GS = [ 0  1/2 ;  0  1/4 ],   λ_1 = 0,  λ_2 = 1/4,   ρ_GS = 1/4

    M_SOR = [ 2/ω  0 ;  −1  2/ω ],   B_SOR = [ 1−ω   ω/2 ;  ω(1−ω)/2   (1−ω) + ω²/4 ]

    ω_opt = 4(2 − √3),   λ_1 = λ_2 = ω_opt − 1 = ρ_SOR ≈ 0.0718
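A quick numerical confirmation of these spectral radii (a NumPy sketch that rebuilds the iteration matrices directly from the splittings above):

    import numpy as np

    A = np.array([[2.0, -1.0], [-1.0, 2.0]])
    D = np.diag(np.diag(A)); L = np.tril(A, -1); U = np.triu(A, 1)

    B_J = -np.linalg.solve(D, L + U)
    B_GS = -np.linalg.solve(D + L, U)
    w = 4 * (2 - np.sqrt(3))                     # omega_opt for rho_J = 1/2
    B_SOR = np.linalg.solve(D + w * L, (1 - w) * D - w * U)

    for name, B in [("Jacobi", B_J), ("Gauss-Seidel", B_GS), ("SOR", B_SOR)]:
        print(name, max(abs(np.linalg.eigvals(B))))   # 0.5, 0.25, ~0.0718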

3 × 3 Examples

(i) Example where Jacobi converges but Gauss-Seidel diverges

    A = [ 1  2  −2 ;  1  1  1 ;  2  2  1 ]

    M_J = [ 1  0  0 ;  0  1  0 ;  0  0  1 ],   B_J = [ 0  −2  2 ;  −1  0  −1 ;  −2  −2  0 ]

λ_1 = λ_2 = λ_3 = 0, ρ_J = 0, Jacobi converges. Note that here B_J³ = 0 and
e^(3) = 0, which is an unusual situation! See the Cayley-Hamilton Theorem in
the web notes Conjugate Gradients.

    M_GS = [ 1  0  0 ;  1  1  0 ;  2  2  1 ],   B_GS = [ 0  −2  2 ;  0  2  −3 ;  0  0  2 ]

λ_1 = 0, λ_2 = λ_3 = 2, ρ_GS = 2, Gauss-Seidel diverges.

(ii) Example where Jacobi diverges but Gauss-Seidel converges

    A = [ 2  1  1 ;  1  2  1 ;  1  1  2 ]

    M_J = [ 2  0  0 ;  0  2  0 ;  0  0  2 ],   B_J = −(1/2) [ 0  1  1 ;  1  0  1 ;  1  1  0 ]

λ_1 = −1, λ_2 = λ_3 = 1/2, ρ_J = 1, Jacobi fails to converge.

    M_GS = [ 2  0  0 ;  1  2  0 ;  1  1  2 ],   B_GS = (1/8) [ 0  −4  −4 ;  0  2  −2 ;  0  1  3 ]

λ_1 = 0, λ_2,3 = 0.3125 ± 0.1654i, ρ_GS ≈ 0.3536, Gauss-Seidel converges.
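Both 3 × 3 examples can be checked the same way (a NumPy sketch; the helper name spectral_radii is mine):

    import numpy as np

    def spectral_radii(A):
        # Spectral radii of the Jacobi and Gauss-Seidel iteration matrices of A.
        D = np.diag(np.diag(A)); L = np.tril(A, -1); U = np.triu(A, 1)
        B_J = -np.linalg.solve(D, L + U)
        B_GS = -np.linalg.solve(D + L, U)
        return (max(abs(np.linalg.eigvals(B_J))), max(abs(np.linalg.eigvals(B_GS))))

    A1 = np.array([[1.0, 2.0, -2.0], [1.0, 1.0, 1.0], [2.0, 2.0, 1.0]])   # example (i)
    A2 = np.array([[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 2.0]])    # example (ii)
    print(spectral_radii(A1))   # (~0 up to roundoff, 2.0)
    print(spectral_radii(A2))   # (1.0, ~0.3536)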

Chebyshev SOR
By dynamically adjusting ω, the Chebyshev SOR method ensures that the
norm of the error always decreases. Make two half sweeps on even/odd
(black/white) meshes. Odd (even) points depend only on even (odd) mesh
values. Define (ρ is the Jacobi spectral radius):

    ω^(0) = 1
    ω^(1/2) = 1 / (1 − ρ²/2)
    ω^(n+1/2) = 1 / (1 − ρ² ω^(n)/4),   n = 1/2, 1, 3/2, . . .

Then ω^(∞) = ω_opt and ‖e^(k)‖ always decreases.
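A sketch of this ω recursion in NumPy (the function name and the model-problem value of ρ are illustrative assumptions):

    import numpy as np

    def chebyshev_omegas(rho, n_half_sweeps):
        # omega^(0) = 1, omega^(1/2) = 1/(1 - rho^2/2),
        # omega^(n+1/2) = 1/(1 - rho^2 * omega^(n)/4); the sequence climbs toward omega_opt.
        omegas = [1.0, 1.0 / (1.0 - rho**2 / 2.0)]
        for _ in range(n_half_sweeps - 2):
            omegas.append(1.0 / (1.0 - rho**2 * omegas[-1] / 4.0))
        return omegas

    rho = np.cos(np.pi / 32)                    # Jacobi spectral radius, model problem with N = 31
    ws = chebyshev_omegas(rho, 60)
    w_opt = 2.0 * (1.0 - np.sqrt(1.0 - rho**2)) / rho**2
    print(ws[-1], w_opt)                        # the recursion approaches omega_opt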
