
TERM PAPER REVIEW OF NUMERICAL ANALYSIS

MTH 204
TOPIC
COMPARISON OF RATE OF CONVERGENCE OF DIFFERENT
ITERATIVE METHODS

Submitted to: MS. ARSHI MERAJ
Submitted by: AMEER ULLAH

ROLL NO: B38
SECTION: B5801
ACKNOWLEDGEMENT

The successful completion of any task would be incomplete without mentioning the people who
made it possible. So it is with gratitude that I acknowledge the help which crowned my efforts
with success.

Life is a process of accumulating and discharging debts, not all of which can be measured. We
cannot hope to discharge them with simple words of thanks, but we can certainly acknowledge
them.

I owe my gratitude to MS. ARSHI MERAJ, LSE, for her guidance in completing my term paper. Last but
not least, I am very much indebted to my family and friends for their warm encouragement
and moral support in carrying out this project work.

AMEER ULLAH
TABLE OF CONTENTS:

- INTRODUCTION.

- BISECTION METHOD.

- RATE OF CONVERGENCE FOR BISECTION METHOD.

- NEWTON RAPHSON METHOD.

- RATE OF CONVERGENCE OF NEWTON RAPHSON METHOD.

- REGULA FALSI METHOD.

- RATE OF CONVERGENCE OF REGULA FALSI METHOD.

- BIBLIOGRAPHY.
RATE OF CONVERGENCE:-
In numerical analysis, the speed at which a convergent sequence approaches its limit is called
the rate of convergence. Although strictly speaking, a limit does not give information about any
finite first part of the sequence, this concept is of practical importance if we deal with a sequence of
successive approximations for an iterative method, as then typically fewer iterations are needed to
yield a useful approximation if the rate of convergence is higher. This may even make the
difference between needing ten or a million iterations.

Similar concepts are used for discretization methods. The solution of the discretized problem
converges to the solution of the continuous problem as the grid size goes to zero, and the speed of
convergence is one of the factors of the efficiency of the method. However, the terminology in this
case is different from the terminology for iterative methods.

Series acceleration is a collection of techniques for improving the rate of convergence of a series.
Such acceleration is commonly accomplished with sequence transformations.

Suppose that the sequence {xk} converges to the number L.

We say that this sequence converges linearly to L if there exists a number μ ∈ (0, 1) such that

lim_{k→∞} |x_{k+1} − L| / |x_k − L| = μ.

The number μ is called the rate of convergence.

If the above holds with μ = 0, then the sequence is said to converge superlinearly. One says that the
sequence converges sublinearly if it converges, but μ=1.

The next definition is used to distinguish superlinear rates of convergence. We say that the
sequence converges with order q (for q > 1) to L if

lim_{k→∞} |x_{k+1} − L| / |x_k − L|^q = μ for some positive constant μ.
In particular, convergence with order 2 is called quadratic convergence, and convergence with order
3 is called cubic convergence.
This is sometimes called Q-linear convergence, Q-quadratic convergence, etc., to distinguish it
from the definition below. The Q stands for "quotient," because the definition uses the quotient
between two successive terms.

The drawback of the above definitions is that they do not capture some sequences which still
converge reasonably fast, but whose "speed" is variable, such as the sequence {bk} below.
Therefore, the definition of rate of convergence is sometimes extended as follows.

Under the new definition, the sequence {xk} converges with at least order q if there exists a
sequence {εk} such that

|x_k − L| ≤ ε_k for all k,
and the sequence {εk} converges to zero with order q according to the above "simple" definition. To
distinguish it from that definition, this is sometimes called R-linear convergence, R-quadratic
convergence, etc. (with the R standing for "root").

Examples:

Consider the following sequences:

The sequence {ak} converges linearly to 0 with rate 1/2. More generally, the
sequence {Cμ^k} converges linearly with rate μ if |μ| < 1. The sequence {bk} also converges linearly to
0 with rate 1/2 under the extended definition, but not under the simple definition. The sequence {ck}
converges superlinearly. In fact, it is quadratically convergent. Finally, the sequence {dk} converges
sublinearly.
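Since the sequence definitions themselves are not reproduced above, the short Python sketch below uses assumed stand-ins with the behaviour just described: a_k = (1/2)^k (linear, rate 1/2), c_k = (1/2)^(2^k) (quadratic) and d_k = 1/(k+1) (sublinear). The sequences and the helper q_ratio are illustrative assumptions, not the paper's own examples.

# Illustrative sketch: numerically checking the quotient-based definitions above.
# The sequences a, c, d are assumed examples, not the paper's own {ak}, {ck}, {dk}.

def q_ratio(seq, limit, order=1):
    # successive quotients |x_{k+1} - L| / |x_k - L|**order
    errs = [abs(x - limit) for x in seq]
    return [errs[k + 1] / errs[k] ** order for k in range(len(errs) - 1)]

a = [0.5 ** k for k in range(10)]          # linearly convergent: quotient -> 1/2
c = [0.5 ** (2 ** k) for k in range(5)]    # quadratically convergent
d = [1.0 / (k + 1) for k in range(10)]     # sublinearly convergent: quotient -> 1

print(q_ratio(a, 0.0))       # all quotients equal 1/2
print(q_ratio(c, 0.0, 2))    # order-2 quotients stay bounded (here, equal to 1)
print(q_ratio(d, 0.0))       # quotients creep up towards 1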
Convergence speed for discretization methods:-

A similar situation exists for discretization methods. Here, the important parameter is not the
iteration number k but the number of grid points, here denoted n. In the simplest situation (a
uniform one-dimensional grid), the number of grid points is inversely proportional to the grid
spacing.

In this case, a sequence x_n is said to converge to L with order p if there exists a constant C such that

|x_n − L| < C·n^(−p) for all n.

This is written as |x_n − L| = O(n^(−p)) using big O notation.

This is the relevant definition when discussing methods for numerical quadrature or the solution of
ordinary differential equations.
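
As a concrete sketch of this definition, the Python fragment below estimates the order p from the relation |x_n − L| ≈ C·n^(−p) by comparing the error at n and 2n grid points. The composite trapezoidal rule and the test integral are assumed examples (the rule has order p = 2); they are not taken from the paper.

# Assumed example: estimating the order p of a discretization method from
# |x_n - L| ~ C * n**(-p), using the composite trapezoidal rule (order 2).
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = 1.0 - math.cos(1.0)               # integral of sin(x) on [0, 1]
for n in (8, 16, 32, 64):
    err_n  = abs(trapezoid(math.sin, 0.0, 1.0, n) - exact)
    err_2n = abs(trapezoid(math.sin, 0.0, 1.0, 2 * n) - exact)
    # doubling n divides the error by about 2**p, so p ~ log2(err_n / err_2n)
    print(n, math.log2(err_n / err_2n))   # tends to 2 for the trapezoidal rule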

BISECTION METHOD AND ITS RATE OF CONVERGENCE:-

Suppose f(x) is continuous on an interval [a, b] such that

f(a) · f(b) < 0.        (3)

Then f(x) changes sign on [a, b], and f(x) = 0 has at least one root on the interval. The bisection
method repeatedly halves the interval [a, b], keeping the half on which f(x) changes sign. It is
guaranteed to converge to a root.

More precisely, suppose that we are given an interval [a, b] satisfying Equation 3 and an error
tolerance ϵ > 0. Then the bisection method consists of the following steps:

[B1.] Compute c = (a + b)/2.

[B2.] If b − c ≤ ϵ, then accept c as the root and stop the procedure.

[B3.] If f(a) · f(c) ≤ 0, then set b = c; otherwise, set a = c. Go to step B1.

Since each step halves the bracketing interval, the error bound is halved at every iteration, so the
bisection method converges linearly with rate 1/2.
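
A minimal Python sketch of steps B1-B3 follows; the test function, bracket and tolerance in the usage line are illustrative choices, not taken from the paper.

def bisect(f, a, b, eps):
    # Bisection method following steps B1-B3 above.
    # Assumes f is continuous on [a, b] with f(a)*f(b) < 0 (Equation 3).
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while True:
        c = (a + b) / 2           # B1: midpoint of the current bracket
        if b - c <= eps:          # B2: accept c once the bracket is small enough
            return c
        if f(a) * f(c) <= 0:      # B3: keep the half on which f changes sign
            b = c
        else:
            a = c

# Illustrative use: the root of x**3 - x - 2 on [1, 2]
print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0, 1e-8))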
NEWTON'S METHOD:-
In numerical analysis, Newton's method (also known as the Newton–Raphson method), named
after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively
better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often
converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root.
Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be,
depends on the problem (detailed below). Unfortunately, when iteration begins far from the desired
root, Newton's method can fail to converge with little warning; thus, implementations often include
a routine that attempts to detect and overcome possible convergence failures.
Given a function f(x) and its derivative f′(x), we begin with a first guess x0. Provided the function
is reasonably well-behaved, a better approximation x1 is

x1 = x0 − f(x0)/f′(x0).

The process is repeated until a sufficiently accurate value is reached:

x_{n+1} = x_n − f(x_n)/f′(x_n).

When the iteration converges to a simple root, it does so quadratically, that is, with order 2.
An important and somewhat surprising application is Newton–Raphson division, which can be used
to quickly find the reciprocal of a number using only multiplication and subtraction.
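
The following Python sketch shows the iteration x_{n+1} = x_n − f(x_n)/f′(x_n) and, as a second illustration, the multiply-and-subtract reciprocal iteration mentioned above. The stopping test, iteration cap and sample inputs are assumptions for the example only.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) <= tol:        # stop when successive iterates agree
            return x_new
        x = x_new
    raise RuntimeError("did not converge; try a starting point nearer the root")

# Illustrative use: the positive root of x**2 - 2, i.e. sqrt(2)
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))

# Newton-Raphson division: the reciprocal of a is the root of f(x) = 1/x - a,
# which yields the multiplication/subtraction-only step x_{n+1} = x_n*(2 - a*x_n)
a, x = 7.0, 0.1
for _ in range(6):
    x = x * (2 - a * x)
print(x)                                 # approximately 1/7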
FALSE POSITION METHOD:-
In numerical analysis, the false position method or regula falsi method is a root-finding
algorithm that combines features from the bisection method and the secant method.

[Figure: the first two iterations of the false position method. The red curve shows the function f
and the blue lines are the secants.]

Like the bisection method, the false position method starts with two points a0 and b0 such that f(a0)
and f(b0) are of opposite signs, which implies by the intermediate value theorem that the
function f has a root in the interval [a0, b0], assuming continuity of the function f. The method
proceeds by producing a sequence of shrinking intervals [ak, bk] that all contain a root of f.

At iteration number k, the number

c_k = (a_k · f(b_k) − b_k · f(a_k)) / (f(b_k) − f(a_k))
is computed. As explained below, ck is the root of the secant line through (ak, f(ak)) and (bk, f(bk)). If
f(ak) and f(ck) have the same sign, then we set ak+1 = ck and bk+1 = bk, otherwise we
set ak+1 = ak and bk+1 = ck. This process is repeated until the root is approximated sufficiently well.

The above formula is also used in the secant method, but the secant method always retains the last
two computed points, while the false position method retains two points which certainly bracket a
root. On the other hand, the only difference between the false position method and the bisection
method is that the latter uses ck = (ak + bk) / 2.
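
A minimal Python sketch of the plain false position iteration just described follows; the stopping rule based on |f(c_k)|, the iteration cap and the sample function are illustrative assumptions.

def false_position(f, a, b, tol=1e-10, max_iter=100):
    # Plain regula falsi: keep two points that always bracket a root of f.
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # root of the secant line
        fc = f(c)
        if abs(fc) <= tol:
            return c
        if fa * fc > 0:        # f(a_k) and f(c_k) have the same sign
            a, fa = c, fc      # a_{k+1} = c_k, b_{k+1} = b_k
        else:
            b, fb = c, fc      # a_{k+1} = a_k, b_{k+1} = c_k
    return c

# Illustrative use: the same cubic as in the bisection sketch above
print(false_position(lambda x: x**3 - x - 2, 1.0, 2.0))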
If the initial end-points a0 and b0 are chosen such that f(a0) and f(b0) are of opposite signs, then one
of the end-points will converge to a root of f. Asymptotically, the other end-point will remain fixed
for all subsequent iterations while the converging endpoint becomes updated. As a result, unlike
the bisection method, the width of the bracket does not tend to zero. As a consequence, the linear
approximation to f(x), which is used to pick the false position, does not improve in its quality.

One example of this phenomenon is the function

f(x) = 2x³ − 4x² + 3x

on the initial bracket [−1,1]. The left end, −1, is never replaced and thus the width of the bracket
never falls below 1. Hence, the right endpoint approaches 0 at a linear rate (with a rate of
convergence of 2/3).

While it is a misunderstanding to think that the method of false position is a good method, it is
equally a mistake to think that it is unsalvageable. The failure mode is easy to detect (the same end-
point is retained twice in a row) and easily remedied by next picking a modified false position, such
as

c_k = (½·f(b_k)·a_k − f(a_k)·b_k) / (½·f(b_k) − f(a_k))

or

c_k = (f(b_k)·a_k − ½·f(a_k)·b_k) / (f(b_k) − ½·f(a_k)),

down-weighting one of the endpoint values to force the next c_k to occur on that side of the function.
The factor of 2 above looks like a hack, but it guarantees superlinear convergence (asymptotically,
the algorithm will perform two regular steps after any modified step). There are other ways to pick
the rescaling which give even better superlinear convergence rates.

The above adjustment, and other similar modifications to Regula Falsi, are called the Illinois
Algorithm. Ford (1995) summarizes and analyzes the superlinear variants of the modified method
of false position. Judging from the bibliography, modified regula falsi methods were well known in
the 1970s and have been subsequently forgotten or misremembered in current textbooks.
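
A short Python sketch of this down-weighting idea (an Illinois-style modification) follows; the bookkeeping variable, tolerance and the reuse of the stagnation example f(x) = 2x³ − 4x² + 3x are assumptions for illustration, not code from Ford (1995) or from the paper.

def illinois(f, a, b, tol=1e-12, max_iter=100):
    # Modified false position: if the same endpoint is retained twice in a row,
    # halve its stored function value so that the next c_k falls on that side.
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    side, c = 0, a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) <= tol:
            return c
        if fa * fc > 0:            # replace a, retain b (as in plain regula falsi)
            a, fa = c, fc
            if side == +1:         # b was also retained last time: halve f(b)
                fb *= 0.5
            side = +1
        else:                      # replace b, retain a
            b, fb = c, fc
            if side == -1:         # a was also retained last time: halve f(a)
                fa *= 0.5
            side = -1
    return c

# On the stagnation example above, the plain method keeps the endpoint -1 forever,
# while the modified method converges quickly to the root at 0.
print(illinois(lambda x: 2 * x**3 - 4 * x**2 + 3 * x, -1.0, 1.0))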
BIBLIOGRAPHY:
