
Simulation of Models Exhibiting Runge’s Phenomenon (A Few Remarks)

by Erum Dost
Consider the function f(x) = 1/(1 + 25x²). If f(x) is interpolated at equidistant points

x_i = −1 + 2i/n,  i ∈ {0, 1, …, n},

between −1 and 1 with a polynomial P_n(x) of degree ≤ n, the resulting interpolation oscillates towards the edges of the interval.
Background
The error E_n(x) between the generating function and the interpolating polynomial of degree n is given by

E_n(x) = f(x) − P_n(x) = [f^(n+1)(ξ)/(n + 1)!] ∏_{i=0}^{n} (x − x_i)

for some ξ ∈ (−1, 1). Therefore,

max_{−1≤x≤1} |f(x) − P_n(x)| ≤ max_{−1≤x≤1} [|f^(n+1)(x)|/(n + 1)!] · max_{−1≤x≤1} ∏_{i=0}^{n} |x − x_i|.

This problem occurs when constructing polynomial interpolations of high degree at equidistant points. For equally spaced grid points, the upper bound of the error E_n(x) can be found by maximizing the product ∏_{i=0}^{n} |x − x_i|.

% Runge_function
figure(1); hold on;
for n = 4:4:12
    x = linspace(-1,1,n+1);        % equidistant nodes on [-1,1]
    y = 1./(1+25*x.^2);
    c = polyfit(x,y,n);            % degree-n interpolating polynomial
    xInt = linspace(-1,1,1000);
    yInt = polyval(c,xInt);
    plot(xInt,yInt,x,y,'^');
end
yExact = 1./(1+25*xInt.^2);
plot(xInt,yExact,':r'); hold off;
% Same experiment on the narrower interval [-0.1,0.1]
figure(2); hold on;
for n = 4:4:20
    x = linspace(-0.1,0.1,n+1);
    y = 1./(1+25*x.^2);
    c = polyfit(x,y,n);
    xInt = linspace(-0.1,0.1,1000);
    yInt = polyval(c,xInt);
    plot(xInt,yInt,x,y,'^');
end
yExact = 1./(1+25*xInt.^2);
plot(xInt,yExact,':r'); hold off;
(a) Running a script in MATLAB to simulate the Runge phenomenon.

Error Analysis
Note that each of the two factors in the upper bound for the approximation error can grow with n. In fact, for this function the interpolation error increases without bound as the degree of the polynomial is increased.
Thus,

lim_{n→∞} max_{−1≤x≤1} |f(x) − P_n(x)| = +∞.
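This divergence is easy to check numerically. A short Python/NumPy sketch, complementing the MATLAB script above (node counts chosen here purely for illustration):

```python
import numpy as np

def runge(x):
    # Runge's function f(x) = 1/(1 + 25x^2)
    return 1.0 / (1.0 + 25.0 * x**2)

def max_error(n):
    # Max |f - P_n| on [-1,1] for interpolation at n+1 equidistant nodes
    nodes = np.linspace(-1.0, 1.0, n + 1)
    coeffs = np.polyfit(nodes, runge(nodes), n)   # degree-n interpolant
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(runge(xs) - np.polyval(coeffs, xs)))

for n in (4, 8, 12, 16):
    print(n, max_error(n))   # the maximum error grows with n
```

The printed errors grow rapidly with the degree, matching the limit above.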
Definition (Bernstein Polynomials)
For each n ∈ ℕ, the nth Bernstein polynomial of a function f ∈ C([0,1], ℝ) is defined as

p_n(x) = ∑_{k=0}^{n} C(n,k) f(k/n) x^k (1 − x)^(n−k),

where

C(n,k) = n!/(k!(n − k)!)

is the binomial coefficient.
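This definition translates directly into code. A minimal Python sketch (the helper name `bernstein` is ours, not from the text):

```python
import math

def bernstein(f, n, x):
    # nth Bernstein polynomial of f evaluated at x in [0, 1]
    return sum(
        math.comb(n, k) * f(k / n) * x**k * (1 - x)**(n - k)
        for k in range(n + 1)
    )

# For f(t) = t^2 the Bernstein polynomial is x^2 + x(1-x)/n,
# so p_2(0.5) = 0.25 + 0.125 = 0.375:
print(bernstein(lambda t: t * t, 2, 0.5))   # -> 0.375
```

Note that p_n reproduces constants exactly, since the weights C(n,k) x^k (1 − x)^(n−k) sum to 1.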

Weierstrass Approximation Theorem


The polynomials are dense in C([a,b]). That is, given any f ∈ C([a,b]) and any ε > 0, there exists a polynomial p ∈ C([a,b]) such that ‖f − p‖∞ < ε.

Proof. We will first consider f ∈ C([0,1]) and proceed to describe a polynomial that is close to f with respect to the supremum norm. In order to prove that ‖p_n − f‖∞ < ε for sufficiently large n, we recall the binomial theorem:

(x + y)^n = ∑_{k=0}^{n} C(n,k) x^k y^(n−k).    [1]

Differentiating both sides with respect to x and then multiplying by x, we get

nx(x + y)^(n−1) = ∑_{k=0}^{n} C(n,k) k x^k y^(n−k).    [2]

Then differentiating twice and multiplying both sides by x², we obtain

n(n − 1)x²(x + y)^(n−2) = ∑_{k=0}^{n} C(n,k) k(k − 1) x^k y^(n−k).    [3]

Equations [1]–[3] with y = 1 − x are then

1 = ∑_{k=0}^{n} C(n,k) x^k (1 − x)^(n−k),    [4]


nx = ∑_{k=0}^{n} C(n,k) k x^k (1 − x)^(n−k),    [5]

and

n(n − 1)x² = ∑_{k=0}^{n} C(n,k) k(k − 1) x^k (1 − x)^(n−k).    [6]

Therefore,

∑_{k=0}^{n} (k − nx)² C(n,k) x^k (1 − x)^(n−k)
  = ∑_{k=0}^{n} k² C(n,k) x^k (1 − x)^(n−k)
    − 2nx ∑_{k=0}^{n} k C(n,k) x^k (1 − x)^(n−k)
    + n²x² ∑_{k=0}^{n} C(n,k) x^k (1 − x)^(n−k)
  = [nx + n(n − 1)x²] − 2nx · nx + n²x²
  = nx(1 − x).    [7]

Now, let M > 0 be such that |f(x)| ≤ M for all x ∈ [0,1]. Since f is continuous on the closed interval [0,1], it is uniformly continuous; thus there exists δ > 0 such that |f(x) − f(y)| < ε whenever |x − y| < δ. Then,

|f(x) − p_n(x)| = |f(x) − ∑_{k=0}^{n} f(k/n) C(n,k) x^k (1 − x)^(n−k)|

  = |∑_{k=0}^{n} (f(x) − f(k/n)) C(n,k) x^k (1 − x)^(n−k)|

  = |∑_{|k−nx|<δn} (f(x) − f(k/n)) C(n,k) x^k (1 − x)^(n−k) + ∑_{|k−nx|≥δn} (f(x) − f(k/n)) C(n,k) x^k (1 − x)^(n−k)|

  ≤ ∑_{|k−nx|<δn} |f(x) − f(k/n)| C(n,k) x^k (1 − x)^(n−k) + ∑_{|k−nx|≥δn} |f(x) − f(k/n)| C(n,k) x^k (1 − x)^(n−k),

where the second equality uses [4].

Note that if |k − nx| < δn, then |x − k/n| < δ, so that |f(x) − f(k/n)| < ε.


Then,

∑_{|k−nx|<δn} |f(x) − f(k/n)| C(n,k) x^k (1 − x)^(n−k)
  < ε · ∑_{|k−nx|<δn} C(n,k) x^k (1 − x)^(n−k)
  ≤ ε · ∑_{k=0}^{n} C(n,k) x^k (1 − x)^(n−k)
  = ε.


If |k − nx| ≥ δn, then

∑_{|k−nx|≥δn} |f(x) − f(k/n)| C(n,k) x^k (1 − x)^(n−k)
  ≤ ∑_{|k−nx|≥δn} (|f(x)| + |f(k/n)|) C(n,k) x^k (1 − x)^(n−k)
  ≤ 2M · ∑_{|k−nx|≥δn} C(n,k) x^k (1 − x)^(n−k)
  ≤ (2M/(n²δ²)) · ∑_{k=0}^{n} (k − nx)² C(n,k) x^k (1 − x)^(n−k),

since (k − nx)²/(n²δ²) ≥ 1 whenever |k − nx| ≥ δn. By [7], we have

(2M/(n²δ²)) · ∑_{k=0}^{n} (k − nx)² C(n,k) x^k (1 − x)^(n−k) = (2M/(n²δ²)) · nx(1 − x) = (2M/(nδ²)) x(1 − x).

Note that x(1 − x) ≤ 1/4 for each value of x ∈ [0,1], so that (2M/(nδ²)) x(1 − x) ≤ M/(2nδ²).
Therefore,

|f(x) − p_n(x)| ≤ ∑_{|k−nx|<δn} |f(x) − f(k/n)| C(n,k) x^k (1 − x)^(n−k)
    + ∑_{|k−nx|≥δn} |f(x) − f(k/n)| C(n,k) x^k (1 − x)^(n−k)
  < ε + M/(2nδ²).

Note that if n > M/(2εδ²), then ‖f − p_n‖∞ < ε + ε = 2ε.
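The slow 1/n character of this bound can be observed numerically. A Python sketch using the continuous but non-smooth test function f(x) = |x − 1/2| (our choice of example, not from the text):

```python
import math

def bernstein(f, n, x):
    # nth Bernstein polynomial of f at x
    return sum(math.comb(n, k) * f(k / n) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda t: abs(t - 0.5)
grid = [i / 200 for i in range(201)]
for n in (10, 40, 160):
    err = max(abs(f(x) - bernstein(f, n, x)) for x in grid)
    print(n, err)   # the sup-norm error shrinks as n grows, but slowly
```

The uniform error decreases steadily, which illustrates both the convergence guaranteed by the theorem and why Bernstein approximation is considered computationally expensive in practice.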

This proves the Weierstrass approximation theorem for the interval [0,1]. We will now
extend the argument to an arbitrary interval [a,b].
Consider any f ∈ C([a,b]), where a < b and a, b ∈ ℝ, and let ε > 0 be arbitrary. Define g(x) = f(x(b − a) + a) for x ∈ [0,1]. Note that g ∈ C([0,1]), g(0) = f(a), and g(1) = f(b). Then there exists a Bernstein polynomial p ∈ C([0,1]) such that ‖p − g‖∞ < ε. Define

q(x) = p((x − a)/(b − a)),  x ∈ [a,b].

Note that q is a polynomial in C([a,b]) where q(a) = p(0) and q(b) = p(1).
Then, we have
f(x) − q(x) = g((x − a)/(b − a)) − p((x − a)/(b − a)),


and thus,

‖f − q‖∞ = sup_{x∈[a,b]} |f(x) − q(x)|
  = sup_{x∈[a,b]} |g((x − a)/(b − a)) − p((x − a)/(b − a))|
  = sup_{t∈[0,1]} |g(t) − p(t)|
  = ‖g − p‖∞ < ε,

where the substitution t = (x − a)/(b − a) maps [a,b] onto [0,1].

This completes the proof. ∎


Executing script (a) in MATLAB:

Figure 1. The function f (x) is interpolated at equidistant points and the resulting
interpolation oscillates towards the edges of the interval [-1,1].

Mitigations to the Problem


According to the Weierstrass approximation theorem, it is possible to approximate any continuous function on a closed interval with a single polynomial, such that the maximum difference between the function and its approximation is arbitrarily small. Thus, if f(x) is a continuous function on [a,b], then for any given ε > 0, there exists a polynomial p_n such that

|f(x) − p_n(x)| < ε for all x ∈ [a,b].


Using Bernstein polynomials, we are able to uniformly approximate every continuous function on a closed interval. However, this method is computationally expensive.

Figure 2. A smooth polynomial interpolation of the same function on [-0.1,0.1].

Chebyshev Nodes
Let z = e^(iθ) be a point on the unit circle. The associated x-coordinate is x = cos θ, where x ∈ [−1,1]. Define the nth degree Chebyshev polynomial to be T_n(x) = cos nθ. Note that T_n(x) = cos nθ = 0 when nθ = (2k + 1)π/2, k = 0, …, n − 1. Thus the roots of the Chebyshev polynomial,

x_k = cos[(2k + 1)π/(2n)],  k = 0, …, n − 1,

minimize the ℓ∞ norm of the node product.
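These nodes can be checked against NumPy's Chebyshev module (a quick sketch; the degree n = 8 is arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
k = np.arange(n)
xk = np.cos((2 * k + 1) * np.pi / (2 * n))   # the n roots of T_n

# T_n in the Chebyshev basis is the coefficient vector [0, ..., 0, 1]
print(C.chebval(xk, [0] * n + [1]))          # all entries ~ 0
```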

If we choose {x_k} to be the Chebyshev points, then the ∞-norm of the node product ∏_k (x − x_k) is the smallest possible over all choices of nodes, and the interpolation error is more uniformly distributed over the interval [−1,1]. Note that if the (N + 1) sample points for the interpolation polynomial p_N(x) are selected at the roots of the Chebyshev polynomial x_k, as defined above, then the error is

e_N(x) = f(x) − p_N(x) = [f^(N+1)(ξ)/(N + 1)!] (x − x_0)⋯(x − x_N) = [f^(N+1)(ξ)/(N + 1)!] · T_{N+1}(x)/2^N.


Taking the absolute value of both sides, we obtain

|e_N(x)| = |f^(N+1)(ξ)| |T_{N+1}(x)| / [(N + 1)! 2^N]
  ≤ ‖f^(N+1)‖∞ |T_{N+1}(x)| / [(N + 1)! 2^N]
  ≤ ‖f^(N+1)‖∞ / [2^N (N + 1)!],

since |T_{N+1}(x)| ≤ 1 on [−1,1].

Thus the error decreases exponentially with N (also referred to as spectral convergence).
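The effect of the node choice is easy to demonstrate. A NumPy sketch comparing equidistant and Chebyshev nodes for the Runge function (the degree 16 is chosen for illustration):

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def interp_error(nodes):
    # Max error on [-1,1] of the polynomial interpolating runge at the nodes
    n = len(nodes) - 1
    coeffs = np.polyfit(nodes, runge(nodes), n)
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(runge(xs) - np.polyval(coeffs, xs)))

deg = 16
equi = np.linspace(-1.0, 1.0, deg + 1)
j = np.arange(deg + 1)
cheb = np.cos((2 * j + 1) * np.pi / (2 * (deg + 1)))   # roots of T_17
print(interp_error(equi), interp_error(cheb))   # Chebyshev is far smaller
```

At degree 16 the equidistant error is of order 10, while the Chebyshev-node error is already small across the whole interval.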

Figure 3. The first few Chebyshev polynomials on the interval [-1,1].

Legendre Polynomials
For x ∈ [−1,1], the Legendre polynomials P_n(x) are defined as

P_n(x) = [(−1)^n/(n! 2^n)] dⁿ/dxⁿ (1 − x²)^n.

The roots of the Legendre polynomial minimize the ℓ² norm, where for every f(x) ∈ C[a,b],

‖f(x) − p_n(x)‖₂ := ( ∫_a^b [f(x) − p_n(x)]² dx )^(1/2).
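NumPy's Legendre module can produce these roots and check orthogonality directly (a sketch; the degree n = 5 and the quadrature order are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import legendre as L

n = 5
# P_n in the Legendre basis is the coefficient vector [0, ..., 0, 1]
roots = L.legroots([0] * n + [1])
print(roots)                    # 5 roots, symmetric about 0, all inside (-1, 1)

# Orthogonality check: the integral of P_5(x) * x^2 over [-1, 1] vanishes
xs, ws = L.leggauss(20)         # Gauss-Legendre nodes and weights
print(np.dot(ws, L.legval(xs, [0] * n + [1]) * xs**2))   # ~ 0
```

These roots are also the nodes of Gauss–Legendre quadrature, which is why they are natural sample points for least-squares approximation.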


Figure 4. The first few Legendre polynomials on the interval [-1,1].

Figure 5. Comparison between Legendre polynomials and Chebyshev nodes.

Remark: Several other methods exist to mitigate this problem, including the use of piecewise polynomials, which limit the oscillations of high-degree polynomials by stringing together lower-degree polynomial interpolants. A problem similar to the Runge phenomenon occurs for approximations of discontinuous functions by Fourier series, which is known as the Gibbs phenomenon.


Definition
Recall that a Fourier series is an infinite trigonometric series, e.g.

f(x) = sin(πx) + (1/3) sin(3πx) + (1/5) sin(5πx) + (1/7) sin(7πx) + ⋯

As the function takes on more terms, the partial sum resembles a periodic waveform similar to a square wave function, which consists of instantaneous transitions between two levels.

% Gibbs phenomenon
J = 500;                          % number of sample points
x = linspace(0,2*pi,J);
kp = 0.*x;                        % running partial Fourier sum
t = 150;                          % highest (odd) harmonic
for k = 1:2:t
    kp = kp + (4/pi)*sin(k*x)/k;  % odd harmonics; 4/pi scales the sum to +/-1
end
plot(x,kp)
(b) Running a script in MATLAB to simulate the Gibbs phenomenon.

Figure 6. The square wave function.


Executing script (b) in MATLAB:

Figure 6. The Fourier partial sum with t = 1.

Figure 7. The Fourier partial sum with t = 5.


Figure 8. The Fourier partial sum with t = 25.

Figure 9. The Fourier partial sum with t = 50.


Figure 10. The Fourier partial sum with t = 100.

Figure 11. The Fourier partial sum with t = 200.

As more terms are included, the overshoot persists and moves closer to the discontinuity, creating the Gibbs phenomenon. This problem can be resolved using post-processing, which involves re-expanding the partial sum in Gegenbauer polynomials to create an accurate approximation.
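The persistence of the overshoot is easy to measure. A Python sketch of the square-wave partial sums (the peak near 1.179, about 9% above the limit value, is the classical Gibbs constant):

```python
import numpy as np

def square_partial(x, N):
    # Partial Fourier sum of the square wave sign(sin x): odd harmonics up to N
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):
        s += (4.0 / np.pi) * np.sin(k * x) / k
    return s

x = np.linspace(1e-3, np.pi / 2, 4000)
for N in (25, 101, 401):
    # The peak does not decay as N grows; it only moves toward x = 0
    print(N, square_partial(x, N).max())
```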


Recall that a family of orthogonal polynomials p_0, p_1, p_2, … on a finite real interval [a,b] ⊂ ℝ is a set of polynomials of degrees 0, 1, 2, … that are orthogonal with respect to an inner product of the form

⟨f, g⟩ := ∫_a^b f(t) g(t) w(t) dt,

where w(t) is a continuous, nonnegative weight function on [a,b]. The Gegenbauer polynomials C_n^(α)(t) can be obtained using the Gram–Schmidt orthogonalisation process for polynomials on the interval (−1,1) with the weight factor (1 − x²)^(α−1/2), where α > −1/2. Then C_n^(0)(x) is defined as lim_{α→0} C_n^(α)(x)/α, and

C_n^(α)(x) := [Γ(2α + n)Γ(α + 1/2) / (Γ(2α)Γ(α + n + 1/2))] P_n^(α−1/2, α−1/2)(x),

where P_n^(α−1/2, α−1/2)(x) is a Jacobi polynomial. Note that the Legendre polynomials P_n(x) (as discussed earlier) are equal to C_n^(1/2)(x), and the Gegenbauer polynomial C_n^(0)(x) = 2T_n(x)/n, where T_n(x) is the Chebyshev polynomial of the first kind.
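The C_n^(α) satisfy the three-term recurrence n C_n^(α)(x) = 2x(n + α − 1)C_{n−1}^(α)(x) − (n + 2α − 2)C_{n−2}^(α)(x). A sketch implementing it and checking the Legendre special case α = 1/2 noted above (the helper name is ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def gegenbauer(n, alpha, x):
    # C_n^(alpha)(x) via the standard three-term recurrence
    c_prev, c_curr = np.ones_like(x), 2.0 * alpha * x
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c_curr = c_curr, (
            2.0 * x * (k + alpha - 1.0) * c_curr
            - (k + 2.0 * alpha - 2.0) * c_prev
        ) / k
    return c_curr

xs = np.linspace(-1.0, 1.0, 9)
# alpha = 1/2 reproduces the Legendre polynomial P_4
print(np.allclose(gegenbauer(4, 0.5, xs), L.legval(xs, [0] * 4 + [1])))
```

At α = 1/2 the recurrence reduces term by term to the Legendre recurrence n P_n(x) = (2n − 1)x P_{n−1}(x) − (n − 1)P_{n−2}(x).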

Figure 12. 10 coefficients.
Figure 13. 50 coefficients.

The Fourier approximation (blue) and the Gegenbauer approximation (red) are illustrated
in Figures 12 and 13. Note that the Gegenbauer approximation is significantly more
accurate than the Fourier approximation.
