
Math 151a Homework 1 Charlie Marshak

Contents

Homework 1

Homework 2

Homework 3

Homework 4

Homework 5

Homework 6

Homework 7

Homework 1
1. (a) One can use Matlab or Mathematica/WolframAlpha to compute the Taylor series. If you wanted to use
only pen and paper, you could simply write out the Taylor series for cos(2x), expand (x - 2)^2, and do
the expected operations to get the desired Taylor series:
4 + 6x - x^2 - 4x^3
Then plug in numbers.
(b) To do this problem completely rigorously, one must first find the fourth-order derivative, check
that its maximum occurs at 0.4 (in our case the fourth-order derivative is in fact increasing),
and lastly make the desired computations at the right endpoint. I will include the
code for the last bit:

This is the upper bound. To get the actual error, you must subtract the answers you got in (a).
2. One could compute the Taylor polynomial, but one should realize this is the geometric series formula. In
particular, the Taylor polynomial is simply:

P_n(x) = 1 + x + x^2 + ... + x^n


Hence, the n necessary to approximate f within 10^(-6) on [-0.5, 0.5] is seen to satisfy:

(0.5)^n ≤ 10^(-6)

Solve for n above and take the ceiling of that. In Matlab, that would be:

ceil(log(10^(-6))/log(.5))

We get 20. Why do we plug in 0.5 to get a bound on the remainder?
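For readers without Matlab, the same arithmetic goes through in Python (math.ceil plays the role of Matlab's ceil):

```python
import math

# n must satisfy (0.5)^n <= 10^(-6), i.e. n >= log(10^(-6)) / log(0.5)
n = math.ceil(math.log(10 ** -6) / math.log(0.5))
print(n)  # → 20
```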

3. (a) Let |f(p)| = ε > 0 (this is true by hypothesis). By continuity, we can choose a δ depending on ε such
that for all x with |x - p| < δ:

|f(p) - f(x)| < ε/2

By the triangle inequality:

|f(p)| - |f(x)| ≤ |f(x) - f(p)|

Combining and doing some algebra, we see that for |x - p| < δ:

|f(x)| > ε/2 > 0

In other words, for such x, f(x) ≠ 0.


(b) Literally apply the definition of continuity and substitute ε with k and f(p) with 0.
Definition. f is continuous at p in the domain of f if for any ε > 0 there is a δ > 0, depending on ε, such
that:
|x - p| < δ ⟹ |f(x) - f(p)| < ε

4. Use the formulae found at the bottom of page 20 of your textbook.

5. To use matlab for such computations, see this section in the documentation:
http://www.mathworks.com/help/symbolic/digits.html
Here is an example:

>> digits(3)
>> vpa(pi)

ans =

3.14

digits sets the number of significant digits, and vpa stands for variable-precision arithmetic.

6. For (a), use L'Hospital's rule. The other parts are computations similar to problems 1 and 5. Here is the
solution copied and pasted from the manual:


Homework 2
1. (a) .6412
(b) Same as (a)
(c) They are actually both 17.

2. Use the bisection method as outlined.

3. Use the bisection method as outlined; the root should be approximately 1.378.

4. You should use the bisection method for the function:

f(ω) = (-32.17/(2ω^2)) · ((e^ω - e^(-ω))/2 - sin(ω)) - 1.7

and you must begin your iterations using a negative number since ω < 0. For instance, let [a, b] be
[-1, -0.001] if you like.

5. Set g1 (x) = x and then get rid of radicals.

6. (a) Plug into Matlab.


(b) One can look at (a) or, if one were curious, one could do a lot of calculus and determine which had the
smallest derivative upper bound, because that gives us an error estimate by Corollary 2.5.

Homework 3
1. (a) Using the grapher application on Mac:


(Figure: plot of f(x) = ln(x + 2).)

(b) f([0, 2]) = [ln(2), ln(4)] ≈ [0.693, 1.386]


(c) We see that f([0, 2]) ⊂ [0, 2] and hence a fixed point exists.
(d) Well, (d/dx) ln(x + 2) = 1/(x + 2), and on [0, 2] this is bounded by 1/2. Hence, our algorithm converges.
(e) function x=fixedpoint(g,start,TOL,MAX_IT)

% Fixed-point iteration: x(1,1) holds the approximation and x(1,2) the
% number of iterations used.
input = start;
x = zeros(1, 2);
for i=1:MAX_IT
    output = g(input);
    x(1, 1) = output;
    x(1, 2) = i;
    if abs(output - input) < TOL
        return
    end
    input = output;
end

error('Too Many Iterations!');


The iteration count is returned in x(1, 2). I got 8 iterations when starting at 0.

2. Use the fixed point method. The reason for the faster convergence has to do with the upper bound on the
derivative of each function: the smaller the bound, the faster it should converge. The last one converges
the fastest.

3. (Thanks to former TA William Feldman) The hardest part of this problem is finding the function by
which to apply the fixed point method:

g(x) = x - (x^3 - 25)/30

It's simple to check that 25^(1/3) is its fixed point and that the function has a derivative strictly bounded by 1.
In this case, bisection on [2.5, 3] was much slower than fixed point iteration starting at 2.5. The former required
13 steps, while the latter only 6.
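Those step counts are easy to sanity-check outside Matlab. Here is a Python sketch; the stopping tolerance 1e-4 is an assumption, chosen to be consistent with the counts quoted above:

```python
# Bisection on [2.5, 3] versus fixed point iteration with
# g(x) = x - (x^3 - 25)/30 starting at 2.5; both stop at tolerance 1e-4.
def bisect_steps(f, a, b, tol):
    steps = 0
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
        steps += 1
    return 0.5 * (a + b), steps

def fixed_point_steps(g, x, tol, max_it=100):
    for n in range(1, max_it + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("too many iterations")

f = lambda x: x ** 3 - 25
g = lambda x: x - (x ** 3 - 25) / 30

root_b, n_b = bisect_steps(f, 2.5, 3.0, 1e-4)
root_f, n_f = fixed_point_steps(g, 2.5, 1e-4)
print(n_b, n_f)  # → 13 6
```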


4. One iteration on Matlab yields -0.88 starting at -1. You cannot start at p0 = 0 because the function has
zero derivative there.

5. function p=newton(f,df,start,TOL, maximum_iterations)

% Newton's method as a fixed point iteration on g(t) = t - f(t)/df(t).
g = @(t) t - f(t) / df(t);

p = fixedpoint(g, start, TOL, maximum_iterations);

We start at x = 1 for the first point and x = 4 for the second. We get 7 and 6 iterations, respectively.

6. (a) This provides three digits of precision.


(b) This yields two digits of precision.
(c) This yields either two digits of precision or is terribly off, depending on which point you choose to
draw your secant from. If you fix the leftmost endpoint, the error will be more extreme because the
slope is so gradual that the iteration jumps out of the interval under inspection and goes wild. The right
endpoint is better behaved.

7. One solves:

f(i) = (1000/(i/12)) [1 - (1 + (i/12))^(-360)] - 135,000

using any method to get approximately 8.1%. Note that the hardest part of this problem is translating
the scenario into mathematics. The key is to remember that we must have i/12 instead of just i, because
the formula requires the interest per period, which in this case is per month.
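A short Python sketch of this computation; the bracketing interval [0.05, 0.15] is an assumption (any interval with a sign change works):

```python
# Solve the annuity equation f(i) = 0 for the annual interest rate i.
def f(i):
    r = i / 12.0                                   # monthly interest rate
    return (1000.0 / r) * (1 - (1 + r) ** -360) - 135_000.0

a, b = 0.05, 0.15                                  # f(a) > 0 > f(b)
for _ in range(60):                                # plain bisection
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
rate = 0.5 * (a + b)
print(round(rate, 4))  # → 0.081
```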

Homework 4
1. Let f(x) = (x - 1)^2 (x - 3) on the interval x ∈ [0, 2].

(a) Plot f(x) with Matlab. Where is the zero, using the diagram?
(b) How many iterations are needed with Newton's method to find this zero with accuracy 10^(-10)?
(c) How many iterations are needed with the modified version of Newton's method in the same setting?

Solution:

(a) EDU>> x = 0:.01:2;
EDU>> f = @(x) (x-1).^2.*(x-3);
EDU>> y = f(x);
EDU>> plot(x, y)


(Figure: plot of y = (x - 1)^2 (x - 3) on [0, 2]; the zero in view is at x = 1.)

Did you really even need to graph it? Still, some of you got this wrong. Always remember to take
a step back from the problem and ask yourself, "What's the easiest possible way to do this?"
(b + c) Using Newton's method, we required 34 iterations, as opposed to only 5 for the modified Newton's method,
when starting at 0. While this seems relatively small for a computer that makes such computations in
milliseconds, often in computational problems you have to perform Newton's method thousands
(or even millions) of times, so this kind of improvement becomes much more significant in such
scenarios.
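The iteration counts can be reproduced with a small Python sketch of the two methods (the helper iterate and the factored derivative below are mine, not from the text):

```python
# Newton vs. modified Newton on f(x) = (x-1)^2 (x-3), whose zero at x = 1
# has multiplicity m = 2. Tolerance and start follow the text (1e-10, x0 = 0).
def iterate(step, x0, tol=1e-10, max_it=200):
    x, n = x0, 0
    while n < max_it:
        x_new = step(x)
        n += 1
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("too many iterations")

f  = lambda x: (x - 1) ** 2 * (x - 3)
df = lambda x: (x - 1) * (3 * x - 7)          # product rule, factored

newton   = lambda x: x - f(x) / df(x)
modified = lambda x: x - 2 * f(x) / df(x)     # m = 2

r1, n1 = iterate(newton, 0.0)
r2, n2 = iterate(modified, 0.0)
print(n1, n2)  # modified Newton needs far fewer steps
```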

2. Use Newton's method and its modified form to approximate the solution of:

1 - 4x cos x + 2x^2 + cos 2x = 0, x ∈ [0, 1]

to within 10^(-6), and compare the performance of the two methods.

Solution:
Note that 1 - 4x cos x + 2x^2 + cos 2x = 2(x - cos x)^2, so the root (where x = cos x) has
multiplicity two and the derivative vanishes there; the modified method therefore converges
noticeably faster than plain Newton's method when beginning at x = 0.

3. Assuming one local extremum, find the maximum of the function:

f(x) = x e^(-x^2) + cos(x)/10

within 10^(-8) on [0, 3], and discuss your method(s).
Solution:
Well,

df/dx = e^(-x^2) - 2x^2 e^(-x^2) - sin(x)/10

and we set this to zero and apply Newton's method. We also need to compute the second derivative:

d^2f/dx^2 = -6x e^(-x^2) + 4x^3 e^(-x^2) - cos(x)/10


(Figure: plot of the derivative df/dx.)

Note that the second derivative is small near x = 0, so we start at a positive point (I chose 0.5).
Secondly, we note (graphically) that the derivative and the second derivative do not vanish in the same
places, so we can use Newton's method. We obtain approximately 0.6717. Notice this must be a maximum
from the sign of the nearby derivative.
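A Python sketch of this Newton iteration on the derivative (starting point 0.5, as above):

```python
import math

# Newton's method applied to df/dx to locate the maximum of
# f(x) = x*exp(-x^2) + cos(x)/10 on [0, 3].
df  = lambda x: math.exp(-x * x) * (1 - 2 * x * x) - math.sin(x) / 10
d2f = lambda x: math.exp(-x * x) * (4 * x ** 3 - 6 * x) - math.cos(x) / 10

x = 0.5
for _ in range(50):
    step = df(x) / d2f(x)
    x -= step
    if abs(step) < 1e-8:
        break
print(round(x, 4))  # ≈ 0.6717
```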

4. Suppose p is a zero of multiplicity m of f, where f^(m) is continuous on an open interval containing p. Show
that the following modified fixed point method converges with order 2:

p_{n+1} = p_n - m f(p_n)/f'(p_n)

Solution:
Let us recall the statement of quadratic convergence of the fixed point iteration:

Theorem (2.9). Suppose:

• p is a solution of the equation x = g(x).
• g'(p) = 0.
• g''(x) is continuous with |g''(x)| < M on an open interval I containing p (if the second derivative is
continuous on some bounded interval you get boundedness, though you do not necessarily know what M is).

Then, there exists a δ > 0 such that, for p_0 ∈ [p - δ, p + δ], the sequence defined by p_n = g(p_{n-1}), for
n ≥ 1, converges at least quadratically to p. Moreover, for sufficiently large n:

|p_{n+1} - p| < (M/2) |p_n - p|^2

We look to verify the hypotheses of the above and we obtain quadratic convergence. In our case,

g(x) := x - m f(x)/f'(x)

By assumption, f(p) = 0 (it's a zero), hence g(p) = p.


Assuming that p is a zero of multiplicity m for f(x), we can write f(x) = (x - p)^m q(x), where
lim_{x→p} q(x) ≠ 0. We write g(x) in terms of this new expression and reduce:

g(x) = x - m(x - p)q(x) / (mq(x) + (x - p)q'(x))


From this, we see:

g'(x) = 1 - { [mq(x) + (x - p)q'(x)][mq(x) + m(x - p)q'(x)] - [(m + 1)q'(x) + (x - p)q''(x)][m(x - p)q(x)] } / (mq(x) + (x - p)q'(x))^2

which is easily seen to vanish when x = p.

The last hypothesis is satisfied because we can assume that f is smooth close enough to p.
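The order-2 behavior is easy to see numerically. Below is a Python sketch on a hypothetical example with a triple root, f(x) = (x - 2)^3 e^x, so that m = 3 and p = 2:

```python
import math

# Modified Newton iteration p_{n+1} = p_n - m f(p_n)/f'(p_n) on a triple root.
f  = lambda x: (x - 2) ** 3 * math.exp(x)
df = lambda x: (x - 2) ** 2 * math.exp(x) * (x + 1)

m, x = 3, 1.0
errors = []
for _ in range(8):
    x = x - m * f(x) / df(x)
    errors.append(abs(x - 2))
    if errors[-1] < 1e-12:
        break

# successive errors shrink roughly like |e_{n+1}| ≈ C |e_n|^2
print(errors)
```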

Homework 5
1. Given three points (0, 1), (-1, 2), (1, 3), use Lagrange's formula to find the polynomial of degree at most
2 passing through them.
Solution:
(3/2)x^2 + (1/2)x + 1. This is really a symbolic calculation and should be done by hand. Matlab does have
symbolic packages, but they are not installed on PIC computers.
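A quick check of the hand computation, using NumPy's polyfit (which interpolates exactly when given exactly three points for a degree-2 fit):

```python
import numpy as np

# The parabola through (0, 1), (-1, 2), (1, 3).
xs = np.array([0.0, -1.0, 1.0])
ys = np.array([1.0, 2.0, 3.0])
coeffs = np.polyfit(xs, ys, 2)        # highest degree first
print(coeffs)  # ≈ [1.5, 0.5, 1.0]
```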

2. (a) Given the data:

x      0  1  2   4   6
f(x)   1  9  23  93  259

Construct the divided difference table.
(b) Using Newton's interpolation polynomial, find an approximation to f(4.2).
Solution:

(a) The code implemented was:

function DDtable=divideddifference(x,f)

%Divided difference for Matlab. Outputs the divided difference table per
%Burden and Faires Algorithm 3.2. The diagonal contains the coefficients of
%the polynomial a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + ...

n=length(x)-1; %degree of polynomial

DDtable=zeros(n+1,n+1); %initializing divided difference matrix

% Determine if f is a function handle or data, and initialize the
% first column accordingly.
if isa(f, 'function_handle')
    DDtable(:,1)=f(x(:));
else
    DDtable(1:n+1, 1) = f(:);
end

%Divided difference recursion from Algorithm 3.2.
for i=1:n
    for j=1:i
        DDtable(i+1,j+1)=(DDtable(i+1,j)-DDtable(i,j))/(x(i+1)-x(i-j+1));
    end
end


The output is:


EDU>> divideddifference([0, 1, 2, 4, 6], [1, 9, 23, 93, 259])

ans =

1 0 0 0 0
9 8 0 0 0
23 14 3 0 0
93 35 7 1 0
259 83 12 1 0
(b) The polynomial is:

p(x) = 1 + 8x + 3x(x - 1) + x(x - 1)(x - 2) = x^3 + 7x + 1

Hence, p(4.2) = 104.488.
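The table and the value p(4.2) can be double-checked with a short Python version of the recursion (the nested evaluation at the end is standard Horner-style evaluation of the Newton form):

```python
# Newton divided differences for the data above, then evaluate at 4.2.
xs = [0, 1, 2, 4, 6]
ys = [1, 9, 23, 93, 259]

n = len(xs)
table = [ys[:]]                       # table[j][i] = f[x_i, ..., x_{i+j}]
for j in range(1, n):
    prev = table[-1]
    table.append([(prev[i + 1] - prev[i]) / (xs[i + j] - xs[i])
                  for i in range(n - j)])

coeffs = [row[0] for row in table]    # → [1, 8, 3, 1, 0]

# nested evaluation of the Newton form at x = 4.2
x, p = 4.2, coeffs[-1]
for k in range(n - 2, -1, -1):
    p = p * (x - xs[k]) + coeffs[k]
print(p)  # ≈ 104.488
```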

3. Recall the forward difference operator ΔP(x) = P(x + 1) - P(x). Suppose that P is a 4th degree
polynomial such that Δ^4 P(0) = 24, Δ^3 P(0) = 6, and Δ^2 P(0) = 0. Compute Δ^2 P(10).
Solution 1: The first solution does not make use of the formula in the original problem statement.
We make some observations that are easily checked:

• Δ : C^0(R → R) → C^0(R → R) is linear, where C^0(R → R) denotes the set of continuous functions
from R → R.
• Δ(x^n) is a polynomial of degree n - 1. Easily, Δ(x^n) = (x + 1)^n - x^n = nx^(n-1) + ... + 1.

The above two observations imply that if P_n is a polynomial of degree n, then ΔP_n is a polynomial of
degree at most n - 1. In our case P is degree 4, so:

Δ^2 P(x) = Ax^2 + Bx + C
Δ^3 P(x) = Δ^2 P(x + 1) - Δ^2 P(x) = A(x + 1)^2 + B(x + 1) + C - (Ax^2 + Bx + C) = 2Ax + A + B
Δ^4 P(x) = Δ^3 P(x + 1) - Δ^3 P(x) = 2A

Plugging the desired values into the above equations implies:

Δ^4 P(0) = 24 = 2A ⟹ A = 12
Δ^3 P(0) = 6 = A + B ⟹ B = -6
Δ^2 P(0) = 0 = C ⟹ C = 0

Hence,

Δ^2 P(x) = 12x^2 - 6x ⟹ Δ^2 P(10) = 1140.

Solution 2:
We can use the formula:

f[x_0, x_1, ..., x_k] = Δ^k f(x_0) / (k! h^k)

9
Math 151a Homework 5 Charlie Marshak

Hence,

f[0, 1, 2, 3, 4] = Δ^4 P(0)/(4! · 1^4) = 24/4! = 1
f[0, 1, 2, 3] = Δ^3 P(0)/3! = 6/3! = 1
f[0, 1, 2] = Δ^2 P(0)/2! = 0

Now,

P(x) = f[0] + f[0, 1] x + f[0, 1, 2] x(x - 1) + f[0, 1, 2, 3] x(x - 1)(x - 2) + f[0, 1, 2, 3, 4] x(x - 1)(x - 2)(x - 3)

where the first two coefficients f[0] and f[0, 1] are unknown, and the remaining three were computed above.
A small calculation shows that under Δ^2 the terms whose coefficients we do not know vanish. Hence,

Δ^2 P(10) = ΔP(11) - ΔP(10)
= [P(12) - P(11)] - [P(11) - P(10)]
= P(12) - 2P(11) + P(10)
= (12)(11)(10) + (12)(11)(10)(9)
- 2[(11)(10)(9) + (11)(10)(9)(8)]
+ (10)(9)(8) + (10)(9)(8)(7)
= 1140
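Both solutions are easy to verify numerically. Since any linear part of P drops out under Δ^2, we can take an arbitrary one (the 5x + 7 below is a made-up choice):

```python
# Verify Delta^2 P(10) = 1140 for any P with the computed leading coefficients.
def P(x):
    return 5 * x + 7 + x * (x - 1) * (x - 2) + x * (x - 1) * (x - 2) * (x - 3)

d2 = P(12) - 2 * P(11) + P(10)
print(d2)  # → 1140
```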

4. Consider the function f(x) = e^(-x). We are going to approximate it with a polynomial of degree 2 and see
how good the approximation is.

(a) Find a polynomial p_2(x) = a_0 + a_1 x + a_2 x^2 which approximates f(x) on the interval [-1, 1]. Choose
the polynomial so that p_2(-1) = f(-1), p_2(0) = f(0), p_2(1) = f(1). Use Lagrange's formula.
(b) Plot both f and p_2 in Matlab on the interval [-2, 2].
(c) Write Matlab code that approximates the distance between the graphs of f and p_2.
(d) Find a C such that:
E(x) ≤ C |x - 1| |x| |x + 1|
(e) Sketch (x + 1)(x)(x - 1) and find M such that E(x) ≤ M for all x ∈ [-1, 1].
(f) How do (c) and (e) compare?
(g) Use Taylor's theorem to approximate f(x) with a polynomial of degree 2. Find C such that E(x) ≤ C |x - a|^3 for
x ∈ [-1, 1], and similarly find M such that E(x) ≤ M, as before.
(h) Which of the two processes yields the better approximation?

Solution:


(a) Use Lagrange to get the coefficients:

a_0 = 1
a_1 = -e/2 + 1/(2e)
a_2 = e/2 - 1 + 1/(2e)

EDU>> x = -1: .01: 1;
EDU>> a0 = 1;
EDU>> a1 = -exp(1)/2 + 1/(2*exp(1));
EDU>> a2 = exp(1)/2 - 1 + 1/(2*exp(1));
EDU>> p2 = @(x) a0 + a1*x + a2*x.^2;
EDU>> y1 = p2(x);
EDU>> y2 = exp(-x);
EDU>> plot(x, y1, ':', x, y2);
EDU>> axis([-2 2 0 3])
EDU>> figure(1)
(b) The dotted line is the polynomial and the solid line is f(x) = e^(-x):

(Figure: p_2 and e^(-x) on [-2, 2].)

(c) Continuing from the same workspace as before, we can find the L^2 norm or simply the sup-norm of
the difference of the two vectors, as evidenced by:

EDU>> norm(y1 - y2, 2)

ans =

0.7021

EDU>> max(abs(y1-y2))

ans =

0.0785
(d) Applying Theorem 3.3 of the text, we see that C will be the least upper bound of |f^(3)(x)|/3! = e^(-x)/6. As
such, C = e/6.
(e) Below we see the graph of g(x) = |x - 1| |x| |x + 1|. Since g(-x) = g(x), we easily see we only need
to inspect [0, 1], where g(x) = x - x^3, so g'(x) = 1 - 3x^2. Hence, the maximum is at x = √3/3
and reaches 2√3/9. Hence, M = (e/6)(2√3/9) = 2√3 e/54 ≈ 0.17.

(Figure: graph of (x + 1)(x)(x - 1).)

(f) The latter is slightly larger, being an upper bound for the error (the theory isn't busted).
(g) Expanding the Taylor series about a = 0, we have the following degree 2 approximation:

T_2(x) = 1 - x + x^2/2

and hence the theoretical error is bounded by the third order term, e^|x| |x|^3 / 6 ≤ e/6, when x ∈ [-1, 1].
(h) An identical computation as above shows the sup-norm of the difference between the actual function and
its second order Taylor approximation is about 0.218. Moreover, the theoretical error bound is e/6 ≈ 0.453. Hence,
in our case, the Lagrange polynomial provides the more accurate approximation for our scenario.
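The two sup-norm errors from (c) and (h) can be reproduced together in a few lines of Python with NumPy:

```python
import numpy as np

# Sup-norm errors on [-1, 1] for f(x) = exp(-x): the Lagrange parabola
# through -1, 0, 1 versus the degree-2 Taylor polynomial about 0.
x = np.arange(-1, 1 + 1e-9, 0.01)
f = np.exp(-x)

e = np.e
p2 = 1 + (-e / 2 + 1 / (2 * e)) * x + (e / 2 - 1 + 1 / (2 * e)) * x ** 2
t2 = 1 - x + x ** 2 / 2

err_p2 = np.max(np.abs(f - p2))
err_t2 = np.max(np.abs(f - t2))
print(err_p2, err_t2)  # ≈ 0.0785 and ≈ 0.218
```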

5. We want to show that the interpolation error with n + 1 equally spaced nodes in [a, b] has the following
form:

|f(x) - p(x)| ≤ (1/(4(n + 1))) h^(n+1) M

where |f^(n+1)(x)| ≤ M for f ∈ C^(n+1)([a, b] → R) and h = (b - a)/n.

(a) Fix x and select j so that x_j ≤ x ≤ x_{j+1}. Show that |x - x_j| |x - x_{j+1}| ≤ h^2/4.
(b) Show that ∏_{i=0}^{n} |x - x_i| ≤ (1/4) h^(n+1) (j + 1)!(n - j)!
(c) Show that if 0 ≤ j ≤ n - 1, then (j + 1)!(n - j)! ≤ n!
(d) Complete the proof by showing ∏_{i=0}^{n} |x - x_i| ≤ (1/4) n! h^(n+1) for any x ∈ [a, b].

Solution:

(a) Maximize the function:

g(x) = |x - x_j| |x - x_{j+1}| = (x - x_j)(x_{j+1} - x)

Then:

0 = g'(x_max) = x_j + x_{j+1} - 2x_max ⟹ x_max = (x_j + x_{j+1})/2

Hence, g(x_max) = (x_{j+1} - x_j)^2 / 4 = h^2/4.
(b) The idea is that after we have fixed j such that x_j ≤ x ≤ x_{j+1}, each factor |x - x_k| can be crudely
bounded by a multiple of h that depends on the distance of x_k from x_j. For instance:

|x - x_{j-1}| ≤ |x - x_j| + |x_j - x_{j-1}| ≤ 2h
|x - x_{j-2}| ≤ |x - x_j| + |x_j - x_{j-1}| + |x_{j-1} - x_{j-2}| ≤ 3h
...
|x - x_{j-k}| ≤ |x - x_j| + ... + |x_{j-k+1} - x_{j-k}| ≤ (k + 1)h
...
|x - x_0| ≤ |x - x_j| + ... + |x_1 - x_0| ≤ (j + 1)h

Multiplying these all together yields:

∏_{k<j} |x - x_k| ≤ h^j (j + 1)!

An identical computation on the other side of x reveals that:

∏_{k>j+1} |x - x_k| ≤ h^(n-j-1) (n - j)!

Multiplying the above with (a), we obtain the desired result:

∏_{i=0}^{n} |x - x_i| ≤ (1/4) h^(n+1) (j + 1)!(n - j)!

(c) Well, comparing factor by factor (there are j factors in each bracket, and n ≥ j + 1):

n! = [n(n - 1) ··· (n - j + 1)] · (n - j)! ≥ [(j + 1)(j) ··· 2] · (n - j)! = (j + 1)!(n - j)!

(d) Combining the above with Lagrange's remainder formula yields the desired result!
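A numerical sanity check of the final bound, for the sample choice f = sin on [0, π] with n = 4 (so that M = 1 bounds |f^(5)| = |cos|):

```python
import numpy as np

# Interpolate sin at 5 equally spaced nodes on [0, pi] and compare the
# actual max error with the bound h^(n+1) M / (4(n+1)), M = 1.
a, b, n = 0.0, np.pi, 4
nodes = np.linspace(a, b, n + 1)
poly = np.polyfit(nodes, np.sin(nodes), n)   # exact interpolant through nodes

h = (b - a) / n
bound = h ** (n + 1) / (4 * (n + 1))

xs = np.linspace(a, b, 2001)
max_err = np.max(np.abs(np.sin(xs) - np.polyval(poly, xs)))
print(max_err, bound)  # the error sits below the bound
```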

Homework 6
1. (a) Use the most accurate three-point formula to determine each missing entry in the following table:

x      f(x)      f'(x)
1.1    1.52918
1.2    1.64024
1.3    1.70470
1.4    1.71277


(b) The data above were taken from the function f(x) = x sin(x) + x^2 cos(x). Compute the actual errors
and find the error bounds using the suggested formulae.
(c) Repeat (a) using four digit rounding arithmetic, and compare the errors to (b).
Solution:

(a) The most accurate three-point method is the three-point midpoint formula. However, this only works
when we have points sandwiching the point where we want the derivative. For endpoints,
we have to use a different formula. We will write Matlab code that performs the desired operations,
though this is HIGHLY unnecessary. It also formats the formulae in a HIGHLY unsavory way. Still,
here they are; the code uses the mildly cool isa function (as in "is a" thing), which sorts out the various
classes of Matlab objects.

function df0 = ThreePointRightEPF( f, x0, h)

%Computes the three-point endpoint formula at x0 with stepsize h from the right.
%Input can be either the function handle f or three data points in an array labeled f.
%The program sorts out what data type f is and then goes from there. If f
%is a vector, then we assume that the data points are ordered with respect to
%the positive x-axis, i.e. if x1<x2, then f(x1) comes before f(x2) in the vector.

if isa(f, 'function_handle')
    df0 = 1/(2*h)*(3*f(x0) - 4*f(x0 - h) + f(x0 - 2*h));
else
    df0 = 1/(2*h)*(3*f(3) - 4*f(2) + f(1));
end

function df0 = ThreePointMPF( f, x0, h)

%Computes the three-point midpoint formula at x0 with stepsize h, assuming x0 is a midpoint.
%Input can be either the function handle f or the two data points f(x0-h), f(x0+h)
%in an array labeled f, ordered with respect to the positive x-axis,
%i.e. if x1<x2, then f(x1) comes before f(x2) in the vector.
if isa(f, 'function_handle')
    df0 = (f(x0 + h) - f(x0 - h))/(2*h);
else
    df0 = (f(2) - f(1))/(2*h);
end

function df0 = ThreePointLeftEPF( f, x0, h)

%Computes the three-point endpoint formula at x0 with stepsize h from the left.
%Input can be either the function handle f or three data points in an array labeled f,
%ordered with respect to the positive x-axis,
%i.e. if x1<x2, then f(x1) comes before f(x2) in the vector.
if isa(f, 'function_handle')
    df0 = 1/(2*h)*(-3*f(x0) + 4*f(x0 + h) - f(x0 + 2*h));
else
    df0 = 1/(2*h)*(-3*f(1) + 4*f(2) - f(3));
end

(b) This is a computation. The important take-away is to know the error bounds for the various
methods. They can be found on pages 176-177 of the book.
(c) This was long and required patience. The error jumps around in both computations. This
demonstrates the instability of numerical differentiation.
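For comparison, here are the same formulas applied to the tabulated data in Python (h = 0.1):

```python
# Three-point formulas on the table: midpoint for the interior entries,
# endpoint formulas at x = 1.1 and x = 1.4.
fx = {1.1: 1.52918, 1.2: 1.64024, 1.3: 1.70470, 1.4: 1.71277}
h = 0.1

d_11 = (-3 * fx[1.1] + 4 * fx[1.2] - fx[1.3]) / (2 * h)   # left endpoint
d_12 = (fx[1.3] - fx[1.1]) / (2 * h)                      # midpoint
d_13 = (fx[1.4] - fx[1.2]) / (2 * h)                      # midpoint
d_14 = (3 * fx[1.4] - 4 * fx[1.3] + fx[1.2]) / (2 * h)    # right endpoint
print(d_11, d_12, d_13, d_14)  # ≈ 1.3436, 0.8776, 0.3626, -0.2013
```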

2. Let f(x) = 3xe^x - cos(x). Use the following data to approximate f''(1.3) with h = 0.1 and with h = 0.01:

x      1.20      1.29      1.30      1.31      1.40
f(x)   11.59006  13.78176  14.04276  14.30741  16.86187

Compare your results to the exact value of f''(1.3).


Solution:
Use the second difference method, which is:

f''(x) ≈ (f(x + h) - 2f(x) + f(x - h)) / h^2

Note that the h = 0.1 approximation is better than the h = 0.01 approximation due to round-off instability
(thanks Will Feldman!). Let's show why. Assuming the computer stores the value of f(x) as f~(x), we
can assume that |f~(x) - f(x)| ≤ ε for all x. In particular:

|(f~(x+h) - 2f~(x) + f~(x-h))/h^2 - f''(x)|
≤ |(f~(x+h) - f(x+h)) - 2(f~(x) - f(x)) + (f~(x-h) - f(x-h))| / h^2 + |(f(x+h) - 2f(x) + f(x-h))/h^2 - f''(x)|
≤ 4ε/h^2 + |(f(x+h) - 2f(x) + f(x-h))/h^2 - f''(x)|

Expanding f(x ± h) in Taylor series through the fourth order term and cancelling, the second summand is
(h^2/24)(f^(4)(ξ_+) + f^(4)(ξ_-)), so the total error is at most

4ε/h^2 + M h^2 / 12

where M is chosen so that |f^(4)(x)| ≤ M for each x ∈ [1.2, 1.4]. Well, f^(4)(x) = 3xe^x + 12e^x - cos(x), and
for x ∈ [1.2, 1.4] we have:

|f^(4)| ≤ 3(1.4)e^1.4 + 12e^1.4 + 1 ≤ 67

Moreover, if we want the h minimizing this bound, then:

0 = (d/dh)(4ε/h^2 + M h^2/12) ⟹ h = (48ε/M)^(1/4)

Now, since we have five decimal digits, assuming we have rounded, we know that ε = 5 × 10^(-6). Hence
the optimal h is h ≈ 0.043, which is ostensibly bigger than 0.01.
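Here is a Python version of the computation, which also exhibits the round-off effect: the h = 0.1 answer beats the h = 0.01 answer.

```python
import math

# Second-difference approximations of f''(1.3) from the tabulated values,
# compared with the exact second derivative of f(x) = 3x e^x - cos x.
fx = {1.20: 11.59006, 1.29: 13.78176, 1.30: 14.04276,
      1.31: 14.30741, 1.40: 16.86187}

def second_diff(h):
    return (fx[round(1.3 + h, 2)] - 2 * fx[1.30] + fx[round(1.3 - h, 2)]) / h ** 2

exact = 3 * math.exp(1.3) * (1.3 + 2) + math.cos(1.3)   # f'' = 3e^x(x+2) + cos x
print(second_diff(0.1), second_diff(0.01), exact)
# the h = 0.1 value is the closer of the two to the exact ≈ 36.594
```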


3. Let f(x) = e^x. Define the errors made by approximating f'(2) with the forward and centered difference
formulas:

E_f(h) = |f'(2) - (f(2 + h) - f(2))/h|

E_c(h) = |f'(2) - (f(2 + h) - f(2 - h))/(2h)|

(a) Write code which:


• Plots log(E_f(h)) versus log(h) so that you can see the data points
• The same for E_c(h)
(b) Use basic fitting with a linear fit to approximate the data. What is the meaning of the slopes?
(c) Repeat question (a), but this time with h = 10^(-1), ..., 10^(-10). Determine the optimal h for each
numerical method and compare the graphs.

Solution:

(a) The script below is for (a) and (c)

%Homework 6 Problem 3 Script

E_f = @(h) abs(exp(2) - (exp(2+h) - exp(2)).*h.^(-1));
E_c = @(h) abs(exp(2) - (exp(2+h) - exp(2-h)).*(2*h).^(-1));

x = -1:-1:-5;  %Create an array that stores the exponents
h = 10.^x;     %Create an array that stores the desired h values

Error_forward = E_f(h);
Error_centered = E_c(h);

figure(1);
plot(log10(h), log10(Error_forward), 'o');
title('Error of the Forward Difference Method');
xlabel('Log(h)');
ylabel('Log(E_f(h))');
figure(2);
plot(log10(h), log10(Error_centered), 'o');
title('Error of the Centered Difference Method')
xlabel('Log(h)');
ylabel('Log(E_c(h))');

%%%%%% part (c)

x = -1:-1:-10;  %Create an array that stores the exponents
h = 10.^x;      %Create an array that stores the desired h values

Error_forward = E_f(h);
Error_centered = E_c(h);

figure(3);
plot(log10(h), log10(Error_forward), 'o');
title('Error of the Forward Difference Method');
xlabel('Log(h)');
ylabel('Log(E_f(h))');
figure(4);
plot(log10(h), log10(Error_centered), 'o');
title('Error of the Centered Difference Method')
xlabel('Log(h)');
ylabel('Log(E_c(h))');

Remark. Using the loglog command in Matlab causes the basic fitting to act a little bit
strangely, so it's best to take logarithms of the y-values and x-values manually.
(b) Check this website for directions on how to do this (step-by-step):
http://www.swarthmore.edu/NatSci/echeeve1/Ref/Matlab/CurveFit/LinearCurveFit.html

(Figures: log-log plots of E_f and E_c against h. Basic fitting gives the line y = 1.003x + 0.58 for the
forward method and y = 1.964x + 0.01925 for the centered method. In the part (c) plots, the error curves
bottom out at the optimal h and then grow again as h shrinks further.)
Note the slopes of the first two graphs are precisely the order of the method!
(c) Using an identical process to problem 2 (see that problem for notation), we have for the forward difference:

|(f~(x+h) - f~(x))/h - f'(x)| ≤ 2ε/h + |(f(x+h) - f(x))/h - f'(x)| ≤ 2ε/h + Mh/2

where M is a bound on the second derivative of f in an appropriate interval. Taking the derivative with
respect to h, setting it equal to zero, and solving yields that the optimal h is √(4ε/M); the analogous
computation for the centered difference gives (3ε/M)^(1/3). Numerically, with ε taken to be machine epsilon:

Centered difference method: (3ε/M)^(1/3) ≈ 4.1947 × 10^(-6)
Forward difference method: ≈ 1.9841 × 10^(-8)

where we have used M = e^2.2, as this is a somewhat larger estimate of the second derivative than needed.
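The slopes (and hence the orders) can be confirmed without the plotting machinery; a Python sketch:

```python
import math

# The forward-difference error of exp at x = 2 scales like h, the centered
# error like h^2 (before round-off takes over at very small h).
d = math.exp(2)
E_f = lambda h: abs(d - (math.exp(2 + h) - math.exp(2)) / h)
E_c = lambda h: abs(d - (math.exp(2 + h) - math.exp(2 - h)) / (2 * h))

# log-log slopes over h = 1e-1 .. 1e-3, where truncation error dominates
slope = lambda E: (math.log10(E(1e-3)) - math.log10(E(1e-1))) / (-3 - (-1))
print(slope(E_f), slope(E_c))  # ≈ 1.0 and ≈ 2.0

# round-off dominates for tiny h: the centered error at h = 1e-5 is already
# several orders of magnitude below the forward error there
print(E_c(1e-5) < E_f(1e-5))  # True
```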

4. Apply the extrapolation process to obtain the approximation N_4(h) to f'(x_0) for the
following function and step size:

f(x) = 2^x sin(x), x_0 = 1.05, h = 0.4

Solution: We will use the centered difference formula, since then the extrapolation process is quicker: the
asymptotic expansion misses the odd order terms (see pages 189-190 of the textbook).
We apply the same code as in exercise 5, except with a function handle instead of a vector.

N_1= @(h) (2^(1.05+ h)*sin(1.05 + h) - 2^(1.05- h)*sin(1.05 -h))/(2*h)

N_1 =

@(h)(2^(1.05+h)*sin(1.05+h)-2^(1.05-h)*sin(1.05-h))/(2*h)

EDU>> RichExtraEven(N_1, .4, 4)

ans =

Columns 1 through 3

2.203165697391559 0 0
2.257237243413356 2.275261092087288 0
2.270674167306347 2.275153141937344 2.275145945260681
2.274028266459540 2.275146299510605 2.275145843348822

Column 4

0
0
0
2.275145841731173


One can check this agrees with the exact value fairly well.

5. The following data can be used to approximate the integral:

M = ∫_0^(3π/2) cos(x) dx

N_1(h) = 2.356194,
N_1(h/2) = -0.4879837,
N_1(h/4) = -0.8815732,
N_1(h/8) = -0.9709157

Assuming the asymptotic error of N_1(h) is given by:

M - N_1(h) = K_1 h^2 + K_2 h^4 + ...

determine N_4(h).
Solution: Here is some code that performs Richardson extrapolation. We are using the formula at the top
of pg. 188, because the centered difference has an even-powered asymptotic expansion:

function N_nTable = RichExtraEven(N_1, h, n)

%Performs Richardson extrapolation for the centered difference or any numerical
%method with an even-power asymptotic expansion. N_1 is the method, inputted
%as a vector of values or as a function handle. Note that N_n(h) = N_nTable(n, n).
%See Table 4.6 on pg. 188 to see exactly what the table reads when n=4.

N_nTable = zeros(n, n); %array to hold computed values

if isa(N_1, 'function_handle')
    for i = 1:n
        N_nTable(i, 1) = N_1(h/(2^(i-1)));
    end
else
    N_nTable(:, 1) = N_1;
end

for j = 2:n
    for k = j:n
        N_nTable(k, j) = N_nTable(k, j-1) + (N_nTable(k, j-1) - N_nTable(k-1, j-1))/(4^(j-1) - 1);
    end
end

end

Here is how we can call the function:

EDU>> RichExtraEven([2.356194, -.4879837, -0.8815732, -0.9709167], .4, 4)

ans =

Columns 1 through 3

2.356194000000000 0 0
-0.487983700000000 -1.436042933333333 0
-0.881573200000000 -1.012769700000000 -0.984551484444444
-0.970916700000000 -1.000697866666667 -0.999893077777778

Column 4

0
0
0
-1.000136595132275
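The same table can be reproduced in a few lines of Python, using the identical recurrence:

```python
# Richardson extrapolation (even-power expansion) on the four N_1 values.
N1 = [2.356194, -0.4879837, -0.8815732, -0.9709167]
n = len(N1)

T = [[0.0] * n for _ in range(n)]
for i in range(n):
    T[i][0] = N1[i]
for j in range(1, n):
    for k in range(j, n):
        T[k][j] = T[k][j - 1] + (T[k][j - 1] - T[k - 1][j - 1]) / (4 ** j - 1)

print(T[n - 1][n - 1])  # ≈ -1.0001366, close to the exact integral -1
```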

6. The forward difference can be expressed as:

f'(x_0) = (1/h)[f(x_0 + h) - f(x_0)] - (h/2) f''(x_0) - (h^2/6) f'''(x_0) + O(h^3)

Use extrapolation to derive an O(h^3) formula for f'(x_0).
Solution:
As you know, extrapolation is the process by which we use a numerical method with a known asymp-
totic expansion and substitute various timesteps to obtain a new numerical method that has even faster
convergence.
You can read about the full generality on wikipedia, but we are going to assume that our method is first
order and has all powers of h in this expansion as well:

f 0 (x0 ) = N1 (h) + K1 h + K2 h2 + K3 h3 + . . .
We note that the formula in the book on the top of page 188 assumes that all the odd powers cancel and
hence the expansion only has even powers. We have:
f 0 (x0 ) = N1 (h) + K1 h + K2 h + K3 h3 + . . .
     2  3
0 h h h h
f (x0 ) = N1 + K1 + K2 + K3 + ...
2 2 2 2


Multiplying the second equation by 2 and then subtracting the former from the latter shows:

f'(x_0) = [2N_1(h/2) - N_1(h)] + K~_2 h^2 + K~_3 h^3 + ...

We could proceed identically, but I think it is favorable now to go to slightly more generality to get the
general recurrence relation. Suppose that we are inspecting:

f'(x_0) = N_n(h) + K_n h^n + K_{n+1} h^(n+1) + ...

Now we redo everything with timestep h/2. Again, we could do it with a different timestep,
but there is no real advantage, because we can just shrink h sufficiently small whenever performing such
an algorithm. Again, you can look at Wikipedia for the full generality. We have:

f'(x_0) = N_n(h) + K_n h^n + K_{n+1} h^(n+1) + ...

f'(x_0) = N_n(h/2) + K_n (h/2)^n + K_{n+1} (h/2)^(n+1) + ...

Multiplying the latter equation by 2^n and then subtracting the former from it, we have:

2^n f'(x_0) - f'(x_0) = 2^n N_n(h/2) - N_n(h) + O(h^(n+1))

⟹ f'(x_0) = [2^n N_n(h/2) - N_n(h)] / (2^n - 1) + O(h^(n+1))

and the bracketed quotient is, by definition, N_{n+1}(h).

In our case,

N_3(h) = [2^2 N_2(h/2) - N_2(h)] / (2^2 - 1) = [2^2 (2N_1(h/4) - N_1(h/2)) - (2N_1(h/2) - N_1(h))] / (2^2 - 1)

Now,

N_1(h) = (f(x_0 + h) - f(x_0)) / h

Doing some algebra yields the result:

N_3(h) = [f(x_0 + h) - 12f(x_0 + h/2) + 32f(x_0 + h/4) - 21f(x_0)] / (3h)
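Finally, the derived formula can be checked to be third order numerically: halving h should cut the error by roughly 2^3 = 8. A Python sketch, for the sample choice f = exp at x_0 = 1:

```python
import math

# Error of the derived O(h^3) formula for f'(x_0), f = exp, x_0 = 1.
def N3(f, x0, h):
    return (f(x0 + h) - 12 * f(x0 + h / 2) + 32 * f(x0 + h / 4)
            - 21 * f(x0)) / (3 * h)

exact = math.e                      # (e^x)' at x_0 = 1
e1 = abs(N3(math.exp, 1.0, 0.1) - exact)
e2 = abs(N3(math.exp, 1.0, 0.05) - exact)
print(e1 / e2)  # ≈ 8, as expected for an O(h^3) method
```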

