Contents
Homework 1
Homework 2
Homework 3
Homework 4
Homework 5
Homework 6
Homework 7
Homework 1
1. (a) One can use Matlab or Mathematica/WolframAlpha to compute the Taylor series. If you wanted to use
only pen and paper, one could simply write out the Taylor series for cos(2x), expand (x − 2)², and do
the expected operations to get the desired Taylor series:
4 + 6x − x² − 4x³
Then plug in numbers.
(b) To do this problem completely rigorously, one must first find the fourth order derivative, check
that its maximum occurs at .4 (in our case the fourth order derivative is in fact increasing),
and lastly make the desired computations at the right endpoint. I will include the Matlab
code for the last bit:
This is the upper bound. To get the actual error, you must subtract the answer you got in (a).
2. One could compute the Taylor polynomial directly, but one should recognize this as the geometric series. In
particular, the Taylor polynomial is simply:
P_n(x) = 1 + x + x² + ... + xⁿ
Math 151a Homework 1 Charlie Marshak
(.5)ⁿ ≤ 10⁻⁶
Solve for n above and take the ceiling of that.
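As a quick check (a Python sketch; taking log base 10 of both sides gives n ≥ 6/log10(2)):

```python
import math

# (0.5)^n <= 10^-6  =>  n * log10(0.5) <= -6  =>  n >= 6 / log10(2)
n = math.ceil(6 / math.log10(2))
print(n)  # 20 terms suffice
```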
3. (a) Let |f(p)| = ε > 0 (this is true by hypothesis). By continuity, we can choose a δ depending on ε such
that for all x with |x − p| < δ:
|f(x)| > 0
5. To use matlab for such computations, see this section in the documentation:
http://www.mathworks.com/help/symbolic/digits.html
Here is an example:
>> digits(3)
>> vpa(pi)
ans =
3.14
digits sets the number of significant digits, and vpa stands for variable-precision arithmetic.
6. For (a), use L'Hospital's rule. The other parts are computations similar to problems 1 and 5. Here is the
solution copied and pasted from the manual:
Homework 2
1. (a) .6412
(b) Same as (a)
(c) They are actually both 17.
and you must begin your iterations using a negative number since the root is negative. For instance, let [a, b] be
[−1, −.001] if you like.
Homework 3
1. (a) Using the grapher application on Mac:
(Graph from the grapher application; the axes span roughly [−5, 25] × [−5, 2.5].)
function x = fixedpoint(g, start, TOL, MAX_IT)
% Fixed-point iteration: apply g repeatedly until two successive
% iterates agree to within TOL (or MAX_IT iterations are reached).
input = start;
x = zeros(1, 2);
for i = 1:MAX_IT
    output = g(input);
    x(1, 1) = output;   % latest iterate
    x(1, 2) = i;        % number of iterations used
    if abs(output - input) < TOL
        return
    end
    input = output;
end
end
2. Use the fixed point method. The reason for the faster convergence has to do with the upper bound on the
derivative of each function: the smaller the bound, the faster the iteration should converge. The last one converges
the fastest.
3. (Thanks to former TA William Feldman.) The hardest part of this problem is finding the function to
which to apply the fixed point method.
It is simple to check that 25^(1/3) is its fixed point and that the function has a derivative strictly bounded by 1.
In this case, bisection on [2.5, 3] was much slower than fixed point iteration starting at 2.5. The former required
13 steps, while the latter only 6.
4. One iteration on Matlab yields -.88 starting at -1. You cannot start at p0 = 0 because the function has
zero derivative there.
7. One solves
f(i) = (1000/(i/12)) [1 − (1 + i/12)^(−360)] − 135,000 = 0
using any method to get approximately 8.1%. Note that the hardest part of this problem is translating
the scenario into mathematics. The key is to remember that we must use i/12 instead of just i, because
the formula requires us to use interest per period, which in this case is per month.
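A sketch of one such method in Python (bisection; the bracket [0.05, 0.12] is an assumption chosen to straddle the root):

```python
def f(i):
    # Annuity equation: monthly payment 1000, 360 months, principal 135,000.
    m = i / 12
    return (1000 / m) * (1 - (1 + m) ** (-360)) - 135000

# Bisection: f changes sign on the bracket, so keep halving it.
a, b = 0.05, 0.12
for _ in range(60):
    c = (a + b) / 2
    if f(a) * f(c) <= 0:
        b = c
    else:
        a = c
print(round(c, 4))  # about 0.081, i.e. 8.1%
```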
Homework 4
1. Let f(x) = (x − 1)²(x − 3) on the interval x ∈ [0, 2].
(a) Plot f(x) with Matlab. Where is the zero according to the diagram?
(b) How many iterations are needed with Newton's method to find this zero with accuracy 10⁻¹⁰?
(c) How many iterations are needed with the modified version of Newton's method in the same setting?
Solution:
(a) (Plot of f on [0, 2]: the zero is at x = 1.)
Did you really even need to graph it? Still, some of you got this wrong. Always remember to take
a step back from the problem and ask yourself, "What's the easiest possible way to do this?"
(b + c) Using Newton's method, we required 34 iterations as opposed to only 5 for the modified Newton's method
when starting at 0. While this difference seems small for a computer that makes such computations in
milliseconds, in computational problems you often have to perform Newton's method thousands
(or even millions) of times, so this kind of improvement becomes much more significant in such
scenarios.
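The comparison is easy to reproduce in Python (a sketch; the iteration counts depend on the stopping rule, here the step size against the tolerance):

```python
def f(x):
    return (x - 1) ** 2 * (x - 3)

def df(x):
    return (x - 1) * (3 * x - 7)  # derivative, factored via the product rule

def newton(x, m, tol=1e-10, max_it=200):
    # m = 1 is plain Newton; m = 2 is the modified method for a double root.
    for i in range(1, max_it + 1):
        if df(x) == 0:
            return x, i
        x_new = x - m * f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new, i
        x = x_new
    return x, max_it

root_plain, it_plain = newton(0.0, 1)
root_mod, it_mod = newton(0.0, 2)
print(it_plain, it_mod)  # the modified method needs far fewer iterations
```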
2. Use Newton's method and its modified form to approximate the solution:
f(x) = x e^(−x²) + cos(x)/10
within 10⁻⁸ on [0, 3], and discuss your method(s).
Solution:
Well,
df/dx = e^(−x²) − 2x² e^(−x²) − sin(x)/10
and we set this to zero and apply Newton's method. We also need to compute the second derivative:
d²f/dx² = −6x e^(−x²) + 4x³ e^(−x²) − cos(x)/10
Note that the second derivative at x = 0 vanishes, so we must start at a positive point (I chose .5).
Secondly, we note (graphically) that the derivative and the second derivative do not vanish in the same
places, so we can use Newton's method. We obtain approximately .6717. Notice this must be a maximum
by the sign of the derivative nearby.
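A Python sketch of the same Newton iteration on f', starting at .5 as above:

```python
import math

# Critical point of f(x) = x*exp(-x^2) + cos(x)/10: apply Newton's
# method to g = f', using g' = f''.
def g(x):   # f'(x)
    return math.exp(-x * x) * (1 - 2 * x * x) - math.sin(x) / 10

def dg(x):  # f''(x)
    return math.exp(-x * x) * (4 * x ** 3 - 6 * x) - math.cos(x) / 10

x = 0.5
for _ in range(50):
    x_new = x - g(x) / dg(x)
    if abs(x_new - x) < 1e-8:
        break
    x = x_new
print(round(x_new, 4))  # about 0.6717
```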
4. Suppose p is a zero of multiplicity m of f, where f^(m) is continuous on an open interval containing p. Show
that the following modified fixed point method converges with order 2:
p_(n+1) = p_n − m f(p_n)/f'(p_n)
Solution:
Let us recall the statement of quadratic convergence of the fixed point iteration: suppose g(p) = p,
g'(p) = 0, and g'' is continuous with |g''(x)| < M on an open interval containing p.
Then, there exists a δ > 0 such that, for p_0 ∈ [p − δ, p + δ], the sequence defined by p_n = g(p_(n−1)), when
n ≥ 1, converges at least quadratically to p. Moreover, for sufficiently large n:
|p_(n+1) − p| < (M/2) |p_n − p|²
We look to verify the hypotheses of the above, and we obtain quadratic convergence. In our case,
g(x) := x − m f(x)/f'(x)
Since p is a zero of multiplicity m, we may write f(x) = (x − p)^m q(x) with q(p) ≠ 0, so that
g(x) = x − m(x − p)q(x)/(m q(x) + (x − p) q'(x))
Homework 5
1. Given three points (0, 1), (−1, 2), (1, 3), use Lagrange's formula to find the polynomial of degree at most
2 passing through them.
Solution:
p_2(x) = (3/2)x² + (1/2)x + 1. This is really a symbolic calculation and should be done by hand. Matlab does have
symbolic packages, but they are not installed on PIC computers.
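A quick Python check that p_2(x) = (3/2)x² + (1/2)x + 1 does interpolate the three points:

```python
p = lambda x: 1.5 * x ** 2 + 0.5 * x + 1

# The interpolation conditions at the three given points.
for x, y in [(0, 1), (-1, 2), (1, 3)]:
    assert abs(p(x) - y) < 1e-12
print("p matches all three points")
```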
function DDtable=divideddifference(x,f)
%Divided difference table for Matlab, per Burden and Faires Algorithm 3.2.
%The diagonal contains the coefficients of the Newton form
%a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + ...
n = length(x) - 1;
DDtable = zeros(n+1, n+1);
if isa(f, 'function_handle')
    DDtable(:,1) = f(x(:));
else
    DDtable(1:n+1, 1) = f(:);
end
for j = 2:n+1
    for i = j:n+1
        DDtable(i, j) = (DDtable(i, j-1) - DDtable(i-1, j-1)) / (x(i) - x(i-j+1));
    end
end
end
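The same table in a short Python sketch, checked on nodes 0, 1, 2 with f(x) = x² + 1 (an illustrative example, not the homework data):

```python
def divided_differences(xs, ys):
    # table[i][j] holds f[x_{i-j}, ..., x_i]; the diagonal gives the
    # Newton-form coefficients.
    n = len(xs)
    table = [[0.0] * n for _ in range(n)]
    for i in range(n):
        table[i][0] = ys[i]
    for j in range(1, n):
        for i in range(j, n):
            table[i][j] = (table[i][j-1] - table[i-1][j-1]) / (xs[i] - xs[i-j])
    return table

t = divided_differences([0, 1, 2], [1, 2, 5])
print([t[i][i] for i in range(3)])  # coefficients of 1 + 1*(x-0) + 1*(x-0)(x-1)
```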
ans =
1 0 0 0 0
9 8 0 0 0
23 14 3 0 0
93 35 7 1 0
259 83 12 1 0
(b) The polynomial is:
3. Recall the forward difference operator ΔP(x) = P(x + 1) − P(x). Suppose that P is a 4th degree
polynomial such that Δ⁴P(0) = 24, Δ³P(0) = 6, and Δ²P(0) = 0. Compute Δ²P(10).
Solution 1: The first solution does not make use of the formula expounded in the original problem statement.
We make two observations that are easily checked: Δ is linear, and Δ(xⁿ) is a polynomial of degree n − 1.
The above two observations imply that if P_n is a polynomial of degree n, then ΔP_n is a polynomial of
degree at most n − 1. In our case P is degree 4, so:
Δ²P(x) = Ax² + Bx + C
Δ³P(x) = Δ²P(x + 1) − Δ²P(x) = A(x + 1)² + B(x + 1) + C − (Ax² + Bx + C) = 2Ax + A + B
Δ⁴P(x) = Δ³P(x + 1) − Δ³P(x) = 2A
Δ⁴P(0) = 24 = 2A ⟹ A = 12
Δ³P(0) = 6 = A + B ⟹ B = −6
Δ²P(0) = 0 = C ⟹ C = 0
Hence,
Δ²P(x) = 12x² − 6x ⟹ Δ²P(10) = 1140.
Solution 2:
We can use the formula:
f[x_0, x_1, ..., x_k] = (1/(k! h^k)) Δᵏ f(x_0)
Hence,
f[0, 1, 2, 3, 4] = (1/(4! · 1⁴)) Δ⁴P(0) = 24/4! = 1
f[0, 1, 2, 3] = (1/(3! · 1³)) Δ³P(0) = 6/3! = 1
f[0, 1, 2] = (1/(2! · 1²)) Δ²P(0) = 0
Now, a small calculation shows that in Δ²P(x) the coefficients we do not know vanish. Hence,
Δ²P(10) = 1140, as before.
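We can sanity-check the answer in Python by building a quartic with exactly these forward differences from the Newton forward form P(x) = Σ Δᵏ P(0)·C(x, k), taking the two unspecified values P(0) and ΔP(0) to be 0 (any choice works, since they do not affect Δ²P(10)):

```python
from math import comb

def P(x):
    # Newton forward form with Delta^2 P(0) = 0, Delta^3 P(0) = 6,
    # Delta^4 P(0) = 24; the unknown P(0), Delta P(0) are set to 0.
    return 0 * comb(x, 2) + 6 * comb(x, 3) + 24 * comb(x, 4)

def delta2(f, x):
    # Second forward difference with unit step.
    return f(x + 2) - 2 * f(x + 1) + f(x)

print(delta2(P, 10))  # 1140
```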
4. Consider the function f(x) = e^(−x). We are going to approximate it with a polynomial of degree 2 and see
how good the approximation is.
(a) Find a polynomial p_2(x) = a_0 + a_1 x + a_2 x², which approximates f(x) on the interval [−1, 1]. Choose
the polynomial so that p_2(−1) = f(−1), p_2(0) = f(0), p_2(1) = f(1). Use Lagrange's formula.
(b) Plot both f and p_2 in Matlab on the interval [−2, 2].
(c) Write Matlab code that approximates the distance between the graphs of f and p_2.
(d) Find a C such that:
E(x) ≤ C|x − 1||x||x + 1|
(e) Sketch |(x + 1)(x)(x − 1)| and find M such that E(x) ≤ M for all x ∈ [−1, 1].
(f) How do (c) and (e) compare?
(g) Use Taylor's theorem to approximate f(x) to degree 2. Find C such that E(x) ≤ C|x − a|³ for
x ∈ [−1, 1] and, similarly, find M such that E(x) ≤ M as before.
(h) Which of the two processes yields the best approximations?
Solution:
(a) Using Lagrange's formula:
a_0 = 1
a_1 = −e/2 + 1/(2e)
a_2 = e/2 − 1 + 1/(2e)
EDU>> x = -1: .01: 1;
EDU>> a0 = 1;
EDU>> a1 = -exp(1)/2 + 1/(2*exp(1));
EDU>> a2 = exp(1)/2 -1 + 1/(2*exp(1));
EDU>> p2= @(x) a0 + a1*x + a2*x.^2;
EDU>> y1 = p2(x);
EDU>> y2 = exp(-x);
EDU>> plot(x, y1, ':', x, y2);
EDU>> axis([-2 2 0 3])
EDU>> figure(1)
(b) The dotted line is the polynomial and the solid is f(x) = e^(−x):
(Plot over [−2, 2] with the axis window [−2, 2] × [0, 3].)
(c) Continuing from the same workspace as before, we can find the L2 norm or simply the sup-norm of
the difference of the two vectors, as evidenced by:
EDU>> norm(y1 - y2, 2)
ans =
0.7021
EDU>> max(abs(y1-y2))
ans =
0.0785
(d) Applying Theorem 3.3 of the text, we see that C will be the least upper bound of |f⁽³⁾(x)|/3! = e^(−x)/6
on [−1, 1]. As such, C = e/6.
(e) Below is the graph of g(x) = |x − 1| |x| |x + 1|. Since g(−x) = g(x), we only need
to inspect [0, 1], where g(x) = −x³ + x, so g'(x) = −3x² + 1. Hence, the maximum is at x = √3/3
and reaches 2√3/9. Hence, M = (e/6)(2√3/9) = 2√3 e/54 ≈ .17.
(Graph of g; the axes span roughly [−2.4, 2.4] × [−1.6, 1.6].)
(f) The latter is slightly larger, being a bound on the error (the theory isn't busted).
(g) Expanding the Taylor series about a = 0, we have the following degree 2 approximation:
T_2(x) = 1 − x + x²/2
and hence the theoretical error is bounded by the third order term, e|x|³/6 ≤ e/6, when x ∈ [−1, 1].
(h) An identical computation as above shows that the sup-norm of the difference between the actual function and
its second order Taylor approximation is about .218. Moreover, the theoretical error bound is about .453. Hence,
in our case, the Lagrange polynomial provides the more accurate approximation for our scenario.
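A Python version of the comparison in (c) and (h), on the same grid:

```python
import math

e = math.e
a1 = -e / 2 + 1 / (2 * e)
a2 = e / 2 - 1 + 1 / (2 * e)
p2 = lambda x: 1 + a1 * x + a2 * x * x   # Lagrange interpolant
t2 = lambda x: 1 - x + x * x / 2         # degree-2 Taylor polynomial

xs = [-1 + 0.01 * k for k in range(201)]
err_lagrange = max(abs(math.exp(-x) - p2(x)) for x in xs)
err_taylor = max(abs(math.exp(-x) - t2(x)) for x in xs)
print(round(err_lagrange, 4), round(err_taylor, 3))  # about 0.0785 and 0.218
```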
5. We want to show that the interpolation error with n + 1 equally spaced nodes in [a, b] has the following
form:
|f(x) − p(x)| ≤ (1/(4(n + 1))) h^(n+1) M
where |f^(n+1)(x)| ≤ M for f ∈ C^(n+1)([a, b]) and h = (b − a)/n.
(a) Fix x and select j so that x_j ≤ x ≤ x_(j+1). Show that |x − x_j||x − x_(j+1)| ≤ h²/4.
(b) Show that ∏_(i=0)^n |x − x_i| ≤ (1/4) h^(n+1) (j + 1)!(n − j)!
Solution:
(a) Let g(x) = (x − x_j)(x_(j+1) − x). Then:
0 = g'(x_max) = x_j + x_(j+1) − 2x_max ⟹ x_max = (x_j + x_(j+1))/2
Hence, g(x_max) = (x_(j+1) − x_j)²/4 = h²/4.
(b) The idea is that after we have fixed j such that x_j ≤ x ≤ x_(j+1), each factor |x − x_k| can be crudely
bounded by some multiple of h that depends on the distance of x_k from x_j. For instance, for k < j we have
|x − x_k| ≤ x_(j+1) − x_k = (j + 1 − k)h.
(c) Well,
n! = n(n − 1) ⋯ (n − j + 1) · (n − j) ⋯ 2 · 1 ≥ ((j + 1) · j ⋯ 2) · ((n − j) ⋯ 2 · 1) = (j + 1)!(n − j)!
since each of the j factors n, n − 1, ..., n − j + 1 is at least as large as the corresponding factor of (j + 1)!.
(d) Combining the above with Lagrange's remainder formula yields the desired result!
Homework 6
1. (a) Use the most accurate three-point formula to determine each missing entry in the following table:
x f (x) f 0 (x)
1.1 1.52918
1.2 1.64024
1.3 1.70470
1.4 1.71277
(b) The data above were taken from the function f(x) = x sin(x) + x² cos(x). Compute the actual errors
and find the error bounds using the suggested formulae.
(c) Repeat (a) using four digit rounding arithmetic, and compare the errors to (b).
Solution:
(a) The most accurate three point method is the three-point midpoint formula. However, this only works
when we have points sandwiching the point at which we want to approximate the derivative. For the endpoints,
we have to use the three-point endpoint formula. We will write Matlab code that performs the desired operations,
though this is HIGHLY unnecessary. It also formats the formulae in a HIGHLY unsavory way. Still,
here it is; it uses the mildly cool isa function (as in "is a" thing), which sorts out the various
classes of Matlab objects.
if isa(f, 'function_handle')
    df0 = 1/(2*h)*(3*f(x0) - 4*f(x0 - h) + f(x0 - 2*h));
else
    df0 = 1/(2*h)*(3*f(3) - 4*f(2) + f(1));
end
end
end
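As an illustration (a Python sketch; the variable names are mine), the two three-point formulas applied to the table data with h = 0.1:

```python
xs = [1.1, 1.2, 1.3, 1.4]
fs = [1.52918, 1.64024, 1.70470, 1.71277]
h = 0.1

# Endpoint formula at the two ends, midpoint formula in the interior.
d = [0.0] * 4
d[0] = (-3 * fs[0] + 4 * fs[1] - fs[2]) / (2 * h)
d[1] = (fs[2] - fs[0]) / (2 * h)
d[2] = (fs[3] - fs[1]) / (2 * h)
d[3] = (3 * fs[3] - 4 * fs[2] + fs[1]) / (2 * h)
print([round(v, 4) for v in d])
```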
(b) This is a computation. The important take-away is to know the error bounds for the various
methods. They can be found on pages 176-177 of the book.
(c) This was long, and required patience. The error jumps around from both computations. This
demonstrates the instability of numerical differentiation.
2. Let f(x) = 3x e^x − cos(x). Use the following data to approximate f''(1.3) with h = 0.1 and with h = 0.01:
Solution:
Use the second difference method, which is:
f''(x) ≈ (f(x + h) − 2f(x) + f(x − h))/h²
Note that the h = .1 approximation is better than the h = .01 approximation due to round-off instability
(thanks Will Feldman!). Let's show why. Assuming the computer stores the value of f(x) as f̃(x), then
we can assume that |f̃(x) − f(x)| ≤ ε for all x. In particular, the total error splits into a round-off
piece and a truncation piece:
|f''(x) − (f̃(x + h) − 2f̃(x) + f̃(x − h))/h²|
≤ |(f̃(x + h) − f(x + h)) − 2(f̃(x) − f(x)) + (f̃(x − h) − f(x − h))|/h² + |f''(x) − (f(x + h) − 2f(x) + f(x − h))/h²|
≤ 4ε/h² + |f''(x) − (f(x + h) − 2f(x) + f(x − h))/h²|
Expanding f(x ± h) = f(x) ± h f'(x) + (h²/2) f''(x) ± (h³/6) f'''(x) + (h⁴/24) f⁽⁴⁾(ξ±) and cancelling
shows the remaining term is at most M h²/12, so the error is bounded by
4ε/h² + M h²/12
where M is chosen so that |f⁽⁴⁾(x)| ≤ M for each x ∈ [1.2, 1.4]. Well, f⁽⁴⁾(x) = 3x e^x + 12 e^x − cos(x), and
for x ∈ [1.2, 1.4], we have:
|f⁽⁴⁾| ≤ 3(1.4)e^1.4 + 12e^1.4 + 1 ≤ 67
Moreover, if we want to find the optimal h, then:
0 = d/dh (4ε/h² + M h²/12) ⟹ h = (48ε/M)^(1/4)
Now, since we have 5 significant digits, assuming we have rounded, we know that ε = 5 × 10⁻⁶. Hence,
the optimal h is h ≈ .043, which is ostensibly bigger than 0.01.
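We can see the instability directly in Python by rounding the function values to 5 significant digits (mimicking the tabulated data) and comparing the two step sizes:

```python
import math

def f(x):
    return 3 * x * math.exp(x) - math.cos(x)

def round5(v):
    # Round to 5 significant digits, mimicking the 5-digit table data.
    return float(f"{v:.5g}")

def second_diff(x, h):
    return (round5(f(x + h)) - 2 * round5(f(x)) + round5(f(x - h))) / h ** 2

exact = 9.9 * math.exp(1.3) + math.cos(1.3)   # f''(1.3), computed by hand
err_big = abs(second_diff(1.3, 0.1) - exact)
err_small = abs(second_diff(1.3, 0.01) - exact)
print(err_big < err_small)  # the larger h wins on rounded data
```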
3. Let f(x) = e^x. Define the errors made by approximating f'(2) with the forward and centered difference
formulas:
E_f(h) = |(f(2 + h) − f(2))/h − f'(2)|
E_c(h) = |(f(2 + h) − f(2 − h))/(2h) − f'(2)|
Solution:
f = @(x) exp(x);
df2 = exp(2);                        % exact value of f'(2)
E_f = @(h) abs((f(2 + h) - f(2))./h - df2);
E_c = @(h) abs((f(2 + h) - f(2 - h))./(2*h) - df2);
h = 10.^(-(1:0.25:5));
Error_forward = E_f(h);
Error_centered = E_c(h);
figure(1);
plot(log10(h), log10(Error_forward), 'o');
title('Error of the Forward Difference Method');
xlabel('Log(h)');
ylabel('Log(E_f(h))');
figure(2);
plot(log10(h), log10(Error_centered), 'o');
title('Error of the Centered Difference Method');
xlabel('Log(h)');
ylabel('Log(E_c(h))');
Remark. Using the loglog command in Matlab causes the basic fitting tool to act a little bit
strangely, so it's best to take the logarithms of the y-values and x-values manually.
(b) Check this website for directions on how to do this (step-by-step):
http://www.swarthmore.edu/NatSci/echeeve1/Ref/Matlab/CurveFit/LinearCurveFit.html
(Log-log plots: Log(E_f(h)) and Log(E_c(h)) against Log(h) for the two methods.)
Note the slopes of the first two graphs are precisely the order of the method!
(c) Using an identical process to problem 2 (see that problem for notation), we have that:
|(f̃(x + h) − f̃(x))/h − f'(x)|
≤ |f(x + h) − f̃(x + h)|/h + |f(x) − f̃(x)|/h + |(f(x + h) − f(x))/h − f'(x)|
≤ 2ε/h + M h/2
where M is a bound on the second derivative of f in an appropriate interval. Taking the derivative,
setting it equal to zero, and then solving for h yields that the optimal h is √(4ε/M) for the forward
method, and a similar computation gives ((3ε)/M)^(1/3) for the centered method:
Centered difference method: ((3ε)/M)^(1/3) ≈ 4.1947 × 10⁻⁶
Forward difference method: (4ε/M)^(1/2) ≈ 1.9841 × 10⁻⁸
where we have used M = e^2.2, as this is a larger estimate for the second derivative than needed.
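The trade-off is easy to see numerically in Python (double precision, f(x) = e^x at x = 2): truncation dominates for moderate h, while for h below machine epsilon the forward difference collapses entirely, since 2 + h rounds to 2.

```python
import math

f = math.exp
exact = math.exp(2)  # f'(2), since f' = f for e^x

def E_f(h):
    return abs((f(2 + h) - f(2)) / h - exact)

# Moderate h: small truncation error.  h below machine epsilon:
# 2 + h rounds to 2, and the "derivative" degenerates to 0.
print(E_f(1e-4), E_f(1e-17))
```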
4. Apply the extrapolation process to obtain the approximation N_4(h), an approximation of f'(x_0), for the
following function and step size:
f(x) = 2^x sin(x), x_0 = 1.05, h = 0.4
Solution: We will use the centered difference formula, since then our extrapolation process is quicker, as
the asymptotic expansion misses the odd order terms (see pages 189-190 of the textbook).
We apply the same code as in exercise 5 except with the use of the function handle instead of a vector.
N_1 =
@(h)(2^(1.05+h)*sin(1.05+h)-2^(1.05-h)*sin(1.05-h))/(2*h)
ans =
Columns 1 through 3
2.203165697391559 0 0
2.257237243413356 2.275261092087288 0
2.270674167306347 2.275153141937344 2.275145945260681
2.274028266459540 2.275146299510605 2.275145843348822
Column 4
0
0
0
2.275145841731173
One can check this agrees with the exact value fairly well.
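In Python, the same table can be rebuilt and checked against the exact derivative f'(1.05) = 2^1.05 (ln 2 · sin 1.05 + cos 1.05):

```python
import math

def f(x):
    return 2 ** x * math.sin(x)

def N1(h, x0=1.05):
    # Centered difference approximation of f'(x0).
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

n, h = 4, 0.4
T = [[0.0] * n for _ in range(n)]
for i in range(n):
    T[i][0] = N1(h / 2 ** i)
for j in range(1, n):
    for i in range(j, n):
        T[i][j] = T[i][j-1] + (T[i][j-1] - T[i-1][j-1]) / (4 ** j - 1)

exact = 2 ** 1.05 * (math.log(2) * math.sin(1.05) + math.cos(1.05))
print(T[n-1][n-1], exact)
```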
5. Given
N_1(h) = 2.356194,
N_1(h/2) = −0.4879837,
N_1(h/4) = −0.8815732,
N_1(h/8) = −0.9709157,
and
M − N_1(h) = K_1 h² + K_2 h⁴ + ...
determine N_4(h).
Solution: Here is some code that performs Richardson extrapolation. We are using the formula at the top
of pg. 188 because the centered difference has an even powered asymptotic expansion:
function N_nTable = richardson(N_1, h, n)
if isa(N_1, 'function_handle')
    for i = 1:n
        N_nTable(i, 1) = N_1(h/(2^(i-1)));
    end
    for j = 2:n
        for k = j:n
            N_nTable(k, j) = N_nTable(k, j-1) + (N_nTable(k, j-1) - N_nTable(k-1, j-1))/(4^(j-1) - 1);
        end
    end
    return
else
    N_nTable(:, 1) = N_1;
    n = length(N_1);
    for j = 2:n
        for k = j:n
            N_nTable(k, j) = N_nTable(k, j-1) + (N_nTable(k, j-1) - N_nTable(k-1, j-1))/(4^(j-1) - 1);
        end
    end
    return
end
end
ans =
Columns 1 through 3
2.356194000000000 0 0
-0.487983700000000 -1.436042933333333 0
-0.881573200000000 -1.012769700000000 -0.984551484444444
-0.970916700000000 -1.000697866666667 -0.999893077777778
Column 4
0
0
0
-1.000136595132275
f'(x_0) = N_1(h) + K_1 h + K_2 h² + K_3 h³ + ...
We note that the formula in the book at the top of page 188 assumes that all the odd powers cancel, and
hence that expansion only has even powers. Here we have:
f'(x_0) = N_1(h) + K_1 h + K_2 h² + K_3 h³ + ...
f'(x_0) = N_1(h/2) + K_1 (h/2) + K_2 (h/2)² + K_3 (h/2)³ + ...
Multiplying the second equation by 2 and then subtracting the former from the latter shows:
f'(x_0) = 2 N_1(h/2) − N_1(h) + K̃_2 h² + K̃_3 h³ + ...
We could proceed identically, but I think it is favorable now to go to slightly more generality to get the
general recurrence relation. Suppose that we are inspecting:
f'(x_0) = N_n(h) + K_n hⁿ + O(h^(n+1))
Now we redo everything with time step h/2. Again, we could do it with a different time step,
but there is no real advantage because we can just shrink h sufficiently whenever performing such
an algorithm. (You can look at Wikipedia for the full generality.) We have:
f'(x_0) = N_n(h/2) + K_n (h/2)ⁿ + O(h^(n+1))
Multiplying the latter equation by 2ⁿ and then subtracting the former from the latter, we have:
2ⁿ f'(x_0) − f'(x_0) = 2ⁿ N_n(h/2) − N_n(h) + O(h^(n+1))
⟹ f'(x_0) = (2ⁿ N_n(h/2) − N_n(h))/(2ⁿ − 1) + O(h^(n+1))
and we define the first term on the right hand side to be N_(n+1)(h).
In our case,
N_3(h) = (2² N_2(h/2) − N_2(h))/(2² − 1) = (2² (2 N_1(h/4) − N_1(h/2)) − (2 N_1(h/2) − N_1(h)))/(2² − 1)
Now,
N_1(h) = (f(x + h) − f(x))/h
Doing some algebra yields the result:
N_3(h) = (f(x_0 + h) − 12 f(x_0 + h/2) + 32 f(x_0 + h/4) − 21 f(x_0))/(3h)
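A Python check of the algebra, comparing the closed form against the recurrence on a sample function (f(x) = e^x at x_0 = 0 is an arbitrary test choice):

```python
import math

f = math.exp
x0, h = 0.0, 0.4

def N1(h):
    return (f(x0 + h) - f(x0)) / h  # forward difference

def N2(h):
    return 2 * N1(h / 2) - N1(h)

recurrence = (4 * N2(h / 2) - N2(h)) / 3
closed_form = (f(x0 + h) - 12 * f(x0 + h / 2) + 32 * f(x0 + h / 4) - 21 * f(x0)) / (3 * h)
# The two agree up to rounding, and both approximate f'(0) = 1.
print(abs(recurrence - closed_form), abs(closed_form - 1))
```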