
INTRODUCTION TO SCIENTIFIC COMPUTING

AND ITS APPLICATIONS


1. INTRODUCTION AND SCILAB
Instructors:
Jean Ponce
Email: Jean.Ponce@ens.fr
Web: http://www.di.ens.fr/~ponce
Romain Brette
Email: Romain.Brette@ens.fr
Lectures: Wednesdays, 8:45-10:45, room R
Lab sessions (TPs): Fridays, 17:30-18:30, room S
Class web site:
http://www.di.ens.fr/~brette/calculscientifique/index.htm
Linear Equations

\[
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 \\
\quad\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m
\end{cases}
\]

\[
\begin{bmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{21} & a_{22} & \dots & a_{2n} \\
\dots & \dots & \dots & \dots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
\]

\[
A\vec{x} = \vec{b}
\]
Sample Problems

Solving Algebraic Equations

\[ ax = b \qquad A\vec{x} = \vec{b} \]

\[ ax^2 + bx + c = 0 \]

\[ ax^5 + bx^4 + cx^3 + dx^2 + ex + f = 0 \]

\[
\begin{cases}
ax^2 + bxy + cy^2 + dx + ey + f = 0 \\
a'x^2 + b'xy + c'y^2 + d'x + e'y + f' = 0
\end{cases}
\]

And also: polynomial interpolation, least squares problems, optimization, differential equations, etc.
Contents: This course is an introduction to classical numerical methods of scientific computing. It uses a general methodology inherited from elementary linear algebra to reduce the great majority of scientific computations to the numerical resolution of n linear equations in n unknowns. This methodology is illustrated by a large number of algorithms and their application to concrete problems in domains such as statistical data analysis, computer-aided design (CAD), robotics, dynamic simulation, and image processing.
The course is intended for students in scientific disciplines such as physics, chemistry, or biology, who are familiar with elementary notions of linear algebra, calculus, and programming as taught in undergraduate science and engineering curricula. SCILAB, a computer language dedicated to scientific computing, is taught at the beginning of the course, and it is used for all programming exercises.
Course outline:
1. General introduction and programming in SCILAB.
2. Polynomial interpolation and piecewise-polynomial interpolation: Vandermonde approach; Lagrange, Newton, and Hermite polynomials; Bezier curves and splines. Applications to computer-aided design.
3. Resolution of systems of linear equations: diagonal and triangular systems; LU decomposition; sparse and banded matrices; homogeneous and non-homogeneous linear least squares. Applications to statistical data analysis.
4. Resolution of systems of non-linear equations: Newton's method; non-linear least squares (Newton, Gauss-Newton, and Levenberg-Marquardt methods). Applications to image processing.
5. Resolution of systems of polynomial equations: Laguerre's method; Sturm sequences; resultants; homotopy methods. Applications to robotics.
6. Integration of ordinary differential equations: Euler's method; Runge-Kutta methods; implicit methods. Applications to dynamic simulation.
Two exceptional classes:
March 12, Monique Teillaud, Geometric computations.
TBA, Francis Bach, Optimization and machine learning.
Reference:
Introduction to Scientific Computing: A Matrix-Vector Approach Using MATLAB, by Charles F. Van Loan, Prentice-Hall (second edition, 1997).
SCILAB documentation:
ftp://ftp.inria.fr/INRIA/Scilab/documentation/pdf/intro.pdf
Matrices, Vectors, and Numbers

An m × n matrix is a rectangular array made of m rows and n columns of numbers.

\[
A =
\begin{bmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{21} & a_{22} & \dots & a_{2n} \\
\dots & \dots & \dots & \dots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{bmatrix}
\]

A (column) vector $\vec{a} \in \mathbb{R}^n$ can be identified with an n × 1 matrix.

\[
\vec{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}
\]

A (row) vector of dimension n is a 1 × n matrix.

\[
\vec{a}^T = [\, a_1 \;\; \dots \;\; a_n \,]
\]

A number can be identified with a 1 × 1 matrix.

\[ a = [a] \]
Note: The usual arithmetic operations work under this representation.

\[ a + b = [a] + [b] = [a + b] \]
\[
\vec{a} + \vec{b} =
\begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}
+
\begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}
=
\begin{bmatrix} a_1 + b_1 \\ \vdots \\ a_n + b_n \end{bmatrix}
\]
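In SCILAB these objects are written directly; a minimal sketch with arbitrary example values:

// Matrices, vectors, and numbers in SCILAB (example values chosen arbitrarily).
A = [1 2 3; 4 5 6];        // a 2 x 3 matrix, rows separated by ;
a = [1; 2; 3];             // a column vector, i.e. a 3 x 1 matrix
at = a';                   // its transpose, a row vector (1 x 3 matrix)
s = [7];                   // a number stored as a 1 x 1 matrix
b = [4; 5; 6];
disp(a + b);               // componentwise vector addition
disp(size(A));             // returns 2 and 3, the numbers of rows and columns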
Matrix products

Let A be an m × n matrix and B an n × p matrix. Their product C = AB is the m × p matrix defined by

\[
\begin{bmatrix}
c_{11} & c_{12} & \dots & c_{1p} \\
c_{21} & c_{22} & \dots & c_{2p} \\
\dots & \dots & \dots & \dots \\
c_{m1} & c_{m2} & \dots & c_{mp}
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{21} & a_{22} & \dots & a_{2n} \\
\dots & \dots & \dots & \dots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{bmatrix}
\begin{bmatrix}
b_{11} & b_{12} & \dots & b_{1p} \\
b_{21} & b_{22} & \dots & b_{2p} \\
\dots & \dots & \dots & \dots \\
b_{n1} & b_{n2} & \dots & b_{np}
\end{bmatrix},
\]

with

\[
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}
= (\text{row } \#i) \cdot (\text{col } \#j)
= [\, a_{i1} \;\; a_{i2} \;\; \dots \;\; a_{in} \,]
\begin{bmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{nj} \end{bmatrix}.
\]

Example I:

\[
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}
=
\begin{bmatrix}
a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\
a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22}
\end{bmatrix}.
\]

Example II:

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
=
\begin{bmatrix}
a_{11}b_1 + a_{12}b_2 + a_{13}b_3 \\
a_{21}b_1 + a_{22}b_2 + a_{23}b_3 \\
a_{31}b_1 + a_{32}b_2 + a_{33}b_3
\end{bmatrix}.
\]
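As a quick check of the c_ij formula in SCILAB, a minimal sketch with arbitrary numbers:

// Matrix product in SCILAB: * is the matrix product, .* the componentwise product.
A = [1 2; 3 4];          // 2 x 2
B = [5 6; 7 8];          // 2 x 2
C = A*B;                 // C(i,j) = sum over k of A(i,k)*B(k,j)
disp(C);
disp(A(1,:)*B(:,1));     // row #1 times column #1 reproduces C(1,1)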
Matrix products II

Example III: Inner Product (Dot Product)

\[
[\, a_1 \;\; a_2 \;\; \dots \;\; a_n \,]
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}
\]

Example IV: Outer Product

\[
\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
[\, b_1 \;\; b_2 \;\; \dots \;\; b_n \,]
\]

The matrix product is associative but not commutative, i.e., in general,

\[
A(BC) = (AB)C = ABC \quad \text{but} \quad AB \neq BA.
\]
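These special cases and the lack of commutativity are easy to check in SCILAB; a minimal sketch with arbitrary vectors and matrices:

// Inner product, outer product, and non-commutativity in SCILAB.
a = [1; 2; 3];
b = [4; 5; 6];
disp(a'*b);              // inner product: a 1 x 1 matrix, i.e. a scalar
disp(a*b');              // outer product: a 3 x 3 matrix of rank 1
A = [1 2; 3 4];
B = [0 1; 1 0];
disp(A*B - B*A);         // nonzero in general: A*B differs from B*A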
Linear Equations

\[
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 \\
\quad\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m
\end{cases}
\]

\[
\begin{bmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{21} & a_{22} & \dots & a_{2n} \\
\dots & \dots & \dots & \dots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
\]

\[
A\vec{x} = \vec{b}
\]

When an n × n matrix A is nonsingular, i.e., when its determinant is nonzero, it admits an inverse n × n matrix $A^{-1}$ such that

\[
AA^{-1} = A^{-1}A = \mathrm{Id},
\]

where Id is the n × n identity matrix, a diagonal matrix with ones on its diagonal.

Given a nonsingular n × n matrix A and a vector $\vec{b} \in \mathbb{R}^n$, there exists a unique $\vec{x} \in \mathbb{R}^n$ such that

\[
A\vec{x} = \vec{b},
\]

and it is given by

\[
\vec{x} = A^{-1}\vec{b}.
\]
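A numerical illustration of these facts in SCILAB; a minimal sketch (the random matrix is almost surely nonsingular):

// Nonsingularity, inverse, and unique solution of A*x = b in SCILAB.
n = 4;
A = rand(n, n);                  // a random n x n matrix, nonsingular with probability 1
b = rand(n, 1);
disp(det(A));                    // nonzero determinant <=> A is nonsingular
disp(norm(A*inv(A) - eye(n,n))); // close to 0: A*A^(-1) = Id up to rounding
x = A\b;                         // solves A*x = b without forming the inverse
disp(norm(A*x - b));             // residual close to 0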
Modeling errors

Taylor expansion of a differentiable function:

\[
f(x + h) = f(x) + h f'(x) + \dots + \frac{h^n}{n!} f^{(n)}(x) + \frac{h^{n+1}}{(n+1)!} f^{(n+1)}(\xi)
\]

for some \xi between x and x + h.

Problem: estimate the derivative of a function from sampled data.

First order solution:

\[
f'(x) = \frac{1}{h}\bigl(f(x + h) - f(x)\bigr) - \frac{h}{2} f''(\xi).
\]
Second order solution:

\[
f'(x) = \frac{1}{2h}\bigl(f(x + h) - f(x - h)\bigr) - \frac{h^2}{12}\bigl(f'''(\xi_1) + f'''(\xi_2)\bigr),
\]

for some \xi_1, \xi_2 between x - h and x + h.
When the absolute values of the derivatives are bounded above,
bounds on the approximation error are available.
// Successive first- and second-order approximations
// of the first derivative (derive.sce).
x=1.0;
for h=10^(-1:-1:-5)
  d1=(exp(x+h)-exp(x))/h;
  d2=(exp(x+h)-exp(x-h))/(2*h);
  sprintf("h=%e, d1-exp(x)=%e, d2-exp(x)=%e",...
          h,abs(d1-exp(x)),abs(d2-exp(x)))
end
Floating-Point Errors: Representation

Assume floating-point arithmetic with numbers of the form

\[
x = .\,b_1 b_2 \dots b_r \times \beta^e
\quad\text{with}\quad
\begin{cases}
L \le e \le U \\
0 \le b_i \le \beta - 1
\end{cases}
\]

and define

mantissa: $b_1 b_2 \dots b_r$,
mantissa length: r,
base: $\beta$,
exponent: e,
exponent range: [L, U],
machine precision: $\epsilon = \frac{1}{2}\beta^{1-r}$.

Example: 3 digits, base 10, $\epsilon = \frac{1}{2} \times 10^{-2} = 5 \times 10^{-3}$.

Theorem:

\[
\frac{|Fl(x) - x|}{|x|} \le \epsilon.
\]

What about errors in solving linear systems?
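Before turning to that question, the machine precision of SCILAB's double-precision arithmetic can be inspected directly. A minimal sketch; note that the predefined constant %eps equals $\beta^{1-r}$ (with $\beta = 2$, $r = 53$), i.e. twice the half-unit roundoff $\epsilon$ defined above:

// Machine precision in double-precision SCILAB arithmetic.
disp(%eps);                 // 2^(-52), about 2.22e-16
u = %eps/2;                 // the half-unit roundoff epsilon of the slide
disp(1 + u == 1);           // T: adding u to 1 is lost by rounding
disp(1 + %eps == 1);        // F: %eps is the gap between 1 and the next float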
Floating-Point Errors: Computation

Assume the following model of floating-point computation:
• Perform exact computation.
• Put the result in (general) floating-point format.
• Round the mantissa to its allowable length.

Example: 3 digits, base 10, L = -5, U = 5.

Let us add two numbers: z = x + y, where

\[
x = 12.5 \;\Rightarrow\; Fl(x) = .125 \times 10^2,
\qquad
y = 6.32 \;\Rightarrow\; Fl(y) = .632 \times 10^1.
\]

The computation proceeds as follows:

\[
x + y = 12.5 + 6.32 = 18.82,
\qquad
z = x + y = 18.82 = .1882 \times 10^2,
\qquad
Fl(z) = .188 \times 10^2.
\]
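Double precision follows the same compute-then-round model, just with base 2 and a 53-bit mantissa; a minimal sketch of the resulting rounding:

// The compute-then-round model in double precision:
// the exact sum 1e16 + 1 = 10000000000000001 is not representable
// with a 53-bit mantissa, so it is rounded back to 1e16.
x = 1e16;
y = 1;
z = x + y;
disp(z == x);          // T: the contribution of y is rounded away
disp((x + y) - x);     // 0 instead of 1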
Floating-Point Errors: An Example

\[
(x - 1)^6 = x^6 - 6x^5 + 15x^4 - 20x^3 + 15x^2 - 6x + 1
\]

// Plots (x-1)^6 with increasingly refined scale.
// The monomial expansion leads to severe
// cancellations (zoom.sce).
clf
k=0;
for delta = [.1 .01 .008 .007 .005 .003]
  x=linspace(1-delta,1+delta,100);                       // row vector of 100 samples
  y=x.^6-6*x.^5+15*x.^4-20*x.^3+15*x.^2-6*x+ones(1,100); // expanded form
  // y=(x-ones(1,100)).^6;                               // factored form, for comparison
  k=k+1;
  subplot(2,3,k);
  plot(x,y,x,zeros(1,100));
end
Floating-Point Errors: Another Example

Taylor expansion of the exponential function:

\[
e^x = \sum_{k=0}^{n} \frac{x^k}{k!} + \frac{e^{\xi}}{(n+1)!} x^{n+1}
\]

for some \xi between 0 and x.

// Plot the relative error between e^x and its Taylor
// expansion as a function of the number of terms nTerms
// in the expansion (ExpTaylor.sce).
clf
nTerms=100;
x=10;
err=exp(x)*ones(nTerms,1);
s=1;                        // partial sum, initialized with the k=0 term
term=1;                     // current term x^k/k!
for k=1:nTerms
  term=x*term/k;
  s=s+term;
  err(k)=abs(err(k)-s);     // absolute error of the truncated expansion
end
relerr=err/exp(x);
plot(1:nTerms,log(relerr))
Floating-Point Errors: A Third Example

When A is a nonsingular n × n matrix and $\vec{b}$ is a vector in $\mathbb{R}^n$,

\[
A\vec{x} = \vec{b} \;\Longrightarrow\; \vec{x} = A^{-1}\vec{b}.
\]

// Two ways of solving a linear equation (inverse.sce).
n=500;
A=rand(n,n);
b=rand(n,1);
timer(); x=A\b; timer()
timer(); y=inv(A)*b; timer()
sprintf("|x-y|=%e",norm(x-y))
sprintf("|A*x-b|=%e",norm(A*x-b))
sprintf("|A*y-b|=%e",norm(A*y-b))
Algorithm Complexity

Time complexity. Example: solving a linear system via Gaussian elimination.
• For a full n × n matrix, the cost is (2/3)n^3 + 2n^2 flops.
• For a lower-triangular n × n matrix, the cost is n^2 flops.
• For a tridiagonal n × n matrix, the cost is 8n flops.

Space complexity.
• Storing a full n × n matrix requires n^2 floats.
• Storing a diagonal n × n matrix requires n floats.
• Naive LU decomposition requires storing an extra n × n matrix.
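A rough way to see the effect of structure on running time in SCILAB; a minimal sketch (timings depend on the machine and on the backslash solver exploiting the triangular structure):

// Compare solve times for a full and a lower-triangular system of the same size.
n = 1000;
A = rand(n,n) + n*eye(n,n);   // full, well-conditioned matrix
L = tril(A);                  // its lower-triangular part
b = rand(n,1);
timer(); x = A\b; tfull = timer();
timer(); y = L\b; ttri  = timer();
mprintf("full: %f s, triangular: %f s\n", tfull, ttri);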
An example of linear system: Define f(x) = a cos x + b sin x + c cos 2x + d sin 2x. What are the a, b, c, and d values such that f(x_1) = y_1, f(x_2) = y_2, f(x_3) = y_3, f(x_4) = y_4?

\[
\begin{cases}
a \cos x_1 + b \sin x_1 + c \cos 2x_1 + d \sin 2x_1 = y_1 \\
a \cos x_2 + b \sin x_2 + c \cos 2x_2 + d \sin 2x_2 = y_2 \\
a \cos x_3 + b \sin x_3 + c \cos 2x_3 + d \sin 2x_3 = y_3 \\
a \cos x_4 + b \sin x_4 + c \cos 2x_4 + d \sin 2x_4 = y_4
\end{cases}
\]

\[
\begin{bmatrix}
\cos x_1 & \sin x_1 & \cos 2x_1 & \sin 2x_1 \\
\cos x_2 & \sin x_2 & \cos 2x_2 & \sin 2x_2 \\
\cos x_3 & \sin x_3 & \cos 2x_3 & \sin 2x_3 \\
\cos x_4 & \sin x_4 & \cos 2x_4 & \sin 2x_4
\end{bmatrix}
\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}
=
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix}
\]

\[
[\, \cos x \;\; \sin x \;\; \cos 2x \;\; \sin 2x \,]\,[\, p \,] = [\, y \,]
\]
SCILAB Solution

function csinterp(x,y)
// Interpolates 4 data points using the function
// a*cos(x)+b*sin(x)+c*cos(2x)+d*sin(2x).
// The inputs are two column vectors of dimension 4
// representing the given x and y=f(x) values.
A=[cos(x) sin(x) cos(2*x) sin(2*x)];
b=y;
params=A\b;
samples=linspace(x(1)-1,x(4)+1,100)';  // column vector of sample points
fsamples=csfunc(samples,params);
plot(samples,fsamples);
plot(x,y,"ro");
endfunction

function fx=csfunc(x,params)
// This function evaluates f at x (x may be a column vector of points).
y=2*x;
fx=[cos(x) sin(x) cos(y) sin(y)]*params;
endfunction
The Interpolation problem

Given some data points (x_1, y_1), ..., (x_n, y_n) and a family of functions $f : \mathbb{R} \times \mathbb{R}^p \to \mathbb{R}$, find the parameters a_1, ..., a_p such that

\[
f(x_i, a_1, \dots, a_p) = y_i \quad \text{for } i = 1, \dots, n.
\]

In general, we take p = n and

\[
f(x, a_1, \dots, a_n) = \sum_{i=1}^{n} a_i f_i(x).
\]

Polynomial interpolation:

\[
f(x) = a_n x^{n-1} + a_{n-1} x^{n-2} + \dots + a_2 x + a_1.
\]
SCILAB SOLUTION

// Given a linearly parameterized family of functions
// y=f(x)=func(x)*params, where params is a vector of
// n parameters, and given a column vector x=[x1;...;xn]
// of x_i values and the corresponding vector y=[y1;...;yn]
// of y_i=f(x_i) values, this function returns
// the corresponding vector of parameter values params.
function params=interpol(func,x,y)
n=length(x);
params=func(x,n)\y;
plotlpfunc(func,params,x,n);
plot(x,y,"ro");
endfunction

// Plots a linearly parameterized function, which has
// the form: y=func(x,n)*params
function plotlpfunc(func,params,x,n)
mi=min(x);
ma=max(x);
eps=0.01*(ma-mi);
samples=linspace(mi-eps,ma+eps,100)';  // column vector of sample points
fsamples=func(samples,n)*params;
plot(samples,fsamples);
endfunction

// This returns the matrix U encoding the function
// y=a*cos(x)+b*sin(x)+c*cos(2*x)+d*sin(2*x) as
// y=U*[a;b;c;d].
function U=csfunc(x,n)
y=2*x;
U=[cos(x) sin(x) cos(y) sin(y)];
endfunction
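A possible call, with made-up data points (the values are hypothetical, just to show how the pieces fit together):

// Interpolate four made-up data points with the cos/sin family above.
x = [0; 1; 2; 3];                  // sample abscissas (arbitrary)
y = [1; 2; 0; 1];                  // corresponding ordinates (arbitrary)
params = interpol(csfunc, x, y);   // returns [a; b; c; d]
disp(params);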
Linear interpolation

What is the linear function y = ax + b that interpolates the points (2, 1) and (3, 3)?

Quadratic interpolation

What is the quadratic function y = ax^2 + bx + c that interpolates the points (1, 2), (2, 1), and (3, 3)?
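Both questions reduce to small linear systems in the unknown coefficients, which SCILAB solves with the backslash operator; a minimal sketch using the points given above:

// Linear interpolation through (2,1) and (3,3): solve for [a; b] in y = a*x + b.
A1 = [2 1; 3 1];
ab = A1 \ [1; 3];            // gives a = 2, b = -3
disp(ab);

// Quadratic interpolation through (1,2), (2,1), (3,3): solve for [a; b; c].
A2 = [1 1 1; 4 2 1; 9 3 1];  // rows are [x^2 x 1] at x = 1, 2, 3
abc = A2 \ [2; 1; 3];
disp(abc);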
The Vandermonde approach

Consider the polynomial of degree n - 1:

\[
P_{n-1}(x) = a_1 + a_2 x + \dots + a_n x^{n-1}.
\]

Write $P_{n-1}(x_i) = y_i$ for i = 1, ..., n:

\[
y_i = a_1 + a_2 x_i + \dots + a_n x_i^{n-1}
= (\, 1 \;\; x_i \;\; \dots \;\; x_i^{n-1} \,)
\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}.
\]

Stacking these n equations yields the linear system

\[
\begin{bmatrix}
1 & x_1 & \dots & x_1^{n-1} \\
1 & x_2 & \dots & x_2^{n-1} \\
\dots & \dots & \dots & \dots \\
1 & x_n & \dots & x_n^{n-1}
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.
\]

This can be rewritten as $V\vec{a} = \vec{y}$, and the solution is $\vec{a} = V^{-1}\vec{y}$.
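A minimal SCILAB sketch of this approach, with arbitrary example data (in practice one solves with backslash rather than forming V^(-1)):

// Polynomial interpolation via the Vandermonde matrix.
x = [0; 1; 2; 3];                 // distinct abscissas (arbitrary example)
y = [1; 3; 2; 5];                 // values to interpolate (arbitrary example)
n = length(x);
V = ones(n,1);                    // column of x^0
for k = 1:n-1
  V = [V, x.^k];                  // append the column of x^k
end
a = V\y;                          // coefficients a_1, ..., a_n of P_{n-1}
disp(a);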
Existence and uniqueness of polynomial interpolants

Lemma 1: A nonzero polynomial of degree n - 1 has exactly n - 1 (real or complex, counted with multiplicity) roots.

Lemma 2: When the points x_1, ..., x_n are all distinct, V is nonsingular.

Proof: What are the solutions of $V\vec{a} = \vec{0}$?

Lemma 3: When the values x_1, ..., x_n are all distinct, the data points (x_1, y_1), ..., (x_n, y_n) admit a polynomial interpolant of degree n - 1 and this interpolant is unique.
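One way to answer the question in the proof of Lemma 2 (a standard argument, not necessarily the one intended in class):

If $V\vec{a} = \vec{0}$, then the polynomial $P(x) = a_1 + a_2 x + \dots + a_n x^{n-1}$ satisfies $P(x_i) = 0$ for $i = 1, \dots, n$, i.e., it has $n$ distinct roots. Since a nonzero polynomial of degree at most $n-1$ has at most $n-1$ roots (Lemma 1), $P$ must be the zero polynomial, so $\vec{a} = \vec{0}$ and $V$ is nonsingular. Lemma 3 then follows: $\vec{a} = V^{-1}\vec{y}$ exists and is unique.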