
Mathematics Formulary

By ir. J.C.A. Wevers


© 1999, 2008 J.C.A. Wevers                                Version: May 4, 2008

Dear reader,

This document contains 66 pages with mathematical equations intended for physicists and engineers. It is intended to be a short reference for anyone who often needs to look up mathematical equations.

This document can also be obtained from the author, Johan Wevers (johanw@vulcan.xs4all.nl). It can also be found on the WWW on http://www.xs4all.nl/~johanw/index.html.

This document is Copyright by J.C.A. Wevers. All rights reserved. Permission to use, copy and distribute this unmodified document by any means and for any purpose except profit purposes is hereby granted. Reproducing this document by any means, included, but not limited to, printing, copying existing prints, publishing by electronic or other means, implies full agreement to the above non-profit-use clause, unless upon explicit prior written permission of the author.

The C code for the rootfinding via Newton's method and the FFT in chapter 8 are from Numerical Recipes in C, 2nd Edition, ISBN 0-521-43108-5.

The Mathematics Formulary is made with teTeX and LaTeX version 2.09. If you prefer the notation in which vectors are typefaced in boldface, uncomment the redefinition of the \vec command and recompile the file.

If you find any errors or have any comments, please let me know. I am always open for suggestions and possible corrections to the mathematics formulary.
Johan Wevers
Contents

1 Basics
  1.1 Goniometric functions
  1.2 Hyperbolic functions
  1.3 Calculus
  1.4 Limits
  1.5 Complex numbers and quaternions
    1.5.1 Complex numbers
    1.5.2 Quaternions
  1.6 Geometry
    1.6.1 Triangles
    1.6.2 Curves
  1.7 Vectors
  1.8 Series
    1.8.1 Expansion
    1.8.2 Convergence and divergence of series
    1.8.3 Convergence and divergence of functions
  1.9 Products and quotients
  1.10 Logarithms
  1.11 Polynomials
  1.12 Primes

2 Probability and statistics
  2.1 Combinations
  2.2 Probability theory
  2.3 Statistics
    2.3.1 General
    2.3.2 Distributions
  2.4 Regression analyses

3 Calculus
  3.1 Integrals
    3.1.1 Arithmetic rules
    3.1.2 Arc lengths, surfaces and volumes
    3.1.3 Separation of quotients
    3.1.4 Special functions
    3.1.5 Goniometric integrals
  3.2 Functions with more variables
    3.2.1 Derivatives
    3.2.2 Taylor series
    3.2.3 Extrema
    3.2.4 The ∇-operator
    3.2.5 Integral theorems
    3.2.6 Multiple integrals
    3.2.7 Coordinate transformations
  3.3 Orthogonality of functions
  3.4 Fourier series

4 Differential equations
  4.1 Linear differential equations
    4.1.1 First order linear DE
    4.1.2 Second order linear DE
    4.1.3 The Wronskian
    4.1.4 Power series substitution
  4.2 Some special cases
    4.2.1 Frobenius method
    4.2.2 Euler
    4.2.3 Legendre's DE
    4.2.4 The associated Legendre equation
    4.2.5 Solutions for Bessel's equation
    4.2.6 Properties of Bessel functions
    4.2.7 Laguerre's equation
    4.2.8 The associated Laguerre equation
    4.2.9 Hermite
    4.2.10 Chebyshev
    4.2.11 Weber
  4.3 Non-linear differential equations
  4.4 Sturm-Liouville equations
  4.5 Linear partial differential equations
    4.5.1 General
    4.5.2 Special cases
    4.5.3 Potential theory and Green's theorem

5 Linear algebra
  5.1 Vector spaces
  5.2 Basis
  5.3 Matrix calculus
    5.3.1 Basic operations
    5.3.2 Matrix equations
  5.4 Linear transformations
  5.5 Plane and line
  5.6 Coordinate transformations
  5.7 Eigenvalues
  5.8 Transformation types
  5.9 Homogeneous coordinates
  5.10 Inner product spaces
  5.11 The Laplace transformation
  5.12 The convolution
  5.13 Systems of linear differential equations
  5.14 Quadratic forms
    5.14.1 Quadratic forms in IR²
    5.14.2 Quadratic surfaces in IR³

6 Complex function theory
  6.1 Functions of complex variables
  6.2 Complex integration
    6.2.1 Cauchy's integral formula
    6.2.2 Residue
  6.3 Analytical functions defined by series
  6.4 Laurent series
  6.5 Jordan's theorem

7 Tensor calculus
  7.1 Vectors and covectors
  7.2 Tensor algebra
  7.3 Inner product
  7.4 Tensor product
  7.5 Symmetric and antisymmetric tensors
  7.6 Outer product
  7.7 The Hodge star operator
  7.8 Differential operations
    7.8.1 The directional derivative
    7.8.2 The Lie-derivative
    7.8.3 Christoffel symbols
    7.8.4 The covariant derivative
  7.9 Differential operators
  7.10 Differential geometry
    7.10.1 Space curves
    7.10.2 Surfaces in IR³
    7.10.3 The first fundamental tensor
    7.10.4 The second fundamental tensor
    7.10.5 Geodetic curvature
  7.11 Riemannian geometry

8 Numerical mathematics
  8.1 Errors
  8.2 Floating point representations
  8.3 Systems of equations
    8.3.1 Triangular matrices
    8.3.2 Gauss elimination
    8.3.3 Pivot strategy
  8.4 Roots of functions
    8.4.1 Successive substitution
    8.4.2 Local convergence
    8.4.3 Aitken extrapolation
    8.4.4 Newton iteration
    8.4.5 The secant method
  8.5 Polynomial interpolation
  8.6 Definite integrals
  8.7 Derivatives
  8.8 Differential equations
  8.9 The fast Fourier transform
Chapter 1
Basics
1.1 Goniometric functions
For the goniometric ratios of a point p on the unit circle holds:

    cos(φ) = x_p ,  sin(φ) = y_p ,  tan(φ) = y_p / x_p

    sin²(x) + cos²(x) = 1   and   cos⁻²(x) = 1 + tan²(x)

    cos(a ± b) = cos(a)cos(b) ∓ sin(a)sin(b) ,  sin(a ± b) = sin(a)cos(b) ± cos(a)sin(b)

    tan(a ± b) = (tan(a) ± tan(b)) / (1 ∓ tan(a)tan(b))

The sum formulas are:

    sin(p) + sin(q) = 2 sin(½(p + q)) cos(½(p − q))
    sin(p) − sin(q) = 2 cos(½(p + q)) sin(½(p − q))
    cos(p) + cos(q) = 2 cos(½(p + q)) cos(½(p − q))
    cos(p) − cos(q) = −2 sin(½(p + q)) sin(½(p − q))

From these equations can be derived that

    2 cos²(x) = 1 + cos(2x) ,  2 sin²(x) = 1 − cos(2x)

    sin(π − x) = sin(x) ,  cos(π − x) = −cos(x)

    sin(½π − x) = cos(x) ,  cos(½π − x) = sin(x)

Conclusions from equalities:

    sin(x) = sin(a)  ⇒  x = a ± 2kπ or x = (π − a) ± 2kπ, k ∈ IN
    cos(x) = cos(a)  ⇒  x = a ± 2kπ or x = −a ± 2kπ
    tan(x) = tan(a)  ⇒  x = a ± kπ and x ≠ ½π ± kπ

The following relations exist between the inverse goniometric functions:

    arctan(x) = arcsin( x/√(x² + 1) ) = arccos( 1/√(x² + 1) ) ,  sin(arccos(x)) = √(1 − x²)
1.2 Hyperbolic functions
The hyperbolic functions are defined by:

    sinh(x) = (e^x − e^(−x))/2 ,  cosh(x) = (e^x + e^(−x))/2 ,  tanh(x) = sinh(x)/cosh(x)

From this follows that cosh²(x) − sinh²(x) = 1. Further holds:

    arsinh(x) = ln|x + √(x² + 1)| ,  arcosh(x) = arsinh(√(x² − 1))
1.3 Calculus
The derivative of a function is defined as:

    df/dx = lim_{h→0} [f(x + h) − f(x)] / h

Derivatives obey the following algebraic rules:

    d(x ± y) = dx ± dy ,  d(xy) = x dy + y dx ,  d(x/y) = (y dx − x dy) / y²

For the derivative of the inverse function f_inv(y), defined by f_inv(f(x)) = x, holds at point P = (x, f(x)):

    (df_inv(y)/dy)_P · (df(x)/dx)_P = 1

Chain rule: if f = f(g(x)), then holds

    df/dx = (df/dg)(dg/dx)

Further, for the derivatives of products of functions holds:

    (f · g)^(n) = Σ_{k=0}^{n} (n over k) f^(n−k) g^(k)

For the primitive function F(x) holds: F′(x) = f(x). An overview of derivatives and primitives is:

    y = f(x)           dy/dx = f′(x)           ∫ f(x)dx
    ---------------------------------------------------------------------
    ax^n               anx^(n−1)               a x^(n+1)/(n+1)
    1/x                −x^(−2)                 ln|x|
    a                  0                       ax
    a^x                a^x ln(a)               a^x / ln(a)
    e^x                e^x                     e^x
    ^a log(x)          (x ln(a))^(−1)          (x ln(x) − x)/ln(a)
    ln(x)              1/x                     x ln(x) − x
    sin(x)             cos(x)                  −cos(x)
    cos(x)             −sin(x)                 sin(x)
    tan(x)             cos^(−2)(x)             −ln|cos(x)|
    sin^(−1)(x)        −sin^(−2)(x) cos(x)     ln|tan(½x)|
    sinh(x)            cosh(x)                 cosh(x)
    cosh(x)            sinh(x)                 sinh(x)
    arcsin(x)          1/√(1 − x²)             x arcsin(x) + √(1 − x²)
    arccos(x)          −1/√(1 − x²)            x arccos(x) − √(1 − x²)
    arctan(x)          (1 + x²)^(−1)           x arctan(x) − ½ ln(1 + x²)
    (a + x²)^(−1/2)    −x(a + x²)^(−3/2)       ln|x + √(a + x²)|
    (a² − x²)^(−1)     2x(a² − x²)^(−2)        (1/2a) ln|(a + x)/(a − x)|

The curvature ρ of a curve is given by:

    ρ = (1 + (y′)²)^(3/2) / |y′′|

The theorem of De l'Hôpital: if f(a) = 0 and g(a) = 0, then is

    lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x)
1.4 Limits
    lim_{x→0} sin(x)/x = 1 ,  lim_{x→0} (e^x − 1)/x = 1 ,  lim_{x→0} tan(x)/x = 1 ,
    lim_{k→0} (1 + k)^(1/k) = e ,  lim_{x→∞} (1 + n/x)^x = e^n

    lim_{x↓0} x^a ln(x) = 0 ,  lim_{x→∞} ln^p(x)/x^a = 0 ,  lim_{x→0} ln(1 + ax)/x = a ,
    lim_{x→∞} x^p a^(−x) = 0 if |a| > 1.

    lim_{x→0} (a^x − 1)/x = ln(a) ,  lim_{x→0} arcsin(x)/x = 1 ,  lim_{x→∞} x^(1/x) = 1
1.5 Complex numbers and quaternions
1.5.1 Complex numbers
The complex number z = a + bi with a, b ∈ IR. Here a is the real part and b the imaginary part of z. |z| = √(a² + b²). By definition holds: i² = −1. Every complex number can be written as z = |z| exp(iφ), with tan(φ) = b/a. The complex conjugate of z is defined as z̄ = z* := a − bi. Further holds:

    (a + bi)(c + di) = (ac − bd) + i(ad + bc)
    (a + bi) + (c + di) = a + c + i(b + d)
    (a + bi)/(c + di) = [ (ac + bd) + i(bc − ad) ] / (c² + d²)

Goniometric functions can be written as complex exponents:

    sin(x) = (e^(ix) − e^(−ix)) / 2i
    cos(x) = (e^(ix) + e^(−ix)) / 2

From this follows that cos(ix) = cosh(x) and sin(ix) = i sinh(x). Further follows from this that e^(±ix) = cos(x) ± i sin(x), so e^(iz) ≠ 0 ∀z. Also the theorem of De Moivre follows from this:

    (cos(φ) + i sin(φ))^n = cos(nφ) + i sin(nφ).

Products and quotients of complex numbers can be written as:

    z₁ · z₂ = |z₁| · |z₂| ( cos(φ₁ + φ₂) + i sin(φ₁ + φ₂) )
    z₁ / z₂ = (|z₁| / |z₂|) ( cos(φ₁ − φ₂) + i sin(φ₁ − φ₂) )

The following can be derived:

    |z₁ + z₂| ≤ |z₁| + |z₂| ,  |z₁ − z₂| ≥ | |z₁| − |z₂| |

And from z = r exp(iθ) follows: ln(z) = ln(r) + iθ, ln(z) = ln(z) ± 2nπi.
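A small numerical illustration (added here, not part of the original formulary): the C99 complex type can be used to check De Moivre's theorem and the polar form |z|, arg(z). The angle and power used below are arbitrary example values.

/* Sketch: verify De Moivre's theorem (cos(phi)+i sin(phi))^n = cos(n phi)+i sin(n phi)
   using the C99 complex type. Illustrative only; not from the formulary itself. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    double phi = 0.7;                               /* arbitrary angle       */
    int    n   = 5;                                 /* arbitrary power       */
    double complex z   = cos(phi) + sin(phi) * I;
    double complex lhs = cpow(z, n);
    double complex rhs = cos(n * phi) + sin(n * phi) * I;

    printf("lhs = %.12f + %.12fi\n", creal(lhs), cimag(lhs));
    printf("rhs = %.12f + %.12fi\n", creal(rhs), cimag(rhs));
    printf("|z| = %.12f, arg(z) = %.12f\n", cabs(z), carg(z));
    return 0;
}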
1.5.2 Quaternions
Quaternions are defined as: z = a + bi + cj + dk, with a, b, c, d ∈ IR and i² = j² = k² = −1. The products of i, j, k with each other are given by ij = −ji = k, jk = −kj = i and ki = −ik = j.
1.6 Geometry
1.6.1 Triangles
The sine rule is:

    a/sin(α) = b/sin(β) = c/sin(γ)

Here, α is the angle opposite to a, β is opposite to b and γ opposite to c. The cosine rule is: a² = b² + c² − 2bc cos(α). For each triangle holds: α + β + γ = 180°.

Further holds:

    tan(½(α + β)) / tan(½(α − β)) = (a + b) / (a − b)

The surface of a triangle is given by ½ab sin(γ) = ½a·h_a = √(s(s − a)(s − b)(s − c)) with h_a the perpendicular on a and s = ½(a + b + c).
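As an illustration (an addition, not from the formulary), the two area formulas above can be cross-checked numerically: Heron's formula against ½ab sin(γ), with γ obtained from the cosine rule. The side lengths are arbitrary example values.

/* Sketch: triangle area via Heron's formula and via (1/2) a b sin(gamma),
   with gamma (the angle opposite c) taken from the cosine rule. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 3.0, b = 4.0, c = 5.0;                               /* example triangle */
    double s = 0.5 * (a + b + c);                                   /* half perimeter   */
    double heron = sqrt(s * (s - a) * (s - b) * (s - c));
    double gamma = acos((a * a + b * b - c * c) / (2.0 * a * b));   /* cosine rule      */
    double area2 = 0.5 * a * b * sin(gamma);
    printf("Heron: %.10f,  (1/2) a b sin(gamma): %.10f\n", heron, area2);
    return 0;
}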
1.6.2 Curves
Cycloid: if a circle with radius a rolls along a straight line, the trajectory of a point on this circle has the following parameter equation:

    x = a(t + sin(t)) ,  y = a(1 + cos(t))

Epicycloid: if a small circle with radius a rolls along a big circle with radius R, the trajectory of a point on the small circle has the following parameter equation:

    x = a sin( (R + a)t/a ) + (R + a) sin(t) ,  y = a cos( (R + a)t/a ) + (R + a) cos(t)

Hypocycloid: if a small circle with radius a rolls inside a big circle with radius R, the trajectory of a point on the small circle has the following parameter equation:

    x = a sin( (R − a)t/a ) + (R − a) sin(t) ,  y = a cos( (R − a)t/a ) + (R − a) cos(t)

An epicycloid with a = R is called a cardioid. It has the following parameter equation in polar coordinates: r = 2a[1 − cos(φ)].
1.7 Vectors
The inner product is defined by:

    a⃗ · b⃗ = Σ_i a_i b_i = |a⃗| |b⃗| cos(φ)

where φ is the angle between a⃗ and b⃗. The external product is in IR³ defined by:

    a⃗ × b⃗ = ( a_y b_z − a_z b_y , a_z b_x − a_x b_z , a_x b_y − a_y b_x ) = det | e⃗_x  e⃗_y  e⃗_z ; a_x  a_y  a_z ; b_x  b_y  b_z |

Further holds: |a⃗ × b⃗| = |a⃗| |b⃗| sin(φ), and a⃗ × (b⃗ × c⃗) = (a⃗ · c⃗)b⃗ − (a⃗ · b⃗)c⃗.
1.8 Series
1.8.1 Expansion
The Binomium of Newton is:
    (a + b)^n = Σ_{k=0}^{n} (n over k) a^(n−k) b^k   where   (n over k) := n! / (k!(n − k)!)

By subtracting the series Σ_{k=0}^{n} r^k and r·Σ_{k=0}^{n} r^k one finds:

    Σ_{k=0}^{n} r^k = (1 − r^(n+1)) / (1 − r)

and for |r| < 1 this gives the geometric series:

    Σ_{k=0}^{∞} r^k = 1/(1 − r).

The arithmetic series is given by:

    Σ_{n=0}^{N} (a + nV) = a(N + 1) + ½N(N + 1)V.

The expansion of a function around the point a is given by the Taylor series:

    f(x) = f(a) + (x − a)f′(a) + (x − a)²/2 · f′′(a) + ... + (x − a)^n/n! · f^(n)(a) + R

where the remainder is given by:

    R_n(h) = (1 − θ)^n h^(n+1)/n! · f^(n+1)(θh)

and is subject to:

    m h^(n+1)/(n + 1)!  ≤  R_n(h)  ≤  M h^(n+1)/(n + 1)!

From this one can deduce that

    (1 + x)^α = Σ_{n=0}^{∞} (α over n) x^n

One can derive that:

    Σ_{n=1}^{∞} 1/n² = π²/6 ,  Σ_{n=1}^{∞} 1/n⁴ = π⁴/90 ,  Σ_{n=1}^{∞} 1/n⁶ = π⁶/945

    Σ_{k=1}^{n} k² = (1/6)n(n + 1)(2n + 1) ,  Σ_{n=1}^{∞} (−1)^(n+1)/n² = π²/12 ,  Σ_{n=1}^{∞} (−1)^(n+1)/n = ln(2)

    Σ_{n=1}^{∞} 1/(4n² − 1) = ½ ,  Σ_{n=1}^{∞} 1/(2n − 1)² = π²/8 ,  Σ_{n=1}^{∞} 1/(2n − 1)⁴ = π⁴/96 ,  Σ_{n=1}^{∞} (−1)^(n+1)/(2n − 1)³ = π³/32
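A small aside (mine, not the formulary's): the binomial coefficient appearing in Newton's binomium can be computed without large factorials by building up the product term by term; the function name binomial is an arbitrary choice.

/* Sketch: binomial coefficient (n over k), computed multiplicatively to avoid
   overflowing factorials. Illustrative helper, not from the formulary. */
#include <stdio.h>

unsigned long long binomial(unsigned n, unsigned k)
{
    if (k > n) return 0ULL;
    if (k > n - k) k = n - k;              /* use the symmetry C(n,k) = C(n,n-k) */
    unsigned long long c = 1ULL;
    for (unsigned i = 1; i <= k; i++)
        c = c * (n - k + i) / i;           /* exact at each step: c is C(n-k+i, i) */
    return c;
}

int main(void)
{
    printf("C(10,3) = %llu\n", binomial(10, 3));   /* 120       */
    printf("C(49,6) = %llu\n", binomial(49, 6));   /* 13983816  */
    return 0;
}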
1.8.2 Convergence and divergence of series
If Σ_n |u_n| converges, Σ_n u_n also converges.

If lim_{n→∞} u_n ≠ 0 then Σ_n u_n is divergent.

An alternating series of which the absolute values of the terms drop monotonously to 0 is convergent (Leibniz).

If ∫_p^∞ f(x)dx < ∞, then Σ_n f_n is convergent.

If u_n > 0 ∀n then is Σ_n u_n convergent if Σ_n ln(u_n + 1) is convergent.

If u_n = c_n x^n the radius of convergence ρ of Σ_n u_n is given by:

    1/ρ = lim_{n→∞} ⁿ√|c_n| = lim_{n→∞} |c_(n+1) / c_n| .

The series Σ_{n=1}^{∞} 1/n^p is convergent if p > 1 and divergent if p ≤ 1.

If lim_{n→∞} u_n/v_n = p, then the following is true: if p > 0 then Σ_n u_n and Σ_n v_n are both divergent or both convergent; if p = 0 holds: if Σ_n v_n is convergent, then Σ_n u_n is also convergent.

If L is defined by: L = lim_{n→∞} ⁿ√|u_n|, or by: L = lim_{n→∞} |u_(n+1)/u_n|, then is Σ_n u_n divergent if L > 1 and convergent if L < 1.
1.8.3 Convergence and divergence of functions
f(x) is continuous in x = a only if the upper and lower limit are equal: lim_{x↑a} f(x) = lim_{x↓a} f(x) = f(a). This is written as: f(a⁻) = f(a⁺).

If f(x) is continuous in a and: lim_{x↑a} f′(x) = lim_{x↓a} f′(x), then f(x) is differentiable in x = a.

We define: ||f||_W := sup( |f(x)| : x ∈ W ), and lim_{n→∞} f_n(x) = f(x). Then holds: {f_n} is uniformly convergent if lim_{n→∞} ||f_n − f|| = 0, or: (∀ε > 0)(∃N)(∀n ≥ N) ||f_n − f|| < ε.

Weierstrass' test: if Σ ||u_n||_W is convergent, then Σ u_n is uniformly convergent.

We define S(x) = Σ_{n=N}^{∞} u_n(x) and F(y) = ∫_a^b f(x, y)dx := F. Then it can be proved that:

    Theorem   For        Demands on W                                Then holds on W
    ---------------------------------------------------------------------------------------------
    C         rows       f_n continuous, f_n uniformly convergent    f is continuous
              series     S(x) uniformly convergent, u_n continuous   S is continuous
              integral   f is continuous                             F is continuous
    ---------------------------------------------------------------------------------------------
    I         rows       f_n can be integrated,                      f can be integrated,
                         f_n uniformly convergent                    ∫ f(x)dx = lim_{n→∞} ∫ f_n dx
              series     S(x) uniformly convergent,                  S can be integrated,
                         u_n can be integrated                       ∫ S dx = Σ ∫ u_n dx
              integral   f is continuous                             ∫ F dy = ∫∫ f(x, y)dxdy
    ---------------------------------------------------------------------------------------------
    D         rows       f_n ∈ C¹; f_n′ uniformly convergent         f′ = φ(x)
              series     u_n ∈ C¹; Σ u_n conv.; Σ u_n′ unif. conv.   S′(x) = Σ u_n′(x)
              integral   ∂f/∂y continuous                            F_y = ∫ f_y(x, y)dx
1.9 Products and quotients
For a, b, c, d ∈ IR holds:

The distributive property: (a + b)(c + d) = ac + ad + bc + bd

The associative property: a(bc) = b(ac) = c(ab) and a(b + c) = ab + ac

The commutative property: a + b = b + a, ab = ba.

Further holds:

    (a^(2n) − b^(2n)) / (a − b) = a^(2n−1) + a^(2n−2) b + a^(2n−3) b² + ... + b^(2n−1)

    (a^(2n+1) + b^(2n+1)) / (a + b) = Σ_{k=0}^{2n} (−1)^k a^(2n−k) b^k

    (a − b)(a² + ab + b²) = a³ − b³ ,  (a + b)(a − b) = a² − b² ,  (a³ + b³)/(a + b) = a² − ab + b²
1.10 Logarithms
Definition: ^a log(x) = b ⇔ a^b = x. For logarithms with base e one writes ln(x).

Rules: log(x^n) = n log(x), log(a) + log(b) = log(ab), log(a) − log(b) = log(a/b).
1.11 Polynomials
Equations of the type

    Σ_{k=0}^{n} a_k x^k = 0

have n roots which may be equal to each other. Each polynomial p(z) of order n ≥ 1 has at least one root in C. If all a_k ∈ IR holds: when x = p with p ∈ C a root, then p* is also a root. Polynomials up to and including order 4 have a general analytical solution; for polynomials of order ≥ 5 there does not exist a general analytical solution.

For a, b, c ∈ IR and a ≠ 0 holds: the 2nd order equation ax² + bx + c = 0 has the general solution:

    x = ( −b ± √(b² − 4ac) ) / 2a

For a, b, c, d ∈ IR and a ≠ 0 holds: the 3rd order equation ax³ + bx² + cx + d = 0 has the general analytical solution:

    x₁ = K − (3ac − b²)/(9a²K) − b/3a

    x₂ = x₃* = −K/2 + (3ac − b²)/(18a²K) − b/3a + i(√3/2)( K + (3ac − b²)/(9a²K) )

    with  K = [ (9abc − 27da² − 2b³)/(54a³) + √3·√(4ac³ − c²b² − 18abcd + 27a²d² + 4db³)/(18a²) ]^(1/3)
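A minimal sketch (added for illustration) of the 2nd order solution formula in C; the variant below first computes q = −½(b + sign(b)√D), which avoids cancellation when b² ≫ 4ac. The function name and the choice not to return complex roots are my own.

/* Sketch: roots of a x^2 + b x + c = 0 (a != 0) using the general solution above,
   in a numerically stable form. Returns the number of real roots found. */
#include <stdio.h>
#include <math.h>

int quadratic_roots(double a, double b, double c, double *x1, double *x2)
{
    double D = b * b - 4.0 * a * c;              /* discriminant                     */
    if (D < 0.0) return 0;                       /* complex conjugate roots: skipped */
    double q = -0.5 * (b + copysign(sqrt(D), b));
    *x1 = q / a;
    *x2 = (q != 0.0) ? c / q : -b / a;           /* second root via x1 * x2 = c/a    */
    return 2;
}

int main(void)
{
    double x1, x2;
    if (quadratic_roots(1.0, -3.0, 2.0, &x1, &x2) == 2)
        printf("roots: %g and %g\n", x1, x2);    /* expect 2 and 1 */
    return 0;
}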
1.12 Primes
A prime is a number ∈ IN that can only be divided by itself and 1. There are an infinite number of primes. Proof: suppose that the collection of primes P would be finite; then construct the number q = 1 + Π_{p∈P} p. Then q ≡ 1 (mod p) for every p ∈ P, and so q cannot be written as a product of primes from P. This is a contradiction.

If π(x) is the number of primes ≤ x, then holds:

    lim_{x→∞} π(x) / (x/ln(x)) = 1   and   lim_{x→∞} π(x) / ∫_2^x dt/ln(t) = 1

For each N ≥ 2 there is a prime between N and 2N.

The numbers F_k := 2^(2^k) + 1 with k ∈ IN are called Fermat numbers. Many Fermat numbers are prime.

The numbers M_k := 2^k − 1 are called Mersenne numbers. They occur when one searches for perfect numbers, which are numbers n ∈ IN which are the sum of their different divisors, for example 6 = 1 + 2 + 3. There are 23 Mersenne numbers for k < 12000 which are prime: for k ∈ {2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213}.

To check if a given number n is prime one can use a sieve method. The first known sieve method was developed by Eratosthenes. A faster method for large numbers is the Fermat test with 4 bases, which does not prove that a number is prime but gives a high probability (see the sketch after this list):

1. Take the first 4 primes: b = 2, 3, 5, 7.

2. Compute w(b) = b^(n−1) mod n for each b.

3. If w = 1 for each b, then n is probably prime. For any other value of w, n is certainly not prime.
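A minimal sketch of the Fermat test described above, assuming n < 2^32 so that the 64-bit products in the modular exponentiation cannot overflow; the function names are mine and the result is probabilistic, exactly as stated above.

/* Sketch of the Fermat test: compute w(b) = b^(n-1) mod n for b = 2, 3, 5, 7
   by square-and-multiply. Valid for n < 2^32. */
#include <stdio.h>

static unsigned long long mulmod(unsigned long long a, unsigned long long b,
                                 unsigned long long n)
{
    return (a * b) % n;                    /* safe: a, b < 2^32, product fits 64 bits */
}

static unsigned long long powmod(unsigned long long b, unsigned long long e,
                                 unsigned long long n)
{
    unsigned long long r = 1 % n;
    b %= n;
    while (e > 0) {
        if (e & 1) r = mulmod(r, b, n);
        b = mulmod(b, b, n);
        e >>= 1;
    }
    return r;
}

int fermat_probably_prime(unsigned long long n)
{
    const unsigned long long base[4] = { 2, 3, 5, 7 };
    if (n < 2) return 0;
    for (int i = 0; i < 4; i++) {
        if (n == base[i]) return 1;
        if (powmod(base[i], n - 1, n) != 1) return 0;   /* certainly not prime */
    }
    return 1;                                           /* probably prime      */
}

int main(void)
{
    printf("97:  %d\n", fermat_probably_prime(97));     /* prime                         */
    printf("341: %d\n", fermat_probably_prime(341));    /* composite, caught by base 3   */
    printf("561: %d\n", fermat_probably_prime(561));    /* composite, caught by base 3   */
    return 0;
}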
Chapter 2
Probability and statistics
2.1 Combinations
The number of possible combinations of k elements from n elements is given by

    (n over k) = n! / (k!(n − k)!)

The number of permutations of p from n is given by

    n!/(n − p)! = p! (n over p)

The number of different ways to classify n_i elements in i groups, when the total number of elements is N, is

    N! / Π_i n_i!
2.2 Probability theory
The probability P(A) that an event A occurs is defined by:

    P(A) = n(A) / n(U)

where n(A) is the number of events when A occurs and n(U) the total number of events.

The probability P(¬A) that A does not occur is: P(¬A) = 1 − P(A). The probability P(A∪B) that A or B occurs is given by: P(A∪B) = P(A) + P(B) − P(A∩B). If A and B are independent, then holds: P(A∩B) = P(A)·P(B).

The probability P(A|B) that A occurs, given the fact that B occurs, is:

    P(A|B) = P(A∩B) / P(B)
2.3 Statistics
2.3.1 General
The average or mean value ⟨x⟩ of a collection of values is: ⟨x⟩ = Σ_i x_i / n. The standard deviation σ_x in the distribution of x is given by:

    σ_x = √( Σ_{i=1}^{n} (x_i − ⟨x⟩)² / n )

When samples are being used the sample variance s is given by s² = n/(n − 1) · σ².

The covariance σ_xy of x and y is given by:

    σ_xy = Σ_{i=1}^{n} (x_i − ⟨x⟩)(y_i − ⟨y⟩) / (n − 1)

The correlation coefficient r_xy of x and y then becomes: r_xy = σ_xy / (σ_x σ_y).

The standard deviation in a variable f(x, y) resulting from errors in x and y is:

    σ²_f(x,y) = ( (∂f/∂x) σ_x )² + ( (∂f/∂y) σ_y )² + 2 (∂f/∂x)(∂f/∂y) σ_xy
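A short sketch (added here, not from the formulary) computing ⟨x⟩, σ_x with the 1/n definition used above, and the covariance σ_xy with 1/(n−1); the data are arbitrary example values.

/* Sketch: mean, standard deviation (dividing by n, as above) and covariance
   (dividing by n-1) of two data sets. */
#include <stdio.h>
#include <math.h>

double mean(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i];
    return s / n;
}

double stddev(const double *x, int n)                     /* sigma_x, with 1/n      */
{
    double m = mean(x, n), s = 0.0;
    for (int i = 0; i < n; i++) s += (x[i] - m) * (x[i] - m);
    return sqrt(s / n);
}

double covariance(const double *x, const double *y, int n) /* sigma_xy, with 1/(n-1) */
{
    double mx = mean(x, n), my = mean(y, n), s = 0.0;
    for (int i = 0; i < n; i++) s += (x[i] - mx) * (y[i] - my);
    return s / (n - 1);
}

int main(void)
{
    double x[] = { 1, 2, 3, 4, 5 }, y[] = { 2, 2, 3, 5, 5 };
    int n = 5;
    printf("<x> = %g, sigma_x = %g, sigma_xy = %g\n",
           mean(x, n), stddev(x, n), covariance(x, y, n));
    return 0;
}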
2.3.2 Distributions
1. The Binomial distribution is the distribution describing a sampling with replacement. The probability for success is p. The probability P for k successes in n trials is then given by:

       P(x = k) = (n over k) p^k (1 − p)^(n−k)

   The standard deviation is given by σ_x = √(np(1 − p)) and the expectation value is ε = np.

2. The Hypergeometric distribution is the distribution describing a sampling without replacement in which the order is irrelevant. The probability for k successes in a trial with A possible successes and B possible failures is then given by:

       P(x = k) = (A over k)(B over n−k) / (A+B over n)

   The expectation value is given by ε = nA/(A + B).

3. The Poisson distribution is a limiting case of the binomial distribution when p → 0, n → ∞ and also np = λ is constant.

       P(x) = λ^x e^(−λ) / x!

   This distribution is normalized to Σ_{x=0}^{∞} P(x) = 1.

4. The Normal distribution is a limiting case of the binomial distribution for continuous variables:

       P(x) = 1/(σ√(2π)) · exp( −½ ((x − ⟨x⟩)/σ)² )

5. The Uniform distribution occurs when a random number x is taken from the set a ≤ x ≤ b and is given by:

       P(x) = 1/(b − a) if a ≤ x ≤ b ,  P(x) = 0 in all other cases

   ⟨x⟩ = ½(a + b) and σ² = (b − a)²/12.

6. The Gamma distribution is given by:

       P(x) = x^(α−1) e^(−x/β) / (β^α Γ(α))  if x ≥ 0

   with α > 0 and β > 0. The distribution has the following properties: ⟨x⟩ = αβ, σ² = αβ².

7. The Beta distribution is given by:

       P(x) = x^(α−1)(1 − x)^(β−1) / β(α, β)  if 0 ≤ x ≤ 1 ,  P(x) = 0 everywhere else

   and has the following properties: ⟨x⟩ = α/(α + β), σ² = αβ / ( (α + β)²(α + β + 1) ).

   For P(χ²) holds: α = V/2 and β = 2.

8. The Weibull distribution is given by:

       P(x) = (α/β) x^(α−1) e^(−x^α/β)  if x ≥ 0 ,  P(x) = 0 in all other cases

   with α > 0 and β > 0. The average is ⟨x⟩ = β^(1/α) Γ( (α + 1)/α ).

9. For a two-dimensional distribution holds:

       P₁(x₁) = ∫ P(x₁, x₂)dx₂ ,  P₂(x₂) = ∫ P(x₁, x₂)dx₁

   with

       ⟨g(x₁, x₂)⟩ = ∫∫ g(x₁, x₂)P(x₁, x₂)dx₁dx₂ = Σ_{x₁} Σ_{x₂} g·P
2.4 Regression analyses
When there exists a relation between the quantities x and y of the form y = ax + b and there is a measured set x_i with related y_i, the following relation holds for a and b, with x⃗ = (x₁, x₂, ..., x_n) and e⃗ = (1, 1, ..., 1):

    y⃗ − ax⃗ − be⃗  ⊥  ⟨x⃗, e⃗⟩

From this follows that the inner products are 0:

    (y⃗, x⃗) − a(x⃗, x⃗) − b(e⃗, x⃗) = 0
    (y⃗, e⃗) − a(x⃗, e⃗) − b(e⃗, e⃗) = 0

with (x⃗, x⃗) = Σ_i x_i², (x⃗, y⃗) = Σ_i x_i y_i, (x⃗, e⃗) = Σ_i x_i and (e⃗, e⃗) = n. a and b follow from this.

A similar method works for higher order polynomial fits: for a second order fit holds:

    y⃗ − a x⃗² − bx⃗ − ce⃗  ⊥  ⟨x⃗², x⃗, e⃗⟩

with x⃗² = (x₁², ..., x_n²).

The correlation coefficient r is a measure for the quality of a fit. In case of linear regression it is given by:

    r = ( n Σxy − Σx Σy ) / √( (n Σx² − (Σx)²)(n Σy² − (Σy)²) )
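A small sketch (not from the formulary) that solves the two normal equations above for a and b and evaluates the correlation coefficient r; the data in main are made-up example values.

/* Sketch of the linear regression above: solve
   (y,x) - a(x,x) - b(e,x) = 0 and (y,e) - a(x,e) - b(e,e) = 0 for a and b. */
#include <stdio.h>
#include <math.h>

void linear_fit(const double *x, const double *y, int n,
                double *a, double *b, double *r)
{
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];         sy  += y[i];
        sxx += x[i] * x[i];  syy += y[i] * y[i];  sxy += x[i] * y[i];
    }
    double d = n * sxx - sx * sx;                 /* determinant of the 2x2 system */
    *a = (n * sxy - sx * sy) / d;                 /* slope  a                      */
    *b = (sy * sxx - sx * sxy) / d;               /* offset b                      */
    *r = (n * sxy - sx * sy) /
         sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
}

int main(void)
{
    double x[] = { 0, 1, 2, 3, 4 }, y[] = { 1.1, 2.9, 5.2, 7.1, 8.9 };
    double a, b, r;
    linear_fit(x, y, 5, &a, &b, &r);
    printf("y = %.4f x + %.4f, r = %.6f\n", a, b, r);
    return 0;
}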
Chapter 3
Calculus
3.1 Integrals
3.1.1 Arithmetic rules
The primitive function F(x) of f(x) obeys the rule F′(x) = f(x). With F(x) the primitive of f(x) holds for the definite integral

    ∫_a^b f(x)dx = F(b) − F(a)

If u = f(x) holds:

    ∫_a^b g(f(x)) df(x) = ∫_{f(a)}^{f(b)} g(u)du

Partial integration: with F and G the primitives of f and g holds:

    ∫ f(x)·g(x)dx = f(x)G(x) − ∫ G(x) (df(x)/dx) dx

A derivative can be brought under the integral sign (see section 1.8.3 for the required conditions):

    d/dy [ ∫_{x=g(y)}^{x=h(y)} f(x, y)dx ] = ∫_{x=g(y)}^{x=h(y)} ∂f(x, y)/∂y dx − f(g(y), y) dg(y)/dy + f(h(y), y) dh(y)/dy
3.1.2 Arc lengths, surfaces and volumes

The arc length ℓ of a curve y(x) is given by:

    ℓ = ∫ √( 1 + (dy(x)/dx)² ) dx

The arc length of a parameter curve F(x⃗(t)) is:

    ℓ = ∫ F ds = ∫ F(x⃗(t)) |ẋ⃗(t)| dt

with

    t⃗ = dx⃗/ds = ẋ⃗(t)/|ẋ⃗(t)| ,  |t⃗| = 1

    ∫ (v⃗, t⃗) ds = ∫ (v⃗, t⃗(t)) dt = ∫ (v₁dx + v₂dy + v₃dz)

The surface A of a solid of revolution is:

    A = 2π ∫ y √( 1 + (dy(x)/dx)² ) dx

The volume V of a solid of revolution is:

    V = π ∫ f²(x) dx
3.1.3 Separation of quotients
Every rational function P(x)/Q(x) where P and Q are polynomials can be written as a linear combination of functions of the type (x − a)^k with k ∈ ZZ, and of functions of the type

    (px + q) / ((x − a)² + b²)^n

with b > 0 and n ∈ IN. So:

    p(x)/(x − a)^n = Σ_{k=1}^{n} A_k/(x − a)^k ,   p(x)/((x − b)² + c²)^n = Σ_{k=1}^{n} (A_k x + B_k)/((x − b)² + c²)^k

Recurrent relation: for n ≠ 0 holds:

    ∫ dx/(x² + 1)^(n+1) = (1/2n) · x/(x² + 1)^n + ((2n − 1)/2n) ∫ dx/(x² + 1)^n
3.1.4 Special functions
Elliptic functions
Elliptic functions can be written as a power series as follows:

    √(1 − k² sin²(x)) = 1 − Σ_{n=1}^{∞} (2n − 1)!! / ((2n)!!(2n − 1)) · k^(2n) sin^(2n)(x)

    1/√(1 − k² sin²(x)) = 1 + Σ_{n=1}^{∞} (2n − 1)!!/(2n)!! · k^(2n) sin^(2n)(x)

with n!! = n(n − 2)!!.

The Gamma function

The gamma function Γ(y) is defined by:

    Γ(y) = ∫_0^∞ e^(−x) x^(y−1) dx

One can derive that Γ(y + 1) = yΓ(y) = y!. This is a way to define faculties for non-integers. Further one can derive that

    Γ(n + ½) = (√π/2^n)(2n − 1)!!   and   Γ^(n)(y) = ∫_0^∞ e^(−x) x^(y−1) ln^n(x) dx

The Beta function

The beta function β(p, q) is defined by:

    β(p, q) = ∫_0^1 x^(p−1) (1 − x)^(q−1) dx

with p and q > 0. The beta and gamma functions are related by the following equation:

    β(p, q) = Γ(p)Γ(q) / Γ(p + q)

The Delta function

The delta function δ(x) is an infinitely thin peak function with surface 1. It can be defined by:

    δ(x) = lim_{ε→0} P(ε, x)   with   P(ε, x) = 0 for |x| > ε and P(ε, x) = 1/2ε when |x| < ε

Some properties are:

    ∫_{−∞}^{∞} δ(x)dx = 1 ,  ∫_{−∞}^{∞} F(x)δ(x)dx = F(0)
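As a numerical aside (added here), the C99 library function tgamma can be used to check Γ(n + 1) = n! and Γ(n + ½) = (√π/2^n)(2n − 1)!!; π is taken as acos(−1) to stay within the standard library.

/* Sketch: verify two Gamma function identities from above with tgamma(). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = acos(-1.0);
    double fact = 1.0, dblfact = 1.0;            /* n! and (2n-1)!! */
    for (int n = 1; n <= 6; n++) {
        fact    *= n;
        dblfact *= 2 * n - 1;
        printf("n=%d: Gamma(n+1)=%g  n!=%g   Gamma(n+1/2)=%g  sqrt(pi)(2n-1)!!/2^n=%g\n",
               n, tgamma(n + 1.0), fact,
               tgamma(n + 0.5), sqrt(pi) * dblfact / pow(2.0, n));
    }
    return 0;
}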
3.1.5 Goniometric integrals
When solving goniometric integrals it can be useful to change variables. The following holds if one defines tan(½x) := t:

    dx = 2dt/(1 + t²) ,  cos(x) = (1 − t²)/(1 + t²) ,  sin(x) = 2t/(1 + t²)

Each integral of the type ∫ R(x, √(ax² + bx + c))dx can be converted into one of the types that were treated in section 3.1.3. After this conversion one can substitute in the integrals of the type:

    ∫ R(x, √(x² + 1))dx :  x = tan(φ) , dx = dφ/cos²(φ)   or   √(x² + 1) = t + x
    ∫ R(x, √(1 − x²))dx :  x = sin(φ) , dx = cos(φ)dφ     or   √(1 − x²) = 1 − tx
    ∫ R(x, √(x² − 1))dx :  x = 1/cos(φ) , dx = sin(φ)/cos²(φ) dφ   or   √(x² − 1) = x − t

These definite integrals are easily solved:

    ∫_0^{π/2} cos^n(x) sin^m(x)dx = (n − 1)!!(m − 1)!! / (m + n)!!  ×  { π/2 when m and n are both even ; 1 in all other cases }

Some important integrals are:

    ∫_0^∞ x dx/(e^(ax) + 1) = π²/12a² ,  ∫_{−∞}^{∞} x² dx/(e^x + 1)² = π²/3 ,  ∫_0^∞ x³ dx/(e^x + 1) = π⁴/15
3.2 Functions with more variables
3.2.1 Derivatives
The partial derivative with respect to x of a function f(x, y) is defined by:

    (∂f/∂x)_{x₀} = lim_{h→0} [ f(x₀ + h, y₀) − f(x₀, y₀) ] / h

The directional derivative in the direction of α is defined by:

    ∂f/∂α = lim_{r↓0} [ f(x₀ + r cos(α), y₀ + r sin(α)) − f(x₀, y₀) ] / r = ( ∇f, (cos α, sin α) ) = ∇f · v⃗/|v⃗|

When one changes to coordinates f(x(u, v), y(u, v)) holds:

    ∂f/∂u = (∂f/∂x)(∂x/∂u) + (∂f/∂y)(∂y/∂u)

If x(t) and y(t) depend only on one parameter t holds:

    ∂f/∂t = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)

The total differential df of a function of 3 variables is given by:

    df = (∂f/∂x)dx + (∂f/∂y)dy + (∂f/∂z)dz

So

    df/dx = ∂f/∂x + (∂f/∂y)(dy/dx) + (∂f/∂z)(dz/dx)

The tangent in point x⃗₀ at the curve f(x, y) = 0 is given by the equation f_x(x⃗₀)(x − x₀) + f_y(x⃗₀)(y − y₀) = 0.

The tangent plane in x⃗₀ is given by: f_x(x⃗₀)(x − x₀) + f_y(x⃗₀)(y − y₀) = z − f(x⃗₀).
3.2.2 Taylor series
A function of two variables can be expanded as follows in a Taylor series:
    f(x₀ + h, y₀ + k) = Σ_{p=0}^{n} (1/p!) ( h ∂/∂x + k ∂/∂y )^p f(x₀, y₀) + R(n)

with R(n) the residual error and

    ( h ∂/∂x + k ∂/∂y )^p f(a, b) = Σ_{m=0}^{p} (p over m) h^m k^(p−m) ∂^p f(a, b) / (∂x^m ∂y^(p−m))
3.2.3 Extrema
When f is continuous on a compact domain V there exists a global maximum and a global minimum for f on this domain. A domain is called compact if it is bounded and closed.

Possible extrema of f(x, y) on a domain V ⊂ IR² are:

1. Points on V where f(x, y) is not differentiable,

2. Points where ∇f = 0⃗,

3. If the boundary of V is given by φ(x, y) = 0, then all points where ∇f(x, y) + λ∇φ(x, y) = 0⃗ are possible extrema. This is the multiplicator method of Lagrange; λ is called a multiplicator.

The same as in IR² holds in IR³ when the area to be searched is constrained by a compact V, and V is defined by φ₁(x, y, z) = 0 and φ₂(x, y, z) = 0 for extrema of f(x, y, z) for points (1) and (2). Point (3) is rewritten as follows: possible extrema are points where ∇f(x, y, z) + λ₁∇φ₁(x, y, z) + λ₂∇φ₂(x, y, z) = 0⃗.
3.2.4 The ∇-operator

In cartesian coordinates (x, y, z) holds:

    ∇ = ∂/∂x e⃗_x + ∂/∂y e⃗_y + ∂/∂z e⃗_z

    grad f = ∂f/∂x e⃗_x + ∂f/∂y e⃗_y + ∂f/∂z e⃗_z

    div a⃗ = ∂a_x/∂x + ∂a_y/∂y + ∂a_z/∂z

    curl a⃗ = ( ∂a_z/∂y − ∂a_y/∂z ) e⃗_x + ( ∂a_x/∂z − ∂a_z/∂x ) e⃗_y + ( ∂a_y/∂x − ∂a_x/∂y ) e⃗_z

    ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²

In cylindrical coordinates (r, φ, z) holds:

    ∇ = ∂/∂r e⃗_r + (1/r) ∂/∂φ e⃗_φ + ∂/∂z e⃗_z

    grad f = ∂f/∂r e⃗_r + (1/r) ∂f/∂φ e⃗_φ + ∂f/∂z e⃗_z

    div a⃗ = ∂a_r/∂r + a_r/r + (1/r) ∂a_φ/∂φ + ∂a_z/∂z

    curl a⃗ = ( (1/r) ∂a_z/∂φ − ∂a_φ/∂z ) e⃗_r + ( ∂a_r/∂z − ∂a_z/∂r ) e⃗_φ + ( ∂a_φ/∂r + a_φ/r − (1/r) ∂a_r/∂φ ) e⃗_z

    ∇²f = ∂²f/∂r² + (1/r) ∂f/∂r + (1/r²) ∂²f/∂φ² + ∂²f/∂z²

In spherical coordinates (r, θ, φ) holds:

    ∇ = ∂/∂r e⃗_r + (1/r) ∂/∂θ e⃗_θ + (1/(r sin θ)) ∂/∂φ e⃗_φ

    grad f = ∂f/∂r e⃗_r + (1/r) ∂f/∂θ e⃗_θ + (1/(r sin θ)) ∂f/∂φ e⃗_φ

    div a⃗ = ∂a_r/∂r + 2a_r/r + (1/r) ∂a_θ/∂θ + a_θ/(r tan θ) + (1/(r sin θ)) ∂a_φ/∂φ

    curl a⃗ = ( (1/r) ∂a_φ/∂θ + a_φ/(r tan θ) − (1/(r sin θ)) ∂a_θ/∂φ ) e⃗_r
            + ( (1/(r sin θ)) ∂a_r/∂φ − ∂a_φ/∂r − a_φ/r ) e⃗_θ
            + ( ∂a_θ/∂r + a_θ/r − (1/r) ∂a_r/∂θ ) e⃗_φ

    ∇²f = ∂²f/∂r² + (2/r) ∂f/∂r + (1/r²) ∂²f/∂θ² + (1/(r² tan θ)) ∂f/∂θ + (1/(r² sin²θ)) ∂²f/∂φ²
General orthonormal curvilinear coordinates (u, v, w) can be derived from cartesian coordinates by the transformation x⃗ = x⃗(u, v, w). The unit vectors are given by:

    e⃗_u = (1/h₁) ∂x⃗/∂u ,  e⃗_v = (1/h₂) ∂x⃗/∂v ,  e⃗_w = (1/h₃) ∂x⃗/∂w

where the factors h_i give normalization to length 1. The differential operators are then given by:

    grad f = (1/h₁) ∂f/∂u e⃗_u + (1/h₂) ∂f/∂v e⃗_v + (1/h₃) ∂f/∂w e⃗_w

    div a⃗ = 1/(h₁h₂h₃) [ ∂(h₂h₃a_u)/∂u + ∂(h₃h₁a_v)/∂v + ∂(h₁h₂a_w)/∂w ]

    curl a⃗ = 1/(h₂h₃) [ ∂(h₃a_w)/∂v − ∂(h₂a_v)/∂w ] e⃗_u + 1/(h₃h₁) [ ∂(h₁a_u)/∂w − ∂(h₃a_w)/∂u ] e⃗_v
            + 1/(h₁h₂) [ ∂(h₂a_v)/∂u − ∂(h₁a_u)/∂v ] e⃗_w

    ∇²f = 1/(h₁h₂h₃) [ ∂/∂u( (h₂h₃/h₁) ∂f/∂u ) + ∂/∂v( (h₃h₁/h₂) ∂f/∂v ) + ∂/∂w( (h₁h₂/h₃) ∂f/∂w ) ]

Some properties of the ∇-operator are:

    div(φv⃗) = φ div v⃗ + (grad φ) · v⃗         curl(φv⃗) = φ curl v⃗ + (grad φ) × v⃗       curl grad φ = 0⃗
    div(u⃗ × v⃗) = v⃗·(curl u⃗) − u⃗·(curl v⃗)     curl curl v⃗ = grad div v⃗ − ∇²v⃗           div curl v⃗ = 0
    div grad φ = ∇²φ                          ∇²v⃗ ≡ (∇²v₁, ∇²v₂, ∇²v₃)

Here, v⃗ is an arbitrary vector field and φ an arbitrary scalar field.
3.2.5 Integral theorems
Some important integral theorems are:

    Gauss:                      ∮∮ (v⃗ · n⃗) d²A = ∫∫∫ (div v⃗) d³V

    Stokes for a scalar field:  ∮ (φ · e⃗_t) ds = ∫∫ (n⃗ × grad φ) d²A

    Stokes for a vector field:  ∮ (v⃗ · e⃗_t) ds = ∫∫ (curl v⃗ · n⃗) d²A

    this gives:                 ∮∮ (curl v⃗ · n⃗) d²A = 0

    Ostrogradsky:               ∮∮ (n⃗ × v⃗) d²A = ∫∫∫ (curl v⃗) d³V

                                ∮∮ (φ n⃗) d²A = ∫∫∫ (grad φ) d³V

Here the orientable surface ∫∫ d²A is bounded by the Jordan curve s(t).
3.2.6 Multiple integrals
Let A be a closed curve given by f(x, y) = 0, then the surface A inside the curve in IR² is given by

    A = ∫∫ d²A = ∫∫ dxdy

Let the surface A be defined by the function z = f(x, y). The volume V bounded by A and the xy plane is then given by:

    V = ∫∫ f(x, y) dxdy

The volume inside a closed surface defined by z = f(x, y) is given by:

    V = ∫∫∫ d³V = ∫∫ f(x, y) dxdy = ∫∫∫ dxdydz
3.2.7 Coordinate transformations
The expressions d²A and d³V transform as follows when one changes coordinates to u⃗ = (u, v, w) through the transformation x⃗(u, v, w):

    V = ∫∫∫ f(x, y, z) dxdydz = ∫∫∫ f(x⃗(u⃗)) |∂x⃗/∂u⃗| dudvdw

In IR² holds:

    ∂x⃗/∂u⃗ = det | ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v |

Let the surface A be defined by z = F(x, y) = X⃗(u, v). Then the volume bounded by the xy plane and F is given by:

    ∫∫_S f(x⃗) d²A = ∫∫_G f(x⃗(u⃗)) |∂X⃗/∂u × ∂X⃗/∂v| dudv = ∫∫_G f(x, y, F(x, y)) √( 1 + (∂_x F)² + (∂_y F)² ) dxdy
3.3 Orthogonality of functions
The inner product of two functions f(x) and g(x) on the interval [a, b] is given by:

    (f, g) = ∫_a^b f(x)g(x)dx

or, when using a weight function p(x), by:

    (f, g) = ∫_a^b p(x)f(x)g(x)dx

The norm ||f|| follows from: ||f||² = (f, f). A set of functions f_i is orthonormal if (f_i, f_j) = δ_ij.

Each function f(x) can be written as a sum of orthogonal functions:

    f(x) = Σ_{i=0}^{∞} c_i g_i(x)

and Σ c_i² ≤ ||f||². Let the set g_i be orthogonal, then it follows:

    c_i = (f, g_i) / (g_i, g_i)
3.4 Fourier series
Each function can be written as a sum of independent base functions. When one chooses the orthogonal basis (cos(nx), sin(nx)) we have a Fourier series.

A periodical function f(x) with period 2L can be written as:

    f(x) = a₀ + Σ_{n=1}^{∞} [ a_n cos(nπx/L) + b_n sin(nπx/L) ]

Due to the orthogonality follows for the coefficients:

    a₀ = (1/2L) ∫_{−L}^{L} f(t)dt ,  a_n = (1/L) ∫_{−L}^{L} f(t) cos(nπt/L)dt ,  b_n = (1/L) ∫_{−L}^{L} f(t) sin(nπt/L)dt

A Fourier series can also be written as a sum of complex exponents:

    f(x) = Σ_{n=−∞}^{∞} c_n e^(inx)

with

    c_n = (1/2π) ∫_{−π}^{π} f(x) e^(−inx) dx

The Fourier transform of a function f(x) gives the transformed function f̂(ω):

    f̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^(−iωx) dx

The inverse transformation is given by:

    ½[ f(x⁺) + f(x⁻) ] = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) e^(iωx) dω

where f(x⁺) and f(x⁻) are defined by the lower and upper limit:

    f(a⁻) = lim_{x↑a} f(x) ,  f(a⁺) = lim_{x↓a} f(x)

For continuous functions is ½[f(x⁺) + f(x⁻)] = f(x).
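A small sketch (an addition, not part of the formulary) that approximates the coefficients a_n and b_n above with a midpoint rule for f(x) = x on [−L, L] with L = π; for this f the exact values are a_n = 0 and b_n = 2(−1)^(n+1)/n. The number of integration steps is an arbitrary choice.

/* Sketch: numerically approximate Fourier coefficients of f(x) = x on [-pi, pi]. */
#include <stdio.h>
#include <math.h>

static double f(double x) { return x; }

int main(void)
{
    const double pi = acos(-1.0), L = pi;
    const int N = 20000;                        /* number of integration steps */
    const double h = 2.0 * L / N;

    for (int n = 1; n <= 4; n++) {
        double an = 0.0, bn = 0.0;
        for (int i = 0; i < N; i++) {
            double t = -L + (i + 0.5) * h;      /* midpoint of cell i          */
            an += f(t) * cos(n * pi * t / L) * h / L;
            bn += f(t) * sin(n * pi * t / L) * h / L;
            /* a_0 would carry the factor 1/(2L) instead of 1/L */
        }
        printf("n=%d: a_n = %+.6f, b_n = %+.6f (exact b_n = %+.6f)\n",
               n, an, bn, 2.0 * pow(-1.0, n + 1) / n);
    }
    return 0;
}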
Chapter 4
Differential equations
4.1 Linear differential equations
4.1.1 First order linear DE
The general solution of a linear differential equation is given by y_A = y_H + y_P, where y_H is the solution of the homogeneous equation and y_P is a particular solution.

A first order differential equation is given by: y′(x) + a(x)y(x) = b(x). Its homogeneous equation is y′(x) + a(x)y(x) = 0.

The solution of the homogeneous equation is given by

    y_H = k exp( −∫ a(x)dx )

Suppose that a(x) = a = constant.

Substitution of exp(λx) in the homogeneous equation leads to the characteristic equation λ + a = 0 ⇒ λ = −a.

Suppose b(x) = α exp(μx). Then one can distinguish two cases:

1. λ ≠ μ: a particular solution is: y_P = α exp(μx)/(μ − λ)

2. λ = μ: a particular solution is: y_P = αx exp(μx)

When a DE is solved by variation of parameters one writes: y_P(x) = y_H(x)f(x), and then one solves f(x) from this.
4.1.2 Second order linear DE
A differential equation of the second order with constant coefficients is given by: y′′(x) + ay′(x) + by(x) = c(x). If c(x) = c = constant there exists a particular solution y_P = c/b.

Substitution of y = exp(λx) leads to the characteristic equation λ² + aλ + b = 0.

There are now 2 possibilities:

1. λ₁ ≠ λ₂: then y_H = α exp(λ₁x) + β exp(λ₂x).

2. λ₁ = λ₂ = λ: then y_H = (α + βx) exp(λx).

If c(x) = p(x) exp(λx) where p(x) is a polynomial there are 3 possibilities:

1. λ₁, λ₂ ≠ λ: y_P = q(x) exp(λx).

2. λ₁ = λ, λ₂ ≠ λ: y_P = xq(x) exp(λx).

3. λ₁ = λ₂ = λ: y_P = x²q(x) exp(λx).

where q(x) is a polynomial of the same order as p(x).

When y′′(x) + ω²y(x) = f(x) and y(0) = y′(0) = 0 follows:

    y(x) = (1/ω) ∫_0^x f(t) sin(ω(x − t)) dt.
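A short worked example (added for illustration): for y′′ − 3y′ + 2y = 4e^(3x) the characteristic equation λ² − 3λ + 2 = 0 gives λ₁ = 1, λ₂ = 2, so y_H = αe^x + βe^(2x). Since μ = 3 differs from both λ₁ and λ₂, case 1 applies with q(x) a constant γ: substituting y_P = γe^(3x) gives (9 − 9 + 2)γ = 4, so γ = 2 and the general solution is y = αe^x + βe^(2x) + 2e^(3x).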
4.1.3 The Wronskian
We start with the LDE y′′(x) + p(x)y′(x) + q(x)y(x) = 0 and the two initial conditions y(x₀) = K₀ and y′(x₀) = K₁. When p(x) and q(x) are continuous on the open interval I there exists a unique solution y(x) on this interval.

The general solution can then be written as y(x) = c₁y₁(x) + c₂y₂(x), where y₁ and y₂ are linearly independent. These are also all solutions of the LDE.

The Wronskian is defined by:

    W(y₁, y₂) = det | y₁  y₂ ; y₁′  y₂′ | = y₁y₂′ − y₂y₁′

y₁ and y₂ are linearly dependent if and only if there is an x₀ ∈ I so that holds: W(y₁(x₀), y₂(x₀)) = 0.
4.1.4 Power series substitution
When a series y = Σ a_n x^n is substituted in the LDE with constant coefficients y′′(x) + py′(x) + qy(x) = 0 this leads to:

    Σ_n [ n(n − 1)a_n x^(n−2) + pna_n x^(n−1) + qa_n x^n ] = 0

Setting coefficients for equal powers of x equal gives:

    (n + 2)(n + 1)a_(n+2) + p(n + 1)a_(n+1) + qa_n = 0

This gives a general relation between the coefficients. Special cases are n = 0, 1, 2.
4.2 Some special cases
4.2.1 Frobenius method
Given the LDE
    d²y(x)/dx² + (b(x)/x) dy(x)/dx + (c(x)/x²) y(x) = 0

with b(x) and c(x) analytical at x = 0. This LDE has at least one solution of the form

    y_i(x) = x^(r_i) Σ_{n=0}^{∞} a_n x^n   with i = 1, 2

with r real or complex and chosen so that a₀ ≠ 0. When one expands b(x) and c(x) as b(x) = b₀ + b₁x + b₂x² + ... and c(x) = c₀ + c₁x + c₂x² + ..., it follows for r:

    r² + (b₀ − 1)r + c₀ = 0

There are now 3 possibilities:

1. r₁ = r₂: then y(x) = y₁(x) ln|x| + y₂(x).

2. r₁ − r₂ ∈ IN: then y(x) = ky₁(x) ln|x| + y₂(x).

3. r₁ − r₂ ∉ ZZ: then y(x) = y₁(x) + y₂(x).
4.2.2 Euler
Given the LDE
    x² d²y(x)/dx² + ax dy(x)/dx + by(x) = 0

Substitution of y(x) = x^r gives an equation for r: r² + (a − 1)r + b = 0. From this one gets two solutions r₁ and r₂. There are now 2 possibilities:

1. r₁ ≠ r₂: then y(x) = C₁x^(r₁) + C₂x^(r₂).

2. r₁ = r₂ = r: then y(x) = (C₁ ln(x) + C₂) x^r.
4.2.3 Legendre's DE

Given the LDE

    (1 − x²) d²y(x)/dx² − 2x dy(x)/dx + n(n + 1)y(x) = 0

The solutions of this equation are given by y(x) = aP_n(x) + by₂(x) where the Legendre polynomials P(x) are defined by:

    P_n(x) = 1/(2^n n!) · d^n/dx^n [ (x² − 1)^n ]

For these holds: ||P_n||² = 2/(2n + 1).
4.2.4 The associated Legendre equation
This equation follows from the θ-dependent part of the wave equation ∇²Ψ = 0 by substitution of ξ = cos(θ). Then follows:

    (1 − ξ²) d/dξ [ (1 − ξ²) dP(ξ)/dξ ] + [ C(1 − ξ²) − m² ] P(ξ) = 0

Regular solutions exist only if C = l(l + 1). They are of the form:

    P_l^|m|(ξ) = (1 − ξ²)^(|m|/2) d^|m| P_l^0(ξ)/dξ^|m| = (1 − ξ²)^(|m|/2)/(2^l l!) · d^(|m|+l)/dξ^(|m|+l) (ξ² − 1)^l

For |m| > l is P_l^|m|(ξ) = 0. Some properties of P_l^0(ξ) are:

    ∫_{−1}^{1} P_l^0(ξ) P_{l′}^0(ξ) dξ = 2/(2l + 1) δ_{ll′} ,  Σ_{l=0}^{∞} P_l^0(ξ) t^l = 1/√(1 − 2ξt + t²)

This polynomial can be written as:

    P_l^0(ξ) = (1/π) ∫_0^π ( ξ + √(ξ² − 1) cos(θ) )^l dθ
4.2.5 Solutions for Bessel's equation

Given the LDE

    x² d²y(x)/dx² + x dy(x)/dx + (x² − ν²)y(x) = 0

also called Bessel's equation, and the Bessel functions of the first kind

    J_ν(x) = x^ν Σ_{m=0}^{∞} (−1)^m x^(2m) / ( 2^(2m+ν) m! Γ(ν + m + 1) )

for ν := n ∈ IN this becomes:

    J_n(x) = x^n Σ_{m=0}^{∞} (−1)^m x^(2m) / ( 2^(2m+n) m! (n + m)! )

When ν ∉ ZZ the solution is given by y(x) = aJ_ν(x) + bJ_{−ν}(x). But because for n ∈ ZZ holds: J_{−n}(x) = (−1)^n J_n(x), this does not apply to integers. The general solution of Bessel's equation is given by y(x) = aJ_ν(x) + bY_ν(x), where Y_ν are the Bessel functions of the second kind:

    Y_ν(x) = [ J_ν(x) cos(νπ) − J_{−ν}(x) ] / sin(νπ)   and   Y_n(x) = lim_{ν→n} Y_ν(x)

The equation x²y′′(x) + xy′(x) − (x² + ν²)y(x) = 0 has the modified Bessel functions of the first kind I_ν(x) = i^(−ν)J_ν(ix) as solution, and also solutions K_ν = π[ I_{−ν}(x) − I_ν(x) ] / [ 2 sin(νπ) ].

Sometimes it can be convenient to write the solutions of Bessel's equation in terms of the Hankel functions

    H_n^(1)(x) = J_n(x) + iY_n(x) ,  H_n^(2)(x) = J_n(x) − iY_n(x)
4.2.6 Properties of Bessel functions
Bessel functions are orthogonal with respect to the weight function p(x) = x.
J_{−n}(x) = (−1)^n J_n(x). The Neumann functions N_m(x) are defined as:

    N_m(x) = (1/2π) J_m(x) ln(x) + (1/x^m) Σ_{n=0}^{∞} α_n x^(2n)

The following holds: lim_{x→0} J_m(x) = x^m, lim_{x→0} N_m(x) = x^(−m) for m ≠ 0, lim_{x→0} N_0(x) = ln(x).

    lim_{r→∞} H(r) = e^(ikr) e^(−iωt)/√r ,  lim_{x→∞} J_n(x) = √(2/πx) cos(x − x_n) ,  lim_{x→∞} J_{−n}(x) = √(2/πx) sin(x − x_n)

with x_n = ½π(n + ½).

    J_(n+1)(x) + J_(n−1)(x) = (2n/x) J_n(x) ,  J_(n+1)(x) − J_(n−1)(x) = −2 dJ_n(x)/dx

The following integral relations hold:

    J_m(x) = (1/2π) ∫_0^{2π} exp[ i(x sin(θ) − mθ) ] dθ = (1/π) ∫_0^π cos( x sin(θ) − mθ ) dθ
4.2.7 Laguerre's equation

Given the LDE

    x d²y(x)/dx² + (1 − x) dy(x)/dx + ny(x) = 0

Solutions of this equation are the Laguerre polynomials L_n(x):

    L_n(x) = (e^x/n!) d^n/dx^n ( x^n e^(−x) ) = Σ_{m=0}^{n} ((−1)^m/m!) (n over m) x^m
4.2.8 The associated Laguerre equation
Given the LDE
    d²y(x)/dx² + ( (m + 1)/x − 1 ) dy(x)/dx + ( (n + ½(m + 1))/x ) y(x) = 0

Solutions of this equation are the associated Laguerre polynomials L_n^m(x):

    L_n^m(x) = ( (−1)^m n! / (n − m)! ) e^x x^(−m) d^(n−m)/dx^(n−m) ( e^(−x) x^n )
4.2.9 Hermite
The differential equations of Hermite are:

    d²H_n(x)/dx² − 2x dH_n(x)/dx + 2nH_n(x) = 0   and   d²He_n(x)/dx² − x dHe_n(x)/dx + nHe_n(x) = 0

Solutions of these equations are the Hermite polynomials, given by:

    H_n(x) = (−1)^n exp(x²) d^n( exp(−x²) )/dx^n = 2^(n/2) He_n(x√2)

    He_n(x) = (−1)^n exp(½x²) d^n( exp(−½x²) )/dx^n = 2^(−n/2) H_n(x/√2)
4.2.10 Chebyshev
The LDE
    (1 − x²) d²U_n(x)/dx² − 3x dU_n(x)/dx + n(n + 2)U_n(x) = 0

has solutions of the form

    U_n(x) = sin[ (n + 1) arccos(x) ] / √(1 − x²)

The LDE

    (1 − x²) d²T_n(x)/dx² − x dT_n(x)/dx + n²T_n(x) = 0

has solutions T_n(x) = cos(n arccos(x)).
4.2.11 Weber
The LDE W_n′′(x) + (n + ½ − ¼x²)W_n(x) = 0 has solutions: W_n(x) = He_n(x) exp(−¼x²).
4.3 Non-linear differential equations

Some non-linear differential equations and a solution are:

    y′ = a√(y² + b²)        y = b sinh(a(x − x₀))
    y′ = a√(y² − b²)        y = b cosh(a(x − x₀))
    y′ = a√(b² − y²)        y = b sin(a(x − x₀))
    y′ = a(y² + b²)         y = b tan(ab(x − x₀))
    y′ = a(y² − b²)         y = −b coth(ab(x − x₀))
    y′ = a(b² − y²)         y = b tanh(ab(x − x₀))
    y′ = ay(b − y)/b        y = b / (1 + Cb exp(−ax))
4.4 Sturm-Liouville equations
Sturm-Liouville equations are second order LDEs of the form:

    −d/dx [ p(x) dy(x)/dx ] + q(x)y(x) = λm(x)y(x)

The boundary conditions are chosen so that the operator

    L = −d/dx [ p(x) d/dx ] + q(x)

is Hermitean. The normalization function m(x) must satisfy

    ∫_a^b m(x) y_i(x) y_j(x) dx = δ_ij

When y₁(x) and y₂(x) are two linearly independent solutions one can write the Wronskian in this form:

    W(y₁, y₂) = det | y₁  y₂ ; y₁′  y₂′ | = C / p(x)

where C is constant. By changing to another dependent variable u(x), given by u(x) = y(x)√(p(x)), the LDE transforms into the normal form:

    d²u(x)/dx² + I(x)u(x) = 0   with   I(x) = ¼( p′(x)/p(x) )² − ½ p′′(x)/p(x) − ( q(x) − λm(x) )/p(x)

If I(x) > 0, then y′′/y < 0 and the solution has an oscillatory behaviour; if I(x) < 0, then y′′/y > 0 and the solution has an exponential behaviour.
4.5 Linear partial differential equations

4.5.1 General

The normal derivative is defined by:

    ∂u/∂n = (∇u, n⃗)

A frequently used solution method for PDEs is separation of variables: one assumes that the solution can be written as u(x, t) = X(x)T(t). When this is substituted two ordinary DEs for X(x) and T(t) are obtained.
4.5.2 Special cases
The wave equation
The wave equation in 1 dimension is given by

    ∂²u/∂t² = c² ∂²u/∂x²

When the initial conditions u(x, 0) = φ(x) and ∂u(x, 0)/∂t = Ψ(x) apply, the general solution is given by:

    u(x, t) = ½[ φ(x + ct) + φ(x − ct) ] + (1/2c) ∫_{x−ct}^{x+ct} Ψ(ξ)dξ

The diffusion equation

The diffusion equation is:

    ∂u/∂t = D∇²u

Its solutions can be written in terms of the propagators P(x, x′, t). These have the property that P(x, x′, 0) = δ(x − x′). In 1 dimension it reads:

    P(x, x′, t) = 1/(2√(πDt)) · exp( −(x − x′)²/4Dt )

In 3 dimensions it reads:

    P(x, x′, t) = 1/(8(πDt)^(3/2)) · exp( −(x − x′)²/4Dt )

With initial condition u(x, 0) = f(x) the solution is:

    u(x, t) = ∫_G f(x′) P(x, x′, t) dx′

The solution of the equation

    ∂u/∂t − D ∂²u/∂x² = g(x, t)

is given by

    u(x, t) = ∫ dt′ ∫ dx′ g(x′, t′) P(x, x′, t − t′)

The equation of Helmholtz

The equation of Helmholtz is obtained by substitution of u(x, t) = v(x) exp(iωt) in the wave equation. This gives for v:

    ∇²v(x, ω) + k²v(x, ω) = 0

This gives as solutions for v:

1. In cartesian coordinates: substitution of v = A exp(ik⃗ · x⃗) gives:

       v(x⃗) = ∫∫∫ A(k⃗) e^(ik⃗·x⃗) dk

   with the integrals over k⃗² = k².

2. In polar coordinates:

       v(r, φ) = Σ_{m=0}^{∞} ( A_m J_m(kr) + B_m N_m(kr) ) e^(imφ)

3. In spherical coordinates:

       v(r, θ, φ) = Σ_{l=0}^{∞} Σ_{m=−l}^{l} [ A_lm J_(l+½)(kr) + B_lm J_(−l−½)(kr) ] Y(θ, φ)/√r
4.5.3 Potential theory and Green's theorem

Subject of the potential theory are the Poisson equation ∇²u = −f(x⃗), where f is a given function, and the Laplace equation ∇²u = 0. The solutions of these can often be interpreted as a potential. The solutions of Laplace's equation are called harmonic functions.

When a vector field v⃗ is given by v⃗ = grad φ holds:

    ∫_a^b (v⃗, t⃗) ds = φ(b⃗) − φ(a⃗)

In this case there exist functions φ and w⃗ so that v⃗ = grad φ + curl w⃗.

The field lines of the field v⃗(x⃗) follow from:

    ẋ⃗(t) = λv⃗(x⃗)

The first theorem of Green is:

    ∫∫∫_G [ u∇²v + (∇u, ∇v) ] d³V = ∮∮_S u ∂v/∂n d²A

The second theorem of Green is:

    ∫∫∫_G [ u∇²v − v∇²u ] d³V = ∮∮_S ( u ∂v/∂n − v ∂u/∂n ) d²A

A harmonic function which is 0 on the boundary of an area is also 0 within that area. A harmonic function with a normal derivative of 0 on the boundary of an area is constant within that area.

The Dirichlet problem is:

    ∇²u(x⃗) = −f(x⃗) ,  x⃗ ∈ R ,  u(x⃗) = g(x⃗) for all x⃗ ∈ S.

It has a unique solution.

The Neumann problem is:

    ∇²u(x⃗) = −f(x⃗) ,  x⃗ ∈ R ,  ∂u(x⃗)/∂n = h(x⃗) for all x⃗ ∈ S.

The solution is unique except for a constant. The solution exists if:

    −∫∫∫_R f(x⃗) d³V = ∮∮_S h(x⃗) d²A

A fundamental solution of the Laplace equation satisfies:

    ∇²u(x⃗) = δ(x⃗)

This has in 2 dimensions in polar coordinates the following solution:

    u(r) = ln(r)/2π

This has in 3 dimensions in spherical coordinates the following solution:

    u(r) = −1/(4πr)

The equation ∇²v = −δ(x⃗ − ξ⃗) has the solution

    v(x⃗) = 1/(4π|x⃗ − ξ⃗|)

After substituting this in Green's 2nd theorem and applying the sieve property of the δ function one can derive Green's 3rd theorem:

    u(ξ⃗) = −(1/4π) ∫∫∫_R (∇²u/r) d³V + (1/4π) ∮∮_S [ (1/r) ∂u/∂n − u ∂/∂n(1/r) ] d²A

The Green function G(x⃗, ξ⃗) is defined by: ∇²G = −δ(x⃗ − ξ⃗), and on boundary S holds G(x⃗, ξ⃗) = 0. Then G can be written as:

    G(x⃗, ξ⃗) = 1/(4π|x⃗ − ξ⃗|) + g(x⃗, ξ⃗)

Then g(x⃗, ξ⃗) is a solution of Dirichlet's problem. The solution of Poisson's equation ∇²u = −f(x⃗) when on the boundary S holds u(x⃗) = g(x⃗), is:

    u(ξ⃗) = ∫∫∫_R G(x⃗, ξ⃗) f(x⃗) d³V − ∮∮_S g(x⃗) ∂G(x⃗, ξ⃗)/∂n d²A
Chapter 5
Linear algebra
5.1 Vector spaces
G is a group for the operation ⊗ if:

1. ∀a, b ∈ G ⇒ a ⊗ b ∈ G: a group is closed.

2. (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c): a group is associative.

3. ∃e ∈ G so that a ⊗ e = e ⊗ a = a: there exists a unit element.

4. ∀a ∈ G ∃ā ∈ G so that a ⊗ ā = e: each element has an inverse.

If

5. a ⊗ b = b ⊗ a

the group is called Abelian or commutative. Vector spaces form an Abelian group for addition and multiplication: 1·a⃗ = a⃗, λ(μa⃗) = (λμ)a⃗, (λ + μ)(a⃗ + b⃗) = λa⃗ + λb⃗ + μa⃗ + μb⃗.

W is a linear subspace if ∀w⃗₁, w⃗₂ ∈ W holds: λw⃗₁ + μw⃗₂ ∈ W.

W is an invariant subspace of V for the operator A if ∀w⃗ ∈ W holds: Aw⃗ ∈ W.
5.2 Basis
For an orthogonal basis holds: (e⃗_i, e⃗_j) = cδ_ij. For an orthonormal basis holds: (e⃗_i, e⃗_j) = δ_ij.

The set of vectors {a⃗_n} is linearly independent if:

    Σ_i λ_i a⃗_i = 0⃗  ⇔  ∀i: λ_i = 0

The set {a⃗_n} is a basis if it is 1. independent and 2. V = <a⃗₁, a⃗₂, ...> = { Σ λ_i a⃗_i }.
5.3 Matrix calculus
5.3.1 Basic operations
For the matrix multiplication of matrices A = a_ij and B = b_kl holds, with r the row index and k the column index:

    A^(r1×k1) B^(r2×k2) = C^(r1×k2) ,  (AB)_ij = Σ_k a_ik b_kj

where r is the number of rows and k the number of columns.

The transpose of A is defined by: a^T_ij = a_ji. For this holds (AB)^T = B^T A^T and (A^T)^(−1) = (A^(−1))^T. For the inverse matrix holds: (A·B)^(−1) = B^(−1)·A^(−1). The inverse matrix A^(−1) has the property that A·A^(−1) = II and can be found by diagonalization: (A_ij | II) ∼ (II | A^(−1)_ij).

The inverse of a 2 × 2 matrix is:

    ( a  b ; c  d )^(−1) = 1/(ad − bc) · ( d  −b ; −c  a )

The determinant function D = det(A) is defined by:

    det(A) = D(a⃗₁, a⃗₂, ..., a⃗_n)

For the determinant det(A) of a matrix A holds: det(AB) = det(A)·det(B). A 2 × 2 matrix has determinant:

    det ( a  b ; c  d ) = ad − cb

The derivative of a matrix is a matrix with the derivatives of the coefficients:

    dA/dt = ( da_ij/dt )   and   d(AB)/dt = (dA/dt)B + A(dB/dt)

The derivative of the determinant is given by:

    d det(A)/dt = D(da⃗₁/dt, ..., a⃗_n) + D(a⃗₁, da⃗₂/dt, ..., a⃗_n) + ... + D(a⃗₁, ..., da⃗_n/dt)

When the rows of a matrix are considered as vectors the row rank of a matrix is the number of independent vectors in this set. Similar for the column rank. The row rank equals the column rank for each matrix.

Let Ã: Ṽ → Ṽ be the complex extension of the real linear operator A: V → V in a finite dimensional V. Then A and Ã have the same characteristic equation.

When A_ij ∈ IR and v⃗₁ + iv⃗₂ is an eigenvector of A at eigenvalue λ = λ₁ + iλ₂, then holds:

1. Av⃗₁ = λ₁v⃗₁ − λ₂v⃗₂ and Av⃗₂ = λ₂v⃗₁ + λ₁v⃗₂.

2. v⃗* = v⃗₁ − iv⃗₂ is an eigenvector at λ* = λ₁ − iλ₂.

3. The linear span <v⃗₁, v⃗₂> is an invariant subspace of A.

If k⃗_n are the columns of A, then the transformed space of A is given by:

    R(A) = <Ae⃗₁, ..., Ae⃗_n> = <k⃗₁, ..., k⃗_n>

If the columns k⃗_n of an n × m matrix A are independent, then the nullspace N(A) = { 0⃗ }.
5.3.2 Matrix equations
We start with the equation $A\vec{x} = \vec{b}$ with $\vec{b} \neq \vec{0}$. If $\det(A) \neq 0$ there exists exactly one solution; if $\det(A) = 0$ there is either no solution or there are infinitely many.

The homogeneous equation $A\vec{x} = \vec{0}$ has a solution $\neq \vec{0}$ if and only if $\det(A) = 0$; if $\det(A) \neq 0$ the only solution is $\vec{0}$.

Cramer's rule for the solution of systems of linear equations is: let the system be written as

$$A\vec{x} = \vec{b} \;\equiv\; \vec{a}_1 x_1 + \ldots + \vec{a}_n x_n = \vec{b}$$

then $x_j$ is given by:

$$x_j = \frac{D(\vec{a}_1, \ldots, \vec{a}_{j-1}, \vec{b}, \vec{a}_{j+1}, \ldots, \vec{a}_n)}{\det(A)}$$
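A small C sketch of Cramer's rule for the $2\times 2$ case, replacing one column by $\vec{b}$ at a time (the function name and the singular-system return convention are illustrative assumptions):

#include <math.h>

/* Solve (a11 a12; a21 a22) x = b with Cramer's rule; returns 0 if det(A) = 0. */
int Cramer2x2(double a11, double a12, double a21, double a22,
              double b1, double b2, double x[2])
{
    double det = a11 * a22 - a12 * a21;
    if (fabs(det) < 1e-14) return 0;          /* no unique solution */
    x[0] = (b1 * a22 - a12 * b2) / det;       /* column 1 replaced by b */
    x[1] = (a11 * b2 - b1 * a21) / det;       /* column 2 replaced by b */
    return 1;
}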
5.4 Linear transformations
A transformation $A$ is linear if: $A(\lambda\vec{x} + \beta\vec{y}\,) = \lambda A\vec{x} + \beta A\vec{y}$.

Some common linear transformations are:

Transformation type                                 Equation
Projection on the line $\langle\vec{a}\,\rangle$            $P(\vec{x}) = (\vec{a},\vec{x}\,)\vec{a}/(\vec{a},\vec{a}\,)$
Projection on the plane $(\vec{a},\vec{x}\,) = 0$           $Q(\vec{x}) = \vec{x} - P(\vec{x})$
Mirror image in the line $\langle\vec{a}\,\rangle$          $S(\vec{x}) = 2P(\vec{x}) - \vec{x}$
Mirror image in the plane $(\vec{a},\vec{x}\,) = 0$         $T(\vec{x}) = 2Q(\vec{x}) - \vec{x} = \vec{x} - 2P(\vec{x})$

For a projection holds: $\vec{x} - P_W(\vec{x}) \perp P_W(\vec{x})$ and $P_W(\vec{x}) \in W$.

If for a transformation $A$ holds: $(A\vec{x},\vec{y}\,) = (\vec{x},A\vec{y}\,) = (A\vec{x},A\vec{y}\,)$, then $A$ is a projection.

Let $A: W \rightarrow W$ define a linear transformation; we define:

If $S$ is a subset of $V$: $A(S) := \{A\vec{x} \in W \,|\, \vec{x} \in S\}$
If $T$ is a subset of $W$: $A^{\leftarrow}(T) := \{\vec{x} \in V \,|\, A(\vec{x}) \in T\}$

Then $A(S)$ is a linear subspace of $W$ and the inverse transformation $A^{\leftarrow}(T)$ is a linear subspace of $V$. From this follows that $A(V)$ is the image space of $A$, notation: $\mathcal{R}(A)$. $A^{\leftarrow}(\vec{0}\,) = E_0$ is a linear subspace of $V$, the null space of $A$, notation: $\mathcal{N}(A)$. Then the following holds:

$$\dim(\mathcal{N}(A)) + \dim(\mathcal{R}(A)) = \dim(V)$$
5.5 Plane and line
The equation of a line that contains the points $\vec{a}$ and $\vec{b}$ is:

$$\vec{x} = \vec{a} + \lambda(\vec{b} - \vec{a}\,) = \vec{a} + \lambda\vec{r}$$

The equation of a plane is:

$$\vec{x} = \vec{a} + \lambda(\vec{b} - \vec{a}\,) + \mu(\vec{c} - \vec{a}\,) = \vec{a} + \lambda\vec{r}_1 + \mu\vec{r}_2$$

When this is a plane in $\mathbb{R}^3$, the normal vector to this plane is given by:

$$\vec{n}_V = \frac{\vec{r}_1\times\vec{r}_2}{|\vec{r}_1\times\vec{r}_2|}$$

A line can also be described by the points for which the line equation $\ell$: $(\vec{a},\vec{x}\,) + b = 0$ holds, and for a plane $V$: $(\vec{a},\vec{x}\,) + k = 0$. The normal vector to $V$ is then: $\vec{a}/|\vec{a}\,|$.

The distance $d$ between 2 points $\vec{p}$ and $\vec{q}$ is given by $d(\vec{p},\vec{q}\,) = \|\vec{p} - \vec{q}\,\|$.

In $\mathbb{R}^2$ holds: the distance of a point $\vec{p}$ to the line $(\vec{a},\vec{x}\,) + b = 0$ is

$$d(\vec{p},\ell) = \frac{|(\vec{a},\vec{p}\,) + b|}{|\vec{a}\,|}$$

Similarly in $\mathbb{R}^3$: the distance of a point $\vec{p}$ to the plane $(\vec{a},\vec{x}\,) + k = 0$ is

$$d(\vec{p}, V) = \frac{|(\vec{a},\vec{p}\,) + k|}{|\vec{a}\,|}$$

This can be generalized for $\mathbb{R}^n$ and $\mathbb{C}^n$ (theorem from Hesse).
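A tiny C sketch of the distance formula $d(\vec{p},\ell) = |(\vec{a},\vec{p}\,)+b|/|\vec{a}\,|$ in $\mathbb{R}^2$ (the function name is an illustrative assumption):

#include <math.h>

/* Distance of the point p = (px, py) to the line a1*x + a2*y + b = 0. */
double DistPointLine(double a1, double a2, double b, double px, double py)
{
    return fabs(a1 * px + a2 * py + b) / sqrt(a1 * a1 + a2 * a2);
}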
5.6 Coordinate transformations
The linear transformation $A$ from $\mathbb{K}^n \rightarrow \mathbb{K}^m$ is given by ($\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$):

$$\vec{y} = A^{m\times n}\vec{x}$$

where a column of $A$ is the image of a base vector in the original.

The matrix $A_{\beta\alpha}$ transforms a vector given w.r.t. a basis $\alpha$ into a vector w.r.t. a basis $\beta$. It is given by:

$$A_{\beta\alpha} = \left(\beta(A\vec{a}_1), \ldots, \beta(A\vec{a}_n)\right)$$

where $\beta(\vec{x})$ is the representation of the vector $\vec{x}$ w.r.t. basis $\beta$.

The transformation matrix $S_{\beta\alpha}$ transforms vectors from coordinate system $\alpha$ into coordinate system $\beta$:

$$S_{\beta\alpha} := \mathbb{I}_{\beta\alpha} = \left(\beta(\vec{a}_1), \ldots, \beta(\vec{a}_n)\right) \quad\mbox{and}\quad S_{\beta\alpha}\cdot S_{\alpha\beta} = \mathbb{I}$$

The matrix of a transformation $A$ is then given by:

$$A_{\beta\alpha} = \left(A_{\beta\alpha}\vec{e}_1, \ldots, A_{\beta\alpha}\vec{e}_n\right)$$

For the transformation of matrix operators to another coordinate system holds: $A_{\delta\lambda} = S_{\delta\beta}A_{\beta\alpha}S_{\alpha\lambda}$, $A_{\alpha\alpha} = S_{\alpha\beta}A_{\beta\beta}S_{\beta\alpha}$ and $(AB)_{\lambda\beta} = A_{\lambda\alpha}B_{\alpha\beta}$.

Further is $A_{\beta\alpha} = S_{\beta\alpha}A_{\alpha\alpha}$ and $A_{\alpha\beta} = A_{\alpha\alpha}S_{\alpha\beta}$. A vector is transformed via $X_\alpha = S_{\alpha\beta}X_\beta$.
5.7 Eigen values
The eigenvalue equation

$$A\vec{x} = \lambda\vec{x}$$

with eigenvalues $\lambda$ can be solved with $(A - \lambda\mathbb{I})\vec{v} = \vec{0}$ $\Rightarrow$ $\det(A - \lambda\mathbb{I}) = 0$. The eigenvalues follow from this characteristic equation. The following is true: $\det(A) = \prod\limits_i \lambda_i$ and $\mathrm{Tr}(A) = \sum\limits_i a_{ii} = \sum\limits_i \lambda_i$.

The eigenvalues $\lambda_i$ are independent of the chosen basis. The matrix of $A$ in a basis of eigenvectors, with $S$ the transformation matrix to this basis, $S = (E_{\lambda_1}, \ldots, E_{\lambda_n})$, is given by:

$$\Lambda = S^{-1}AS = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$$

When 0 is an eigenvalue of $A$ then $E_0(A) = \mathcal{N}(A)$.

When $\lambda$ is an eigenvalue of $A$ holds: $A^n\vec{x} = \lambda^n\vec{x}$.
5.8 Transformation types
Isometric transformations

A transformation is isometric when: $\|A\vec{x}\,\| = \|\vec{x}\,\|$. This implies that the eigenvalues of an isometric transformation are given by $\lambda = \exp(i\varphi)$ $\Rightarrow$ $|\lambda| = 1$. Then also holds: $(A\vec{x}, A\vec{y}\,) = (\vec{x}, \vec{y}\,)$.

When $W$ is an invariant subspace of the isometric transformation $A$ with $\dim(A) < \infty$, then also $W^\perp$ is an invariant subspace.

Orthogonal transformations

A transformation $A$ is orthogonal if $A$ is isometric and the inverse $A^{-1}$ exists. For an orthogonal transformation $O$ holds $O^T O = \mathbb{I}$, so: $O^T = O^{-1}$. If $A$ and $B$ are orthogonal, then $AB$ and $A^{-1}$ are also orthogonal.

Let $A: V \rightarrow V$ be orthogonal with $\dim(V) < \infty$. Then $A$ is:

Direct orthogonal if $\det(A) = +1$. $A$ describes a rotation. A rotation in $\mathbb{R}^2$ through angle $\varphi$ is given by:

$$R = \begin{pmatrix} \cos(\varphi) & -\sin(\varphi) \\ \sin(\varphi) & \cos(\varphi) \end{pmatrix}$$
So the rotation angle $\varphi$ is determined by $\mathrm{Tr}(A) = 2\cos(\varphi)$ with $0 \leq \varphi \leq \pi$. Let $\lambda_1$ and $\lambda_2$ be the roots of the characteristic equation, then also holds: $\Re(\lambda_1) = \Re(\lambda_2) = \cos(\varphi)$, and $\lambda_1 = \exp(i\varphi)$, $\lambda_2 = \exp(-i\varphi)$.

In $\mathbb{R}^3$ holds: $\lambda_1 = 1$, $\lambda_2 = \lambda_3^* = \exp(i\varphi)$. A rotation over $E_{\lambda_1}$ is given by the matrix

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\varphi) & -\sin(\varphi) \\ 0 & \sin(\varphi) & \cos(\varphi) \end{pmatrix}$$

Mirrored orthogonal if $\det(A) = -1$. Vectors from $E_{-1}$ are mirrored by $A$ w.r.t. the invariant subspace $E_{-1}^\perp$. A mirroring in $\mathbb{R}^2$ in $\langle(\cos(\frac{1}{2}\varphi), \sin(\frac{1}{2}\varphi))\rangle$ is given by:

$$S = \begin{pmatrix} \cos(\varphi) & \sin(\varphi) \\ \sin(\varphi) & -\cos(\varphi) \end{pmatrix}$$

Mirrored orthogonal transformations in $\mathbb{R}^3$ are rotational mirrorings: rotations of axis $\langle\vec{a}_1\rangle$ through angle $\varphi$ and mirror plane $\langle\vec{a}_1\rangle^\perp$. The matrix of such a transformation is given by:

$$\begin{pmatrix} -1 & 0 & 0 \\ 0 & \cos(\varphi) & -\sin(\varphi) \\ 0 & \sin(\varphi) & \cos(\varphi) \end{pmatrix}$$

For all orthogonal transformations $O$ in $\mathbb{R}^3$ holds that $O(\vec{x}\,) \times O(\vec{y}\,) = O(\vec{x} \times \vec{y}\,)$.

$\mathbb{R}^n$ ($n < \infty$) can be decomposed in invariant subspaces with dimension 1 or 2 for each orthogonal transformation.
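A short C sketch applying the $2\times 2$ rotation $R$ to a vector; since $\det(R) = \cos^2\varphi + \sin^2\varphi = 1$, this is a direct orthogonal transformation (the function name is an illustrative assumption):

#include <math.h>

/* Rotate (x, y) through angle phi: (x', y') = R (x, y) with det(R) = +1. */
void Rotate2D(double phi, double *x, double *y)
{
    double c = cos(phi), s = sin(phi);
    double xr = c * (*x) - s * (*y);
    double yr = s * (*x) + c * (*y);
    *x = xr;
    *y = yr;
}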
Unitary transformations

Let $V$ be a complex space on which an inner product is defined. Then a linear transformation $U$ is unitary if $U$ is isometric and its inverse transformation $U^{-1}$ exists. An $n \times n$ matrix is unitary if $U^H U = \mathbb{I}$. It has determinant $|\det(U)| = 1$. Each isometric transformation in a finite-dimensional complex vector space is unitary.

Theorem: for an $n \times n$ matrix $A$ the following statements are equivalent:

1. $A$ is unitary,
2. The columns of $A$ are an orthonormal set,
3. The rows of $A$ are an orthonormal set.

Symmetric transformations

A transformation $A$ on $\mathbb{R}^n$ is symmetric if $(A\vec{x}, \vec{y}\,) = (\vec{x}, A\vec{y}\,)$. A matrix $A \in \mathbb{M}^{n\times n}$ is symmetric if $A = A^T$. A linear operator is symmetric if and only if its matrix w.r.t. an orthonormal basis is symmetric. All eigenvalues of a symmetric transformation belong to $\mathbb{R}$. The different eigenvectors are mutually perpendicular. If $A$ is symmetric, then $A^T = A = A^H$ on an orthogonal basis.

For each matrix $B \in \mathbb{M}^{m\times n}$ holds: $B^T B$ is symmetric.
Hermitian transformations

A transformation $H: V \rightarrow V$ with $V = \mathbb{C}^n$ is Hermitian if $(H\vec{x}, \vec{y}\,) = (\vec{x}, H\vec{y}\,)$. The Hermitian conjugated transformation $A^H$ of $A$ is: $[a_{ij}]^H = [a^*_{ji}]$. An alternative notation is: $A^H = A^\dagger$. The inner product of two vectors $\vec{x}$ and $\vec{y}$ can now be written in the form: $(\vec{x}, \vec{y}\,) = \vec{x}^H\vec{y}$.

If the transformations $A$ and $B$ are Hermitian, then their product $AB$ is Hermitian if: $[A, B] = AB - BA = 0$. $[A, B]$ is called the commutator of $A$ and $B$.

The eigenvalues of a Hermitian transformation belong to $\mathbb{R}$.

A matrix representation can be coupled with a Hermitian operator $L$. W.r.t. a basis $\vec{e}_i$ it is given by $L_{mn} = (\vec{e}_m, L\vec{e}_n)$.
Normal transformations

For each linear transformation $A$ in a complex vector space $V$ there exists exactly one linear transformation $B$ so that $(A\vec{x},\vec{y}\,) = (\vec{x},B\vec{y}\,)$. This $B$ is called the adjungated transformation of $A$. Notation: $B = A^\dagger$. The following holds: $(CD)^\dagger = D^\dagger C^\dagger$. $A^\dagger = A^{-1}$ if $A$ is unitary and $A^\dagger = A$ if $A$ is Hermitian.

Definition: the linear transformation $A$ is normal in a complex vector space $V$ if $A^\dagger A = AA^\dagger$. This is only the case if for its matrix $S$ w.r.t. an orthonormal basis holds: $S^\dagger S = SS^\dagger$.

If $A$ is normal holds:

1. For all vectors $\vec{x} \in V$ and a normal transformation $A$ holds:

$$(A\vec{x}, A\vec{y}\,) = (A^\dagger A\vec{x}, \vec{y}\,) = (AA^\dagger\vec{x}, \vec{y}\,) = (A^\dagger\vec{x}, A^\dagger\vec{y}\,)$$

2. $\vec{x}$ is an eigenvector of $A$ if and only if $\vec{x}$ is an eigenvector of $A^\dagger$.
3. Eigenvectors of $A$ for different eigenvalues are mutually perpendicular.
4. If $E_\lambda$ is an eigenspace of $A$ then the orthogonal complement $E_\lambda^\perp$ is an invariant subspace of $A$.

Let the different roots of the characteristic equation of $A$ be $\beta_i$ with multiplicities $n_i$. Then the dimension of each eigenspace $V_i$ equals $n_i$. These eigenspaces are mutually perpendicular and each vector $\vec{x} \in V$ can be written in exactly one way as

$$\vec{x} = \sum_i \vec{x}_i \quad\mbox{with}\quad \vec{x}_i \in V_i$$

This can also be written as: $\vec{x}_i = P_i\vec{x}$ where $P_i$ is a projection on $V_i$. This leads to the spectral mapping theorem: let $A$ be a normal transformation in a complex vector space $V$ with $\dim(V) = n$. Then:

1. There exist projection transformations $P_i$, $1 \leq i \leq p$, with the properties $P_i\cdot P_j = 0$ for $i \neq j$, $P_1 + \ldots + P_p = \mathbb{I}$, $\dim P_1(V) + \ldots + \dim P_p(V) = n$, and complex numbers $\alpha_1, \ldots, \alpha_p$ so that $A = \alpha_1 P_1 + \ldots + \alpha_p P_p$.
2. If $A$ is unitary then holds $|\alpha_i| = 1\ \forall i$.
3. If $A$ is Hermitian then $\alpha_i \in \mathbb{R}\ \forall i$.
Complete systems of commuting Hermitian transformations

Consider $m$ Hermitian linear transformations $A_i$ in an $n$-dimensional complex inner product space $V$. Assume they mutually commute.

Lemma: if $E_\lambda$ is the eigenspace for eigenvalue $\lambda$ from $A_1$, then $E_\lambda$ is an invariant subspace of all transformations $A_i$. This means that if $\vec{x} \in E_\lambda$, then $A_i\vec{x} \in E_\lambda$.

Theorem. Consider $m$ commuting Hermitian matrices $A_i$. Then there exists a unitary matrix $U$ so that all matrices $U^\dagger A_i U$ are diagonal. The columns of $U$ are the common eigenvectors of all matrices $A_j$.

If all eigenvalues of a Hermitian linear transformation in an $n$-dimensional complex vector space differ, then the normalized eigenvector is known except for a phase factor $\exp(i\alpha)$.

Definition: a commuting set of Hermitian transformations is called complete if for each set of two common eigenvectors $\vec{v}_i$, $\vec{v}_j$ there exists a transformation $A_k$ so that $\vec{v}_i$ and $\vec{v}_j$ are eigenvectors with different eigenvalues of $A_k$.

Usually a commuting set is taken as small as possible. In quantum physics one speaks of commuting observables. The required number of commuting observables equals the number of quantum numbers required to characterize a state.
5.9 Homogeneous coordinates
Homogeneous coordinates are used if one wants to combine both rotations and translations in one matrix transformation. An extra coordinate is introduced to describe the non-linearities. Homogeneous coordinates are derived from cartesian coordinates as follows:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix}_{\rm cart} = \begin{pmatrix} wx \\ wy \\ wz \\ w \end{pmatrix}_{\rm hom} = \begin{pmatrix} X \\ Y \\ Z \\ w \end{pmatrix}_{\rm hom}$$

so $x = X/w$, $y = Y/w$ and $z = Z/w$. Transformations in homogeneous coordinates are described by the following matrices:

1. Translation along vector $(X_0, Y_0, Z_0, w_0)$:

$$T = \begin{pmatrix} w_0 & 0 & 0 & X_0 \\ 0 & w_0 & 0 & Y_0 \\ 0 & 0 & w_0 & Z_0 \\ 0 & 0 & 0 & w_0 \end{pmatrix}$$

2. Rotations of the $x, y, z$ axis, resp. through angles $\alpha, \beta, \gamma$:

$$R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad
R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

$$R_z(\gamma) = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 & 0 \\ \sin\gamma & \cos\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

3. A perspective projection on image plane $z = c$ with the center of projection in the origin. This transformation has no inverse.

$$P(z = c) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/c & 0 \end{pmatrix}$$
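A minimal C sketch applying a $4\times 4$ homogeneous matrix to a cartesian point and dividing out $w$ afterwards (the function name and row-major layout are illustrative assumptions):

/* Apply the 4x4 homogeneous matrix M to the cartesian point p = (x, y, z)
   and convert back: out = (X/w, Y/w, Z/w). Assumes the resulting w != 0. */
void ApplyHomogeneous(const double M[4][4], const double p[3], double out[3])
{
    double h[4] = { p[0], p[1], p[2], 1.0 };   /* homogeneous input, w = 1 */
    double r[4];
    for (int i = 0; i < 4; i++)
    {
        r[i] = 0.0;
        for (int j = 0; j < 4; j++)
            r[i] += M[i][j] * h[j];
    }
    out[0] = r[0] / r[3];
    out[1] = r[1] / r[3];
    out[2] = r[2] / r[3];
}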
5.10 Inner product spaces
A complex inner product on a complex vector space is defined as follows:

1. $(\vec{a},\vec{b}\,) = \overline{(\vec{b},\vec{a}\,)}$,
2. $(\vec{a}, \beta_1\vec{b}_1 + \beta_2\vec{b}_2) = \beta_1(\vec{a},\vec{b}_1) + \beta_2(\vec{a},\vec{b}_2)$ for all $\vec{a},\vec{b}_1,\vec{b}_2 \in V$ and $\beta_1,\beta_2 \in \mathbb{C}$,
3. $(\vec{a},\vec{a}\,) \geq 0$ for all $\vec{a} \in V$, and $(\vec{a},\vec{a}\,) = 0$ if and only if $\vec{a} = \vec{0}$.

Due to (1) holds: $(\vec{a},\vec{a}\,) \in \mathbb{R}$. The inner product space $\mathbb{C}^n$ is the complex vector space on which a complex inner product is defined by:

$$(\vec{a},\vec{b}\,) = \sum_{i=1}^{n} a_i^* b_i$$

For function spaces holds:

$$(f, g) = \int\limits_a^b f^*(t)g(t)\,dt$$

For each $\vec{a}$ the length $\|\vec{a}\,\|$ is defined by: $\|\vec{a}\,\| = \sqrt{(\vec{a},\vec{a}\,)}$. The following holds: $\|\vec{a}\,\| - \|\vec{b}\,\| \leq \|\vec{a}+\vec{b}\,\| \leq \|\vec{a}\,\| + \|\vec{b}\,\|$, and with $\varphi$ the angle between $\vec{a}$ and $\vec{b}$ holds: $(\vec{a},\vec{b}\,) = \|\vec{a}\,\|\cdot\|\vec{b}\,\|\cos(\varphi)$.

Let $\{\vec{a}_1, \ldots, \vec{a}_n\}$ be a set of vectors in an inner product space $V$. Then the Gramian $G$ of this set is given by: $G_{ij} = (\vec{a}_i,\vec{a}_j)$. The set of vectors is independent if and only if $\det(G) \neq 0$.

A set is orthonormal if $(\vec{a}_i,\vec{a}_j) = \delta_{ij}$. If $\vec{e}_1, \vec{e}_2, \ldots$ form an orthonormal row in an infinite dimensional vector space Bessel's inequality holds:

$$\|\vec{x}\,\|^2 \geq \sum_{i=1}^{\infty}|(\vec{e}_i,\vec{x}\,)|^2$$

The equal sign holds if and only if $\lim\limits_{n\rightarrow\infty}\|\vec{x}_n - \vec{x}\,\| = 0$.

The inner product space $\ell^2$ is defined in $\mathbb{C}^\infty$ by:

$$\ell^2 = \left\{\vec{a} = (a_1, a_2, \ldots)\ \Big|\ \sum_{n=1}^{\infty}|a_n|^2 < \infty\right\}$$

A space is called a Hilbert space if it is $\ell^2$ and if also holds: $\lim\limits_{n\rightarrow\infty}|a_{n+1} - a_n| = 0$.
5.11 The Laplace transformation
The class LT exists of functions for which holds:

1. On each interval $[0, A]$, $A > 0$, there are no more than a finite number of discontinuities and each discontinuity has an upper and lower limit,
2. $\exists t_0 \in [0,\infty\rangle$ and $a, M \in \mathbb{R}$ so that for $t \geq t_0$ holds: $|f(t)|\exp(-at) < M$.

Then there exists a Laplace transform for $f$.

The Laplace transformation is a generalisation of the Fourier transformation. The Laplace transform of a function $f(t)$ is, with $s \in \mathbb{C}$ and $t \geq 0$:

$$F(s) = \int\limits_0^{\infty} f(t){\rm e}^{-st}dt$$

The Laplace transform of the derivative of a function is given by:

$$\mathcal{L}\left(f^{(n)}(t)\right) = -f^{(n-1)}(0) - sf^{(n-2)}(0) - \ldots - s^{n-1}f(0) + s^n F(s)$$

The operator $\mathcal{L}$ has the following properties:

1. Equal shapes: if $a > 0$ then
$$\mathcal{L}(f(at)) = \frac{1}{a}F\left(\frac{s}{a}\right)$$
2. Damping: $\mathcal{L}({\rm e}^{-at}f(t)) = F(s + a)$
3. Translation: if $a > 0$ and $g$ is defined by $g(t) = f(t - a)$ if $t > a$ and $g(t) = 0$ for $t \leq a$, then holds: $\mathcal{L}(g(t)) = {\rm e}^{-sa}\mathcal{L}(f(t))$.

If $s \in \mathbb{R}$ then holds $\Re(\mathcal{L}f) = \mathcal{L}(\Re(f))$ and $\Im(\mathcal{L}f) = \mathcal{L}(\Im(f))$.

For some often occurring functions holds:

$f(t) =$                                   $F(s) = \mathcal{L}(f(t)) =$
$\dfrac{t^n}{n!}\,{\rm e}^{at}$            $(s - a)^{-n-1}$
${\rm e}^{at}\cos(\omega t)$               $\dfrac{s-a}{(s-a)^2 + \omega^2}$
${\rm e}^{at}\sin(\omega t)$               $\dfrac{\omega}{(s-a)^2 + \omega^2}$
$\delta(t - a)$                            $\exp(-as)$
5.12 The convolution
The convolution integral is defined by:

$$(f * g)(t) = \int\limits_0^t f(u)g(t-u)\,du$$

The convolution has the following properties:

1. $f * g \in$ LT
2. $\mathcal{L}(f * g) = \mathcal{L}(f)\cdot\mathcal{L}(g)$
3. Distribution: $f * (g + h) = f * g + f * h$
4. Commutative: $f * g = g * f$
5. Homogeneity: $f * (\lambda g) = \lambda f * g$

If $\mathcal{L}(f) = F_1\cdot F_2$, then $f(t) = f_1 * f_2$.
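The formulary treats the continuous convolution; as an illustration, a small C sketch of its sampled counterpart on an equidistant grid $t_k = k\,dt$ (the simple Riemann-sum discretisation and the function name are assumptions, not part of the text):

/* Approximate (f*g)(t_k) = integral_0^{t_k} f(u) g(t_k - u) du by a Riemann sum. */
void Convolve(int n, double dt, const double f[], const double g[], double fg[])
{
    for (int k = 0; k < n; k++)
    {
        double s = 0.0;
        for (int m = 0; m <= k; m++)
            s += f[m] * g[k - m];
        fg[k] = s * dt;
    }
}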
5.13 Systems of linear differential equations
We start with the equation $\dot{\vec{x}} = A\vec{x}$. Assume that $\vec{x} = \vec{v}\exp(\lambda t)$, then follows: $A\vec{v} = \lambda\vec{v}$. In the $2 \times 2$ case holds:

1. $\lambda_1 \neq \lambda_2$: then $\vec{x}(t) = \sum \vec{v}_i \exp(\lambda_i t)$.
2. $\lambda_1 = \lambda_2$: then $\vec{x}(t) = (\vec{u}t + \vec{v}\,)\exp(\lambda t)$.

Assume that $\lambda = \alpha + i\beta$ is an eigenvalue with eigenvector $\vec{v}$, then $\lambda^*$ is also an eigenvalue for eigenvector $\vec{v}\,^*$. Decompose $\vec{v} = \vec{u} + i\vec{w}$, then the real solutions are

$$c_1[\vec{u}\cos(\beta t) - \vec{w}\sin(\beta t)]\,{\rm e}^{\alpha t} + c_2[\vec{w}\cos(\beta t) + \vec{u}\sin(\beta t)]\,{\rm e}^{\alpha t}$$

There are two solution strategies for the equation $\ddot{\vec{x}} = A\vec{x}$:

1. Let $\vec{x} = \vec{v}\exp(\lambda t)$ $\Rightarrow$ $\det(A - \lambda^2\mathbb{I}) = 0$.
2. Introduce: $\dot{x} = u$ and $\dot{y} = v$, this leads to $\ddot{x} = \dot{u}$ and $\ddot{y} = \dot{v}$. This transforms an $n$-dimensional set of second order equations into a $2n$-dimensional set of first order equations.
5.14 Quadratic forms
5.14.1 Quadratic forms in $\mathbb{R}^2$
The general equation of a quadratic form is: $\vec{x}^T A\vec{x} + 2\vec{x}^T\vec{P} + S = 0$. Here, $A$ is a symmetric matrix. If $\Lambda = S^{-1}AS = {\rm diag}(\lambda_1, \ldots, \lambda_n)$ holds: $\vec{u}^T\Lambda\vec{u} + 2\vec{u}^T\vec{P} + S = 0$, so all cross terms are 0. $\vec{u} = (u, v, w)$ should be chosen so that $\det(S) = +1$, to maintain the same orientation as the system $(x, y, z)$.

Starting with the equation

$$ax^2 + 2bxy + cy^2 + dx + ey + f = 0$$

we have $|A| = ac - b^2$. An ellipse has $|A| > 0$, a parabola $|A| = 0$ and a hyperbola $|A| < 0$. In polar coordinates this can be written as:

$$r = \frac{ep}{1 - e\cos(\theta)}$$

An ellipse has $e < 1$, a parabola $e = 1$ and a hyperbola $e > 1$.
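A tiny C sketch of the discriminant test $|A| = ac - b^2$ for the conic type (the function name is an illustrative assumption):

/* Classify ax^2 + 2bxy + cy^2 + dx + ey + f = 0 by the sign of ac - b^2. */
const char *ClassifyConic(double a, double b, double c)
{
    double disc = a * c - b * b;
    if (disc > 0.0) return "ellipse";
    if (disc < 0.0) return "hyperbola";
    return "parabola";
}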
5.14.2 Quadratic surfaces in $\mathbb{R}^3$
Rank 3:

$$p\,\frac{x^2}{a^2} + q\,\frac{y^2}{b^2} + r\,\frac{z^2}{c^2} = d$$

Ellipsoid: $p = q = r = d = 1$, $a, b, c$ are the lengths of the semi axes.
Single-bladed hyperboloid: $p = q = d = 1$, $r = -1$.
Double-bladed hyperboloid: $r = d = 1$, $p = q = -1$.
Cone: $p = q = 1$, $r = -1$, $d = 0$.

Rank 2:

$$p\,\frac{x^2}{a^2} + q\,\frac{y^2}{b^2} + r\,\frac{z}{c^2} = d$$

Elliptic paraboloid: $p = q = 1$, $r = -1$, $d = 0$.
Hyperbolic paraboloid: $p = r = -1$, $q = 1$, $d = 0$.
Elliptic cylinder: $p = q = d = 1$, $r = 0$.
Hyperbolic cylinder: $p = d = 1$, $q = -1$, $r = 0$.
Pair of planes: $p = 1$, $q = -1$, $d = 0$.

Rank 1:

$$py^2 + qx = d$$

Parabolic cylinder: $p, q > 0$.
Parallel pair of planes: $d > 0$, $q = 0$, $p \neq 0$.
Double plane: $p \neq 0$, $q = d = 0$.
Chapter 6
Complex function theory
6.1 Functions of complex variables
Complex function theory deals with complex functions of a complex variable. Some definitions:

$f$ is analytical on $\mathcal{G}$ if $f$ is continuous and differentiable on $\mathcal{G}$.

A Jordan curve is a curve that is closed and does not intersect itself.

If $K$ is a curve in $\mathbb{C}$ with parameter equation $z = \phi(t) = x(t) + iy(t)$, $a \leq t \leq b$, then the length $L$ of $K$ is given by:

$$L = \int\limits_a^b \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt = \int\limits_a^b \left|\frac{dz}{dt}\right| dt = \int\limits_a^b |\dot{\phi}(t)|\,dt$$
The derivative of $f$ in point $z = a$ is:

$$f'(a) = \lim_{z \rightarrow a}\frac{f(z) - f(a)}{z - a}$$

If $f(z) = u(x, y) + iv(x, y)$ the derivative is:

$$f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}$$

Setting both results equal yields the equations of Cauchy-Riemann:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}\;,\quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$

These equations imply that $\nabla^2 u = \nabla^2 v = 0$. $f$ is analytical if $u$ and $v$ satisfy these equations.
6.2 Complex integration
6.2.1 Cauchy's integral formula
Let $K$ be a curve described by $z = \phi(t)$ on $a \leq t \leq b$ and let $f(z)$ be continuous on $K$. Then the integral of $f$ over $K$ is:

$$\int\limits_K f(z)dz = \int\limits_a^b f(\phi(t))\dot{\phi}(t)dt \;\stackrel{f\ {\rm continuous}}{=}\; F(b) - F(a)$$
Lemma: let $K$ be the circle with center $a$ and radius $r$ taken in a positive direction. Then holds for integer $m$:

$$\frac{1}{2\pi i}\oint\limits_K \frac{dz}{(z-a)^m} = \left\{\begin{array}{ll} 0 & \mbox{if } m \neq 1 \\ 1 & \mbox{if } m = 1 \end{array}\right.$$

Theorem: if $L$ is the length of curve $K$ and if $|f(z)| \leq M$ for $z \in K$, then, if the integral exists, holds:

$$\left|\int\limits_K f(z)dz\right| \leq ML$$

Theorem: let $f$ be continuous on an area $G$ and let $p$ be a fixed point of $G$. Let $F(z) = \int_p^z f(\xi)d\xi$ for all $z \in G$ only depend on $z$ and not on the integration path. Then $F(z)$ is analytical on $G$ with $F'(z) = f(z)$.

This leads to two equivalent formulations of the main theorem of complex integration: let the function $f$ be analytical on an area $G$. Let $K$ and $K'$ be two curves with the same starting and end points, which can be transformed into each other by continuous deformation within $G$. Let $B$ be a Jordan curve. Then holds

$$\int\limits_K f(z)dz = \int\limits_{K'} f(z)dz \quad\mbox{and}\quad \oint\limits_B f(z)dz = 0$$

By applying the main theorem on ${\rm e}^{iz}/z$ one can derive that

$$\int\limits_0^\infty \frac{\sin(x)}{x}dx = \frac{\pi}{2}$$
6.2.2 Residue
A point $a \in \mathbb{C}$ is a regular point of a function $f(z)$ if $f$ is analytical in $a$. Otherwise $a$ is a singular point or pole of $f(z)$. The residue of $f$ in $a$ is defined by

$$\mathop{\rm Res}\limits_{z=a} f(z) = \frac{1}{2\pi i}\oint\limits_K f(z)dz$$

where $K$ is a Jordan curve which encloses $a$ in positive direction. The residue is 0 in regular points; in singular points it can be both 0 and $\neq 0$. Cauchy's residue proposition is: let $f$ be analytical within and on a Jordan curve $K$ except in a finite number of singular points $a_i$ within $K$. Then, if $K$ is taken in a positive direction, holds:

$$\frac{1}{2\pi i}\oint\limits_K f(z)dz = \sum_{k=1}^{n}\mathop{\rm Res}\limits_{z=a_k} f(z)$$

Lemma: let the function $f$ be analytical in $a$, then holds:

$$\mathop{\rm Res}\limits_{z=a}\frac{f(z)}{z-a} = f(a)$$
This leads to Cauchy's integral theorem: if $f$ is analytical on and within the Jordan curve $K$, which is taken in a positive direction, holds:

$$\frac{1}{2\pi i}\oint\limits_K \frac{f(z)}{z-a}dz = \left\{\begin{array}{ll} f(a) & \mbox{if } a \mbox{ inside } K \\ 0 & \mbox{if } a \mbox{ outside } K \end{array}\right.$$

Theorem: let $K$ be a curve ($K$ need not be closed) and let $\phi(\xi)$ be continuous on $K$. Then the function

$$f(z) = \int\limits_K \frac{\phi(\xi)d\xi}{\xi - z}$$

is analytical with $n$-th derivative

$$f^{(n)}(z) = n!\int\limits_K \frac{\phi(\xi)d\xi}{(\xi - z)^{n+1}}$$

Theorem: let $K$ be a curve and $G$ an area. Let $\phi(\xi, z)$ be defined for $\xi \in K$, $z \in G$, with the following properties:

1. $\phi(\xi, z)$ is limited, this means $|\phi(\xi, z)| \leq M$ for $\xi \in K$, $z \in G$,
2. For fixed $\xi \in K$, $\phi(\xi, z)$ is an analytical function of $z$ on $G$,
3. For fixed $z \in G$ the functions $\phi(\xi, z)$ and $\partial\phi(\xi, z)/\partial z$ are continuous functions of $\xi$ on $K$.

Then the function

$$f(z) = \int\limits_K \phi(\xi, z)d\xi$$

is analytical with derivative

$$f'(z) = \int\limits_K \frac{\partial\phi(\xi, z)}{\partial z}d\xi$$

Cauchy's inequality: let $f(z)$ be an analytical function within and on the circle $C: |z - a| = R$ and let $|f(z)| \leq M$ for $z \in C$. Then holds

$$\left|f^{(n)}(a)\right| \leq \frac{Mn!}{R^n}$$
6.3 Analytical functions defined by series
The series $\sum f_n(z)$ is called pointwise convergent on an area $G$ with sum $F(z)$ if

$$\forall_{\varepsilon>0}\ \forall_{z\in G}\ \exists_{N_0\in\mathbb{R}}\ \forall_{N>N_0}\left[\ \left|F(z) - \sum_{n=1}^{N} f_n(z)\right| < \varepsilon\ \right]$$

The series is called uniform convergent if

$$\forall_{\varepsilon>0}\ \exists_{N_0\in\mathbb{R}}\ \forall_{N>N_0}\ \forall_{z\in G}\left[\ \left|F(z) - \sum_{n=1}^{N} f_n(z)\right| < \varepsilon\ \right]$$
Uniform convergence implies pointwise convergence; the opposite is not necessarily true.

Theorem: let the power series $\sum\limits_{n=0}^{\infty} a_n z^n$ have a radius of convergence $R$. $R$ is the distance to the first non-essential singularity.

If $\lim\limits_{n\rightarrow\infty}\sqrt[n]{|a_n|} = L$ exists, then $R = 1/L$.

If $\lim\limits_{n\rightarrow\infty}|a_{n+1}|/|a_n| = L$ exists, then $R = 1/L$.

If these limits both don't exist one can find $R$ with the formula of Cauchy-Hadamard:

$$\frac{1}{R} = \limsup_{n\rightarrow\infty}\sqrt[n]{|a_n|}$$
6.4 Laurent series
Taylor's theorem: let $f$ be analytical in an area $G$ and let point $a \in G$ have distance $r$ to the boundary of $G$. Then $f(z)$ can be expanded into the Taylor series near $a$:

$$f(z) = \sum_{n=0}^{\infty} c_n(z-a)^n \quad\mbox{with}\quad c_n = \frac{f^{(n)}(a)}{n!}$$

valid for $|z - a| < r$. The radius of convergence of the Taylor series is $r$. If $f$ has a pole of order $k$ in $a$ then $c_1, \ldots, c_{k-1} = 0$, $c_k \neq 0$.

Theorem of Laurent: let $f$ be analytical in the circular area $G: r < |z - a| < R$. Then $f(z)$ can be expanded into a Laurent series with center $a$:

$$f(z) = \sum_{n=-\infty}^{\infty} c_n(z-a)^n \quad\mbox{with}\quad c_n = \frac{1}{2\pi i}\oint\limits_K \frac{f(w)dw}{(w-a)^{n+1}}\;,\quad n \in \mathbb{Z}$$

valid for $r < |z - a| < R$ and $K$ an arbitrary Jordan curve in $G$ which encloses point $a$ in positive direction.

The principal part of a Laurent series is: $\sum\limits_{n=1}^{\infty} c_{-n}(z-a)^{-n}$. One can classify singular points with this. There are 3 cases:

1. There is no principal part. Then $a$ is a non-essential singularity. Define $f(a) = c_0$ and the series is also valid for $|z - a| < R$ and $f$ is analytical in $a$.
2. The principal part contains a finite number of terms. Then there exists a $k \in \mathbb{N}$ so that $\lim\limits_{z\rightarrow a}(z-a)^k f(z) = c_{-k} \neq 0$. Then the function $g(z) = (z-a)^k f(z)$ has a non-essential singularity in $a$. One speaks of a pole of order $k$ in $z = a$.
3. The principal part contains an infinite number of terms. Then, $a$ is an essential singular point of $f$, such as $\exp(1/z)$ for $z = 0$.

If $f$ and $g$ are analytical, $f(a) \neq 0$, $g(a) = 0$, $g'(a) \neq 0$ then $f(z)/g(z)$ has a simple pole (i.e. a pole of order 1) in $z = a$ with

$$\mathop{\rm Res}\limits_{z=a}\frac{f(z)}{g(z)} = \frac{f(a)}{g'(a)}$$
6.5 Jordan's theorem
Residues are often used when solving definite integrals. We define the notations $C^+_\rho = \{z \,|\, |z| = \rho,\ \Im(z) \geq 0\}$ and $C^-_\rho = \{z \,|\, |z| = \rho,\ \Im(z) \leq 0\}$ and $M^+(\rho, f) = \max\limits_{z \in C^+_\rho}|f(z)|$, $M^-(\rho, f) = \max\limits_{z \in C^-_\rho}|f(z)|$. We assume that $f(z)$ is analytical for $\Im(z) > 0$ with a possible exception of a finite number of singular points which do not lie on the real axis, $\lim\limits_{\rho\rightarrow\infty}\rho M^+(\rho, f) = 0$ and that the integral exists, then

$$\int\limits_{-\infty}^{\infty} f(x)dx = 2\pi i\sum {\rm Res}\, f(z) \mbox{ in } \Im(z) > 0$$

Replace $M^+$ by $M^-$ in the conditions above and it follows that:

$$\int\limits_{-\infty}^{\infty} f(x)dx = -2\pi i\sum {\rm Res}\, f(z) \mbox{ in } \Im(z) < 0$$

Jordan's lemma: let $f$ be continuous for $|z| \geq R$, $\Im(z) \geq 0$ and $\lim\limits_{\rho\rightarrow\infty} M^+(\rho, f) = 0$. Then holds for $\alpha > 0$

$$\lim_{\rho\rightarrow\infty}\int\limits_{C^+_\rho} f(z){\rm e}^{i\alpha z}dz = 0$$

Let $f$ be continuous for $|z| \geq R$, $\Im(z) \leq 0$ and $\lim\limits_{\rho\rightarrow\infty} M^-(\rho, f) = 0$. Then holds for $\alpha < 0$

$$\lim_{\rho\rightarrow\infty}\int\limits_{C^-_\rho} f(z){\rm e}^{i\alpha z}dz = 0$$

Let $z = a$ be a simple pole of $f(z)$ and let $C_\delta$ be the half circle $|z - a| = \delta$, $0 \leq \arg(z - a) \leq \pi$, taken from $a + \delta$ to $a - \delta$. Then is

$$\lim_{\delta\downarrow 0}\frac{1}{2\pi i}\int\limits_{C_\delta} f(z)dz = \frac{1}{2}\mathop{\rm Res}\limits_{z=a} f(z)$$
Chapter 7
Tensor calculus
7.1 Vectors and covectors
A finite dimensional vector space is denoted by $\mathcal{V}$, $\mathcal{W}$. The vector space of linear transformations from $\mathcal{V}$ to $\mathcal{W}$ is denoted by $L(\mathcal{V},\mathcal{W})$. Consider $L(\mathcal{V},\mathbb{R}) := \mathcal{V}^*$. We name $\mathcal{V}^*$ the dual space of $\mathcal{V}$. Now we can define vectors in $\mathcal{V}$ with basis $\vec{c}$ and covectors in $\mathcal{V}^*$ with basis $\hat{c}$. Properties of both are:

1. Vectors: $\vec{x} = x^i\vec{c}_i$ with basis vectors $\vec{c}_i$:

$$\vec{c}_i = \frac{\partial}{\partial x^i}$$

Transformation from system $i$ to $i'$ is given by:

$$\vec{c}_{i'} = A^i_{i'}\vec{c}_i \in \mathcal{V}\;,\quad x^{i'} = A^{i'}_i x^i$$

2. Covectors: $\hat{x} = x_i\hat{c}^i$ with basis vectors $\hat{c}^i$:

$$\hat{c}^i = dx^i$$

Transformation from system $i$ to $i'$ is given by:

$$\hat{c}^{i'} = A^{i'}_i\hat{c}^i \in \mathcal{V}^*\;,\quad x_{i'} = A^i_{i'}x_i$$

Here the Einstein convention is used:

$$a^i b_i := \sum_i a^i b_i$$

The coordinate transformation is given by:

$$A^i_{i'} = \frac{\partial x^i}{\partial x^{i'}}\;,\quad A^{i'}_i = \frac{\partial x^{i'}}{\partial x^i}$$

From this follows that $A^i_k A^k_l = \delta^i_l$ and $A^i_{i'} = (A^{i'}_i)^{-1}$.

In differential notation the coordinate transformations are given by:

$$dx^i = \frac{\partial x^i}{\partial x^{i'}}dx^{i'} \quad\mbox{and}\quad \frac{\partial}{\partial x^{i'}} = \frac{\partial x^i}{\partial x^{i'}}\frac{\partial}{\partial x^i}$$

The general transformation rule for a tensor $T$ is:

$$T^{q_1\ldots q_n}_{s_1\ldots s_m} = \left|\frac{\partial x}{\partial u}\right|^{\ell}\frac{\partial u^{q_1}}{\partial x^{p_1}}\cdots\frac{\partial u^{q_n}}{\partial x^{p_n}}\cdot\frac{\partial x^{r_1}}{\partial u^{s_1}}\cdots\frac{\partial x^{r_m}}{\partial u^{s_m}}\,T^{p_1\ldots p_n}_{r_1\ldots r_m}$$

For an absolute tensor $\ell = 0$.
7.2 Tensor algebra
The following holds:

$$a_{ij}(x_i + y_i) \equiv a_{ij}x_i + a_{ij}y_i\;, \quad\mbox{but:}\quad a_{ij}(x_i + y_j) \not\equiv a_{ij}x_i + a_{ij}y_j$$

and

$$(a_{ij} + a_{ji})x_i x_j \equiv 2a_{ij}x_i x_j\;, \quad\mbox{but:}\quad (a_{ij} + a_{ji})x_i y_j \not\equiv 2a_{ij}x_i y_j \quad\mbox{and}\quad (a_{ij} - a_{ji})x_i x_j \equiv 0$$

The sum and difference of two tensors is a tensor of the same rank: $A^p_q \pm B^p_q$. The outer tensor product results in a tensor with a rank equal to the sum of the ranks of both tensors: $A^{pr}_q\cdot B^m_s = C^{prm}_{qs}$. The contraction equals two indices and sums over them. Suppose we take $r = s$ for a tensor $A^{mpr}_{qs}$, this results in: $\sum\limits_r A^{mpr}_{qr} = B^{mp}_q$. The inner product of two tensors is defined by taking the outer product followed by a contraction.
7.3 Inner product
Definition: the bilinear transformation $B: \mathcal{V}\times\mathcal{V}^* \rightarrow \mathbb{R}$, $B(\vec{x},\hat{y}) = \hat{y}(\vec{x})$ is denoted by $\langle\vec{x},\hat{y}\rangle$. For this pairing operator $\langle\cdot,\cdot\rangle = \delta$ holds:

$$\hat{y}(\vec{x}) = \langle\vec{x},\hat{y}\rangle = y_i x^i\;,\quad \langle\hat{c}^i,\vec{c}_j\rangle = \delta^i_j$$

Let $G: \mathcal{V} \rightarrow \mathcal{V}^*$ be a linear bijection. Define the bilinear forms

$$g: \mathcal{V}\times\mathcal{V} \rightarrow \mathbb{R}\;,\quad g(\vec{x},\vec{y}\,) = \langle\vec{x},G\vec{y}\,\rangle$$
$$h: \mathcal{V}^*\times\mathcal{V}^* \rightarrow \mathbb{R}\;,\quad h(\hat{x},\hat{y}) = \langle G^{-1}\hat{x},\hat{y}\rangle$$

Both are not degenerated. The following holds: $h(G\vec{x},G\vec{y}\,) = \langle\vec{x},G\vec{y}\,\rangle = g(\vec{x},\vec{y}\,)$. If we identify $\mathcal{V}$ and $\mathcal{V}^*$ with $G$, then $g$ (or $h$) gives an inner product on $\mathcal{V}$.

The inner product $(\,,\,)_\Lambda$ on $\Lambda^k(\mathcal{V})$ is defined by:

$$(\Phi,\Psi)_\Lambda = \frac{1}{k!}(\Phi,\Psi)_{T^0_k(\mathcal{V})}$$

The inner product of two vectors is then given by:

$$(\vec{x},\vec{y}\,) = x^i y^j\langle\vec{c}_i,G\vec{c}_j\rangle = g_{ij}\,x^i y^j$$

The matrix $g_{ij}$ of $G$ is given by

$$g_{ij}\hat{c}^j = G\vec{c}_i$$

The matrix $g^{ij}$ of $G^{-1}$ is given by:

$$g^{kl}\vec{c}_l = G^{-1}\hat{c}^k$$

For this metric tensor $g_{ij}$ holds: $g_{ij}g^{jk} = \delta^k_i$. This tensor can raise or lower indices:

$$x_j = g_{ij}x^i\;,\quad x^i = g^{ij}x_j \quad\mbox{and}\quad du^i = \hat{c}^i = g^{ij}\vec{c}_j$$
7.4 Tensor product
Definition: let $\mathcal{U}$ and $\mathcal{V}$ be two finite dimensional vector spaces with dimensions $m$ and $n$. Let $\mathcal{U}^*\times\mathcal{V}^*$ be the cartesian product of $\mathcal{U}$ and $\mathcal{V}$. A function $t: \mathcal{U}^*\times\mathcal{V}^* \rightarrow \mathbb{R}$; $(\hat{u};\hat{v}\,)\mapsto t(\hat{u};\hat{v}\,) = t^{\alpha\beta} \in \mathbb{R}$ is called a tensor if $t$ is linear in $\hat{u}$ and $\hat{v}$. The tensors $t$ form a vector space denoted by $\mathcal{U}\otimes\mathcal{V}$. The elements $T \in \mathcal{V}\otimes\mathcal{V}$ are called contravariant 2-tensors: $T = T^{ij}\vec{c}_i\otimes\vec{c}_j = T^{ij}\partial_i\otimes\partial_j$. The elements $T \in \mathcal{V}^*\otimes\mathcal{V}^*$ are called covariant 2-tensors: $T = T_{ij}\hat{c}^i\otimes\hat{c}^j = T_{ij}dx^i\otimes dx^j$. The elements $T \in \mathcal{V}^*\otimes\mathcal{V}$ are called mixed 2-tensors: $T = T_i^{\ j}\hat{c}^i\otimes\vec{c}_j = T_i^{\ j}dx^i\otimes\partial_j$, and analogous for $T \in \mathcal{V}\otimes\mathcal{V}^*$.

The numbers given by

$$t^{\alpha\beta} = t(\hat{c}^\alpha,\hat{c}^\beta)$$

with $1 \leq \alpha \leq m$ and $1 \leq \beta \leq n$ are the components of $t$.

Take $\vec{x} \in \mathcal{U}$ and $\vec{y} \in \mathcal{V}$. Then the function $\vec{x}\otimes\vec{y}$, defined by

$$(\vec{x}\otimes\vec{y}\,)(\hat{u},\hat{v}\,) = \langle\vec{x},\hat{u}\rangle_{\mathcal{U}}\,\langle\vec{y},\hat{v}\rangle_{\mathcal{V}}$$

is a tensor. The components are derived from: $(\vec{u}\otimes\vec{v}\,)^{ij} = u^i v^j$. The tensor product of 2 tensors is given by:

$$\textstyle\binom{2}{0}\mbox{ form: } (\vec{v}\otimes\vec{w}\,)(\hat{p},\hat{q}\,) = v^i p_i\, w^k q_k = T^{ik}p_i q_k$$
$$\textstyle\binom{0}{2}\mbox{ form: } (\hat{p}\otimes\hat{q}\,)(\vec{v},\vec{w}\,) = p_i v^i\, q_k w^k = T_{ik}v^i w^k$$
$$\textstyle\binom{1}{1}\mbox{ form: } (\vec{v}\otimes\hat{p}\,)(\hat{q},\vec{w}\,) = v^i q_i\, p_k w^k = T^i_{\ k}q_i w^k$$
7.5 Symmetric and antisymmetric tensors
A tensor $t \in \mathcal{V}\otimes\mathcal{V}$ is called symmetric resp. antisymmetric if $\forall\hat{x},\hat{y} \in \mathcal{V}^*$ holds: $t(\hat{x},\hat{y}) = t(\hat{y},\hat{x})$ resp. $t(\hat{x},\hat{y}) = -t(\hat{y},\hat{x})$.

A tensor $t \in \mathcal{V}^*\otimes\mathcal{V}^*$ is called symmetric resp. antisymmetric if $\forall\vec{x},\vec{y} \in \mathcal{V}$ holds: $t(\vec{x},\vec{y}\,) = t(\vec{y},\vec{x}\,)$ resp. $t(\vec{x},\vec{y}\,) = -t(\vec{y},\vec{x}\,)$. The linear transformations $\mathcal{S}$ and $\mathcal{A}$ in $\mathcal{V}\otimes\mathcal{W}$ are defined by:

$$\mathcal{S}t(\hat{x},\hat{y}) = \frac{1}{2}\left(t(\hat{x},\hat{y}) + t(\hat{y},\hat{x})\right)$$
$$\mathcal{A}t(\hat{x},\hat{y}) = \frac{1}{2}\left(t(\hat{x},\hat{y}) - t(\hat{y},\hat{x})\right)$$

Analogous in $\mathcal{V}^*\otimes\mathcal{V}^*$. If $t$ is symmetric resp. antisymmetric, then $\mathcal{S}t = t$ resp. $\mathcal{A}t = t$.

The tensors $\vec{e}_i\odot\vec{e}_j = \vec{e}_i\otimes\vec{e}_j + \vec{e}_j\otimes\vec{e}_i = 2\mathcal{S}(\vec{e}_i\otimes\vec{e}_j)$, with $1 \leq i \leq j \leq n$, are a basis in $\mathcal{S}(\mathcal{V}\otimes\mathcal{V})$ with dimension $\frac{1}{2}n(n+1)$.

The tensors $\vec{e}_i\wedge\vec{e}_j = 2\mathcal{A}(\vec{e}_i\otimes\vec{e}_j)$, with $1 \leq i < j \leq n$, are a basis in $\mathcal{A}(\mathcal{V}\otimes\mathcal{V})$ with dimension $\frac{1}{2}n(n-1)$.

The complete antisymmetric tensor $\varepsilon$ is given by: $\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$.

The permutation-operators $e_{pqr}$ are defined by: $e_{123} = e_{231} = e_{312} = 1$, $e_{213} = e_{132} = e_{321} = -1$, for all other combinations $e_{pqr} = 0$. There is a connection with the $\varepsilon$ tensor: $\varepsilon_{pqr} = g^{1/2}e_{pqr}$ and $\varepsilon^{pqr} = g^{-1/2}e^{pqr}$.
7.6 Outer product
Let $\alpha \in \Lambda^k(\mathcal{V})$ and $\beta \in \Lambda^l(\mathcal{V})$. Then $\alpha\wedge\beta \in \Lambda^{k+l}(\mathcal{V})$ is defined by:

$$\alpha\wedge\beta = \frac{(k+l)!}{k!\,l!}\mathcal{A}(\alpha\otimes\beta)$$

If $\alpha$ and $\beta \in \Lambda^1(\mathcal{V}) = \mathcal{V}^*$ holds: $\alpha\wedge\beta = \alpha\otimes\beta - \beta\otimes\alpha$

The outer product can be written as: $(\vec{a}\times\vec{b}\,)_i = \varepsilon_{ijk}a^j b^k$, $\vec{a}\times\vec{b} = G^{-1}\cdot{*}(G\vec{a}\wedge G\vec{b}\,)$.

Take $\vec{a},\vec{b},\vec{c},\vec{d} \in \mathbb{R}^4$. Then $(dt\wedge dz)(\vec{a},\vec{b}\,) = a_0 b_4 - b_0 a_4$ is the oriented surface of the projection on the $tz$-plane of the parallelogram spanned by $\vec{a}$ and $\vec{b}$.

Further

$$(dt\wedge dy\wedge dz)(\vec{a},\vec{b},\vec{c}\,) = \det\begin{pmatrix} a_0 & b_0 & c_0 \\ a_2 & b_2 & c_2 \\ a_4 & b_4 & c_4 \end{pmatrix}$$

is the oriented 3-dimensional volume of the projection on the $tyz$-plane of the parallelepiped spanned by $\vec{a}$, $\vec{b}$ and $\vec{c}$.

$(dt\wedge dx\wedge dy\wedge dz)(\vec{a},\vec{b},\vec{c},\vec{d}\,) = \det(\vec{a},\vec{b},\vec{c},\vec{d}\,)$ is the 4-dimensional volume of the hyperparallelepiped spanned by $\vec{a}$, $\vec{b}$, $\vec{c}$ and $\vec{d}$.
7.7 The Hodge star operator

$\Lambda^k(\mathcal{V})$ and $\Lambda^{n-k}(\mathcal{V})$ have the same dimension because $\binom{n}{k} = \binom{n}{n-k}$ for $1 \leq k \leq n$. $\dim(\Lambda^n(\mathcal{V})) = 1$. The choice of a basis means the choice of an oriented measure of volume, a volume $\mu$, in $\mathcal{V}$. We can gauge $\mu$ so that for an orthonormal basis $\vec{e}_i$ holds: $\mu(\vec{e}_i) = 1$. This basis is then by definition positive oriented if $\mu = \hat{e}_1\wedge\hat{e}_2\wedge\ldots\wedge\hat{e}_n = 1$.

Because both spaces have the same dimension one can ask if there exists a bijection between them. If $\mathcal{V}$ has no extra structure this is not the case. However, such an operation does exist if there is an inner product defined on $\mathcal{V}$ and the corresponding volume $\mu$. This is called the Hodge star operator and is denoted by $*$. The following holds:

$$\forall w \in \Lambda^k(\mathcal{V})\ \exists\,{*}w \in \Lambda^{n-k}(\mathcal{V})\ \forall\theta \in \Lambda^k(\mathcal{V}):\quad \theta\wedge{*}w = (\theta, w)_\Lambda\,\mu$$

For an orthonormal basis in $\mathbb{R}^3$ holds: the volume: $\mu = dx\wedge dy\wedge dz$, ${*}(dx\wedge dy\wedge dz) = 1$, ${*}dx = dy\wedge dz$, ${*}dz = dx\wedge dy$, ${*}dy = -dx\wedge dz$, ${*}(dx\wedge dy) = dz$, ${*}(dy\wedge dz) = dx$, ${*}(dx\wedge dz) = -dy$.

For a Minkowski basis in $\mathbb{R}^4$ holds: $\mu = dt\wedge dx\wedge dy\wedge dz$, $G = dt\otimes dt - dx\otimes dx - dy\otimes dy - dz\otimes dz$, and ${*}(dt\wedge dx\wedge dy\wedge dz) = 1$ and ${*}1 = dt\wedge dx\wedge dy\wedge dz$. Further ${*}dt = dx\wedge dy\wedge dz$ and ${*}dx = -dt\wedge dy\wedge dz$.
7.8 Dierential operations
7.8.1 The directional derivative
The directional derivative in point $\vec{a}$ is given by:

$$\nabla_{\vec{a}}\,f = \langle\vec{a}, df\rangle = a^i\frac{\partial f}{\partial x^i}$$
7.8.2 The Lie-derivative
The Lie-derivative is given by:

$$(\mathcal{L}_{\vec{v}}\,\vec{w})^j = w^i\partial_i v^j - v^i\partial_i w^j$$
7.8.3 Christoffel symbols
To each curvilinear coordinate system $u^i$ we add a system of $n^3$ functions $\Gamma^i_{jk}$ of $u$, defined by

$$\frac{\partial^2\vec{x}}{\partial u^j\partial u^k} = \Gamma^i_{jk}\frac{\partial\vec{x}}{\partial u^i}$$

These are Christoffel symbols of the second kind. Christoffel symbols are no tensors. The Christoffel symbols of the second kind are given by:

$$\left\{\begin{array}{c} i \\ jk \end{array}\right\} := \Gamma^i_{jk} = \left\langle\frac{\partial^2\vec{x}}{\partial u^k\partial u^j},\ dx^i\right\rangle$$

with $\Gamma^i_{jk} = \Gamma^i_{kj}$. Their transformation to a different coordinate system is given by:

$$\Gamma^{i'}_{j'k'} = A^{i'}_i A^j_{j'} A^k_{k'}\Gamma^i_{jk} + A^{i'}_i(\partial_{j'}A^i_{k'})$$

The first term in this expression is 0 if the primed coordinates are cartesian.

There is a relation between Christoffel symbols and the metric:

$$\Gamma^i_{jk} = \frac{1}{2}g^{ir}(\partial_j g_{kr} + \partial_k g_{rj} - \partial_r g_{jk})$$

and $\Gamma^\alpha_{\beta\alpha} = \partial_\beta\left(\ln(\sqrt{|g|})\right)$.

Lowering an index gives the Christoffel symbols of the first kind: $\Gamma^i_{jk} = g^{il}\Gamma_{jkl}$.
7.8.4 The covariant derivative
The covariant derivative $\nabla_j$ of a vector, covector and of rank-2 tensors is given by:

$$\nabla_j a^i = \partial_j a^i + \Gamma^i_{jk}a^k\;,\quad \nabla_j a_i = \partial_j a_i - \Gamma^k_{ij}a_k$$

For a rank-2 tensor each index contributes its own Christoffel term, with a $+$ sign for an upper and a $-$ sign for a lower index, e.g. $\nabla_\gamma a^\alpha_{\ \beta} = \partial_\gamma a^\alpha_{\ \beta} + \Gamma^\alpha_{\gamma\varepsilon}a^\varepsilon_{\ \beta} - \Gamma^\varepsilon_{\beta\gamma}a^\alpha_{\ \varepsilon}$.

Ricci's theorem:

$$\nabla_\gamma g_{\alpha\beta} = 0$$
7.9 Differential operators
The gradient is given by:

$$\mathrm{grad}(f) = G^{-1}df = g^{ki}\frac{\partial f}{\partial x^i}\frac{\partial}{\partial x^k}$$

The divergence is given by:

$$\mathrm{div}(a^i) = \nabla_i a^i = \frac{1}{\sqrt{g}}\,\partial_k\!\left(\sqrt{g}\,a^k\right)$$

The curl is given by:

$$\mathrm{rot}(\vec{a}) = G^{-1}\cdot{*}\cdot d\cdot G\vec{a} = \varepsilon^{pqr}\nabla_q a_p = \nabla_q a_p - \nabla_p a_q$$

The Laplacian is given by:

$$\Delta(f) = \mathrm{div}\,\mathrm{grad}(f) = {*}d{*}df = \nabla_i g^{ij}\nabla_j f = g^{ij}\nabla_i\nabla_j f = \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^i}\left(\sqrt{g}\,g^{ij}\frac{\partial f}{\partial x^j}\right)$$
7.10 Differential geometry
7.10.1 Space curves
We limit ourselves to $\mathbb{R}^3$ with a fixed orthonormal basis. A point is represented by the vector $\vec{x} = (x_1, x_2, x_3)$. A space curve is a collection of points represented by $\vec{x} = \vec{x}(t)$. The arc length of a space curve is given by:

$$s(t) = \int\limits_{t_0}^{t}\sqrt{\left(\frac{dx}{d\tau}\right)^2 + \left(\frac{dy}{d\tau}\right)^2 + \left(\frac{dz}{d\tau}\right)^2}\,d\tau$$

The derivative of $s$ with respect to $t$ is the length of the vector $d\vec{x}/dt$:

$$\left(\frac{ds}{dt}\right)^2 = \left(\frac{d\vec{x}}{dt}, \frac{d\vec{x}}{dt}\right)$$

The osculation plane in a point $P$ of a space curve is the limiting position of the plane through the tangent of the curve in point $P$ and a point $Q$ when $Q$ approaches $P$ along the space curve. The osculation plane is parallel with $\ddot{\vec{x}}(s)$. If $\ddot{\vec{x}} \neq 0$ the osculation plane is given by:

$$\vec{y} = \vec{x} + \lambda\dot{\vec{x}} + \mu\ddot{\vec{x}} \quad\mbox{so}\quad \det(\vec{y}-\vec{x},\ \dot{\vec{x}},\ \ddot{\vec{x}}\,) = 0$$
In a bending point holds, if $\dddot{\vec{x}} \neq 0$:

$$\vec{y} = \vec{x} + \lambda\dot{\vec{x}} + \mu\dddot{\vec{x}}$$

The tangent has unit vector $\vec{\ell} = \dot{\vec{x}}$, the main normal unit vector $\vec{n} = \ddot{\vec{x}}/\|\ddot{\vec{x}}\,\|$ and the binormal $\vec{b} = \dot{\vec{x}}\times\ddot{\vec{x}}/\|\ddot{\vec{x}}\,\|$. So the main normal lies in the osculation plane, the binormal is perpendicular to it.

Let $P$ be a point and $Q$ be a nearby point of a space curve $\vec{x}(s)$. Let $\Delta\varphi$ be the angle between the tangents in $P$ and $Q$ and let $\Delta\psi$ be the angle between the osculation planes (binormals) in $P$ and $Q$. Then the curvature $\rho$ and the torsion $\tau$ in $P$ are defined by:

$$\rho^2 = \left(\frac{d\varphi}{ds}\right)^2 = \lim_{\Delta s\rightarrow 0}\left(\frac{\Delta\varphi}{\Delta s}\right)^2\;,\quad \tau^2 = \left(\frac{d\psi}{ds}\right)^2$$

and $\rho > 0$. For plane curves $\rho$ is the ordinary curvature and $\tau = 0$. The following holds:

$$\rho^2 = (\dot{\vec{\ell}},\dot{\vec{\ell}}\,) = (\ddot{\vec{x}},\ddot{\vec{x}}\,) \quad\mbox{and}\quad \tau^2 = (\dot{\vec{b}},\dot{\vec{b}}\,)$$

Frenet's equations express the derivatives as linear combinations of these vectors:

$$\dot{\vec{\ell}} = \rho\vec{n}\;,\quad \dot{\vec{n}} = -\rho\vec{\ell} + \tau\vec{b}\;,\quad \dot{\vec{b}} = -\tau\vec{n}$$

From this follows that $\det(\dot{\vec{x}}, \ddot{\vec{x}}, \dddot{\vec{x}}\,) = \rho^2\tau$.

Some curves and their properties are:

Screw line           $\tau/\rho =$ constant
Circle screw line    $\tau =$ constant, $\rho =$ constant
Plane curves         $\tau = 0$
Circles              $\rho =$ constant, $\tau = 0$
Lines                $\rho = \tau = 0$
7.10.2 Surfaces in $\mathbb{R}^3$
A surface in $\mathbb{R}^3$ is the collection of end points of the vectors $\vec{x} = \vec{x}(u, v)$, so $x^h = x^h(u^\alpha)$. On the surface are 2 families of curves, one with $u =$ constant and one with $v =$ constant.

The tangent plane in a point $P$ at the surface has basis:

$$\vec{c}_1 = \partial_1\vec{x} \quad\mbox{and}\quad \vec{c}_2 = \partial_2\vec{x}$$
7.10.3 The first fundamental tensor
Let $P$ be a point of the surface $\vec{x} = \vec{x}(u^\alpha)$. The following two curves through $P$, denoted by $u^\alpha = u^\alpha(t)$ and $u^\alpha = v^\alpha(\tau)$, have as tangent vectors in $P$

$$\frac{d\vec{x}}{dt} = \frac{du^\alpha}{dt}\partial_\alpha\vec{x}\;,\quad \frac{d\vec{x}}{d\tau} = \frac{dv^\beta}{d\tau}\partial_\beta\vec{x}$$

The first fundamental tensor of the surface in $P$ is the inner product of these tangent vectors:

$$\left(\frac{d\vec{x}}{dt}, \frac{d\vec{x}}{d\tau}\right) = (\vec{c}_\alpha, \vec{c}_\beta)\,\frac{du^\alpha}{dt}\frac{dv^\beta}{d\tau}$$
The covariant components w.r.t. the basis $\vec{c}_\alpha = \partial_\alpha\vec{x}$ are:

$$g_{\alpha\beta} = (\vec{c}_\alpha, \vec{c}_\beta)$$

For the angle $\phi$ between the parameter curves in $P$: $u = t$, $v =$ constant and $u =$ constant, $v = \tau$, holds:

$$\cos(\phi) = \frac{g_{12}}{\sqrt{g_{11}g_{22}}}$$

For the arc length $s$ of $P$ along the curve $u^\alpha(t)$ holds:

$$ds^2 = g_{\alpha\beta}\,du^\alpha du^\beta$$

This expression is called the line element.
7.10.4 The second fundamental tensor
The 4 derivatives of the tangent vectors, $\partial_\alpha\partial_\beta\vec{x} = \partial_\alpha\vec{c}_\beta$, can each be decomposed along the vectors $\vec{c}_1$, $\vec{c}_2$ and $\vec{N}$, with $\vec{N}$ perpendicular to $\vec{c}_1$ and $\vec{c}_2$. This is written as:

$$\partial_\alpha\vec{c}_\beta = \Gamma^\gamma_{\alpha\beta}\vec{c}_\gamma + h_{\alpha\beta}\vec{N}$$

This leads to:

$$\Gamma^\gamma_{\alpha\beta} = (\vec{c}\,^\gamma, \partial_\alpha\vec{c}_\beta)\;,\quad h_{\alpha\beta} = (\vec{N}, \partial_\alpha\vec{c}_\beta) = \frac{1}{\sqrt{\det|g|}}\det(\vec{c}_1, \vec{c}_2, \partial_\alpha\vec{c}_\beta)$$
7.10.5 Geodetic curvature
A curve on the surface $\vec{x}(u^\alpha)$ is given by: $u^\alpha = u^\alpha(s)$, then $\vec{x} = \vec{x}(u^\alpha(s))$ with $s$ the arc length of the curve. The length of $\ddot{\vec{x}}$ is the curvature $\rho$ of the curve in $P$. The projection of $\ddot{\vec{x}}$ on the surface is a vector with components

$$p^\gamma = \ddot{u}^\gamma + \Gamma^\gamma_{\alpha\beta}\dot{u}^\alpha\dot{u}^\beta$$

of which the length is called the geodetic curvature of the curve in $P$. This remains the same if the surface is curved and the line element remains the same. The projection of $\ddot{\vec{x}}$ on $\vec{N}$ has length

$$p = h_{\alpha\beta}\dot{u}^\alpha\dot{u}^\beta$$

and is called the normal curvature of the curve in $P$. The theorem of Meusnier states that different curves on the surface with the same tangent vector in $P$ have the same normal curvature.

A geodetic line of a surface is a curve on the surface for which in each point the main normal of the curve is the same as the normal on the surface. So for a geodetic line is in each point $p^\gamma = 0$, so

$$\frac{d^2u^\gamma}{ds^2} + \Gamma^\gamma_{\alpha\beta}\frac{du^\alpha}{ds}\frac{du^\beta}{ds} = 0$$

The covariant derivative $\nabla/dt$ in $P$ of a vector field of a surface along a curve is the projection on the tangent plane in $P$ of the normal derivative in $P$.
For two vector fields $\vec{v}(t)$ and $\vec{w}(t)$ along the same curve of the surface follows Leibniz' rule:

$$\frac{d(\vec{v},\vec{w}\,)}{dt} = \left(\vec{v}, \frac{\nabla\vec{w}}{dt}\right) + \left(\vec{w}, \frac{\nabla\vec{v}}{dt}\right)$$

Along a curve holds:

$$\frac{\nabla}{dt}(v^\alpha\vec{c}_\alpha) = \left(\frac{dv^\gamma}{dt} + \Gamma^\gamma_{\alpha\beta}\frac{du^\alpha}{dt}v^\beta\right)\vec{c}_\gamma$$
7.11 Riemannian geometry

The Riemann tensor $R$ is defined by:

$$R^\mu_{\ \nu\alpha\beta}T^\nu = \nabla_\alpha\nabla_\beta T^\mu - \nabla_\beta\nabla_\alpha T^\mu$$

This is a $\binom{1}{3}$ tensor with $n^2(n^2-1)/12$ independent components not identically equal to 0. This tensor is a measure for the curvature of the considered space. If it is 0, the space is a flat manifold. It has the following symmetry properties:

$$R_{\alpha\beta\mu\nu} = R_{\mu\nu\alpha\beta} = -R_{\beta\alpha\mu\nu} = -R_{\alpha\beta\nu\mu}$$

The following relation holds:

$$[\nabla_\alpha, \nabla_\beta]\,T^\mu_{\ \nu} = R^\mu_{\ \sigma\alpha\beta}T^\sigma_{\ \nu} + R_\nu{}^\sigma{}_{\alpha\beta}T^\mu_{\ \sigma}$$

The Riemann tensor depends on the Christoffel symbols through

$$R^\alpha_{\ \beta\mu\nu} = \partial_\mu\Gamma^\alpha_{\beta\nu} - \partial_\nu\Gamma^\alpha_{\beta\mu} + \Gamma^\alpha_{\sigma\mu}\Gamma^\sigma_{\beta\nu} - \Gamma^\alpha_{\sigma\nu}\Gamma^\sigma_{\beta\mu}$$

In a space and coordinate system where the Christoffel symbols are 0 this becomes:

$$R_{\alpha\beta\mu\nu} = \frac{1}{2}\left(\partial_\alpha\partial_\nu g_{\beta\mu} + \partial_\beta\partial_\mu g_{\alpha\nu} - \partial_\alpha\partial_\mu g_{\beta\nu} - \partial_\beta\partial_\nu g_{\alpha\mu}\right)$$

The Bianchi identities are: $\nabla_\lambda R_{\alpha\beta\mu\nu} + \nabla_\nu R_{\alpha\beta\lambda\mu} + \nabla_\mu R_{\alpha\beta\nu\lambda} = 0$.

The Ricci tensor is obtained by contracting the Riemann tensor: $R_{\alpha\beta} := R^\mu_{\ \alpha\mu\beta}$, and is symmetric in its indices: $R_{\alpha\beta} = R_{\beta\alpha}$. The Einstein tensor $G$ is defined by: $G_{\alpha\beta} := R_{\alpha\beta} - \frac{1}{2}g_{\alpha\beta}R$. It has the property that $\nabla^\beta G_{\alpha\beta} = 0$. The Ricci-scalar is $R = g^{\alpha\beta}R_{\alpha\beta}$.
Chapter 8
Numerical mathematics
8.1 Errors
There will be an error in the solution if a problem has a number of parameters which are not exactly known. The dependency between errors in input data and errors in the solution can be expressed in the condition number $c$. If the problem is given by $x = \phi(a)$ the first-order approximation for an error $\delta a$ in $a$ is:

$$\frac{\delta x}{x} = \frac{a\phi'(a)}{\phi(a)}\cdot\frac{\delta a}{a}$$

The number $c(a) = |a\phi'(a)|/|\phi(a)|$. The problem is well-conditioned if $c$ is of order 1.
8.2 Floating point representations
The floating point representation depends on 4 natural numbers:

1. The basis of the number system $\beta$,
2. The length of the mantissa $t$,
3. The length of the exponent $q$,
4. The sign $s$.

Then the representation of machine numbers becomes: ${\rm rd}(x) = s\cdot m\cdot\beta^e$ where the mantissa $m$ is a number with $t$ $\beta$-based digits for which holds $1/\beta \leq |m| < 1$, and $e$ is a number with $q$ $\beta$-based digits for which holds $|e| \leq \beta^q - 1$. The number 0 is added to this set, for example with $m = e = 0$. The largest machine number is

$$a_{\rm max} = (1 - \beta^{-t})\beta^{\beta^q - 1}$$

and the smallest positive machine number is

$$a_{\rm min} = \beta^{-\beta^q}$$

The distance between two successive machine numbers in the interval $[\beta^{p-1}, \beta^p]$ is $\beta^{p-t}$. If $x$ is a real number and the closest machine number is ${\rm rd}(x)$, then holds:

$${\rm rd}(x) = x(1 + \varepsilon) \quad\mbox{with}\quad |\varepsilon| \leq \tfrac{1}{2}\beta^{1-t}$$
$$x = {\rm rd}(x)(1 + \varepsilon') \quad\mbox{with}\quad |\varepsilon'| \leq \tfrac{1}{2}\beta^{1-t}$$

The number $\eta := \tfrac{1}{2}\beta^{1-t}$ is called the machine accuracy, and

$$\varepsilon, \varepsilon' \leq \eta\;,\quad \left|\frac{x - {\rm rd}(x)}{x}\right| \leq \eta$$

An often used 32-bit float format is: 1 bit for $s$, 8 for the exponent and 23 for the mantissa. The base here is 2.
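A standard little C experiment that estimates the machine accuracy by repeated halving (illustrative, not from the formulary):

#include <stdio.h>

/* Estimate the machine accuracy: halve eps until 1 + eps/2 is no longer > 1. */
int main(void)
{
    double eps = 1.0;
    while (1.0 + eps / 2.0 > 1.0)
        eps /= 2.0;
    printf("machine accuracy approximately %g\n", eps);  /* about 2.2e-16 for 64-bit IEEE doubles */
    return 0;
}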
8.3 Systems of equations
We want to solve the matrix equation $A\vec{x} = \vec{b}$ for a non-singular $A$, which is equivalent to finding the inverse matrix $A^{-1}$. Inverting an $n \times n$ matrix via Cramer's rule requires too many multiplications $f(n)$, with $n! \leq f(n) \leq (e - 1)n!$, so other methods are preferable.
8.3.1 Triangular matrices
Consider the equation $U\vec{x} = \vec{c}$ where $U$ is a right-upper triangular matrix, i.e. a matrix in which $U_{ij} = 0$ for all $j < i$, and all $U_{ii} \neq 0$. Then:

$$\begin{array}{rcl}
x_n &=& c_n/U_{nn} \\
x_{n-1} &=& (c_{n-1} - U_{n-1,n}x_n)/U_{n-1,n-1} \\
&\vdots& \\
x_1 &=& \displaystyle\left(c_1 - \sum_{j=2}^{n} U_{1j}x_j\right)\Big/U_{11}
\end{array}$$
In code:
for (k = n; k > 0; k--)
{
    S = c[k];                        /* start from c_k ...                 */
    for (j = k + 1; j <= n; j++)     /* ... subtract the already known x_j */
    {
        S -= U[k][j] * x[j];
    }
    x[k] = S / U[k][k];
}
This algorithm requires $\frac{1}{2}n(n+1)$ floating point calculations.
8.3.2 Gauss elimination
Consider a general set $A\vec{x} = \vec{b}$. This can be reduced by Gauss elimination to a triangular form by multiplying the first equation with $A_{i1}/A_{11}$ and then subtracting it from all others; now the first column contains all 0's except $A_{11}$. Then the 2nd equation is subtracted in such a way from the others that all elements in the second column are 0 except $A_{22}$, etc. In code:
for (k = 1; k <= n; k++)
{
for (j = k; j <= n; j++) U[k][j] = A[k][j];
c[k] = b[k];
for (i = k + 1; i <= n; i++)
{
L = A[i][k] / U[k][k];
for (j = k + 1; j <= n; j++)
{
A[i][j] -= L * U[k][j];
}
b[i] -= L * c[k];
}
}
This algorithm requires $\frac{1}{3}n(n^2 - 1)$ floating point multiplications and divisions for operations on the coefficient matrix and $\frac{1}{2}n(n-1)$ multiplications for operations on the right-hand terms, whereafter the triangular set has to be solved with $\frac{1}{2}n(n+1)$ operations.
8.3.3 Pivot strategy
Some equations have to be interchanged if the corner elements $A_{11}, A^{(1)}_{22}, \ldots$ are not all $\neq 0$, to allow Gauss elimination to work. In the following, $A^{(n)}$ is the element after the $n$th iteration. One method is: if $A^{(k-1)}_{kk} = 0$, then search for an element $A^{(k-1)}_{pk}$ with $p > k$ that is $\neq 0$ and interchange the $p$th and the $k$th equation. This strategy fails only if the set is singular and has no solution at all.
8.4 Roots of functions
8.4.1 Successive substitution
We want to solve the equation $F(x) = 0$, so we want to find the root $\alpha$ with $F(\alpha) = 0$. Many solutions are essentially the following:

1. Rewrite the equation in the form $x = f(x)$ so that a solution of this equation is also a solution of $F(x) = 0$. Further, $f(x)$ may not vary too much with respect to $x$ near $\alpha$.
2. Assume an initial estimation $x_0$ for $\alpha$ and obtain the series $x_n$ with $x_n = f(x_{n-1})$, in the hope that $\lim\limits_{n\rightarrow\infty} x_n = \alpha$.

Example: choose

$$f(x) = x - \frac{F(x)}{G(x)}$$

then we can expect that the sequence $x_n$ with a chosen initial estimate $x_0$ and

$$x_n = x_{n-1} - \frac{F(x_{n-1})}{G(x_{n-1})}$$

converges to $\alpha$.
8.4.2 Local convergence
Let $\alpha$ be a solution of $x = f(x)$ and let $x_n = f(x_{n-1})$ for a given $x_0$. Let $f'(x)$ be continuous in a neighbourhood of $\alpha$. Let $f'(\alpha) = A$ with $|A| < 1$. Then there exists a $\delta > 0$ so that for each $x_0$ with $|x_0 - \alpha| \leq \delta$ holds:

1. $\lim\limits_{n\rightarrow\infty} x_n = \alpha$,
2. If for a particular $k$ holds: $x_k = \alpha$, then for each $n \geq k$ holds that $x_n = \alpha$. If $x_n \neq \alpha$ for all $n$ then holds

$$\lim_{n\rightarrow\infty}\frac{x_n - \alpha}{x_{n-1} - \alpha} = A\;,\quad \lim_{n\rightarrow\infty}\frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}} = A\;,\quad \lim_{n\rightarrow\infty}\frac{\alpha - x_n}{x_n - x_{n-1}} = \frac{A}{1 - A}$$

The quantity $A$ is called the asymptotic convergence factor, the quantity $B = -\log_{10}|A|$ is called the asymptotic convergence speed.
8.4.3 Aitken extrapolation
We define

$$A_n = \frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}}$$

$A_n$ converges to $f'(\alpha)$. Then the sequence

$$\alpha_n = x_n + \frac{A_n}{1 - A_n}(x_n - x_{n-1})$$

will converge to $\alpha$.
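A minimal C sketch of one Aitken update from three successive fixed-point iterates (the function name is an illustrative assumption):

/* One Aitken extrapolation step from three successive iterates x_{n-2}, x_{n-1}, x_n. */
double Aitken(double xnm2, double xnm1, double xn)
{
    double A = (xn - xnm1) / (xnm1 - xnm2);   /* estimate of f'(alpha) */
    return xn + A / (1.0 - A) * (xn - xnm1);  /* accelerated estimate alpha_n */
}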
8.4.4 Newton iteration
There are more ways to transform $F(x) = 0$ into $x = f(x)$. One essential condition for them all is that in a neighbourhood of a root $\alpha$ holds that $|f'(x)| < 1$, and the smaller $f'(x)$, the faster the series converges. A general method to construct $f(x)$ is:

$$f(x) = x - \Phi(x)F(x)$$

with $\Phi(x) \neq 0$ in a neighbourhood of $\alpha$. If one chooses:

$$\Phi(x) = \frac{1}{F'(x)}$$

then this becomes Newton's method. The iteration formula then becomes:

$$x_n = x_{n-1} - \frac{F(x_{n-1})}{F'(x_{n-1})}$$

Some remarks:
This same result can also be derived with Taylor series.

Local convergence is often difficult to determine.

If $x_n$ is far apart from $\alpha$ the convergence can sometimes be very slow.

The assumption $F'(\alpha) \neq 0$ means that $\alpha$ is a simple root.

For $F(x) = x^k - a$ the series becomes:

$$x_n = \frac{1}{k}\left((k-1)x_{n-1} + \frac{a}{x_{n-1}^{k-1}}\right)$$

This is a well-known way to compute roots.
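A short C sketch of this iteration for the $k$-th root of $a$ (the starting value, iteration cap and tolerance are illustrative assumptions):

#include <math.h>

/* Compute the k-th root of a > 0 with Newton's iteration for F(x) = x^k - a. */
double KthRoot(double a, int k)
{
    double x = a > 1.0 ? a : 1.0;       /* crude but safe starting value */
    for (int i = 0; i < 100; i++)
    {
        double xnew = ((k - 1) * x + a / pow(x, k - 1)) / k;
        if (fabs(xnew - x) < 1e-12 * fabs(xnew)) return xnew;
        x = xnew;
    }
    return x;
}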
The following code finds the root of a function by means of Newton's method. The root lies within the interval [x1, x2]. The value is adapted until the accuracy is better than eps. The function funcd is a routine that returns both the function value and its first derivative in point x in the passed pointers.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

float SolveNewton(void (*funcd)(float, float*, float*), float x1, float x2, float eps)
{
    int j, max_iter = 25;
    float df, dx, f, root;
    root = 0.5 * (x1 + x2);
    for (j = 1; j <= max_iter; j++)
    {
        (*funcd)(root, &f, &df);
        dx = f / df;
        root -= dx;                    /* Newton step: x_n = x_{n-1} - F/F' */
        if ( (x1 - root) * (root - x2) < 0.0 )
        {
            perror("Jumped out of brackets in SolveNewton.");
            exit(1);
        }
        if ( fabs(dx) < eps ) return root; /* Convergence */
    }
    perror("Maximum number of iterations exceeded in SolveNewton.");
    exit(1);
    return 0.0;
}
8.4.5 The secant method
This is, in contrast to the two methods discussed previously, a two-step method. If two approximations $x_n$ and $x_{n-1}$ exist for a root, then one can find the next approximation with

$$x_{n+1} = x_n - F(x_n)\,\frac{x_n - x_{n-1}}{F(x_n) - F(x_{n-1})}$$

If $F(x_n)$ and $F(x_{n-1})$ have a different sign one is interpolating, otherwise extrapolating.
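A compact C sketch of the secant update (the iteration cap and tolerance are illustrative assumptions):

#include <math.h>

/* Secant method for F(x) = 0, starting from two approximations x0 and x1. */
double Secant(double (*F)(double), double x0, double x1, double eps)
{
    for (int i = 0; i < 100; i++)
    {
        double f0 = F(x0), f1 = F(x1);
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);   /* secant step */
        if (fabs(x2 - x1) < eps) return x2;
        x0 = x1;
        x1 = x2;
    }
    return x1;
}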
8.5 Polynomial interpolation
A base for polynomials of order $n$ is given by Lagrange's interpolation polynomials:

$$L_j(x) = \prod_{\substack{l=0 \\ l\neq j}}^{n}\frac{x - x_l}{x_j - x_l}$$

The following holds:

1. Each $L_j(x)$ has order $n$,
2. $L_j(x_i) = \delta_{ij}$ for $i, j = 0, 1, \ldots, n$,
3. Each polynomial $p(x)$ can be written uniquely as

$$p(x) = \sum_{j=0}^{n} c_j L_j(x) \quad\mbox{with}\quad c_j = p(x_j)$$

This is not a suitable method to calculate the value of a polynomial in a given point $x = a$. To do this, the Horner algorithm is more usable: the value $s = \sum_k c_k a^k$ in $x = a$ can be calculated as follows:
float GetPolyValue(float c[], int n, float a)
{
    int i; float s = c[n];
    for (i = n - 1; i >= 0; i--)
    {
        s = s * a + c[i];    /* Horner: s = (...((c_n*a + c_{n-1})*a + ...) + c_0 */
    }
    return s;
}
After it is finished s has value p(a).
8.6 Definite integrals
Almost all numerical methods are based on a formula of the type:

$$\int\limits_a^b f(x)dx = \sum_{i=0}^{n} c_i f(x_i) + R(f)$$

with $n$, $c_i$ and $x_i$ independent of $f(x)$ and $R(f)$ the error which has the form $R(f) = Cf^{(q)}(\xi)$ for all common methods. Here, $\xi \in (a, b)$ and $q \geq n + 1$. Often the points $x_i$ are chosen equidistant. Some common formulas are:

The trapezoid rule: $n = 1$, $x_0 = a$, $x_1 = b$, $h = b - a$:

$$\int\limits_a^b f(x)dx = \frac{h}{2}[f(x_0) + f(x_1)] - \frac{h^3}{12}f''(\xi)$$

Simpson's rule: $n = 2$, $x_0 = a$, $x_1 = \frac{1}{2}(a+b)$, $x_2 = b$, $h = \frac{1}{2}(b-a)$:

$$\int\limits_a^b f(x)dx = \frac{h}{3}[f(x_0) + 4f(x_1) + f(x_2)] - \frac{h^5}{90}f^{(4)}(\xi)$$

The midpoint rule: $n = 0$, $x_0 = \frac{1}{2}(a+b)$, $h = b - a$:

$$\int\limits_a^b f(x)dx = hf(x_0) + \frac{h^3}{24}f''(\xi)$$

The interval will usually be split up and the integration formulas applied to the partial intervals if $f$ varies much within the interval.
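A compact C sketch of Simpson's rule applied to such a split-up interval (the composite form; the function name and the requirement of an even subdivision count are illustrative assumptions):

/* Composite Simpson's rule on [a, b] with m subintervals (m must be even). */
double Simpson(double (*f)(double), double a, double b, int m)
{
    double h = (b - a) / m;
    double s = f(a) + f(b);
    for (int i = 1; i < m; i++)
        s += (i % 2 ? 4.0 : 2.0) * f(a + i * h);   /* interior weights 4, 2, 4, ... */
    return s * h / 3.0;
}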
A Gaussian integration formula is obtained when one wants to get both the coefficients $c_j$ and the points $x_j$ in an integral formula so that the integral formula gives exact results for polynomials of an order as high as possible. Two examples are:

1. Gaussian formula with 2 points:

$$\int\limits_{-h}^{h} f(x)dx = h\left[f\left(-\frac{h}{\sqrt{3}}\right) + f\left(\frac{h}{\sqrt{3}}\right)\right] + \frac{h^5}{135}f^{(4)}(\xi)$$

2. Gaussian formula with 3 points:

$$\int\limits_{-h}^{h} f(x)dx = \frac{h}{9}\left[5f\left(-h\sqrt{\tfrac{3}{5}}\right) + 8f(0) + 5f\left(h\sqrt{\tfrac{3}{5}}\right)\right] + \frac{h^7}{15750}f^{(6)}(\xi)$$
8.7 Derivatives
There are several formulas for the numerical calculation of $f'(x)$:

Forward differentiation:

$$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{1}{2}hf''(\xi)$$

Backward differentiation:

$$f'(x) = \frac{f(x) - f(x-h)}{h} + \frac{1}{2}hf''(\xi)$$

Central differentiation:

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6}f'''(\xi)$$
The approximation is better if more function values are used:

$$f'(x) = \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} + \frac{h^4}{30}f^{(5)}(\xi)$$

There are also formulas for higher derivatives:

$$f''(x) = \frac{-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)}{12h^2} + \frac{h^4}{90}f^{(6)}(\xi)$$
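A small C sketch of the central and five-point approximations of $f'(x)$ (function names are illustrative assumptions):

/* Central difference: error O(h^2). */
double DCentral(double (*f)(double), double x, double h)
{
    return (f(x + h) - f(x - h)) / (2.0 * h);
}

/* Five-point formula: error O(h^4). */
double DFivePoint(double (*f)(double), double x, double h)
{
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h);
}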
8.8 Differential equations
We start with the first order DE $y'(x) = f(x, y)$ for $x > x_0$ and initial condition $y(x_0) = y_0$. Suppose we find approximations $z_1, z_2, \ldots, z_n$ for $y(x_1), y(x_2), \ldots, y(x_n)$. Then we can derive some formulas to obtain $z_{n+1}$ as approximation for $y(x_{n+1})$:

Euler (single step, explicit):

$$z_{n+1} = z_n + hf(x_n, z_n) + \frac{h^2}{2}y''(\xi)$$

Midpoint rule (two steps, explicit):

$$z_{n+1} = z_{n-1} + 2hf(x_n, z_n) + \frac{h^3}{3}y'''(\xi)$$

Trapezoid rule (single step, implicit):

$$z_{n+1} = z_n + \frac{1}{2}h\left(f(x_n, z_n) + f(x_{n+1}, z_{n+1})\right) - \frac{h^3}{12}y'''(\xi)$$

Runge-Kutta methods are an important class of single-step methods. They work so well because the solution $y(x)$ can be written as:

$$y_{n+1} = y_n + hf(\xi_n, y(\xi_n)) \quad\mbox{with}\quad \xi_n \in (x_n, x_{n+1})$$

Because $\xi_n$ is unknown some measurements are done on the increment function $k = hf(x, y)$ in well chosen points near the solution. Then one takes for $z_{n+1} - z_n$ a weighted average of the measured values.
One of the possible 3rd order Runge-Kutta methods is given by:
$$\begin{array}{rcl}
k_1 &=& hf(x_n, z_n) \\
k_2 &=& hf(x_n + \frac{1}{2}h,\ z_n + \frac{1}{2}k_1) \\
k_3 &=& hf(x_n + \frac{3}{4}h,\ z_n + \frac{3}{4}k_2) \\
z_{n+1} &=& z_n + \frac{1}{9}(2k_1 + 3k_2 + 4k_3)
\end{array}$$
and the classical 4th order method is:
$$\begin{array}{rcl}
k_1 &=& hf(x_n, z_n) \\
k_2 &=& hf(x_n + \frac{1}{2}h,\ z_n + \frac{1}{2}k_1) \\
k_3 &=& hf(x_n + \frac{1}{2}h,\ z_n + \frac{1}{2}k_2) \\
k_4 &=& hf(x_n + h,\ z_n + k_3) \\
z_{n+1} &=& z_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)
\end{array}$$
Often the accuracy is increased by adjusting the stepsize for each step with the estimated error. Step
doubling is most often used for 4th order Runge-Kutta.
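A minimal C sketch of one classical 4th order Runge-Kutta step (the function name is an illustrative assumption):

/* One classical 4th order Runge-Kutta step: advance z from x to x + h. */
double RungeKutta4Step(double (*f)(double, double), double x, double z, double h)
{
    double k1 = h * f(x, z);
    double k2 = h * f(x + 0.5 * h, z + 0.5 * k1);
    double k3 = h * f(x + 0.5 * h, z + 0.5 * k2);
    double k4 = h * f(x + h, z + k3);
    return z + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
}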
8.9 The fast Fourier transform
The Fourier transform of a function can be approximated when some discrete points are known. Suppose we have $N$ successive samples $h_k = h(t_k)$ with $t_k = k\Delta$, $k = 0, 1, 2, \ldots, N-1$. Then the discrete Fourier transform is given by:

$$H_n = \sum_{k=0}^{N-1} h_k\,{\rm e}^{2\pi ikn/N}$$

and the inverse Fourier transform by

$$h_k = \frac{1}{N}\sum_{n=0}^{N-1} H_n\,{\rm e}^{-2\pi ikn/N}$$

This operation is of order $N^2$. It can be faster, of order $N\log_2(N)$, with the fast Fourier transform. The basic idea is that a Fourier transform of length $N$ can be rewritten as the sum of two discrete Fourier transforms, each of length $N/2$. One is formed from the even-numbered points of the original $N$, the other from the odd-numbered points.
This can be implemented as follows. The array data[1..2*nn] contains on the odd positions the real and on the even positions the imaginary parts of the input data: data[1] is the real part and data[2] the imaginary part of $f_0$, etc. The next routine replaces the values in data by their discrete Fourier transformed values if isign = 1, and by their inverse transformed values if isign = -1. nn must be a power of 2.
#include <math.h>
#define SWAP(a,b) tempr=(a);(a)=(b);(b)=tempr
void FourierTransform(float data[], unsigned long nn, int isign)
{
unsigned long n, mmax, m, j, istep, i;
double wtemp, wr, wpr, wpi, wi, theta;
float tempr, tempi;
n = nn << 1;
j = 1;
for (i = 1; i < n; i += 2)
{
if ( j > i )
{
SWAP(data[j], data[i]);
SWAP(data[j+1], data[i+1]);
}
m = n >> 1;
while ( m >= 2 && j > m )
{
j -= m;
m >>= 1;
}
j += m;
}
mmax = 2;
while ( n > mmax ) /* Outermost loop, is executed log2(nn) times */
{
istep = mmax << 1;
theta = isign * (6.28318530717959/mmax);
wtemp = sin(0.5 * theta);
wpr = -2.0 * wtemp * wtemp;
wpi = sin(theta);
wr = 1.0;
wi = 0.0;
for (m = 1; m < mmax; m += 2)
{
for (i = m; i <= n; i += istep) /* Danielson-Lanczos equation */
{
j = i + mmax;
tempr = wr * data[j] - wi * data[j+1];
tempi = wr * data[j+1] + wi * data[j];
data[j] = data[i] - tempr;
data[j+1] = data[i+1] - tempi;
data[i] += tempr;
data[i+1] += tempi;
}
wr = (wtemp = wr) * wpr - wi * wpi + wr;
wi = wi * wpr + wtemp * wpi + wi;
}
mmax=istep;
}
}
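A short, hypothetical usage sketch of the routine above (the sample signal and array handling are assumptions for illustration; the routine is assumed to be declared earlier in the same file):

#include <stdio.h>

/* Transform nn = 8 complex samples stored as data[1..16] (re, im interleaved). */
int main(void)
{
    float data[17] = { 0.0f };          /* index 0 unused: NR-style 1-based array */
    for (unsigned long k = 0; k < 8; k++)
        data[2 * k + 1] = (float)k;     /* real parts 0..7, imaginary parts 0 */

    FourierTransform(data, 8, 1);       /* forward transform */
    for (int n = 0; n < 8; n++)
        printf("H_%d = %g + %g i\n", n, data[2 * n + 1], data[2 * n + 2]);
    return 0;
}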
