
Mathematics Data Book

for Part I of the Engineering Tripos



S/26

2002 Edition

Cambridge University Engineering Department

1. Complex Variables

General

z = x + iy = r (cos θ + i sin θ) = r e^(iθ)      where i² = −1 and −π < θ ≤ π

Real part Re(z) = x Imaginary part Im(z) = y

For integer n,

e^(2nπi) = 1

z = r e^(i(θ + 2nπ))

z^a = r^a exp[ i a (θ + 2nπ) ]      ln z = ln r + i (θ + 2nπ)

Complex conjugate

z̄ = x − iy = r e^(−iθ)      z z̄ = |z|²  which is purely real

(z* is also used to denote the complex conjugate)

The conjugate of a quotient:  (z₁/z₂)* = z₁*/z₂*

Argand diagram

z is plotted as the point with coordinates (Re(z), Im(z)), at distance r = |z| from the origin and at angle θ = arg(z) to the Re(z) axis.

De Moivre's Theorem

(cos θ + i sin θ)^n = e^(inθ) = cos nθ + i sin nθ
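The data book itself contains no code; as an illustrative addition, the short Python sketch below uses the standard-library cmath module to check the polar form and De Moivre's theorem numerically.

```python
import cmath

z = 3 + 4j                         # z = x + iy
r, theta = cmath.polar(z)          # r = |z|, theta = arg(z), with -pi < theta <= pi
print(r, theta)                    # 5.0  0.9272...
print(r * cmath.exp(1j * theta))   # recovers (3+4j)

# De Moivre: (cos t + i sin t)^n = cos nt + i sin nt
n, t = 5, 0.3
lhs = (cmath.cos(t) + 1j * cmath.sin(t)) ** n
rhs = cmath.cos(n * t) + 1j * cmath.sin(n * t)
print(abs(lhs - rhs) < 1e-12)      # True
```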

2. Limits

n^s x^n → 0  as n → ∞  if |x| < 1, for any real value of s

x^n / n! → 0  as n → ∞

(1 + x/n)^n → e^x  as n → ∞

x^s ln x → 0  as x → 0, where s > 0.


3. Trigonometric Functions

sin x = (e^(ix) − e^(−ix)) / 2i            cos x = (e^(ix) + e^(−ix)) / 2

sin (A ± B) = sin A cos B ± cos A sin B
cos (A ± B) = cos A cos B ∓ sin A sin B
tan (A ± B) = (tan A ± tan B) / (1 ∓ tan A tan B)

sin A + sin B = 2 sin((A + B)/2) cos((A − B)/2)
sin A − sin B = 2 cos((A + B)/2) sin((A − B)/2)
cos A + cos B = 2 cos((A + B)/2) cos((A − B)/2)
cos A − cos B = −2 sin((A + B)/2) sin((A − B)/2)

sin A sin B = ½ [cos (A − B) − cos (A + B)]
cos A cos B = ½ [cos (A + B) + cos (A − B)]
sin A cos B = ½ [sin (A + B) + sin (A − B)]

sin²x = ½ [1 − cos 2x]                     cos²x = ½ [1 + cos 2x]
sin³x = ¼ [3 sin x − sin 3x]               cos³x = ¼ [3 cos x + cos 3x]

4. Hyperbolic Functions

cosh x = (e^x + e^(−x)) / 2                sinh x = (e^x − e^(−x)) / 2

cosh ix = cos x                            cos ix = cosh x
sinh ix = i sin x                          sin ix = i sinh x

cosh (x ± y) = cosh x cosh y ± sinh x sinh y
sinh (x ± y) = sinh x cosh y ± cosh x sinh y

cosh (x ± iy) = cosh x cos y ± i sinh x sin y
sinh (x ± iy) = sinh x cos y ± i cosh x sin y

5. Error Function (Gaussian Integral)

erf x = (2/√π) ∫₀^x exp(−u²) du

x        0      .25    .5     .75    1      1.25   1.5    2      ∞
erf x    0      .276   .520   .711   .843   .923   .966   .995   1

For λ > 0,

∫₀^∞ exp(−λu²) du = [π/(4λ)]^(1/2)

∫_{−∞}^{∞} exp(−λu²) du = [π/λ]^(1/2)
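The tabulated values of erf x above can be reproduced with Python's standard-library math.erf; this check is an addition to the data book, not part of it.

```python
import math

# erf x = (2/sqrt(pi)) * integral from 0 to x of exp(-u^2) du  (math.erf)
for x in (0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0):
    print(f"erf({x}) = {math.erf(x):.3f}")
# prints 0.276, 0.520, 0.711, 0.843, 0.923, 0.966, 0.995 as in the table
```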

6. Series

Arithmetic

Sn = a + (a + d) + (a + 2d) + ... + (a + (n − 1)d) = (n/2) [2a + (n − 1)d]

Geometric

Sn = a + ar + ar² + ... + ar^(n−1) = a(1 − r^n)/(1 − r)

S∞ = a/(1 − r)      provided |r| < 1

Binomial expansion

(1 + x)^n = 1 + nx + (n(n − 1)/2!) x² + (n(n − 1)(n − 2)/3!) x³ + ...

If n is a positive integer the series terminates and is valid for all x. The general term is then ⁿCᵣ x^r, also written (n r) x^r, where ⁿCᵣ = n! / (r!(n − r)!).

When n is not a positive integer, the series does not terminate; the resulting infinite series is convergent for |x| < 1.

Taylor series

For a function of a single variable (real or complex)

f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + ... + (h^n/n!) f⁽ⁿ⁾(x) + ...

(When x = 0 this is often called a Maclaurin series)

For two variables

f(x + h, y + k) = f(x, y) + [ h ∂f/∂x + k ∂f/∂y ] + (1/2!) [ h² ∂²f/∂x² + 2hk ∂²f/∂x∂y + k² ∂²f/∂y² ] + ...

in which subsequent square brackets involve the binomial coefficients (1, 3, 3, 1), (1, 4, 6, 4, 1), etc., and all the derivatives are evaluated at (x, y).
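As an illustrative addition (assuming the SymPy library is available), the Taylor series of f(x + h) can be generated symbolically and compared with the formula above.

```python
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sin(x)

# expand f(x + h) in powers of h about h = 0, keeping terms up to h^4
print(sp.series(f.subs(x, x + h), h, 0, 5))
# sin(x) + h*cos(x) - h**2*sin(x)/2 - h**3*cos(x)/6 + h**4*sin(x)/24 + O(h**5)
```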


Integer series

Σ_{n=1}^{N} n(n + 1)(n + 2) ... (n + r) = N(N + 1)(N + 2) ... (N + r)(N + r + 1) / (r + 2)

Σ_{n=1}^{∞} (−1)^(n+1)/n = 1 − 1/2 + 1/3 − 1/4 + ... = ln 2          (see expansion of ln (1 + z))

Σ_{n=1}^{∞} (−1)^(n+1)/(2n − 1) = 1 − 1/3 + 1/5 − 1/7 + ... = π/4    (see expansion of tan⁻¹ z)

Σ_{n=1}^{∞} 1/n² = 1 + 1/4 + 1/9 + 1/16 + ... = π²/6

Power series (Valid for real and complex numbers)

e^z = 1 + z + z²/2! + ... + z^n/n! + ...                  convergent for all z

sin z = z − z³/3! + z⁵/5! − ...                           convergent for all z

cos z = 1 − z²/2! + z⁴/4! − ...                           convergent for all z

sinh z = z + z³/3! + z⁵/5! + ...                          convergent for all z

cosh z = 1 + z²/2! + z⁴/4! + ...                          convergent for all z

tan z = z + z³/3 + 2z⁵/15 + 17z⁷/315 + ...                convergent for |z| < π/2

sin⁻¹ z = z + z³/(2·3) + (1·3/(2·4)) z⁵/5 + ...           convergent for |z| < 1

tan⁻¹ z = z − z³/3 + z⁵/5 − ...                           convergent both on and within the circle |z| = 1
                                                          except at the points z = ± i

ln (1 + z) = z − z²/2 + z³/3 − ...                        Principal value of ln (1 + z) converges both on and
                                                          within the circle |z| = 1 except at the point z = −1


7. Differentiation

For vectors and scalars which are functions of a single variable

(uv)′ = u′v + uv′                 (u/v)′ = (u′v − uv′)/v²

(ua)′ = u′a + ua′

(a·b)′ = a′·b + a·b′              (a × b)′ = a′ × b + a × b′

Leibniz Theorem

(uv)⁽ⁿ⁾ = u⁽ⁿ⁾v + n u⁽ⁿ⁻¹⁾v′ + ... + ⁿCₚ u⁽ⁿ⁻ᵖ⁾v⁽ᵖ⁾ + ... + uv⁽ⁿ⁾

where ⁿCₚ ≡ (n p) = n! / (p!(n − p)!)

8. Partial Differentiation

Stationary points

A function φ(x, y) has a stationary point when ∂φ/∂x = ∂φ/∂y = 0.

Provided Δ is non-zero at a stationary point, where

Δ = (∂²φ/∂x²)(∂²φ/∂y²) − (∂²φ/∂x∂y)² ,

the following conditions on the second derivatives there determine whether it is a maximum, a minimum or a saddle point:

Maximum:        Δ > 0  and  ∂²φ/∂x² < 0

Minimum:        Δ > 0  and  ∂²φ/∂x² > 0

Saddle point:   all other cases for which Δ is non-zero.

The case Δ = 0 can be a maximum, a minimum, a saddle point, or none of these.

Total differential theorem

For a function φ(x, y, z, ...)

dφ = (∂φ/∂x) dx + (∂φ/∂y) dy + (∂φ/∂z) dz + ...

in which ∂φ/∂x means (∂φ/∂x)_(y,z,...) (i.e. with y, z, ... kept constant).

Chain rule

When x, y, z, ... are functions of u, v, w, ...

∂φ/∂u = (∂φ/∂x)(∂x/∂u) + (∂φ/∂y)(∂y/∂u) + (∂φ/∂z)(∂z/∂u) + ...

and similarly for ∂φ/∂v, ∂φ/∂w, ...

9. Differential Equations

Integrating factor

A first order o.d.e. of the form

dy/dx + P(x) y = Q(x)

can be integrated using the integrating factor exp(∫P dx), so that the equation takes the form

d/dx [ y exp(∫P dx′) ] = Q(x) exp(∫P dx′)
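A hedged illustration (not from the data book): assuming SymPy is available, dsolve solves this class of first-order linear equation, and its output can be compared with the integrating-factor working above.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx + 2y = exp(-x); the integrating factor is exp(int 2 dx) = exp(2x)
ode = sp.Eq(y(x).diff(x) + 2 * y(x), sp.exp(-x))
print(sp.dsolve(ode, y(x)))
# general solution: y = C1*exp(-2*x) + exp(-x)
```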

Particular integrals

For linear differential equations with constant coefficients:

Right-hand side                          Trial P.I.

constant                                 a
x^n (n integer)                          a x^n + b x^(n−1) + ...
e^(kx)                                   a e^(kx)
x e^(kx)                                 (a x + b) e^(kx)
x^n e^(kx)                               (a x^n + b x^(n−1) + ...) e^(kx)
sin px, cos px                           a sin px + b cos px
e^(kx) sin px, e^(kx) cos px             e^(kx) (a sin px + b cos px)

For the special case when the right hand side has an exponential or trigonometric factor which is also a solution of the differential equation:

Complementary Function                  Right-hand side                   Trial P.I.

e^(kx)                                  e^(kx)                            a x e^(kx)
e^(kx)                                  x^n e^(kx)                        (a x^(n+1) + b x^n + ...) e^(kx)
sin px, cos px                          sin px, cos px                    x (a sin px + b cos px)
e^(kx) sin px, e^(kx) cos px            e^(kx) sin px, e^(kx) cos px      x e^(kx) (a sin px + b cos px)

10. Integration
Standard indefinite integrals

Integrand            Integral                          Integrand              Integral

sin x                −cos x                            sinh x                 cosh x
cos x                sin x                             cosh x                 sinh x
tan x                −ln(cos x)                        tanh x                 ln(cosh x)
cosec x              ln(tan(x/2))                      cosech x               ln(tanh(x/2))
sec x                ln(tan x + sec x)                 sech x                 2 tan⁻¹(e^x)
cot x                ln(sin x)                         coth x                 ln(sinh x)
sec²x                tan x                             sech²x                 tanh x
tan x sec x          sec x                             tanh x sech x          −sech x
cot x cosec x        −cosec x                          coth x cosech x        −cosech x

1/√(a² − x²)         sin⁻¹(x/a)   or   −cos⁻¹(x/a)
1/√(x² + a²)         sinh⁻¹(x/a)  or   ln(x + √(x² + a²))
1/√(x² − a²)         cosh⁻¹(x/a)  or   ln(x + √(x² − a²))
1/(x² + a²)          (1/a) tan⁻¹(x/a)

Standard substitutions

If the integrand is a function of:                     Substitute:

(a² − x²)  or  √(a² − x²)                              x = a sin θ   or   x = a cos θ
(x² + a²)  or  √(x² + a²)                              x = a tan θ   or   x = a sinh θ
(x² − a²)  or  √(x² − a²)                              x = a sec θ   or   x = a cosh θ

or of the form  1/((ax + b)√(px + q))                  px + q = u²

or of the form  1/((ax + b)√(px² + qx + r))            ax + b = 1/u

or a rational function of sin x and/or cos x           t = tan(x/2)

    [ whence  sin x = 2t/(1 + t²),   cos x = (1 − t²)/(1 + t²) ]


Integration by parts

∫_a^b u (dv/dx) dx = [uv]_a^b − ∫_a^b v (du/dx) dx

Differentiation of an integral

(d/dx) ∫_{a(x)}^{b(x)} f(x, y) dy = f(x, b) db/dx − f(x, a) da/dx + ∫_{a(x)}^{b(x)} ∂f(x, y)/∂x dy

Change of variable in surface and volume integration

Surface:

∬_S f(x, y) dx dy = ∬_{S′} F(u, v) |J| du dv      where u(x, y) and v(x, y) are the new variables

and where   J = ∂(x, y)/∂(u, v) = | ∂x/∂u   ∂x/∂v |
                                  | ∂y/∂u   ∂y/∂v |      is the Jacobian.

For surface integrals involving vector normals

n dA ≡ n dx dy = ± (∂r/∂u × ∂r/∂v) du dv

and the sign is chosen to preserve the sense of the normal.

Volume
∭_V f(x, y, z) dx dy dz = ∭_{V′} F(u, v, w) |J| du dv dw

where   J ≡ ∂(x, y, z)/∂(u, v, w) = | ∂x/∂u   ∂x/∂v   ∂x/∂w |
                                    | ∂y/∂u   ∂y/∂v   ∂y/∂w |
                                    | ∂z/∂u   ∂z/∂v   ∂z/∂w |

Note

1/J = ∂(u, v, ...)/∂(x, y, ...)


11. Vector Products

Scalar Product

a · b = |a| |b| cos θ

(where θ is the angle between the vectors)

Vector Product

a × b = |a| |b| sin θ n

(where θ is the angle between the vectors, and n is a unit vector normal to the plane containing a and b such that a, b, n form a right-handed set)

a × b = | i    j    k   |   = − b × a
        | a_x  a_y  a_z |
        | b_x  b_y  b_z |

Scalar Triple Product

a · (b × c) = | a_x  a_y  a_z |   =   | a_x  b_x  c_x |
              | b_x  b_y  b_z |       | a_y  b_y  c_y |
              | c_x  c_y  c_z |       | a_z  b_z  c_z |

            = b · (c × a) = c · (a × b)
            = − a · (c × b) = − c · (b × a) = − b · (a × c)

The notation [a, b, c] is also used for a · (b × c).

Vector Triple Product

a × (b × c) = (a · c) b − (a · b) c

(a × b) × c = (a · c) b − (b · c) a
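As a numerical check (an addition to the data book, assuming NumPy is installed), the product rules and triple-product identities above can be verified directly:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 10.0])

print(np.dot(a, b))                       # a . b
print(np.cross(a, b))                     # a x b  (equals -(b x a))
print(np.dot(a, np.cross(b, c)))          # scalar triple product a . (b x c)
print(np.linalg.det(np.array([a, b, c]))) # same value as the determinant form

# vector triple product: a x (b x c) = (a.c) b - (a.b) c
lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
print(np.allclose(lhs, rhs))              # True
```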


12. Matrices and Linear Algebra

(AB ... N)ᵗ = Nᵗ ... Bᵗ Aᵗ               where ( )ᵗ denotes the transpose

(AB ... N)⁻¹ = N⁻¹ ... B⁻¹ A⁻¹           (if individual inverses exist)

det (AB ... N) = det A det B ... det N   (if individual matrices are square)

If A is square and if A⁻¹ exists (i.e. if det A ≠ 0), then Ax = b has a unique solution x = A⁻¹b

If A is square then Ax = 0 has a non-trivial solution if and only if det A = O.

For an orthogonal matrix

Q⁻¹ = Qᵗ,   det Q = ± 1.

Qt is also orthogonal.

If det Q = + 1 then Q describes a rotation without reflection.

Eigenvalues and Eigenvectors

If A is an n × n matrix, its eigenvalues λ and corresponding eigenvectors u satisfy Au = λu.

There are in general n eigenvalues λᵢ and corresponding eigenvectors uᵢ.

The eigenvalues are the roots of the n'th order polynomial equation

det (A - AI) = 0 where I is the identity matrix.

If A is real and symmetric the eigenvalues are real and the eigenvectors corresponding to different eigenvalues are orthogonal. For repeated eigenvalues, the corresponding eigenvectors can be chosen to be orthogonal. Furthermore,

UᵗAU = Λ   and   A = UΛUᵗ

where Λ is the diagonal matrix whose elements are the eigenvalues of A and U is the orthogonal matrix whose columns are the normalized eigenvectors of A.

Rayleigh's quotient

If x is an approximation to an eigenvector of A, then xᵗAx / xᵗx is a good approximation to the corresponding eigenvalue.
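The following sketch (an addition, assuming NumPy) illustrates the real-symmetric case and the Rayleigh quotient:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # real and symmetric

lam, U = np.linalg.eigh(A)               # columns of U are normalised eigenvectors
print(lam)                               # real eigenvalues, in ascending order
print(np.allclose(U.T @ A @ U, np.diag(lam)))   # U^t A U = Lambda

# Rayleigh quotient of a slightly perturbed eigenvector
x = U[:, 0] + 0.05 * np.random.randn(3)
print(x @ A @ x / (x @ x))               # close to lam[0]
```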


Material relevant to IB Linear Algebra

Rank

The rank, r, of a matrix is the number of independent rows, or columns.

Fundamental Subspaces of an m x n matrix A

The column space is the space spanned by the columns. It has dimension equal to the rank, r, and is a subspace of Rᵐ.

The nullspace is the space spanned by the solutions x of the equation Ax = 0. The nullspace has dimension n − r and is a subspace of Rⁿ.

The row space is the space spanned by the rows of A. It has dimension equal to r and is a subspace of Rⁿ.

The left-nullspace is the space spanned by the solutions y of the equation yᵗA = 0. It has dimension m − r, and is a subspace of Rᵐ.

The nullspace is the orthogonal complement of the row space in Rⁿ.

The left-nullspace is the orthogonal complement of the column space in Rᵐ.

For Ax = b to have a solution, b must lie in the column space, i.e. yᵗb = 0 for any y such that Aᵗy = 0.

Decompositions of an m × n matrix A

LU Decomposition

PA = LU, where P is a permutation matrix, L a lower triangular matrix and U an m X n echelon matrix.

QR Decomposition

A = QR, where the columns of Q are orthonormal, and R is upper-triangular and invertible. When m = n and so all matrices are square, Q is an orthogonal matrix.

Eigenvalue Decomposition (only for m = n)

Provided that A has n linearly independent eigenvectors, A = S Λ S⁻¹, where S has the eigenvectors of A as its columns, and Λ is a diagonal matrix with the eigenvalues along the diagonal.

If A is real and symmetric, see under Eigenvalues and Eigenvectors above.


Singular Value Decomposition

A = Q₁ Σ Q₂ᵗ      (orthogonal × diagonal × orthogonal)

• The columns of Q₁ (m × m) are the eigenvectors of AAᵗ

• The columns of Q₂ (n × n) are the eigenvectors of AᵗA

• The r singular values, arranged in descending order on the diagonal of Σ (m × n), are the square roots of the non-zero eigenvalues of both AAᵗ and AᵗA. r is the rank of the matrix.

Basis of column space:       first r columns of Q₁
Basis of left-nullspace:     last m − r columns of Q₁
Basis of row space:          first r columns of Q₂
Basis of nullspace:          last n − r columns of Q₂

General solution of Ax = b by Gaussian Elimination

1. Transform Ax = b into Ux = c .

2. Set all free variables to zero and find a particular solution Xo .

3. Set the RHS to zero, give each free variable in turn the value 1 while the others are zero, and solve to find a set of vectors which span the nullspace of A. Arrange these vectors as the columns of a matrix X.

4. The general solution is Xo + Xu, where u is arbitrary.

Least squares solution of Ax = b using QR

Solve Rx = Qᵗb by back-substitution.
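A short NumPy sketch (not part of the data book) of the QR least-squares recipe:

```python
import numpy as np

# overdetermined system A x = b (m > n)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.1, 2.9, 4.2])

Q, R = np.linalg.qr(A)              # A = QR, columns of Q orthonormal, R upper-triangular
x = np.linalg.solve(R, Q.T @ b)     # solve R x = Q^t b
print(x)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # agrees with lstsq
```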


13. Vector Calculus

φ is a scalar function of position and u a vector function.

Cartesian coordinates x, y, z:      u = u_x i + u_y j + u_z k

grad φ ≡ ∇φ ≡ i ∂φ/∂x + j ∂φ/∂y + k ∂φ/∂z

div u ≡ ∇·u ≡ ∂u_x/∂x + ∂u_y/∂y + ∂u_z/∂z

curl u ≡ ∇ × u ≡ | i      j      k     |
                 | ∂/∂x   ∂/∂y   ∂/∂z  |
                 | u_x    u_y    u_z   |

div (grad φ) ≡ ∇²φ ≡ ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z²      (the Laplacian operator)

Cylindrical polar coordinates r, θ, z:      u = u_r e_r + u_θ e_θ + u_z e_z

(e_r, e_θ and e_z are unit radial, tangential and axial vectors respectively)

grad φ ≡ ∇φ ≡ e_r ∂φ/∂r + e_θ (1/r) ∂φ/∂θ + e_z ∂φ/∂z

div u ≡ ∇·u ≡ (1/r) ∂(r u_r)/∂r + (1/r) ∂u_θ/∂θ + ∂u_z/∂z

curl u ≡ ∇ × u ≡ (1/r) | e_r    r e_θ   e_z   |
                       | ∂/∂r   ∂/∂θ    ∂/∂z  |
                       | u_r    r u_θ   u_z   |

div (grad φ) ≡ ∇²φ = (1/r) ∂/∂r ( r ∂φ/∂r ) + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z²

Spherical polar coordinates r, θ, ψ:      u = u_r e_r + u_θ e_θ + u_ψ e_ψ      where 0 ≤ θ ≤ π, 0 ≤ ψ ≤ 2π

(e_r, e_θ and e_ψ are unit radial, longitudinal and azimuthal vectors respectively)

grad φ ≡ ∇φ ≡ e_r ∂φ/∂r + (e_θ/r) ∂φ/∂θ + (e_ψ/(r sin θ)) ∂φ/∂ψ

div u ≡ ∇·u ≡ (1/r²) ∂(r² u_r)/∂r + (1/(r sin θ)) ∂(sin θ u_θ)/∂θ + (1/(r sin θ)) ∂u_ψ/∂ψ

curl u ≡ ∇ × u ≡ (1/(r² sin θ)) | e_r    r e_θ    r sin θ e_ψ |
                                | ∂/∂r   ∂/∂θ     ∂/∂ψ        |
                                | u_r    r u_θ    r sin θ u_ψ |

Spherical symmetry φ = φ(r), u = u(r) e_r      (e_r is a unit radial vector)

div u ≡ (1/r²) d(r² u)/dr

curl u ≡ 0


Potentials

A vector field u is said to be irrotational if ∇ × u = 0

A vector field u is said to be solenoidal or incompressible if ∇·u = 0

If ∇ × u = 0, then there exists a scalar potential φ such that u = ∇φ (for some applications it is more natural to use u = −∇φ).

If ∇·u = 0, then there exists a vector potential A such that u = ∇ × A. (A is usually chosen so that ∇·A = 0)

Identities

∇(φ₁ + φ₂) = ∇φ₁ + ∇φ₂

∇·(φu) = φ ∇·u + (∇φ)·u

∇×(φu) = φ ∇×u + (∇φ)×u

∇·(∇×u) = 0

∇×(∇φ) = 0

u × (∇×u) + (u·∇)u = ∇[ ½ u² ]

Gauss' Theorem (Divergence Theorem)

∬_S u·dA = ∭_V (∇·u) dV

for a closed surface S enclosing a volume V. The outward normal is taken for dA.

Stokes' Theorem

∬_S (∇ × u)·dA = ∮_C u·dl

for an open surface S with a closed boundary curve C (the 'rim'). The normal to the surface and the sense of the line integral are related by a right hand screw rule.


14. Fourier Series

Full range

For −π ≤ θ ≤ π

f(θ) = ½ a₀ + Σ_{n=1}^{∞} ( a_n cos nθ + b_n sin nθ )

where   a_n = (1/π) ∫_{−π}^{π} f(θ) cos nθ dθ,      b_n = (1/π) ∫_{−π}^{π} f(θ) sin nθ dθ

Equivalently

f(θ) = Σ_{n=−∞}^{∞} c_n e^(inθ)

where   c_n = (1/2π) ∫_{−π}^{π} f(θ) e^(−inθ) dθ

            = ½ (a_n − i b_n)        for n > 0
            = ½ (a_{−n} + i b_{−n})  for n < 0
            = ½ a₀                   for n = 0

If the function f(θ) is periodic, of period 2π, then these relationships are valid for all θ. The integrals may then be taken over any range of 2π.
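As an illustrative addition (assuming NumPy), the coefficients a_n and b_n can be evaluated numerically for a square wave and compared with the standard result b_n = 4/(nπ) for odd n:

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 200001)
dtheta = theta[1] - theta[0]
f = np.sign(theta)                       # odd square wave on (-pi, pi)

for n in range(1, 6):
    a_n = np.sum(f * np.cos(n * theta)) * dtheta / np.pi
    b_n = np.sum(f * np.sin(n * theta)) * dtheta / np.pi
    print(n, round(a_n, 4), round(b_n, 4))
# a_n ~ 0;  b_n ~ 4/(n*pi) for odd n and ~ 0 for even n
```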

Half range

If a Fourier series representation of f(θ) is required to be valid only in 0 ≤ θ ≤ π, then it need contain either the sine terms alone or the cosine terms alone. For example

f(θ) = ½ a₀ + Σ_{n=1}^{∞} a_n cos nθ      where a_n = (2/π) ∫₀^{π} f(θ) cos nθ dθ


General range 0 ≤ t ≤ T

f(t) = ½ a₀ + Σ_{n=1}^{∞} ( a_n cos(2πnt/T) + b_n sin(2πnt/T) )

where   a_n = (2/T) ∫₀^T f(t) cos(2πnt/T) dt,      b_n = (2/T) ∫₀^T f(t) sin(2πnt/T) dt

Equivalently   f(t) = Σ_{n=−∞}^{∞} c_n e^(i2πnt/T)      where c_n = (1/T) ∫₀^T f(t) e^(−i2πnt/T) dt

i.e.   f(t) = Σ_{n=−∞}^{∞} c_n e^(inω₀t)      where ω₀ = 2π/T

The (scientific) fundamental frequency is ω₀ = 2π/T and the (scientific) n'th harmonic is nω₀.

Examples

Some specific complex Fourier series are shown overleaf. Examples of specific real Fourier series can be found in the Electrical Data Book.

15. Fourier Transforms

ȳ(ω) = ∫_{−∞}^{∞} y(t) e^(−iωt) dt            y(t) = ∫_{−∞}^{∞} ȳ(ω) e^(iωt) dω/2π

Caution -

(a) Fourier transforms are sometimes written in terms of frequency f = ω/2π

(b) Some books handle the 2π factor differently and define transforms with differences in signs of the exponent


Examples of complex Fourier series (period T, ω₀ = 2π/T)

Half-wave rectified cosine wave:

f(t) = 1/π + ¼ e^(iω₀t) + ¼ e^(−iω₀t) + (1/π) Σ_{n even, n≠0} (±1) e^(inω₀t) / (n² − 1)

(signs alternate, + for n = 2)

p-phase rectified cosine wave (p ≥ 2):

f(t) = (p/π) sin(π/p) [ 1 + Σ_{n a multiple of p, n≠0} (±1) 2 e^(inω₀t) / (n² − 1) ]

(signs alternate, + for n = p)

Square wave:

f(t) = Σ_{n odd} (2/(inπ)) e^(inω₀t)

Triangular wave:

f(t) = (4/(iπ²)) Σ_{n odd} (±1) e^(inω₀t) / n²

(signs alternate, + for n = 1)

Saw-tooth wave:

f(t) = (1/(iπ)) Σ_{n≠0} (±1) e^(inω₀t) / n

(signs alternate, + for n = 1)

Pulse wave (unit-height pulses of width a, period T):

f(t) = (a/T) [ 1 + Σ_{n≠0} ( sin(nπa/T) / (nπa/T) ) e^(inω₀t) ]

Discrete Fourier Transform

The DFT of a sequence (x_n, n = 0, 1, ..., N−1) is defined by

X_k = Σ_{n=0}^{N−1} x_n e^(−i2πkn/N)        for 0 ≤ k ≤ N−1

with inverse DFT

x_n = (1/N) Σ_{k=0}^{N−1} X_k e^(i2πkn/N)       for 0 ≤ n ≤ N−1

Caution - Some books handle the 1/N factor differently and define transforms with differences in signs of the exponent
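The definition above uses the same convention as NumPy's FFT routines; the check below is an addition to the data book.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])
N = len(x)
n = np.arange(N)

# direct evaluation of the DFT sum
X = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

print(np.allclose(X, np.fft.fft(x)))     # forward transforms agree
print(np.allclose(x, np.fft.ifft(X)))    # inverse (with its 1/N factor) recovers x
```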

16. Laplace Transforms

x̄(s) = L{x(t)} = ∫_{0⁻}^{∞} x(t) e^(−st) dt

N.B. All functions in Laplace transform theory are zero for t < O.

Initial Value Theorem:

If the limit as s → +∞ of s x̄(s) is finite, then

x(0+) = lim_{s→+∞} s x̄(s)

Final Value Theorem:

Provided x(t) tends to a limit as t → ∞, then

lim_{t→∞} x(t) = lim_{s→0} s x̄(s)
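As an illustrative addition (assuming SymPy is available), individual entries in the table that follows can be checked symbolically:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

print(sp.laplace_transform(sp.exp(-a*t) * sp.sin(w*t), t, s, noconds=True))
# omega/(omega**2 + (a + s)**2)
print(sp.laplace_transform(t * sp.cos(w*t), t, s, noconds=True))
# (s**2 - omega**2)/(s**2 + omega**2)**2
```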


Table of Laplace Transforms

N.B. All functions in Laplace transform theory are zero for t < O.

Function (for t ≥ 0)            Transform                                  Remarks

e^(−at) x(t)                    x̄(s + a)                                   Shift in s
x(t − τ) H(t − τ)               e^(−sτ) x̄(s)                               Shift in t, τ ≥ 0
dx(t)/dt ≡ x′(t)                s x̄(s) − x(0)                              Differentiation
d²x(t)/dt² ≡ x″(t)              s² x̄(s) − s x(0) − x′(0)
dⁿx(t)/dtⁿ ≡ x⁽ⁿ⁾(t)            sⁿ x̄(s) − sⁿ⁻¹ x(0) − ... − x⁽ⁿ⁻¹⁾(0)
∫₀ᵗ x(τ) dτ                     x̄(s)/s                                     Integration
∫₀ᵗ x₁(τ) x₂(t − τ) dτ          x̄₁(s) x̄₂(s)                               Convolution
t x(t)                          −d x̄(s)/ds
1 ≡ H(t) ≡ u(t)                 1/s                                         Heaviside step function
δ(t)                            1                                           Dirac delta function
H(t − τ)                        s⁻¹ e^(−sτ)                                 τ ≥ 0
δ(t − τ)                        e^(−sτ)                                     τ ≥ 0

Function (for t ≥ 0)            Transform

t                               1/s²
tⁿ                              n!/sⁿ⁺¹
e^(−at)                         1/(s + a)
tⁿ e^(−at)                      n!/(s + a)ⁿ⁺¹
sin ωt                          ω/(s² + ω²)
cos ωt                          s/(s² + ω²)
e^(−at) sin ωt                  ω/((s + a)² + ω²)
e^(−at) cos ωt                  (s + a)/((s + a)² + ω²)
t sin ωt                        2ωs/(s² + ω²)²
t cos ωt                        (s² − ω²)/(s² + ω²)²
sinh ωt                         ω/(s² − ω²)
cosh ωt                         s/(s² − ω²)

17. Numerical Analysis

Finding roots of equations

Simple iteration

A method which sometimes works for an equation of the form x = f(x) is to iterate

x_(n+1) = f(x_n)

Newton-Raphson

If the equation is f(x) = 0 and x_n is an approximation to a root, then a usually better approximation x_(n+1) is given by

x_(n+1) = x_n − f(x_n)/f′(x_n)
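A minimal Python sketch of the iteration (an addition to the data book):

```python
def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# root of x^2 - 2 = 0 starting from x0 = 1
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))   # 1.4142135623...
```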

Numerical evaluation of Integrals

Trapezium Rule

∫_a^(a+h) y dx ≈ (h/2) [ y(a) + y(a+h) ]

Thus, if the interval (a, b) is divided using n equal intervals each of length h,

∫_a^b y dx ≈ (h/2) [ y₀ + 2 (y₁ + y₂ + ... + y_(n−1)) + y_n ]

Simpson's Rule

∫_a^(a+2h) y dx ≈ (h/3) [ y(a) + 4 y(a+h) + y(a+2h) ]

Thus if the interval (a, b) is divided using n equal intervals, each of length h, with n even,

∫_a^b y dx ≈ (h/3) [ y₀ + 4 (y₁ + y₃ + ... + y_(n−1)) + 2 (y₂ + y₄ + ... + y_(n−2)) + y_n ]


Finite differences

One-sided:      du/dt ≈ (u^(n+1) − u^n) / Δt

Centred:        du/dt ≈ (u^(n+1) − u^(n−1)) / (2Δt)

Integration of the generic ODE

du/dt = f(u, t)

"Forward Euler"

(u^(n+1) − u^n) / Δt = f(u^n, t^n)

"Predictor-Corrector Method"

(i)   (u* − u^n) / Δt = f(u^n, t^n)

(ii)  (u^(n+1) − u^n) / Δt = ½ [ f(u^n, t^n) + f(u*, t^(n+1)) ]

where u* is the predicted value from step (i).

"Fourth-order Runge Kutta method"

u^(n+1) = u^n + (1/6) (k₁ + 2k₂ + 2k₃ + k₄)

where   k₁ = Δt f(u^n, t^n)
        k₂ = Δt f(u^n + k₁/2, t^n + Δt/2)
        k₃ = Δt f(u^n + k₂/2, t^n + Δt/2)
        k₄ = Δt f(u^n + k₃, t^n + Δt)
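An illustrative Python implementation (an addition to the data book):

```python
def rk4_step(f, u, t, dt):
    # one step of the fourth-order Runge-Kutta method for du/dt = f(u, t)
    k1 = dt * f(u, t)
    k2 = dt * f(u + k1 / 2.0, t + dt / 2.0)
    k3 = dt * f(u + k2 / 2.0, t + dt / 2.0)
    k4 = dt * f(u + k3, t + dt)
    return u + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# du/dt = -u with u(0) = 1, integrated to t = 1; exact answer exp(-1) = 0.36788
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(lambda u, t: -u, u, t, dt)
    t += dt
print(u)
```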

Least-squares curve fitting

Straight line: y = a + bx

    a n + b Σᵢ xᵢ = Σᵢ yᵢ
    a Σᵢ xᵢ + b Σᵢ xᵢ² = Σᵢ xᵢ yᵢ

Quadratic: y = a + bx + cx²

    a n + b Σᵢ xᵢ + c Σᵢ xᵢ² = Σᵢ yᵢ
    a Σᵢ xᵢ + b Σᵢ xᵢ² + c Σᵢ xᵢ³ = Σᵢ xᵢ yᵢ
    a Σᵢ xᵢ² + b Σᵢ xᵢ³ + c Σᵢ xᵢ⁴ = Σᵢ xᵢ² yᵢ
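The normal equations above can be assembled and solved directly; the sketch below (an addition, assuming NumPy) fits the straight line and checks the answer against numpy.polyfit.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
n = len(x)

# straight line y = a + b x: normal equations in matrix form
M = np.array([[n, x.sum()],
              [x.sum(), (x**2).sum()]])
rhs = np.array([y.sum(), (x * y).sum()])
a, b = np.linalg.solve(M, rhs)
print(a, b)

# numpy.polyfit returns the same line (coefficients highest power first)
print(np.polyfit(x, y, 1))
```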

The cubic Ferguson Curve

r(t) = (1 − 3t² + 2t³) r(0) + (3t² − 2t³) r(1) + (t − 2t² + t³) r′(0) + (−t² + t³) r′(1)

The cubic Bezier Curve

r(t) = (1 − t)³ p(0) + 3t (1 − t)² p(1) + 3t² (1 − t) p(2) + t³ p(3)
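A short evaluation routine for the Bezier form (an addition to the data book, assuming NumPy for the point arithmetic):

```python
import numpy as np

def bezier(t, p):
    # cubic Bezier curve r(t) for control points p[0]..p[3], 0 <= t <= 1
    return ((1 - t)**3 * p[0] + 3 * t * (1 - t)**2 * p[1]
            + 3 * t**2 * (1 - t) * p[2] + t**3 * p[3])

p = [np.array(q) for q in [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, bezier(t, p))
# r(0) = p(0), r(1) = p(3); the end tangents lie along p(0)p(1) and p(2)p(3)
```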


18. Probability and Statistics

Discrete Random Variables

The probability that a random variable X takes the value r is denoted P(X = r) or Pr. The

mean, or expected value, of X is denoted E[X] and its variance Var[X]. The function g(z) is said to be a generating function for X if,

g(z) = Σ_{all r} p_r z^r

With this definition:

E[X] = μ = g′(1)

Var[X] = σ² = E[X²] − μ² = g″(1) + g′(1) − g′(1)²

Distribution    Parameters       P(X = r) = p_r                 g(z)            E[X]      Var[X]
                (q = 1 − p)

Bernoulli       0 < p < 1        p^r q^(1−r),   r = 0, 1        q + pz          p         pq

Binomial        n, 0 < p < 1     ⁿCᵣ p^r q^(n−r),  r = 0...n    (q + pz)^n      np        npq

Geometric       0 < p < 1        q^r p,   r = 0...∞             p/(1 − qz)      q/p       q/p²

Poisson         λ > 0            e^(−λ) λ^r / r!,  r = 0...∞    e^(λ(z−1))      λ         λ

Continuous Random Variables

The probability that a random variable X takes a value in the range (x, x + dx) is denoted f(x) dx. The cumulative probability function P(X ≤ x) is denoted F(x). The mean or expected value of X is denoted E[X] and its variance Var[X]. The function g(s) is said to be a generating function for X if,

g(s) = ∫_{all x} e^(−sx) f(x) dx

With this definition:

E[X] = μ = −g′(0)

Var[X] = σ² = E[X²] − μ² = g″(0) − g′(0)²


Uniform (a < b):
    f(x) = 1/(b − a)  for a ≤ x ≤ b,  0 otherwise
    g(s) = (e^(−as) − e^(−bs)) / (s(b − a)),   E[X] = (a + b)/2,   Var[X] = (b − a)²/12

Exponential (λ > 0):
    f(x) = λ e^(−λx)  for x ≥ 0,  0 otherwise
    g(s) = λ/(λ + s),   E[X] = 1/λ,   Var[X] = 1/λ²

Normal or Gaussian (σ > 0):
    f(x) = (1/(σ√(2π))) exp( −½ ((x − μ)/σ)² )  for −∞ < x < ∞
    g(s) = exp( −sμ + ½ s²σ² ),   E[X] = μ,   Var[X] = σ²

Standard Normal:
    f(x) = (1/√(2π)) exp( −½ x² )  for −∞ < x < ∞
    g(s) = exp( ½ s² ),   E[X] = 0,   Var[X] = 1

Erlang-k (k > 0, μ > 0):
    f(x) = μk (μkx)^(k−1) e^(−μkx) / (k − 1)!  for x ≥ 0,  0 otherwise
    g(s) = ( kμ/(kμ + s) )^k,   E[X] = 1/μ,   Var[X] = 1/(kμ²)

Standard Normal Distribution

If X has a normal distribution with mean μ and standard deviation σ (denoted X ~ N(μ, σ)), then Y = (X − μ)/σ has a normal distribution with mean 0 and standard deviation 1 (i.e. Y ~ N(0, 1)).

N(0, 1) is referred to as the standard normal distribution.

Tables of the cumulative probability function F(z) = (1/√(2π)) ∫_{x=−∞}^{z} exp(−½ x²) dx for the standard normal distribution, which is usually denoted Φ(z), are given below.


z .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
.7 .7580 .7611 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9279 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990
3.1 .9990 .9991 .9991 .9991 .9992 .9992 .9992 .9992 .9993 .9993
3.2 .9993 .9993 .9994 .9994 .9994 .9994 .9994 .9995 .9995 .9995
3.3 .9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 .9996 .9997
3.4 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9998

z             1.282   1.645   1.960   2.326   2.576   3.090   3.291   3.891    4.417
Φ(z)          .90     .95     .975    .99     .995    .999    .9995   .99995   .999995
2(1 − Φ(z))   .20     .10     .05     .02     .01     .002    .001    .0001    .00001
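The tabulated values of Φ(z) can be reproduced from the error function; the short check below (standard-library Python only) is an addition to the data book.

```python
import math

def Phi(z):
    # cumulative standard normal distribution, written in terms of math.erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for z in (0.0, 1.0, 1.282, 1.645, 1.960, 2.326, 3.090):
    print(f"Phi({z}) = {Phi(z):.4f}")
# 0.5000, 0.8413, 0.9000, 0.9500, 0.9750, 0.9900, 0.9990 as in the tables
```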
