Discrete Uniform Distribution

The discrete uniform distribution applies to situations where only a finite number of outcomes are possible and all of them are equally likely. For instance, if you toss a fair coin there are two possible outcomes: head or tail. If we assume that we win $0 if heads comes up and $1 if tails comes up, then X = our winnings is a random variable with two possible values, 0 and 1, each of which is equally likely (and thus has probability 1/2). If we toss a fair 6-sided die, then the random variable is the number that turns up, which is any of 1, 2, 3, 4, 5, or 6. All of these have equal probability, and thus each has probability 1/6.
In general, the discrete uniform random variable that can assume any integer from a to b has the probability mass function

u(t; a, b) = \frac{1}{b - a + 1}

Notice that this function is a constant, independent of t: all values have equal probability.
The probability distribution function is given by

U(t; a, b) = \begin{cases} 0 & \text{if } t < a, \\ \dfrac{t - a + 1}{b - a + 1} & \text{if } a \le t \le b \text{ and } t \text{ is an integer}, \\ 1 & \text{if } t > b. \end{cases}
The expected value of the discrete uniform distribution is the midpoint between a and b, E(X) = (a + b)/2, and the variance is Var(X) = \frac{(b - a + 1)^2 - 1}{12}.
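
As a quick check of these formulas, here is a minimal Python sketch (not part of the original workbook) for a fair die, using scipy.stats.randint; note that randint excludes its upper endpoint, so b + 1 is passed to cover the integers a, ..., b.

from scipy.stats import randint

a, b = 1, 6                # a fair six-sided die
die = randint(a, b + 1)    # scipy's discrete uniform excludes the upper bound

print(die.pmf(3))          # u(3; 1, 6) = 1/(b - a + 1) = 1/6
print(die.cdf(4))          # U(4; 1, 6) = (4 - 1 + 1)/6 = 2/3
print(die.mean())          # (a + b)/2 = 3.5
print(die.var())           # ((b - a + 1)**2 - 1)/12 = 35/12
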

[Worksheet "Discrete Uniform Distribution" (a = 1, b = 6): columns of t values with the density u(t), equal to 0.166667 for t = 1, ..., 6 and 0 otherwise, and the cumulative U(t), obtained from the density by adding all values up to and including t. Charts: "Uniform density function" and "Uniform Distribution" (cumulative).]
Binomial Distribution


For the binomial distribution we have two parameters: n, which is the number of trials, and p, which is the probability of success at each trial. The random variable of interest is X = the number of successes in n trials. Think of tossing a coin where heads represents success and its probability is p; then the binomial random variable is the number of heads if we toss the coin n times. The formula for the probability density function, which is the probability that the binomial random variable has the value t, is given by

b(t; n, p) = \frac{n!}{t!\,(n-t)!}\, p^t (1-p)^{n-t}

In this notation, the function b is a function of t; n and p are assumed fixed parameters. The value t can assume any of the values 0, 1, ..., n. In the worksheet called binomial we have used the Excel function BINOMDIST to calculate this formula. This function takes 4 arguments. The first three correspond to t, n, and p. The last one is a logical value: if it is false, BINOMDIST calculates the density function, and if it is true it calculates the distribution function.
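
The same calculation can be sketched outside Excel. The following Python lines (an illustration, not part of the workbook) evaluate the formula for b(t; n, p) directly with math.comb and compare it against scipy.stats.binom.pmf, which plays the role of BINOMDIST with its last argument set to FALSE; n = 10 and p = 0.2 are the values that appear on the binomial worksheet.

from math import comb
from scipy.stats import binom

n, p = 10, 0.2                       # the parameter values used on the binomial worksheet

def b(t, n, p):
    # b(t; n, p) = n!/(t!(n-t)!) * p^t * (1-p)^(n-t)
    return comb(n, t) * p**t * (1 - p)**(n - t)

for t in range(n + 1):
    assert abs(b(t, n, p) - binom.pmf(t, n, p)) < 1e-12
print(b(2, n, p))                    # 0.301989888, as in the worksheet's density column
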

The binomial probability distribution function can be obtained by forming the partial sums

B(t; n, p) = b(0; n, p) + b(1; n, p) + \cdots + b(t; n, p)

The expected value of a binomial distribution is E(X) = np and the variance is Var(X) = np(1-p).
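
The partial sums can be checked the same way. This short sketch (again illustrative Python rather than the worksheet's Excel formulas) accumulates the density values to get B(t; n, p) and compares the result with binom.cdf, the analogue of BINOMDIST with its last argument TRUE, and with the stated mean and variance.

from scipy.stats import binom

n, p, t = 10, 0.2, 3

B_t = sum(binom.pmf(k, n, p) for k in range(t + 1))   # b(0) + b(1) + ... + b(t)
print(B_t, binom.cdf(t, n, p))                        # both 0.879126...
print(binom.mean(n, p), binom.var(n, p))              # E(X) = np = 2.0, Var(X) = np(1-p) = 1.6
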

Change the values of n and p in the cells B1 and B2 and observe the change
in the graphs.

Applications in sampling with replacement


Imagine you want to choose a sample of people, say from a telephone book, and ask them whether they prefer Coke or Pepsi. Assume that in this group 60% prefer Coke and 40% Pepsi. Suppose you are careful so that the probabilities of choosing any of the individuals from the book are equally likely. If each choice is made with replacement (or the book is so large that repeats are negligible), then the number of people in the sample who prefer Coke is a binomial random variable with success probability p = 0.6.
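
For illustration, a small Python sketch of this situation; the sample size of 10 below is a made-up number, not taken from the text.

from scipy.stats import binom

p_coke = 0.6            # fraction of the population preferring Coke
sample_size = 10        # hypothetical sample size for illustration

coke_count = binom(sample_size, p_coke)
print(coke_count.pmf(6))          # probability exactly 6 of the 10 prefer Coke (~0.251)
print(1 - coke_count.cdf(5))      # probability 6 or more prefer Coke (~0.633)
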

[Worksheet "binomial" (n = 10, p = 0.2): columns for t = 0, 1, ..., 60 with the density b(t) computed by BINOMDIST (0.107374, 0.268435, 0.301990, ...) and the cumulative B(t) obtained by adding the density values up to and including t. Charts: "Binomial density function" and "Binomial Distribution" (cumulative). Change the n and p cells and observe the effect on both charts.]

Hypergeometric Distribution
This distribution arises in situations where sampling is done without replacement. For instance, again assume we are picking names at random and with equal probability from a telephone book that contains 1,000 names. Suppose that 600 people in this group like Coke and 400 like Pepsi. Now suppose we choose 50 people one at a time from the book randomly and then cross their names off so that they cannot be selected again, and count the number of people that prefer Coke. Contrast this situation with sampling with replacement. There the number of names was not important; only the probability of choosing Coke or Pepsi mattered. Here, in our first choice the probability of choosing a Coke lover is 600/1000 = 3/5. If we make the first choice and it turns out we have a Pepsi lover, then for the next choice the probability of choosing a Coke lover is 600/999. And if the first choice was a Coke lover, the next time the probability of choosing a Coke lover is 599/999. As we make our choices and cross off names the probabilities change, if somewhat slightly. The hypergeometric distribution is the distribution of the number of people out of n choices that are Coke lovers (or successes, in the general case of Bernoulli trials). There are three parameters: N is the total number of individuals (1,000 in our example), n is the number of samples (50 in our example), and m is the total number of successes (600 in our case). The distribution of the number of successes in n choices out of N total, with m total possible successes, is given by

f(t; N, n, m) = \frac{\binom{N-m}{n-t}\binom{m}{t}}{\binom{N}{n}}
When N is large with respect to n, the probability of choosing the same individual more than once (had we sampled with replacement) would be very small. In this case the hypergeometric and binomial distributions are very close to each other. For instance, in the example above, if there were 1,000,000 names in the telephone book (instead of 1,000) then it would be very unlikely that an individual is chosen twice.

Farid Alizadeh:
I just use the Excel function HYPGEOMDIST here. In the previous column I use the formula and the comb function.
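
Putting the formula and the binomial approximation side by side, here is a small Python sketch (not part of the workbook) using scipy.stats.hypergeom; note that scipy's parameter order (M, n, N) corresponds to (N, m, n) in the notation above.

from math import comb
from scipy.stats import hypergeom, binom

N, m, n = 1000, 600, 50      # population size, Coke lovers, sample size (as in the text)
t = 30                       # exactly 30 Coke lovers in the sample

by_formula = comb(N - m, n - t) * comb(m, t) / comb(N, n)
by_scipy   = hypergeom.pmf(t, N, m, n)     # scipy argument order: (k, M, n, N)
by_binom   = binom.pmf(t, n, m / N)        # sampling-with-replacement approximation

print(by_formula, by_scipy, by_binom)      # ~0.1175, ~0.1175, ~0.1146
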

[Worksheet "hypergeometric" (N = 1000, m = 600, n = 50): columns for t = 0, 1, ..., 50 with the density computed both from the formula and with HYPGEOMDIST, and the cumulative distribution obtained by summing the density. Charts: "Hypergeometric density" and "cumulative distribution".]

Poisson Distribution

Poisson random variables arise in situations where a discrete event occurs more or less at a stable rate, and we are interested in the number of events. For instance, when you are waiting for a bus and you are told that a bus comes every 20 minutes, this usually does not mean that the bus shows up right at the top of the hour, 20 after the hour and 40 after the hour. The actual arrival time is somewhat random, but on average there is one bus every 20 minutes. However, sometimes two buses may arrive in a 20-minute span and sometimes no bus arrives in that span. Another example is the number of customers visiting your store. Suppose on average two customers show up every minute. Then the number of customers showing up in a given span of 1 minute could be 0, 1, 2 or more; this random variable is Poisson. In general the probability mass function of a Poisson random variable with rate \lambda is given by

p(t; \lambda) = \frac{e^{-\lambda} \lambda^t}{t!}

Notice that, unlike the other discrete random variables, a Poisson variable can theoretically take an infinite number of values: t can be any of the whole numbers 0, 1, 2, ...

In Excel the Poisson probabilities can be calculated by the function POISSON, which takes 3 arguments. The first two are t and the rate \lambda. The third one is a logical value: if it is false, the Poisson probability mass function is calculated, and if it is true, the probability distribution function is calculated.

The Poisson probability distribution function is given by the partial sums

P(t; \lambda) = p(0; \lambda) + p(1; \lambda) + \cdots + p(t; \lambda)

The parameter \lambda is the rate of the Poisson distribution. The expected value of the Poisson distribution is E(X) = \lambda = Var(X).

Notice the Poisson model is not used only for events in a span of time. It could be used for events in a span of space. For instance, in the United States we may assume that there is one lake for each 100 square miles.
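
As an illustration (Python rather than the worksheet's POISSON calls, using the worksheet's rate of 4), the mass function and the partial-sum distribution function can be computed as follows.

from math import exp, factorial
from scipy.stats import poisson

lam = 4                                    # the rate used on the Poisson worksheet

def p(t, lam):
    # p(t; lambda) = e^(-lambda) * lambda^t / t!
    return exp(-lam) * lam**t / factorial(t)

print(p(0, lam))                           # 0.018316, as in the worksheet
print(sum(p(k, lam) for k in range(4)))    # P(3; lambda) = p(0) + ... + p(3) = 0.43347
print(poisson.cdf(3, lam))                 # the same value, the analogue of POISSON(3, 4, TRUE)
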
[Worksheet "Poisson" (rate = 4): columns for t = 0, 1, ..., 60 with the density p(t) computed by POISSON and the cumulative P(t) obtained by adding the density values up to and including t. Charts: "Poisson density function" and "Poisson distribution". Change the rate cell and observe the effect on the density and distribution charts.]

Continuous Uniform distribution

The uniform distribution in the continuous case is similar to the uniform discrete case. Here we have two parameters a and b, and our random variable can assume any number between a and b (and not just a fixed discrete set of numbers). Also, any point in this range is equally likely. Thus the probability density function of the uniform distribution equals 0 if t < a or t > b; otherwise u(t) is a constant. It is straightforward to see what this constant is: the area under the graph must be equal to one. Since it is a rectangle with width (b - a), its height should be 1/(b - a) so that the area becomes 1. The distribution function is similar to the discrete case, except that t need not be an integer and the +1 terms disappear:

U(t; a, b) = \begin{cases} 0 & \text{if } t < a, \\ \dfrac{t - a}{b - a} & \text{if } a \le t \le b, \\ 1 & \text{if } t > b. \end{cases}

The mean and variance of the uniform distribution are given by

E(X) = \frac{a + b}{2}, \qquad Var(X) = \frac{(b - a)^2}{12}
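
A minimal Python sketch (illustrative, not part of the workbook, using the worksheet's a = -2 and b = 3) of the density, distribution function, mean and variance:

from scipy.stats import uniform

a, b = -2, 3
U = uniform(loc=a, scale=b - a)    # scipy parameterizes by left endpoint and width

print(U.pdf(0))                    # 1/(b - a) = 0.2
print(U.cdf(1))                    # (t - a)/(b - a) = 3/5 = 0.6
print(U.mean(), U.var())           # (a + b)/2 = 0.5, (b - a)^2/12 = 25/12
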
[Worksheet "Continuous Uniform distribution" (a = -2, b = 3): columns of t values from -3 to 4 in steps of 0.2 with the density u(t), equal to 0.2 on [-2, 3] and 0 elsewhere, and the cumulative U(t) rising linearly from 0 to 1. Charts: "Continuous uniform density function" and "Uniform Distribution" (cumulative). Change the a and b cells and observe the effect on both charts.]

Exponential distribution

The setup for the exponential distribution is similar to the case of the Poisson distribution. Take the example of the store which typically has \lambda customers per minute (for example, if there are three customers per minute then \lambda = 3). In the Poisson case we fixed the time span, for example to one minute, and our random variable was the number of customers. In the exponential distribution we fix the number of customers, specifically to zero. The random variable is now the time span: if you are waiting in your store, how long does it take from now until the first customer arrives? This value is clearly a continuous random variable. If the assumptions we made in the Poisson distribution are still true, that is, the rate of customer arrival per minute is stable and the numbers of customers in two non-overlapping intervals are independent of each other, then the time required for the first customer to arrive follows the exponential distribution. If we represent this random variable by X, then its probability density function is

e(t) = \lambda e^{-\lambda t} \quad \text{for } \lambda > 0 \text{ and } t \ge 0

and the probability distribution function is given by

\Pr[X \le t] = E(t) = 1 - e^{-\lambda t} \quad \text{for } \lambda > 0 \text{ and } t \ge 0

In both cases e(t) = E(t) = 0 for t < 0. Also, e = 2.718281828... is the base of the natural logarithm.

The interpretation of these formulas is that if you have been waiting for a short time (say a few seconds) then it is more likely that no one has shown up yet. On the other hand, the longer you wait, the less likely it is that no one has shown up. Thus in your store, where on average three people per minute come, the probability that someone shows up in the first 3 seconds (t = 0.05 minutes), that is, that the waiting time for the first customer is less than 3 seconds, is only 0.139292. But the probability that someone shows up within the first 90 seconds (t = 1.5 minutes) is 0.988891.

The expected value of an exponential random variable is E(X) = 1/\lambda and its variance is Var(X) = 1/\lambda^2. This means, for instance, that if you have three customers per minute visiting your store, then the average waiting time for the first customer is 1/3 of a minute, that is, 20 seconds.
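
The two probabilities quoted above are easy to reproduce. Here is a short Python sketch (not part of the workbook) with the worksheet's rate of three customers per minute; scipy's exponential distribution is parameterized by scale = 1/lambda.

from math import exp
from scipy.stats import expon

lam = 3                            # three customers per minute
X = expon(scale=1 / lam)

print(1 - exp(-lam * 0.05))        # P(X <= 3 seconds)  = 0.139292
print(X.cdf(1.5))                  # P(X <= 90 seconds) = 0.988891
print(X.mean(), X.var())           # 1/lambda = 0.333... minutes, 1/lambda^2 = 0.111...
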
[Worksheet "Exponential distribution" (lambda = 3): columns of t values from 0 to 5 in steps of 0.05 with the density e(t) = 3e^{-3t} and the cumulative E(t) = 1 - e^{-3t}. Charts: "Exponential density function" and "Exponential distribution function". Change the lambda cell and observe the effect on both charts.]

Normal Distribution

The normal random variables are the most ubiquitous of all random variables. Not only do they arise naturally in many situations, but other random variables are also related to the normal distribution in a special way.

As an example, suppose you want to know the net worth of American workers. This value can be positive or negative (if they have debt). If the average net worth of workers is, say, $20,000, then the net worth of a randomly picked individual could in theory be any number between minus and plus infinity. However, it is reasonable to postulate that the randomly picked individual's net worth is more likely to be close to $20,000, and less likely to be substantially above this average or substantially below it. The net worth of a randomly picked individual is therefore a continuous random variable that can be reasonably modeled by the normal distribution. The normal density function requires two parameters: \mu, the mean (expected value) of the random variable, and \sigma, its standard deviation. The normal density function can then be expressed by

n(t; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^2}

For the normal distribution function there is no neat closed formula; only an integral formula is used:

N(t; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{t} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} \, dx

In Excel the function NORMDIST calculates these two quantities. The first three arguments of NORMDIST are t, \mu and \sigma. The last one is a logical value: when it is false the density function is calculated, and when it is true the distribution function is.

The expected value of a normal random variable is E(X) = \mu and its variance is Var(X) = \sigma^2. When \mu = 0 and \sigma = 1 we get the standard normal distribution.

Roughly, for a normal random variable there is a 68% probability of falling within one standard deviation of its mean.
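
For illustration, the two quantities NORMDIST returns can be computed in Python with the worksheet's parameters (mean 0, standard deviation 0.5); this is a sketch, not part of the workbook.

from math import exp, pi, sqrt
from scipy.stats import norm

mu, sigma = 0, 0.5                 # the parameters used on the worksheet

def n_pdf(t):
    # n(t; mu, sigma) = exp(-((t - mu)/sigma)^2 / 2) / (sigma * sqrt(2*pi))
    return exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

print(n_pdf(0), norm.pdf(0, mu, sigma))        # both 0.797885 (NORMDIST with FALSE)
print(norm.cdf(0.5, mu, sigma))                # 0.841345 (NORMDIST with TRUE); no closed form
print(norm.cdf(1) - norm.cdf(-1))              # ~0.6827: the 68% within one standard deviation
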

[Worksheet "Normal Distribution" (mean = 0, standard deviation = 0.5): columns of t values from -5 to 5 in steps of 0.2 with the density n(t) and the cumulative N(t), both computed with NORMDIST. Charts: "normal density function" and "normal distribution". Change the mean and stdev cells and observe the effect on both charts.]
Student's t-distribution

As in the case of the chi-square distribution, suppose X_1, X_2, \ldots, X_n are independent samples from a normal population with mean \mu and variance \sigma^2. The sample mean and variance are

\bar{X} = \frac{1}{n}(X_1 + \cdots + X_n), \qquad s^2 = \frac{1}{n-1}\left((X_1 - \bar{X})^2 + \cdots + (X_n - \bar{X})^2\right)

Now if we happen to know the exact values of the mean and variance of this population, \mu and \sigma^2 respectively, then the standardized variable

Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}

will have the normal distribution with mean 0 and variance 1. But we usually do not know these quantities. Instead we use the sample mean and sample standard deviation and form

T = \frac{\bar{X} - \mu}{s/\sqrt{n}}

This random variable has the Student's t distribution. The density of T is the t distribution with n-1 degrees of freedom. The t distribution with n degrees of freedom has the density function

f(t; n) = \frac{\Gamma\left((n+1)/2\right)}{\sqrt{n\pi}\,\Gamma(n/2)} \left(1 + \frac{t^2}{n}\right)^{-(n+1)/2}

where \Gamma is the gamma function, a generalization of the factorial to all real numbers. The mean of this distribution is E(X) = 0 and Var(X) = n/(n-2) (for n > 2).

This density is symmetric with respect to the t = 0 line. In fact it is fairly close to the normal distribution, with somewhat fatter tails. As n gets larger the distribution converges to the standard normal distribution. In fact, for sample sizes n > 30 we can use the normal distribution instead of the t distribution without much loss in accuracy.
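
A small Python sketch (illustrative; the five sample values and the hypothesized mean below are made up, not taken from the workbook) of the T statistic and of the comparison between the t and normal curves:

from math import sqrt
from statistics import mean, stdev
from scipy.stats import t, norm

sample = [19.1, 20.4, 18.7, 21.0, 19.9]    # hypothetical data, n = 5
n = len(sample)
mu0 = 20.0                                 # hypothesized population mean

T = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))   # t distribution with n - 1 df
df = n - 1
print(T)

print(t.pdf(0, df), norm.pdf(0))           # t density at 0 vs the standard normal density
print(t.cdf(2, df), norm.cdf(2))           # fatter tails: the t value is the smaller of the two
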
[Worksheet "t distribution" (parameter df): columns of t values from -5 to 5 in steps of 0.2 with the t density and cumulative distribution alongside the standard normal density and cumulative distribution for comparison. Charts: "normal vs t densities" and "normal vs t distributions". As the df parameter gets larger, the t distribution gets closer to the normal distribution; change it and see the effect.]

Chi-square distribution

Suppose Z_1, Z_2, \ldots, Z_n are independent random variables with the standard normal distribution. Then the random variable

X = Z_1^2 + \cdots + Z_n^2

has the chi-square distribution with n degrees of freedom. If the random variable X follows the chi-square distribution, then

E(X) = n, \qquad Var(X) = 2n

The chi-square distribution is fundamental in statistical applications. For instance, if X_1, X_2, \ldots, X_n are samples taken independently from a normal population with variance \sigma^2, then the sample variance is

s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2

In this case it can be shown that the random variable

\frac{(n-1)s^2}{\sigma^2}

follows the chi-square distribution with n-1 degrees of freedom. This fact in turn is used to test hypotheses on the value of the variance. This distribution is also the basis of the chi-square test for testing the distribution of random variables, and for testing the independence of random variables.
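
A brief Python sketch (illustrative; the sample values and hypothesized variance are made up, not from the workbook) of the (n-1)s^2/sigma^2 statistic and of the chi-square density and distribution with the worksheet's df = 3:

from statistics import variance
from scipy.stats import chi2

sigma2 = 4.0                               # hypothesized population variance
sample = [10.2, 12.5, 9.8, 11.1]           # hypothetical data, n = 4
n = len(sample)

stat = (n - 1) * variance(sample) / sigma2 # chi-square with n - 1 = 3 degrees of freedom
df = n - 1
print(stat)
print(chi2.cdf(1, df))                     # F(1) = 0.198748, as in the worksheet's cumulative column
print(chi2.mean(df), chi2.var(df))         # E(X) = df = 3, Var(X) = 2*df = 6
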
[Worksheet "Chi-square distribution" (df = 3): columns of t values from 0 to 10 in steps of 0.2 with the density f(t) and the cumulative F(t). Charts: "Chi-square density" and "Chi-square distribution". Change the df parameter to see the effect on both curves; for large df (> 40) the curve looks like a normal distribution.]

F distribution

If U_1 is a chi-square random variable with m degrees of freedom, and U_2 is a chi-square random variable with n degrees of freedom, then the random variable

V = \frac{U_1/m}{U_2/n}

follows the F distribution with the two parameters m and n (written as F_{m,n} for short). The density of this distribution is

f(t; m, n) = \frac{\Gamma\left((m+n)/2\right)}{\Gamma(m/2)\,\Gamma(n/2)} \left(\frac{m}{n}\right)^{m/2} \frac{t^{m/2-1}}{\left(1 + \frac{m}{n}t\right)^{(m+n)/2}}

The expected value of a random variable following the F distribution is E(X) = n/(n-2) (for n > 2).

The F distribution is most useful in ANOVA (Analysis of Variance), where two sets of standard normal random variables X_i and Y_j are present. In such cases we can form the two random variables U_1 and U_2 with

U_1 = X_1^2 + \cdots + X_m^2, \qquad U_2 = Y_1^2 + \cdots + Y_n^2

Then the random variable (U_1/m)/(U_2/n) has an F distribution. Note that the set of variables X_i and the set of variables Y_j need not be distinct; all that is required is that U_1 and U_2 be independent. This results in powerful techniques for analyzing statistical hypotheses. These techniques collectively are called ANOVA.

Also, there is an interesting relationship between the t and F distributions. It can be shown that if X follows the t distribution with n degrees of freedom, then X^2 follows an F_{1,n} distribution.
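
A short Python sketch (illustrative, not part of the workbook) of the F ratio and of the t/F relationship, using scipy.stats.f with the worksheet's df1 = 6 and df2 = 3:

from scipy.stats import f, chi2, t

m, n = 6, 3                                # df1 and df2, as on the worksheet

# V = (U1/m)/(U2/n) with U1 ~ chi-square(m) and U2 ~ chi-square(n)
U1 = chi2.rvs(m, random_state=0)
U2 = chi2.rvs(n, random_state=1)
V = (U1 / m) / (U2 / n)
print(V, 1 - f.cdf(V, m, n))               # one simulated ratio and its upper-tail probability

print(f.mean(m, n))                        # n/(n - 2) = 3 when n = 3

# If X follows the t distribution with n df, then X^2 follows F(1, n):
x = 1.7
print(t.cdf(x, n) - t.cdf(-x, n), f.cdf(x**2, 1, n))   # the two agree
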
[Worksheet "F distribution" (df1 = 6, df2 = 3): columns of t values from 0 to 20 in steps of 0.2 with the F density F-dist(x) and the cumulative F-dist-Cum(x). Charts of the density and the cumulative distribution. Change the df1 and df2 parameters to see the effect on both curves.]

Lognormal Distribution

Suppose X is a random variable such that ln(X) is normally distributed with known mean and variance. Then X is said to have the lognormal distribution. The density function of the lognormal distribution is

f(t; \theta, \sigma) = \frac{1}{t\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\ln(t) - \ln(\theta)}{\sigma}\right)^2}

We have

E(X) = \theta e^{\sigma^2/2}, \qquad E(\ln(X)) = \ln(\theta)

Var(X) = \theta^2 e^{\sigma^2}\left(e^{\sigma^2} - 1\right), \qquad Var(\ln(X)) = \sigma^2

Notice that lognormal random variables can only assume nonnegative values.

Lognormal random variables are multiplicative in nature. If X and Y are normal and independent, then X + Y is also normal, with mean equal to the sum of the means of X and Y; likewise the variance of X + Y is the sum of the variances of X and Y. Thus normal random variables are additive. In the same way, if X and Y are lognormal, then XY is lognormal. Lognormal variables are used, for instance, in modeling the growth of stock prices.
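
A minimal Python sketch (illustrative, using the worksheet's parameters, mean 0.1 and standard deviation 0.4 for ln X) of the density and the moment formulas; note that scipy.stats.lognorm takes the standard deviation of ln X as its shape parameter and e to the mean of ln X as its scale.

from math import exp
from scipy.stats import lognorm

mu, sigma = 0.1, 0.4                       # mean and standard deviation of ln(X), as on the worksheet
X = lognorm(s=sigma, scale=exp(mu))        # theta = e^mu

print(X.pdf(1.0))                          # 0.966670, as in the worksheet's l(t) column
print(X.mean(), exp(mu) * exp(sigma**2 / 2))                      # E(X) = theta * e^(sigma^2/2)
print(X.var(), exp(2*mu) * exp(sigma**2) * (exp(sigma**2) - 1))   # Var(X) = theta^2 e^(sigma^2)(e^(sigma^2) - 1)
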
[Worksheet "lognormal" (mean of ln X = 0.1, standard deviation of ln X = 0.4): columns of t values from 0 to 3 in steps of 0.02 with the lognormal density l(t) and the cumulative L(t). Charts of the lognormal density and the cumulative distribution.]
