OF
MTH-202
She has been a constant source of inspiration and motivation for hard
work. She has been very co-operative throughout this project work.
Through this column, it is my utmost pleasure to express my
warm thanks to her for her encouragement, co-operation and consent,
without which I might not have been able to accomplish this project.
Yashu Dhingra
DIFFERENCE BETWEEN POISSON DISTRIBUTION
AND BINOMIAL DISTRIBUTION
Poisson
Probability mass function
[Figure omitted: plot of the Poisson probability mass function. The horizontal axis is the index k; the function is defined only at integer values of k, and the connecting lines are only guides for the eye.]
P(X = k) = λ^k e^(−λ) / k!

where
• e is the base of the natural logarithm (e = 2.71828...)
• k is the number of occurrences of an event, the probability of which is
given by the function
• k! is the factorial of k
• λ is a positive real number, equal to the expected number of
occurrences during the given interval. For instance, if the events occur on
average 4 times per minute, and one is interested in the probability of an
event occurring k times in a 10 minute interval, one would use a Poisson
distribution as the model with λ=10×4=40.
As a function of k, this is the probability mass function. The Poisson distribution
can be derived as a limiting case of the binomial distribution.
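As a quick sanity check, the probability mass function can be evaluated directly; the sketch below uses the λ = 40 example from above (the helper name poisson_pmf is ours, not from any particular library):

```python
import math

def poisson_pmf(k, lam):
    # Poisson PMF: P(X = k) = lam^k * e^(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Events occurring on average 4 times per minute, observed over a
# 10-minute interval: lam = 10 * 4 = 40.
lam = 40
for k in (30, 40, 50):
    print(k, poisson_pmf(k, lam))
```

For λ = 40 the distribution peaks around k = 40, and the probabilities fall off on either side.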
The Poisson distribution can be applied to systems with a large number of
possible events, each of which is rare. A classic example is the nuclear decay of
atoms.
The Poisson distribution is sometimes called a Poissonian, analogous to the
term Gaussian for a Gauss or normal distribution.
It is not a priori clear that this approach leads to the same model as in Approach
1. We shall see shortly that they are, in fact, equivalent. At this point, we only
note the following connection: the probability that there is no occurrence before
time t is, according to the current approach, equal to e^(−λt). Now note that this is
equivalent to saying that the waiting time for the first occurrence has an
exponential distribution with parameter λ, in full agreement with Approach 1.
The Poisson process defined and studied so far seems to be a very reasonable
model for the type of processes we have in mind. It is also a very interesting and
subtle construction from a pure mathematical point of view, showing a nice
interplay between discrete and continuous distributions. In the next sections,
we shall explore some more of its properties.
The waiting time paradox
The waiting times between successive occurrences have exponential
distributions by construction.
Notation
The following notation is helpful when we talk about the Poisson distribution.
Poisson Distribution
A Poisson random variable is the number of successes that result from a
Poisson experiment. The probability distribution of a Poisson random variable
is called a Poisson distribution.
Given the mean number of successes (μ) that occur in a specified region, we
can compute the Poisson probability based on the following formula:

P(x; μ) = e^(−μ) μ^x / x!

where x is the actual number of successes that result from the experiment, and
e is approximately equal to 2.71828.
Example 1
The average number of homes sold by the Acme Realty company is 2 homes
per day. What is the probability that exactly 3 homes will be sold tomorrow?
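Example 1 can be worked out directly from the Poisson probability formula P(x; μ) = e^(−μ) μ^x / x!, with μ = 2 and x = 3; a minimal Python sketch (the helper name poisson_pmf is ours):

```python
import math

def poisson_pmf(x, mu):
    # P(x; mu) = e^(-mu) * mu^x / x!
    return math.exp(-mu) * mu ** x / math.factorial(x)

# mu = 2 homes sold per day on average; x = 3 homes sold tomorrow.
print(poisson_pmf(3, 2))  # about 0.180
```

So the probability that exactly 3 homes will be sold tomorrow is approximately 0.180.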
Mean
The mean of the Poisson distribution is :
E[X] = λ
Variance
The variance of the Poisson distribution is :
Var(X) = λ
Generating function
We show here that the generating function of the Poisson distribution is
G(s) = e^(λ(s − 1))
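All three facts (mean, variance, and generating function) can be checked numerically by summing the PMF over k; a sketch under the assumption that truncating the infinite sums at k = 100 leaves a negligible tail for λ = 3:

```python
import math

lam = 3.0

def pmf(k):
    # Poisson PMF: P(X = k) = lam^k * e^(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

ks = range(100)  # tail beyond k = 100 is negligible for lam = 3

mean = sum(k * pmf(k) for k in ks)                 # should equal lam
var = sum(k * k * pmf(k) for k in ks) - mean ** 2  # should also equal lam

s = 0.5
g_series = sum(s ** k * pmf(k) for k in ks)  # G(s) as a power series
g_closed = math.exp(lam * (s - 1))           # closed form e^(lam(s-1))

print(mean, var, g_series, g_closed)
```

The series and the closed form of G(s) agree because Σ (λs)^k e^(−λ) / k! = e^(−λ) e^(λs) = e^(λ(s−1)).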
Animation
This animation simulates the Poisson distribution as follows :
1) Observations are drawn repetitively from the exponential distribution
Exp(λ) (yellow upper frame of the animation).
2) The values of these observations are added until the sum exceeds 1.
3) Suppose that the sum of the first k observations is less than 1, but that the (k
+ 1)th observation makes the sum exceed 1. The integer k is then considered as
an observation drawn from the Poisson(λ) distribution.
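The three steps above can be sketched in code; random.expovariate(lam) draws from Exp(λ), and the count of draws whose running sum stays below 1 is taken as the Poisson(λ) observation (function names are ours):

```python
import random

random.seed(42)
lam = 2.5

def poisson_sample(lam):
    # Steps 1-3: add Exp(lam) draws until the sum exceeds 1;
    # the number of draws that fit below 1 is a Poisson(lam) observation.
    total, k = 0.0, 0
    while True:
        total += random.expovariate(lam)
        if total > 1.0:
            return k
        k += 1

samples = [poisson_sample(lam) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean, close to lam = 2.5
```

This is exactly the interplay between the continuous (exponential) and discrete (Poisson) distributions mentioned earlier.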
Examples
An elementary example is this: roll a standard die ten times and count the
number of fours. The distribution of this random number is a binomial
distribution with n=10 and p=1/6.
As another example, flip a coin three times and count the number of heads. The
distribution of this random number is a binomial distribution with n=3 and p=1/2.
To understand binomial distributions and binomial probability, it helps to
understand binomial experiments and some associated notation; so we cover
those topics first.
Binomial Experiment
A binomial experiment (a fixed number of independent Bernoulli trials) is a
statistical experiment that has the following properties:
• The experiment consists of n repeated trials.
• Each trial can result in just two possible outcomes: success or failure.
• The probability of success, denoted by p, is the same on every trial.
• The trials are independent; the outcome of one trial does not affect the others.
Consider the following statistical experiment. You flip a coin 2 times and count
the number of times the coin lands on heads. This is a binomial experiment
because it satisfies all four properties: there is a fixed number of trials (2), each
flip has only two outcomes (heads or tails), the probability of heads is the same
on every flip, and the flips are independent.
Notation
The following notation is helpful when we talk about binomial probability.
Suppose we flip a coin two times and count the number of heads (successes).
The binomial random variable is the number of heads, which can take on values
of 0, 1, or 2. The binomial distribution is presented below.
Number of heads    Probability
0                  0.25
1                  0.50
2                  0.25
Binomial Probability
The binomial probability refers to the probability that a binomial experiment
results in exactly x successes. For example, in the above table, we see that the
binomial probability of getting exactly one head in two coin flips is 0.50.
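The table's values follow from the standard binomial formula P(X = x) = C(n, x) p^x (1 − p)^(n − x); a minimal sketch reproducing them (the helper name binomial_pmf is ours):

```python
from math import comb

def binomial_pmf(x, n, p):
    # P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Two coin flips, p = 1/2: reproduces the table above.
for x in range(3):
    print(x, binomial_pmf(x, 2, 0.5))
```

Running it prints the probabilities 0.25, 0.5, and 0.25 for 0, 1, and 2 heads.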
Other situations in which binomial distributions arise are quality control, public
opinion surveys, medical research, and insurance problems.
Poisson Limit
If the probability p is small and the number of observations is large, the binomial
probabilities are hard to calculate. In this instance it is much easier to
approximate the binomial probabilities by Poisson probabilities. The binomial
distribution approaches the Poisson distribution for large n and small p. In the
movie we increase the number of observations from 6 to 50, while the
parameter p in the binomial distribution remains 1/10. The movie shows that the
degree of approximation improves as the number of observations increases.
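This improvement can be illustrated numerically with the same parameters (p = 1/10, n going from 6 to 50), comparing each binomial probability against the Poisson probability with λ = np; a sketch (helper names are ours):

```python
from math import comb, exp, factorial

def binomial_pmf(x, n, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

def poisson_pmf(x, lam):
    return lam ** x * exp(-lam) / factorial(x)

p = 0.1
errors = {}
for n in (6, 50):
    lam = n * p  # matching Poisson parameter
    # worst-case pointwise gap between the two distributions
    errors[n] = max(abs(binomial_pmf(x, n, p) - poisson_pmf(x, lam))
                    for x in range(n + 1))
    print(n, errors[n])
```

The maximum error for n = 50 is noticeably smaller than for n = 6, matching what the movie shows.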
A Simple Example
Table 1 shows the four possible outcomes that could occur if you flipped a coin
twice. The four outcomes are equally likely: each has probability 1/4. To see
this, note that the tosses of the coin are independent (neither affects the other).
Hence, the probability of a head on Flip 1 and a head on Flip 2 is the product of
Pr[H] and Pr[H], which is 1/2 × 1/2 = 1/4. The same calculation applies to the
probability of a head on Flip 1 and a tail on Flip 2. Each is 1/2 × 1/2 = 1/4.
Outcome    First Flip    Second Flip
1          Heads         Heads
2          Heads         Tails
3          Tails         Heads
4          Tails         Tails
Table 1: Four Possible Outcomes
The four possible outcomes can be classified in terms of the number of heads
that come up. The number could be two (Outcome 1), one (Outcomes 2 and 3) or
zero (Outcome 4). Since two of the outcomes represent the case in which just
one head appears in the two tosses, the probability of this event is equal to
1/4 + 1/4 = 1/2. Table 2 summarizes the situation.
Number of Heads    Probability
0                  1/4
1                  1/2
2                  1/4
Table 2: Probabilities of Getting 0, 1, or 2 Heads
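Table 2 can be verified by enumerating the equally likely outcomes of Table 1 directly and tallying the number of heads; a small sketch:

```python
from itertools import product

# Enumerate all equally likely outcomes of two coin flips (Table 1)
# and tally the number of heads to recover Table 2.
outcomes = list(product("HT", repeat=2))
probs = {}
for flips in outcomes:
    heads = flips.count("H")
    probs[heads] = probs.get(heads, 0) + 1 / len(outcomes)

for heads in sorted(probs):
    print(heads, probs[heads])
```

Each of the 4 outcomes contributes probability 1/4, so the tallies come out to 1/4, 1/2, and 1/4.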
If the conditions of a binomial experiment are met, then X, the number of
successes, has a binomial distribution with parameters n and p, abbreviated B(n, p).
BIBLIOGRAPHY
WWW.GOOGLE.COM
WWW.ANSWERS.COM
WWW.GURUJI.COM
WWW.ASK.COM
WWW.WIKIPEDIA.COM