
STAT 211 1

Handout 6 (Chapter 6): Point Estimation

A point estimate of a parameter θ is a single number that can be regarded as the most plausible value of θ.

θ̂ = θ + error of estimation.

Unbiased Estimator: A point estimator θ̂ is an unbiased estimator of θ if E(θ̂) = θ for every possible value of θ. Otherwise it is biased, and Bias = E(θ̂) − θ.

Read Example 6.2 (your textbook).

Example 1: When X is a binomial r.v. with parameters n and p, the sample proportion X/n is an unbiased estimator of p.
To prove this, you need to show E(X/n) = p, where p̂ = X/n.
E(X/n) = E(X)/n, using the rules of expected value,
= np/n = p, since if X ~ Binomial(n, p) then E(X) = np (Chapter 3).
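As a quick sanity check, the unbiasedness of p̂ = X/n can be illustrated by simulation (a minimal sketch; the sample size, p, the replication count, and the function name are arbitrary choices of mine, not from the textbook):

```python
import random

def simulate_phat_mean(n, p, reps, seed=1):
    """Average the sample proportion X/n over many binomial experiments.

    If X/n is unbiased for p, this average should be close to p.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = sum(1 for _ in range(n) if rng.random() < p)  # X ~ Binomial(n, p)
        total += x / n                                    # sample proportion p-hat
    return total / reps

avg = simulate_phat_mean(n=50, p=0.3, reps=20000)
print(round(avg, 3))  # close to 0.3
```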

Example 2: A sample of 15 students who had taken a calculus class yielded the following information on the brand of calculator owned: T H C T H H C T T C C H S S S (T: Texas Instruments, H: Hewlett Packard, C: Casio, S: Sharp).
(a) Estimate the true proportion of all such students who own a Texas Instruments calculator.
Answer = 4/15 = 0.2667

(b) Of these brands, only Hewlett Packard calculators use reverse Polish logic, and three out of four of the HP calculators in the sample do. Estimate the true proportion of all such students who own a calculator that does not use reverse Polish logic.
Answer = 12/15 = 0.80

Example 3 (Exercise 6.8): In a random sample of 80 components of a certain type, 12 are found to be defective.
(a) Give a point estimate of the proportion of all such components that are not defective.
Answer = 68/80 = 0.85

(b) Randomly select 5 of these components and connect them in series to form a system, which works only if every component works. Estimate the proportion of all such systems that work properly.
Answer = 0.85⁵ = 0.4437
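Both estimates can be reproduced in a couple of lines (a sketch; the variable names are mine):

```python
n, defective = 80, 12

p_good = (n - defective) / n  # (a) estimated proportion not defective
p_system = p_good ** 5        # (b) series system of 5 works only if all 5 work

print(round(p_good, 2), round(p_system, 4))  # 0.85 0.4437
```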

Example 4 (Exercise 6.12):

X: yield of the 1st type of fertilizer, with E(X) = μ₁ and Var(X) = σ², and
S₁² = Σ(xᵢ − x̄)²/(n₁ − 1), the sum running over i = 1, …, n₁.

Y: yield of the 2nd type of fertilizer, with E(Y) = μ₂ and Var(Y) = σ², and
S₂² = Σ(yᵢ − ȳ)²/(n₂ − 1), the sum running over i = 1, …, n₂.

Show that σ̂² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²]/(n₁ + n₂ − 2) is an unbiased estimator for σ².

This means you need to show E{[(n₁ − 1)S₁² + (n₂ − 1)S₂²]/(n₁ + n₂ − 2)} = σ².

E{[(n₁ − 1)S₁² + (n₂ − 1)S₂²]/(n₁ + n₂ − 2)}
= [(n₁ − 1)/(n₁ + n₂ − 2)]·E(S₁²) + [(n₂ − 1)/(n₁ + n₂ − 2)]·E(S₂²)
= [(n₁ − 1)/(n₁ + n₂ − 2)]·σ² + [(n₂ − 1)/(n₁ + n₂ − 2)]·σ² = σ².
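The algebra can also be checked numerically: simulate two normal samples with a common variance many times and average the pooled estimator (a sketch; the means, σ², sample sizes, and function name are arbitrary choices of mine):

```python
import random

def avg_pooled_var(n1, n2, sigma2, reps, seed=2):
    """Average the pooled variance estimator over many paired samples."""
    rng = random.Random(seed)
    sigma = sigma2 ** 0.5
    total = 0.0
    for _ in range(reps):
        xs = [rng.gauss(5.0, sigma) for _ in range(n1)]
        ys = [rng.gauss(9.0, sigma) for _ in range(n2)]
        xbar = sum(xs) / n1
        ybar = sum(ys) / n2
        s1_sq = sum((x - xbar) ** 2 for x in xs) / (n1 - 1)
        s2_sq = sum((y - ybar) ** 2 for y in ys) / (n2 - 1)
        total += ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    return total / reps

print(round(avg_pooled_var(8, 12, 4.0, 5000), 2))  # close to sigma^2 = 4.0
```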

Example 5 (Exercise 6.13): Let X₁, X₂, …, Xₙ be a random sample from the pdf f(x; θ) = 0.5(1 + θx), −1 ≤ x ≤ 1, −1 ≤ θ ≤ 1. Show that θ̂ = 3X̄ is an unbiased estimator for θ.

This means you need to show E(θ̂) = θ.

E(θ̂) = E(3X̄) = 3E(X̄) = 3E(X) = 3(θ/3) = θ (Chapter 5), where

E(X) = ∫₋₁¹ x·0.5(1 + θx) dx = 0.5[x²/2 + θx³/3] evaluated from −1 to 1 = 0.5[(1/2 + θ/3) − (1/2 − θ/3)] = θ/3.
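The integral E(X) = θ/3 can be verified numerically for a few values of θ (a sketch using a simple midpoint rule; the function name and step count are my choices):

```python
def ex_pdf_mean(theta, steps=200000):
    """Numerically integrate x * 0.5 * (1 + theta*x) over [-1, 1] (midpoint rule)."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        total += x * 0.5 * (1.0 + theta * x) * h
    return total

for theta in (-0.9, 0.0, 0.5, 1.0):
    assert abs(ex_pdf_mean(theta) - theta / 3.0) < 1e-6  # matches E(X) = theta/3
print("E(X) = theta/3 confirmed numerically")
```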

The standard error: The standard error of an estimator θ̂ is its standard deviation σ_θ̂.

The estimated standard error: The estimated standard error of an estimator θ̂ is its estimated standard deviation, σ̂_θ̂ = s_θ̂.

The minimum variance unbiased estimator (MVUE): The best point estimate. Among all estimators of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is the MVUE.

Example 6: If we go back to Example 1, the standard error of p̂ is σ_p̂ = √Var(p̂) = √(p(1 − p)/n), where

Var(p̂) = Var(X/n) = Var(X)/n² (rules of variance) = np(1 − p)/n² = p(1 − p)/n,

since X ~ Binomial(n, p) implies Var(X) = np(1 − p) (Chapter 3).
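The formula √(p(1 − p)/n) can be compared with the empirical standard deviation of p̂ across many simulated samples (a sketch; the constants and function name are arbitrary choices of mine):

```python
import random

def empirical_se(n, p, reps, seed=3):
    """Empirical standard deviation of the sample proportion X/n."""
    rng = random.Random(seed)
    phats = []
    for _ in range(reps):
        x = sum(1 for _ in range(n) if rng.random() < p)
        phats.append(x / n)
    mean = sum(phats) / reps
    return (sum((ph - mean) ** 2 for ph in phats) / reps) ** 0.5

n, p = 40, 0.25
formula_se = (p * (1 - p) / n) ** 0.5
print(round(formula_se, 4), round(empirical_se(n, p, 20000), 4))
```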
^
Example 7: If we go back to Example 5, the standard error of θ̂ = 3X̄ is

σ_θ̂ = √Var(θ̂) = √(9·Var(X̄)) = √(9·Var(X)/n) = √((3 − θ²)/n) (Chapter 5),

where Var(X) = E(X²) − [E(X)]² = 1/3 − (θ/3)² = (3 − θ²)/9 and

E(X²) = ∫₋₁¹ x²·0.5(1 + θx) dx = 0.5[x³/3 + θx⁴/4] evaluated from −1 to 1 = 0.5[(1/3 + θ/4) − (−1/3 + θ/4)] = 1/3.
^ _
Example 8: For the normal distribution, x̄ is the MVUE for μ.

The following graphs are generated by creating 500 samples with size 5 from N(0,1) and calculating the
sample mean and the sample median for each sample.
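A simulation like the one described can be sketched in Python (500 samples of size 5 from N(0,1); the seed is arbitrary). For normal data the sample mean should show the smaller spread, consistent with x̄ being the MVUE:

```python
import random
import statistics

rng = random.Random(0)
means, medians = [], []
for _ in range(500):
    sample = [rng.gauss(0.0, 1.0) for _ in range(5)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# Both estimators are centered near 0, but the mean varies less.
print("sd of sample means:  ", round(statistics.stdev(means), 3))
print("sd of sample medians:", round(statistics.stdev(medians), 3))
```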

Example 9 (Exercise 6.3): Normally distributed data yield the following summary statistics.
Variable n Mean Median TrMean StDev SE Mean
thickness 16 1.3481 1.3950 1.3507 0.3385 0.0846

Variable Minimum Maximum Q1 Q3
thickness 0.8300 1.8300 1.0525 1.6425

(a) A point estimate of the mean value of coating thickness.
Answer = x̄ = 1.3481

(b) A point estimate of the median value of coating thickness.
Answer = 1.3950

(c) A point estimate of the value that separates the largest 10% of all values in the coating thickness distribution from the remaining 90%.
Answer = x̄ + z₀.₁₀·s = 1.3481 + 1.28(0.3385) = 1.78138

(d) Estimate P(X < 1.5), the proportion of all thickness values less than 1.5.
Answer = Φ((1.5 − 1.3481)/0.3385) = Φ(0.45) = 0.6736

(e) The estimated standard error of the estimator used in (a).
Answer = s/√n = 0.3385/√16 = 0.084625

[Boxplot of thickness: values range from about 0.8 to 1.8]

[Normal probability plot for thickness, ML estimates with 95% CI: Mean = 1.34812, StDev = 0.327781; goodness of fit AD* = 1.074]

METHODS OF OBTAINING POINT ESTIMATORS

1. The Method of Moments (MME)


Let X₁, X₂, …, Xₙ be a random sample from a pmf or pdf. For k = 1, 2, …, the kth population moment of the distribution is E(Xᵏ). The kth sample moment is (1/n)·Σ xᵢᵏ, the sum running over i = 1, …, n.

Steps to follow: If you have only one unknown parameter,
(i) Calculate E(X).
(ii) Equate it to the first sample moment, (1/n)·Σ xᵢ = x̄.
(iii) Solve for the unknown parameter (such as θ₁).
If you have two unknown parameters, you also need to compute the following to solve for the two unknowns with two equations.
(iv) Calculate E(X²).
(v) Equate it to (1/n)·Σ xᵢ².
(vi) Solve for the second unknown parameter (such as θ₂).
If you have more than two unknown parameters, repeat the same steps for k = 3, … until you can solve the system.
_
Example 10: Show that the MME of the parameter λ in the Poisson distribution is x̄.
There is one unknown parameter.
The 1st population moment of the distribution is E(X) = λ.
The 1st sample moment is x̄.
Setting λ = x̄, the MME of λ is x̄.

Example 11: Find the MMEs for the parameters α and β in the gamma distribution.
There are two unknown parameters.
The 1st population moment of the distribution is E(X) = αβ.
The 1st sample moment is x̄.
Then αβ = x̄, but this alone does not determine either unknown parameter. We need to continue the steps.
The 2nd population moment of the distribution is E(X²) = αβ²(1 + α).
The 2nd sample moment is (1/n)·Σ xᵢ².
Then αβ²(1 + α) = (1/n)·Σ xᵢ².
Since we have two unknown parameters and two equations, we can solve for the unknown parameters:

α̃ = x̄² / [(1/n)·Σ(xᵢ − x̄)²] and β̃ = [(1/n)·Σ(xᵢ − x̄)²] / x̄, respectively.
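The two formulas can be packaged as a small function (a sketch; `gamma_mme` and the sample data are my own, not the textbook's):

```python
def gamma_mme(xs):
    """Method-of-moments estimates (alpha, beta) for the gamma distribution.

    alpha-tilde = xbar^2 / v and beta-tilde = v / xbar,
    where v = (1/n) * sum((x_i - xbar)^2).
    """
    n = len(xs)
    xbar = sum(xs) / n
    v = sum((x - xbar) ** 2 for x in xs) / n
    return xbar ** 2 / v, v / xbar

alpha, beta = gamma_mme([1.2, 0.8, 2.5, 1.9, 0.6, 1.4])
print(round(alpha, 3), round(beta, 3))
```

Note that by construction the product of the two estimates equals x̄, matching the first moment equation αβ = x̄.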

Example 12: Find the MMEs for the parameters μ and σ² in the normal distribution.
There are two unknown parameters.
The 1st population moment of the distribution is E(X) = μ.
The 1st sample moment is x̄.
Then μ = x̄, but we still need to solve for the second unknown parameter. We need to continue the steps.
The 2nd population moment of the distribution is E(X²) = σ² + μ².
The 2nd sample moment is (1/n)·Σ xᵢ².
Then σ² + μ² = (1/n)·Σ xᵢ², which can be solved for the second unknown parameter.

The MMEs of μ and σ² are x̄ and (1/n)·Σ(xᵢ − x̄)², respectively.

2. The Method of Maximum Likelihood (MLE)

The likelihood function is the joint pmf or pdf of X₁, …, Xₙ, viewed as a function of the unknown parameters once the x's are observed. The maximum likelihood estimates are the parameter values that maximize the likelihood function.
Steps to follow:
(i) Determine the likelihood function.
(ii) Take the natural logarithm of the likelihood function.
(iii) Take the first derivative with respect to each unknown parameter and set it equal to zero (with m unknown parameters you get m equations).
(iv) Solve for the unknown θ's.
(v) Check that the solution really maximizes the function by examining the second derivative.
Example 13: Show that the MLE of the parameter λ in the Poisson distribution is x̄.
There is one unknown parameter.
L = likelihood = p(x₁, x₂, …, xₙ) = p(x₁)·p(x₂)⋯p(xₙ) by independence
= (e^(−λ)·λ^(x₁)/x₁!)·(e^(−λ)·λ^(x₂)/x₂!)⋯(e^(−λ)·λ^(xₙ)/xₙ!) = e^(−nλ)·λ^(Σxᵢ) / Π xᵢ!

ln(L) = −nλ + (Σxᵢ)·ln(λ) − Σ ln(xᵢ!)

d ln(L)/dλ = −n + (Σxᵢ)/λ = 0, which gives λ = x̄.

d² ln(L)/dλ² = −(Σxᵢ)/λ² < 0, so this is a maximum, and the MLE of λ is λ̂ = x̄.
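Numerically, the Poisson log-likelihood is indeed largest at λ = x̄ (a sketch; the data values are arbitrary, and `math.lgamma(x + 1)` is used for ln(x!)):

```python
import math

def poisson_loglik(lam, xs):
    """Poisson log-likelihood: -n*lam + sum(x_i)*ln(lam) - sum(ln(x_i!))."""
    n = len(xs)
    return -n * lam + sum(xs) * math.log(lam) - sum(math.lgamma(x + 1) for x in xs)

xs = [2, 0, 3, 1, 4, 2, 1]
xbar = sum(xs) / len(xs)

# The log-likelihood at xbar beats nearby candidate values of lambda.
best = poisson_loglik(xbar, xs)
for lam in (xbar - 0.5, xbar - 0.1, xbar + 0.1, xbar + 0.5):
    assert poisson_loglik(lam, xs) < best
print("log-likelihood is maximized at lambda = xbar =", round(xbar, 4))
```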
The Invariance Principle: Let θ̂₁, θ̂₂, …, θ̂ₘ be the MLEs of the parameters θ₁, θ₂, …, θₘ. Then the MLE of any function h(θ₁, θ₂, …, θₘ) of these parameters is the function h(θ̂₁, θ̂₂, …, θ̂ₘ) of the MLEs.

Example 14:

(1) Let X₁, …, Xₙ be a random sample of normally distributed random variables with mean μ and standard deviation σ.
The method of moments estimates of μ and σ² are x̄ and (1/n)·Σ(xᵢ − x̄)² = (n − 1)s²/n, respectively.
The maximum likelihood estimates of μ and σ² are the same: x̄ and (1/n)·Σ(xᵢ − x̄)² = (n − 1)s²/n, respectively.
(2) Let X₁, …, Xₙ be a random sample of exponentially distributed random variables with parameter λ.
The method of moments estimate and the maximum likelihood estimate of λ are both 1/x̄.

(3) Let X₁, …, Xₙ be a random sample of binomially distributed random variables with parameter p.
The method of moments estimate and the maximum likelihood estimate of p are both X/n.

(4) Let X₁, …, Xₙ be a random sample of Poisson distributed random variables with parameter λ.
The method of moments estimate and the maximum likelihood estimate of λ are both x̄.

Are all the estimates above unbiased? Some are, but others are not (to be discussed in class).
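For instance, for the normal distribution the estimate Σ(xᵢ − x̄)²/n of σ² is biased downward, while s² with divisor n − 1 is unbiased. A quick simulation sketch (the constants and function name are arbitrary choices of mine):

```python
import random

def avg_variance_estimates(n, sigma2, reps, seed=4):
    """Average the divide-by-n and divide-by-(n-1) variance estimates."""
    rng = random.Random(seed)
    sigma = sigma2 ** 0.5
    mle_total = unbiased_total = 0.0
    for _ in range(reps):
        xs = [rng.gauss(0.0, sigma) for _ in range(n)]
        xbar = sum(xs) / n
        ss = sum((x - xbar) ** 2 for x in xs)
        mle_total += ss / n             # biased (MLE) estimate
        unbiased_total += ss / (n - 1)  # unbiased estimate s^2
    return mle_total / reps, unbiased_total / reps

mle_avg, unb_avg = avg_variance_estimates(n=5, sigma2=1.0, reps=20000)
print(round(mle_avg, 3), round(unb_avg, 3))  # first averages below 1, second near 1
```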

Example 15 (Exercise 6.20): A random sample of n bike helmets is selected.

X: number among the n that are flawed, X = 0, 1, 2, …, n
p = P(flawed)
(a) What is the maximum likelihood estimate (MLE) of p if n = 20 and x = 3?

(b) Is the estimator in (a) unbiased?

(c) What is the MLE of (1 − p)⁵, the probability that none of the next five helmets examined is flawed?

(d) Instead of selecting 20 helmets to examine, suppose helmets are examined in succession until 3 flawed ones are found. How would X and the estimator of p differ?

Example 16 (Exercise 6.22):

X: the proportion of allotted time that a randomly selected student spends working on a certain aptitude test.
The pdf of X is f(x; θ) = (θ + 1)·x^θ, 0 ≤ x ≤ 1, θ > −1.
A random sample of 10 students yields the data: 0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77.

(a) Obtain the MME of θ and compute the estimate using the data.

E(X) = ∫₀¹ x·(θ + 1)·x^θ dx = (θ + 1)·[x^(θ+2)/(θ + 2)] evaluated from 0 to 1 = (θ + 1)/(θ + 2)

Set E(X) = x̄ and then solve for θ.
The given data yield x̄ = 0.80, so the method of moments estimate of θ is
θ̃ = (2x̄ − 1)/(1 − x̄) = (2(0.8) − 1)/(1 − 0.8) = 3.

(b) Obtain the MLE of θ and compute the estimate using the data.

L = likelihood = Π f(xᵢ) = Π (θ + 1)·xᵢ^θ = (θ + 1)ⁿ·(Π xᵢ)^θ

ln(L) = n·ln(θ + 1) + θ·Σ ln(xᵢ)

d ln(L)/dθ = n/(θ + 1) + Σ ln(xᵢ) = 0, then solve for θ.

The given data yield Σ ln(xᵢ) = −2.4295, so the maximum likelihood estimate of θ is

θ̂ = −[n + Σ ln(xᵢ)] / Σ ln(xᵢ) = −(10 + (−2.4295)) / (−2.4295) = 3.1161.
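Both estimates for this data set can be computed directly (a sketch; the function names are mine):

```python
import math

data = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]

def theta_mme(xs):
    """MME: solve (theta+1)/(theta+2) = xbar  =>  theta = (2*xbar - 1)/(1 - xbar)."""
    xbar = sum(xs) / len(xs)
    return (2 * xbar - 1) / (1 - xbar)

def theta_mle(xs):
    """MLE: n/(theta+1) = -sum(ln x_i)  =>  theta = -n/sum(ln x_i) - 1."""
    n = len(xs)
    sum_log = sum(math.log(x) for x in xs)
    return -n / sum_log - 1

print(round(theta_mme(data), 4))  # 3.0
print(round(theta_mle(data), 4))  # 3.1161
```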

Proposition: Under very general conditions on the joint distribution of the sample, when the sample size is large the MLE of any parameter θ is approximately unbiased and has a variance that is nearly as small as can be achieved by any estimator.
