Corresponds to Chapter 7 of Tamhane and Dunlop
Inferences on Mean (Large Samples)
Pivots
• Definition: Casella & Berger, p. 413
• E.g. $Z = \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$
Confidence Intervals on the Mean:
Large Samples
$$P\left[-z_{\alpha/2} \le Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \le z_{\alpha/2}\right] = 1 - \alpha$$

Note: $z_{\alpha/2}$ = -qnorm(α/2)
Confidence Intervals on the Mean
$$\bar{x} - z_{\alpha/2}\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{x} + z_{\alpha/2}\frac{\sigma}{\sqrt{n}} \quad \text{(Two-Sided CI)}$$

$$\bar{x} - z_{\alpha}\frac{\sigma}{\sqrt{n}} \le \mu \quad \text{(Lower One-Sided CI)}$$

$$\mu \le \bar{x} + z_{\alpha}\frac{\sigma}{\sqrt{n}} \quad \text{(Upper One-Sided CI)}$$

$\dfrac{\sigma}{\sqrt{n}}$ is the standard error of the mean.
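The two-sided z-interval above can be sketched in a few lines. (This is Python rather than the course's S-Plus; `NormalDist().inv_cdf` plays the role of `qnorm`, and the data values are made up for illustration.)

```python
from statistics import NormalDist

def z_interval(xbar, sigma, n, alpha=0.05):
    """Two-sided 100(1-alpha)% CI for mu when sigma is known."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2} = -qnorm(alpha/2)
    se = sigma / n ** 0.5                     # standard error of the mean
    return xbar - z * se, xbar + z * se

# illustrative numbers: xbar = 100, sigma = 15, n = 36
lo, hi = z_interval(100.0, 15.0, 36)          # about (95.1, 104.9)
```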
Confidence Intervals in S-Plus
t.test(lottery.payoff)
One-sample t-Test
data: lottery.payoff
t = 35.9035, df = 253, p-value = 0
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
274.4315 306.2850
sample estimates:
mean of x
290.3583
This code was created using S-PLUS(R) Software. S-PLUS(R) is a registered trademark of Insightful Corporation.
Sample Size Determination for a z-interval
• Suppose that we require a (1−α)-level two-sided CI for µ of the form [x̄ − E, x̄ + E] with a margin of error E.
• Set $E = z_{\alpha/2}\dfrac{\sigma}{\sqrt{n}}$ and solve for n, obtaining $n = \left[\dfrac{z_{\alpha/2}\,\sigma}{E}\right]^2$.
• The calculation is done at the design stage, so a sample estimate of σ is not available.
• An estimate for σ can be obtained by anticipating the range of the observations and dividing by 4. This is based on assuming normality, since then about 95% of the observations are expected to fall in [µ − 2σ, µ + 2σ].
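The steps above can be sketched as a short Python function (stdlib only; the range of 40, and hence σ ≈ 10, is an invented design-stage guess):

```python
import math
from statistics import NormalDist

def sample_size_for_margin(sigma, E, alpha=0.05):
    """Smallest n with z_{alpha/2} * sigma / sqrt(n) <= E."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return math.ceil((z * sigma / E) ** 2)

# sigma guessed as (anticipated range)/4, e.g. range 40 -> sigma about 10
n = sample_size_for_margin(sigma=40 / 4, E=2.0)   # 97
```

Rounding up (rather than to the nearest integer) guarantees the margin of error is at most E.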
Example 7.1 (Airline Revenue)
Example 7.2 – Strength of Steel Beams
See Example 7.2 on page 240 of the course textbook.
Power Calculation for One-sided Z-tests
$$\pi(\mu) = P[\text{Test rejects } H_0 \mid \mu]$$

(Figure: standard normal density showing the tail areas Φ(−z) and 1 − Φ(z) at −z and z.)
Power Function Curves
Example 7.3 (SAT Coaching: Power Calculation)
See Example 7.3 on page 244 of the course textbook.

$$\pi(\mu) = \Phi\left[-z_{\alpha} + \frac{(\mu - \mu_0)\sqrt{n}}{\sigma}\right]$$
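The one-sided power function is direct to compute (a Python sketch; the defaults µ0 = 0, σ = 1, n = 25 are illustrative, not from the example):

```python
from statistics import NormalDist

def power_one_sided(mu, mu0=0.0, sigma=1.0, n=25, alpha=0.05):
    """pi(mu) = Phi(-z_alpha + (mu - mu0) * sqrt(n) / sigma)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    return nd.cdf(-z_alpha + (mu - mu0) * n ** 0.5 / sigma)

# at mu = mu0 the power equals the level alpha; it increases with mu - mu0
p0 = power_one_sided(0.0)    # 0.05
p1 = power_one_sided(0.5)    # about 0.804
```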
Power Calculation for a Two-Sided Test
(See Figure 7.3 on page 245 of the course textbook.)
Power Curve for Two-sided Test
(See Figure 7.4 on page 246 of the course textbook.)
It is easier to detect large differences from the null hypothesis.
Larger samples lead to more powerful tests.
Power as a function of µ and n, µ0=0, σ=1
Uses function persp in S-Plus
Sample Size Determination for a One-Sided z-Test
• Determine the sample size so that a study will have sufficient power to detect an effect of practically important magnitude.
• If the goal of the study is to show that the mean response µ under a treatment is higher than the mean response µ0 without the treatment, then µ − µ0 is called the treatment effect.
• Let δ > 0 denote a practically important treatment effect and let 1 − β denote the minimum power required to detect it. The goal is to find the minimum sample size n which would guarantee that an α-level test of H0 has at least 1 − β power to reject H0 when the treatment effect is at least δ.
Sample Size Determination for a One-sided Z-test
Because power is an increasing function of µ − µ0, it is only necessary to find the n that makes the power 1 − β at µ = µ0 + δ:

$$\pi(\mu_0 + \delta) = \Phi\left(-z_{\alpha} + \frac{\delta\sqrt{n}}{\sigma}\right) = 1 - \beta \quad \text{[See Equation (7.7), Slide 11]}$$

Since $\Phi(z_\beta) = 1 - \beta$, we have $-z_\alpha + \dfrac{\delta\sqrt{n}}{\sigma} = z_\beta$. Solving for n, we obtain

$$n = \left[\frac{(z_\alpha + z_\beta)\,\sigma}{\delta}\right]^2$$
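The derivation above translates directly into code (stdlib Python; δ = 0.3, σ = 1 are illustrative values, not from the textbook example):

```python
import math
from statistics import NormalDist

def one_sided_sample_size(delta, sigma=1.0, alpha=0.05, power=0.8):
    """n = ceil(((z_alpha + z_beta) * sigma / delta)^2) for a one-sided z-test."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    z_beta = nd.inv_cdf(power)        # z_beta, with beta = 1 - power
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

n = one_sided_sample_size(delta=0.3)  # 69
```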
Example 7.5 (SAT Coaching: Sample Size Determination)
Sample Size Determination for a
Two-Sided z-Test
$$n \approx \left[\frac{(z_{\alpha/2} + z_\beta)\,\sigma}{\delta}\right]^2$$

Read on your own the derivation on pages 248–249.
Power and Sample Size in S-Plus
normal.sample.size(mean.alt = 0.3)
mean.null sd1 mean.alt delta alpha power n1
0 1 0.3 0.3 0.05 0.8 88
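The n1 = 88 printed above can be checked against the two-sided formula (stdlib Python; parameters match the printout: σ = 1, δ = 0.3, α = 0.05, power = 0.8):

```python
import math
from statistics import NormalDist

nd = NormalDist()
z_half_alpha = nd.inv_cdf(1 - 0.05 / 2)   # z_{alpha/2}
z_beta = nd.inv_cdf(0.8)                  # z_beta, beta = 0.2
n = math.ceil(((z_half_alpha + z_beta) * 1.0 / 0.3) ** 2)
# n = 88, agreeing with normal.sample.size(mean.alt = 0.3)
```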
Inference on Mean (Small Samples)
The sampling variability of s² may be sizable if the sample is small (less than 30). Inference methods must take this variability into account when σ² is unknown.
Confidence Intervals on Mean
$$1 - \alpha = P\left[-t_{n-1,\alpha/2} \le T = \frac{\bar{X} - \mu}{S/\sqrt{n}} \le t_{n-1,\alpha/2}\right]$$
$$= P\left[\bar{X} - t_{n-1,\alpha/2}\frac{S}{\sqrt{n}} \le \mu \le \bar{X} + t_{n-1,\alpha/2}\frac{S}{\sqrt{n}}\right]$$

$$\bar{x} - t_{n-1,\alpha/2}\frac{s}{\sqrt{n}} \le \mu \le \bar{x} + t_{n-1,\alpha/2}\frac{s}{\sqrt{n}} \quad \text{[Two-Sided } 100(1-\alpha)\% \text{ CI]}$$
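A minimal sketch of the t-interval, assuming `scipy` is available for the t quantile (the data values are invented for illustration; `t.test` in S-Plus does the same computation):

```python
from scipy import stats

def t_interval(x, alpha=0.05):
    """Two-sided 100(1-alpha)% t-interval for mu (sigma unknown)."""
    n = len(x)
    xbar = sum(x) / n
    s = (sum((xi - xbar) ** 2 for xi in x) / (n - 1)) ** 0.5
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)   # t_{n-1, alpha/2}
    return xbar - t * s / n ** 0.5, xbar + t * s / n ** 0.5

# illustrative data: n = 5, xbar = 11, s^2 = 2.5
lo, hi = t_interval([10, 12, 11, 9, 13])       # about (9.04, 12.96)
```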
Example 7.7, 7.8, and 7.9
See Examples 7.7, 7.8, and 7.9 from the course textbook.
Inference on Variance
Assume that $X_1, \ldots, X_n$ is a random sample from an $N(\mu, \sigma^2)$ distribution. Then

$$\chi^2 = \frac{(n-1)S^2}{\sigma^2}$$

has a chi-square distribution with n − 1 d.f., so

$$1 - \alpha = P\left[\chi^2_{n-1,1-\alpha/2} \le \frac{(n-1)S^2}{\sigma^2} \le \chi^2_{n-1,\alpha/2}\right]$$
CI for σ2 and σ
$$\frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \le \sigma^2 \le \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}}$$

$$s\sqrt{\frac{n-1}{\chi^2_{n-1,\alpha/2}}} \le \sigma \le s\sqrt{\frac{n-1}{\chi^2_{n-1,1-\alpha/2}}}$$
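As a sketch of the variance CI above (again assuming `scipy` for the chi-square quantiles; the data are the same invented values used earlier):

```python
from scipy import stats

def variance_interval(x, alpha=0.05):
    """100(1-alpha)% CI for sigma^2 from (n-1)S^2/sigma^2 ~ chi^2_{n-1}."""
    n = len(x)
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    lo = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
    hi = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
    return lo, hi

lo, hi = variance_interval([10, 12, 11, 9, 13])   # s^2 = 2.5, n = 5
```

Note the interval is not symmetric about s², because the chi-square distribution is skewed.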
Hypothesis Test on Variance
$$\chi^2 = \frac{(n-1)s^2}{\sigma_0^2}$$
Prediction Intervals
• Many practical applications call for an interval estimate of an individual (future) observation sampled from a population, rather than of the mean of the population.
• An interval estimate for an individual observation is called a prediction interval:

$$\bar{x} - t_{n-1,\alpha/2}\, s\sqrt{1 + \frac{1}{n}} \le X \le \bar{x} + t_{n-1,\alpha/2}\, s\sqrt{1 + \frac{1}{n}}$$
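A sketch of the prediction interval (assuming `scipy`; same invented data as before, so it can be compared with the t-interval computed earlier):

```python
from scipy import stats

def prediction_interval(x, alpha=0.05):
    """Interval for one future observation; note sqrt(1 + 1/n), not sqrt(1/n)."""
    n = len(x)
    xbar = sum(x) / n
    s = (sum((xi - xbar) ** 2 for xi in x) / (n - 1)) ** 0.5
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    w = t * s * (1 + 1 / n) ** 0.5
    return xbar - w, xbar + w

lo, hi = prediction_interval([10, 12, 11, 9, 13])   # much wider than the t-CI
```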
Confidence vs. Prediction Interval
Prediction interval for a single future observation:

$$\bar{x} - t_{n-1,\alpha/2}\, s\sqrt{1 + \frac{1}{n}} \le X \le \bar{x} + t_{n-1,\alpha/2}\, s\sqrt{1 + \frac{1}{n}}$$

As n → ∞ the interval converges to $[\mu - z_{\alpha/2}\sigma,\; \mu + z_{\alpha/2}\sigma]$.

Confidence interval for µ:

$$\bar{x} - t_{n-1,\alpha/2}\, s\sqrt{\frac{1}{n}} \le \mu \le \bar{x} + t_{n-1,\alpha/2}\, s\sqrt{\frac{1}{n}}$$

As n → ∞ the interval converges to the single point µ.
Example 7.12: Tear Strength of Rubber
Tolerance Intervals
Suppose we want an interval which will contain at least 90% (= 1 − γ) of the strengths of future batches (observations) with 95% (= 1 − α) confidence.