Computing a t-test
the t statistic
the t distribution
Part I: Computing a t-test
Z as test statistic
Use a Z-statistic only if you know the population standard deviation (σ).
The Z-statistic converts a sample mean into a z-score from the null distribution:
$Z_{\text{test}} = \dfrac{\bar{X} - \mu_{H_0}}{SE} = \dfrac{\bar{X} - \mu_{H_0}}{\sigma/\sqrt{n}}$
The p-value is the probability of getting a $Z_{\text{test}}$ as extreme as yours under the null distribution.
One-tailed test (α = .05): reject H0 if Z is beyond $Z_{crit} = -1.65$ (all of α in one tail); otherwise fail to reject H0.
Two-tailed test (α = .05): reject H0 if Z is beyond $Z_{crit} = \pm 1.96$ (.025 in each tail); otherwise fail to reject H0.
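A minimal sketch of the z-test calculation in Python, using only the standard library; the sample numbers below are hypothetical, chosen just to illustrate the mechanics:

```python
from math import sqrt
from statistics import NormalDist

def z_test(xbar, mu0, sigma, n):
    """Z-statistic and two-tailed p-value for a one-sample z-test.

    Requires the population standard deviation (sigma) to be known.
    """
    se = sigma / sqrt(n)                      # standard error of the mean
    z = (xbar - mu0) / se                     # distance from mu0 in SE units
    p = 2 * (1 - NormalDist().cdf(abs(z)))    # two-tailed p-value
    return z, p

# Hypothetical data: n = 25, sample mean 103, null mean 100, sigma = 15
z, p = z_test(xbar=103, mu0=100, sigma=15, n=25)
print(round(z, 2), round(p, 3))  # z = 1.0, p ≈ 0.317
```

Since |z| = 1.0 is inside ±1.96, we would fail to reject H0 at α = .05 (two-tailed).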
t as a test statistic
z-test: population σ is known; the sample provides $\bar{X}$.
t-test: population σ is unknown; the sample provides $\bar{X}$ and s.
t as a test statistic
t-test: uses sample data to evaluate a hypothesis about a population mean when the population standard deviation (σ) is unknown.
We use the sample standard deviation (s) to estimate the standard error:
$s_{\bar{X}} = \dfrac{s}{\sqrt{n}}$
$t = \dfrac{\bar{X} - \mu_{H_0}}{s_{\bar{X}}}$
t as a test statistic
Use a t-statistic when you don't know the population standard deviation (σ).
The t-statistic converts a sample mean into a t-score (using the null hypothesis):
$t_{\text{test}} = \dfrac{\bar{X} - \mu_{H_0}}{SE} = \dfrac{\bar{X} - \mu_{H_0}}{s/\sqrt{n}}$
The p-value is the probability of getting a $t_{\text{test}}$ as extreme as yours under the null distribution.
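The same sketch with s in place of σ; this assumes SciPy is available for the t distribution, and the example numbers are again hypothetical:

```python
from math import sqrt
from scipy.stats import t as t_dist  # assumes SciPy is installed

def one_sample_t(xbar, mu0, s, n):
    """t-statistic and two-tailed p-value when sigma is unknown."""
    se = s / sqrt(n)                      # estimated standard error: s / sqrt(n)
    t = (xbar - mu0) / se
    p = 2 * t_dist.sf(abs(t), df=n - 1)   # two-tailed p-value, df = n - 1
    return t, p

# Hypothetical data: n = 16, sample mean 52, null mean 50, s = 6
t_val, p_val = one_sample_t(xbar=52, mu0=50, s=6, n=16)
print(round(t_val, 3), round(p_val, 3))
```

The only structural change from the z-test is the denominator (s instead of σ) and the use of a t distribution with n − 1 degrees of freedom for the p-value.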
t distribution
You can use s to approximate σ, but then the sampling distribution is a t distribution instead of a normal distribution.
Why are Z-scores normally distributed, but t-scores are not?
$z_{\text{test}} = \dfrac{\bar{X} - \mu_{H_0}}{\sigma_{\bar{X}}}$: the denominator $\sigma_{\bar{X}}$ is a constant, so z is normally distributed.
$t_{\text{test}} = \dfrac{\bar{X} - \mu_{H_0}}{s_{\bar{X}}}$: the denominator $s_{\bar{X}}$ is itself a random variable, so t is non-normal.
Random variable
t distribution
With a very large sample, the estimated standard error will be very near the true standard error, and thus t will be almost exactly the same as Z.
Unlike the standard normal (z) distribution, t is a family of curves.
As n gets bigger, t becomes more normal.
For smaller n, the t distribution has a flatter peak and fatter tails than the normal.
We use degrees of freedom to identify which t curve to use. For a basic one-sample t-test, df = n − 1.
Degrees of freedom
the number of scores in a sample that are free to vary
e.g., for one sample, the sample mean restricts one value, so df = n − 1
t distribution
Too many different curves to put each one in a table. Table E.6 shows just the critical values for one tail at various degrees of freedom and various levels of alpha.
Table E.6 — critical values of t ($t_{crit}$), level of significance for a one-tailed test:

df    α=.05    α=.025    α=.01     α=.005
 1    6.314    12.706    31.821    63.657
 2    2.920     4.303     6.965     9.925
 3    2.353     3.182     4.541     5.841
 4    2.132     2.776     3.747     4.604
For a sample of size 25, doing a two-tailed test, what are the degrees of freedom and the critical value of t for an alpha of .05 and for an alpha of .01?
df = 24; $t_{crit}$ = 2.064 (α = .05); $t_{crit}$ = 2.797 (α = .01)
You have a sample of size 13 and you are doing a one-tailed test. Your $t_{calc}$ = 2. What do you approximate the p-value to be?
p-value between .025 and .05
What if you had the same data, but were doing a two-tailed test?
p-value between .05 and .10
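The table gives only a bracket for the p-value; assuming SciPy is available, the exact tail probability can be checked against that bracket:

```python
from scipy.stats import t as t_dist  # assumes SciPy is installed

# Sample of size 13, so df = 12; calculated t-statistic of 2
df, t_calc = 12, 2.0
p_one = t_dist.sf(t_calc, df)   # one-tailed p-value (upper-tail area)
p_two = 2 * p_one               # two-tailed p-value
print(round(p_one, 3), round(p_two, 3))
```

The exact values land inside the brackets read from Table E.6: between .025 and .05 one-tailed, and between .05 and .10 two-tailed.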
Illustration
In a study of families of cancer patients, Compas et al (1994) observed that very young children report few symptoms of anxiety on the CMAS. Contained within the CMAS are nine items that make up a social desirability scale. Compas wanted to know if young children have unusually high social desirability scores.
Illustration
He got a sample of 36 children from families with a parent who has cancer. The mean SDS score was 4.39 with a standard deviation of 2.61. Previous studies indicated that a population of elementary school children (all ages) typically has a mean of 3.87 on the SDS. Is there evidence that Compas's sample of very young children was significantly different from the general child population?
$t_{calc}$ = 1.195, df = 35; two-tailed p-value between .20 and .30
What should he conclude? What can he do now?
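A quick check of the illustration's numbers (assuming SciPy is available for the exact p-value):

```python
from math import sqrt
from scipy.stats import t as t_dist  # assumes SciPy is installed

# Compas et al. (1994): n = 36, mean SDS = 4.39, s = 2.61, comparison mean = 3.87
xbar, mu0, s, n = 4.39, 3.87, 2.61, 36
se = s / sqrt(n)                           # 2.61 / 6 = 0.435
t_calc = (xbar - mu0) / se
p_two = 2 * t_dist.sf(abs(t_calc), df=n - 1)
print(round(t_calc, 3))  # 1.195
```

The exact two-tailed p-value falls between .20 and .30, matching the slide's table-based approximation, so the result is nowhere near significance at α = .05.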
Part II
Measures of Effect Size
Confidence Intervals; Cohen's d
Effect Size
Tells you about the magnitude of the phenomenon
Helpful in deciding importance
Not just which direction, but how far
The null hypothesis is never true in fact. Give me a large enough sample and I can guarantee a significant result. -Abelson
Confidence Interval
We could estimate effect size with our observed sample deviation, $\bar{X} - \mu_{H_0}$.
But we want a window of uncertainty around that estimate. So we provide a confidence interval for our observed deviation. We say we are xx% confident that the true effect size lies somewhere in that window.
The true mean μ could be above or below our observed $\bar{X}$. With α = .05, 95% of sample means fall within 1.96 standard errors of μ, so the window
$\bar{X} - 1.96(SE)$ to $\bar{X} + 1.96(SE)$
captures the true mean 95% of the time.
Let's generalize. Our window:
$\bar{X} - t_{crit}(SE)$ to $\bar{X} + t_{crit}(SE)$
Confidence intervals
Confidence level = 1 − α. If alpha is .05, then the confidence level is 95%. 95% confidence means that 95% of the time, this procedure will capture the true mean (or the true effect) somewhere within the range.
$\bar{X} \pm t_{crit}(s_{\bar{X}})$
$(\bar{X} - \mu_{H_0}) \pm t_{crit}(s_{\bar{X}})$
Exercise in constructing CI
We have a sample of 10 girls who, on average, went on their 1st dates at 15.5 yrs, with a standard deviation of 4.2 years.
What range of values can we assert with 95% confidence contains the true population mean?
Margin = 3 years; CI = (12.50, 18.50)
Using α = .05, would we reject the null hypothesis that μ = 10? Yes (10 lies outside the interval).
What about μ = 17? No (17 lies inside the interval).
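A sketch of the interval construction (assuming SciPy is available to look up $t_{crit}$ rather than reading it from the table):

```python
from math import sqrt
from scipy.stats import t as t_dist  # assumes SciPy is installed

# n = 10 girls, mean age at first date 15.5 years, s = 4.2, 95% confidence
xbar, s, n, alpha = 15.5, 4.2, 10, 0.05
se = s / sqrt(n)
t_crit = t_dist.ppf(1 - alpha / 2, df=n - 1)  # two-tailed critical value, df = 9
margin = t_crit * se
lo, hi = xbar - margin, xbar + margin
print(round(lo, 2), round(hi, 2))  # matches the slide's (12.50, 18.50)
```

Because 10 falls outside (lo, hi) we reject H0: μ = 10, while 17 falls inside, so we fail to reject H0: μ = 17.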
Exercise in constructing CI
We have a sample of 10 girls who, on average, went on their 1st dates at 15.5 yrs, with a standard deviation of 4.2 years.
Let's say we were comparing this sample (of girls from New York) to the general American population, μ = 13 years.
What is our C.I. estimate of the effect size for being from New York?
Margin = 3 years; CI = (−0.50, 5.50)
Factors affecting a CI
1. Level of confidence (higher confidence ==> wider interval)
2. Sample size (larger n ==> narrower interval)
Confidence Intervals
Pros:
Gives a range of likely values for the effect, in original units
Has all the information of a significance test and more: builds in the level of certainty
Cons:
Units are specific to the sample
Hard to compare across studies
Cohen's d
Exercise in constructing d
We have a sample of 10 girls who, on average, went on their 1st dates at 15.5 yrs, with a standard deviation of 4.2 years.
Let's say we were comparing this sample (of girls from New York) to the general American population, μ = 13 years.
What is our d estimate of the effect size for being from New York?
Exercise in constructing d
What is our d estimate of the effect size for being from New York?
$d = \dfrac{\bar{X} - \mu}{s} = \dfrac{15.5 - 13}{4.2} = .595$
Is this big?
Benchmarks: .2 small; .5 moderate; .8 large; >1 a very big deal
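Cohen's d needs only arithmetic; a small sketch using the slide's numbers, with the benchmark labels coded as a helper (the `label` function is just an illustration, not a standard API):

```python
def cohens_d(xbar, mu, s):
    """Cohen's d: deviation from the comparison mean in standard-deviation units."""
    return (xbar - mu) / s

def label(d):
    """Rough benchmarks from the slide: .2 small, .5 moderate, .8 large, >1 very big."""
    d = abs(d)
    if d >= 1.0:
        return "a very big deal"
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "moderate"
    return "small"

# New York sample: mean 15.5, population mean 13, s = 4.2
d = cohens_d(xbar=15.5, mu=13, s=4.2)
print(round(d, 3), label(d))  # 0.595 moderate
```

Note that d is unit-free: the same .595 would result whether age was measured in years or months, which is what makes it comparable across studies.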
Cohen's d
Pros:
Uses an important reference point (s)
Is standardized: can be compared across studies
Cons:
Loses raw units
Provides no estimate of certainty
Review
Hypothesis Tests
t-test
$t_{\text{test}} = \dfrac{\bar{X} - \mu_{H_0}}{s_{\bar{X}}}$
or CI: $(\bar{X} - \mu_{H_0}) \pm t_{crit}(s_{\bar{X}})$