
Assumptions for Test Statistics

Bernardo, Mary Louise; De Guzman, Ivy Bernadette; Pelimiano, Sheena Mariz
3E3, October 08, 2009

T-test for One Mean

1. Z follows a standard normal distribution under the null hypothesis.
2. ps² follows a chi-square distribution with p degrees of freedom under the null hypothesis, where p is a positive constant.
3. Z and s are independent.

Z-test for One Mean
1. The distribution of the response is normal.
2. Nuisance parameters should be known, or estimated with high accuracy.
3. Z-tests focus on a single parameter, and treat all other unknown parameters as being fixed at their true values.
4. If the sample size is not large enough for these estimates to be reasonably accurate, the Z-test may not perform well.

T-test for Equal Variances
1. Problem objective: Compare two populations.
2. Data type: Interval
3. Descriptive measurement: Central location
4. Experimental design: Independent samples
5. Population variances: Equal

T-test for Unequal Variances
1. Problem objective: Compare two populations.
2. Data type: Interval
3. Descriptive measurement: Central location
4. Experimental design: Independent samples
5. Population variances: Unequal
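The equal- and unequal-variance cases map directly onto the `equal_var` flag of `scipy.stats.ttest_ind`. A minimal sketch, assuming scipy is available; the two groups below are made up:

```python
from scipy import stats

group_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
group_b = [11.2, 11.5, 11.0, 11.7, 11.3, 11.4]

# Pooled t-test (assumes equal population variances): df = n1 + n2 - 2
t_eq, p_eq = stats.ttest_ind(group_a, group_b, equal_var=True)

# Welch t-test (unequal variances): df from the Welch-Satterthwaite formula
t_uneq, p_uneq = stats.ttest_ind(group_a, group_b, equal_var=False)
```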

Z-test for Two Independent Means
1. The distribution of the response is normal.
2. Nuisance parameters should be known, or estimated with high accuracy.
3. Z-tests focus on a single parameter, and treat all other unknown parameters as being fixed at their true values.
4. If the sample size is not large enough for these estimates to be reasonably accurate, the Z-test may not perform well.

Paired T-test for Two Related Means
1. Independent observations.
2. Interval scale, or ordinal scale with many alternatives.
3. Normal distribution(s).
4. No skew.
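A paired design can be sketched with `scipy.stats.ttest_rel`, which tests the differences within each pair. The before/after values below are hypothetical:

```python
from scipy import stats

# Hypothetical before/after measurements on the same six subjects
before = [140, 135, 150, 142, 138, 145]
after = [132, 130, 144, 138, 134, 139]

t_stat, p_value = stats.ttest_rel(before, after)  # df = n - 1 = 5
```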

One-way ANOVA for Three or more Means
1. The subjects are sampled randomly.
2. The groups are independent.
3. The population variances are homogeneous.
4. The population distribution of the DV is normal in shape.
5. The null hypothesis is that all population means are equal.

Two-way ANOVA for Three or more Means
1. The distribution of the response is normally distributed.
2. The variance for each treatment is identical.
3. The samples are independent.

Fisher's Least Significant Difference
1. If all sample sizes are equal, the LSD is the same for all pairs of means.
2. The LSD must be calculated separately if sample sizes are different.

Tukey-Kramer
1. Unequal group sizes
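The one-way ANOVA above can be sketched with `scipy.stats.f_oneway` on three hypothetical independent groups:

```python
from scipy import stats

# Three hypothetical independent groups, c = 3, N = 12
g1 = [5, 6, 5, 7]
g2 = [8, 9, 7, 8]
g3 = [4, 5, 4, 6]

f_stat, p_value = stats.f_oneway(g1, g2, g3)
# dfC = c - 1 = 2 (between groups), dfE = N - c = 9 (within groups)
```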

Tukey's Multiple Comparison
1. The null hypothesis is that there is no difference among the population means.
2. All sample sizes are equal.

Bonferroni Adjustment Method
1. Used to reduce the Type I error rate.
2. There are different population means.
3. Valid for equal and unequal sample sizes.

Tukey Procedure
1. The observations being tested are independent.
2. The means are from normally distributed populations.
3. There is equal variation across observations (homoscedasticity).
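The Bonferroni Adjustment Method divides the overall significance level by the number of comparisons, which is why it is valid for both equal and unequal sample sizes. A minimal sketch:

```python
def bonferroni_alpha(alpha, m):
    """Per-comparison significance level when making m comparisons
    at an overall (familywise) level of alpha."""
    return alpha / m
```

For example, with an overall alpha of 0.05 and 10 pairwise comparisons, each comparison is tested at the 0.005 level.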

One-way Chi-square for One Variance or Standard Deviation
1. The sample variance is an estimator of the population variance.
2. The samples are randomly selected.
3. The sampled population is normally distributed.

F-test for Variances
1. The samples arise from populations with homogeneous variances under the null hypothesis.

Levene's Test for Homogeneity of Variance
1. The variances of the populations from which the different samples are drawn are equal.

Z-test for One Proportion
1. The population consists of nominal values.
2. The population proportion p is the parameter used to describe a population of nominal data.

One-way Chi-square for One Proportion
1. Data are classified into categories.
2. The variables are nominal.
3. The samples are randomly selected.
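Levene's test is available as `scipy.stats.levene`, and the classical F statistic is simply the ratio of the two sample variances. A sketch with two made-up samples of deliberately different spread:

```python
import statistics
from scipy import stats

a = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]  # low spread
b = [10.0, 8.5, 11.8, 9.2, 11.1, 9.7]   # high spread

# Levene's test: H0 says the population variances are equal
lev_stat, p_value = stats.levene(a, b)

# Classical F-test statistic: ratio of sample variances, df = (n1 - 1, n2 - 1)
f_ratio = statistics.variance(b) / statistics.variance(a)
```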

Z-test for Two Independent Proportions
1. The sample sizes are large.
2. The difference between the two sample proportions is approximately normally distributed for large samples.

Two-way Chi-square for Two Independent Proportions
1. Data type: Nominal
2. Problem objective: Analyze the relationship between two variables and compare two or more populations.

Two-way Chi-square for Three or more Proportions
1. Problem objective: Analyze the relationship between two variables and compare three or more populations.
2. Data type: Nominal

Marascuilo Procedure
1. Unequal population proportions

Wilcoxon Rank Sum Test
1. Problem objective: Compare two populations.
2. Data type: Ordinal, or interval but not normal
3. Experimental design: Independent samples

Wilcoxon Signed-Rank Test
1. Problem objective: Compare two populations.
2. Data type: Interval
3. Distribution of differences: Non-normal
4. Experimental design: Matched pairs
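The two-proportion Z statistic can be computed directly from its definition using a pooled proportion under the null hypothesis. The counts below are hypothetical:

```python
import math

# Hypothetical counts: x successes out of n trials in each group
x1, n1 = 45, 100
x2, n2 = 30, 100

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 = p2
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
```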

Kruskal-Wallis Test
1. Problem objective: Compare two or more populations.
2. Data type: Ordinal, or interval but not normal
3. Experimental design: Independent samples

Friedman Rank Test
1. Problem objective: Compare two or more populations.
2. Data type: Ordinal, or interval but not normal
3. Experimental design: Blocked samples
4. Focuses on one independent variable (the treatment variable) of interest.
5. Includes a second variable, referred to as a blocking variable, that can be used to control for confounding variables.
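Both rank-based tests are in scipy; the difference is the design. A sketch with hypothetical ratings, read either as three independent groups (Kruskal-Wallis) or as three treatments applied to the same five blocks (Friedman):

```python
from scipy import stats

t1 = [7, 9, 6, 8, 10]
t2 = [3, 4, 2, 5, 4]
t3 = [6, 5, 7, 6, 5]

# Independent samples: Kruskal-Wallis, df = k - 1 = 2
h_stat, p_kw = stats.kruskal(t1, t2, t3)

# Blocked (repeated-measures) samples: Friedman, df = c - 1 = 2
chi2, p_fr = stats.friedmanchisquare(t1, t2, t3)
```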

O.J. Dunn Test
1. Sample sizes are equal.

Nemenyi Test
1. Sample sizes are unequal.

Degrees of Freedom

T-test for One Mean: df = n - 1

T-test for Equal Variances (σ1² = σ2²): df = n1 + n2 - 2

T-test for Unequal Variances (σ1² ≠ σ2²):
df = (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 - 1) + (s2²/n2)²/(n2 - 1)]
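The unequal-variance df above (the Welch-Satterthwaite approximation) can be sketched as a small helper function:

```python
def welch_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite degrees of freedom for the
    unequal-variance (Welch) t-test, from the sample
    variances s1_sq, s2_sq and sample sizes n1, n2."""
    a, b = s1_sq / n1, s2_sq / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
```

When the variances and sample sizes match, the formula recovers the pooled value n1 + n2 - 2; with very unequal variances it drops toward the df of the noisier sample.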

Paired T-test for Two Related Means: df = n - 1, provided that the differences are normally distributed

One-way ANOVA for Three or more Means: dfC = c - 1; dfE = N - c; dfT = N - 1

Two-way ANOVA: dfR = r - 1; dfC = c - 1; dfI = (r - 1)(c - 1); dfE = rc(n - 1); dfT = N - 1

Tukey's HSD (Honestly Significant Difference): df = N - c
Where N = total number of observations
c = number of treatment means

Fisher's LSD (Least Significant Difference), Tukey-Kramer, Tukey's Multiple Comparison, or Bonferroni Adjustment Method: df: v = n - k

One-way Chi-square for One Variance or Standard Deviation: df: v = n - 1

F-test for Variances: df(numerator) = v1 = n1 - 1; df(denominator) = v2 = n2 - 1

Levene's Test for Homogeneity of Variance: df = (k - 1, n - k)

One-way Chi-square for One Proportion: df: v = k - 1

Two-way Chi-square for Two Independent Proportions: df: v = k - 1

Marascuilo Procedure: df = n - 1

Wilcoxon Rank Sum Test: df = n1 + n2 - 2

Wilcoxon Signed-Rank Test: df = n - 1

Chi-square (Goodness of Fit)

df = k - 1 - c
Where k = number of categories
c = number of parameters estimated from the sample data

Chi-square (Test of Independence): df = (r - 1)(c - 1)
Where r = number of rows
c = number of columns

Kruskal-Wallis Test: df = k - 1

Friedman Test: dfC = c - 1; dfR = n - 1; dfE = (c - 1)(n - 1) = N - n - c + 1
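The goodness-of-fit df rule maps onto the `ddof` parameter of `scipy.stats.chisquare`, which subtracts the number of estimated parameters from the default k - 1. A sketch with hypothetical category counts:

```python
from scipy import stats

observed = [18, 22, 20, 25, 15]  # k = 5 categories, 100 observations total

# Default expected counts are uniform (20 each); df = k - 1 = 4
chi2, p_value = stats.chisquare(observed)
# If c parameters were estimated from the data, pass ddof=c so df = k - 1 - c
```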
