
CHI-SQUARE TEST

Adapted by Anne F. Maben from "Statistics for the Social Sciences" by Vicki Sharp
The chi-square (X²) test is used to determine whether there is a significant difference between the expected
frequencies and the observed frequencies in one or more categories. Do the numbers of individuals or objects
that fall in each category differ significantly from the numbers you would expect? Is this difference between the
expected and observed frequencies due to sampling error, or is it a real difference?
Chi-Square Test Requirements
1. Quantitative data.
2. One or more categories.
3. Independent observations.
4. Adequate sample size (at least 10).
5. Simple random sample.
6. Data in frequency form.
7. All observations must be used.
Expected Frequencies
When you find the value for chi square, you determine whether the observed frequencies differ significantly
from the expected frequencies. You find the expected frequencies for chi square in two ways:
1. You hypothesize that all the frequencies are equal in each category. For example, you might expect that
half of the entering freshman class of 200 at Tech College will be identified as women and half as men. You
figure the expected frequency by dividing the number in the sample by the number of categories. In this
example, where there are 200 entering freshmen and two categories, male and female, you divide your sample
of 200 by 2, the number of categories, to get 100 (expected frequencies) in each category.
2. You determine the expected frequencies on the basis of some prior knowledge. Let's use the Tech College
example again, but this time pretend we have prior knowledge of the frequencies of men and women in each
category from last year's entering class, when 60% of the freshmen were men and 40% were women. This
year you might expect that 60% of the total would be men and 40% would be women. You find the expected
frequencies by multiplying the sample size by each of the hypothesized population proportions. If the
freshmen total were 200, you would expect 120 to be men (60% x 200) and 80 to be women (40% x 200).
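As a quick sketch (plain Python, no libraries), both ways of finding expected frequencies reduce to multiplying the sample size by the hypothesized proportions:

```python
# Expected frequencies from hypothesized population proportions
# (the Tech College example: 200 entering freshmen).

def expected_frequencies(n, proportions):
    """Multiply the sample size by each hypothesized proportion."""
    return [n * p for p in proportions]

# Way 1 - equal proportions: two categories, so each expects n / 2.
print(expected_frequencies(200, [0.5, 0.5]))   # [100.0, 100.0]

# Way 2 - prior knowledge: 60% men, 40% women from last year's class.
print(expected_frequencies(200, [0.6, 0.4]))   # [120.0, 80.0]
```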
Now let's take a situation, find the expected frequencies, and use the chi-square test to solve the problem.
Situation
Thai, the manager of a car dealership, did not want to stock cars that were bought less frequently because of
their unpopular color. The five colors that she ordered were red, yellow, green, blue, and white. According to Thai,
the expected frequencies, or number of customers choosing each color, should follow the percentages of last
year. She felt 20% would choose yellow, 30% would choose red, 10% would choose green, 10% would choose blue,
and 30% would choose white. She then took a random sample of 150 customers and asked them their color
preferences. The results of this poll are shown in Table 1 under the column labeled "observed frequencies."
Table 1 - Color Preference for 150 Customers for Thai's Superior Car Dealership
Category (Color)   Observed Frequencies   Expected Frequencies
Yellow             35                     30
Red                50                     45
Green              30                     15
Blue               10                     15
White              25                     45
The expected frequencies in Table 1 are figured from last year's percentages. Based on the percentages for
last year, we would expect 20% to choose yellow. Figure the expected frequencies for yellow by taking 20% of
the 150 customers, getting an expected frequency of 30 people for this category. For the color red we would
expect 30% of 150, or 45 people, to fall in this category. Using this method, Thai figured out the expected
frequencies 30, 45, 15, 15, and 45. Obviously, there are discrepancies between the colors preferred by
customers in the poll taken by Thai and the colors preferred by the customers who bought their cars last year.
Most striking is the difference in the green and white colors. If Thai were to follow the results of her poll, she
would stock twice as many green cars as she would if she followed the customer color preference for green
based on last year's sales.
In the case of white cars, she would stock half as many this year. What to do? Thai needs to know whether or
not the discrepancies between last year's choices (expected frequencies) and this year's preferences on the
basis of her poll (observed frequencies) demonstrate a real change in customer color preferences. It could be
that the differences are simply a result of the random sample she chanced to select. If so, then the population
of customers really has not changed from last year as far as color preferences go. The null hypothesis states
that there is no significant difference between the expected and observed frequencies. The alternative
hypothesis states they are different. The level of significance (the point at which you can say with 95%
confidence that the difference is NOT due to chance alone) is set at .05 (the standard for most science
experiments). The chi-square formula used on these data is
X² = Σ (O - E)² / E

where O is the observed frequency in each category,
E is the expected frequency in the corresponding category,
Σ is "sum of,"
df is the "degrees of freedom" (n - 1), and
X² is chi square.
PROCEDURE
We are now ready to use our formula for X 2 and find out if there is a significant difference between the
observed and expected frequencies for the customers in choosing cars. We will set up a worksheet; then you will
follow the directions to form the columns and solve the formula.
1. Directions for Setting Up Worksheet for Chi Square
Category   O    E    (O - E)   (O - E)²   (O - E)²/E
yellow     35   30     5         25        0.83
red        50   45     5         25        0.56
green      30   15    15        225       15.00
blue       10   15    -5         25        1.67
white      25   45   -20        400        8.89

X² = 26.95
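The worksheet arithmetic can be checked in a few lines of Python. Note that summing the unrounded contributions gives 26.94; the 26.95 in the worksheet comes from summing the rounded column entries:

```python
# Chi-square statistic for the worksheet above: sum of (O - E)^2 / E
# over the five color categories.

observed = {"yellow": 35, "red": 50, "green": 30, "blue": 10, "white": 25}
expected = {"yellow": 30, "red": 45, "green": 15, "blue": 15, "white": 45}

chi_square = sum((observed[c] - expected[c]) ** 2 / expected[c] for c in observed)
print(round(chi_square, 2))  # 26.94 (the worksheet's 26.95 sums rounded entries)
```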
2. After calculating the chi-square value, find the "Degrees of Freedom." (DO NOT SQUARE THE NUMBER
YOU GET, NOR FIND THE SQUARE ROOT - THE NUMBER YOU GET FROM COMPLETING THE
CALCULATIONS AS ABOVE IS CHI SQUARE.)
Degrees of freedom (df) refers to the number of values that are free to vary after restriction has been
placed on the data. For instance, if you have four numbers with the restriction that their sum has to be 50,
then three of these numbers can be anything, they are free to vary, but the fourth number definitely is
restricted. For example, the first three numbers could be 15, 20, and 5, adding up to 40; then the fourth
number has to be 10 in order that they sum to 50. The degrees of freedom for these values are then three.
The degrees of freedom here is defined as N - 1, the number in the group minus one restriction (4 - 1).
3. Find the table value for Chi Square. Begin by finding the df found in step 2 along the left hand side of the
table. Run your fingers across the proper row until you reach the predetermined level of significance (.05) at
the column heading on the top of the table. The table value for Chi Square in the correct box of 4 df and
P=.05 level of significance is 9.49.
4. If the calculated chi-square value for the set of data you are analyzing (26.95) is equal to or greater than the
table value (9.49), reject the null hypothesis. There IS a significant difference between the data sets that
cannot be due to chance alone. If the number you calculate is LESS than the number you find on the table,
then you can probably say that any differences are due to chance alone.
In this situation, the rejection of the null hypothesis means that the differences between the expected
frequencies (based upon last year's car sales) and the observed frequencies (based upon this year's poll
taken by Thai) are not due to chance. That is, they are not due to chance variation in the sample Thai took;
there is a real difference between them. Therefore, in deciding what color autos to stock, it would be to Thai's
advantage to pay careful attention to the results of her poll!
The steps in using the chi-square test may be summarized as follows:
Chi-Square Test Summary
1. Write the observed frequencies in column O.
2. Figure the expected frequencies and write them in column E.
3. Use the formula to find the chi-square value.
4. Find the df (N - 1).
5. Find the table value (consult the Chi Square Table).
6. If your chi-square value is equal to or greater than the table value, reject the null
hypothesis: differences in your data are not due to chance alone.
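The summary steps can be sketched as a single function. As a simplification, the table value (9.49 for 4 df at the .05 level) is passed in rather than looked up:

```python
def chi_square_test(observed, expected, table_value):
    """Steps 1-6: compute X^2, find df, compare against the table value."""
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    df = len(observed) - 1
    reject_null = chi_sq >= table_value
    return chi_sq, df, reject_null

# The car-color data: 5 categories, so df = 4; table value 9.49 at .05.
chi_sq, df, reject = chi_square_test([35, 50, 30, 10, 25],
                                     [30, 45, 15, 15, 45],
                                     table_value=9.49)
print(df, reject)  # 4 True -> reject the null hypothesis
```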
For example, the reason observed frequencies in a fruit fly genetic breeding lab did not match expected
frequencies could be due to such influences as:
• Mate selection (certain flies may prefer certain mates)
• Too small a sample size
• Incorrect identification of male or female flies
• The wrong genetic cross was sent from the lab
• The flies were mixed in the bottle (carrying unexpected alleles)

Chi-square test
The chi-square is one of the most popular statistics because it is easy to calculate and
interpret. There are two kinds of chi-square tests. The first is called a one-way analysis,
and the second is called a two-way analysis. The purpose of both is to determine whether
the observed frequencies (counts) markedly differ from the frequencies that we would
expect by chance.

The observed cell frequencies are organized in rows and columns like a spreadsheet. This
table of observed cell frequencies is called a contingency table, and the chi-square test is
part of a contingency table analysis.

The chi-square statistic is the sum of the contributions from each of the individual cells.
Every cell in a table contributes something to the overall chi-square statistic. If a given
cell differs markedly from the expected frequency, then the contribution of that cell to the
overall chi-square is large. If a cell is close to the expected frequency for that cell, then
the contribution of that cell to the overall chi-square is low. A large chi-square statistic
indicates that somewhere in the table, the observed frequencies differ markedly from the
expected frequencies. It does not tell which cell (or cells) is causing the high chi-square,
only that one is there. When a chi-square is high, you must visually examine
the table to determine which cell(s) are responsible.

When there are exactly two rows and two columns, the chi-square statistic becomes
inaccurate, and Yates' correction for continuity is usually applied. Statistics Calculator
will automatically use Yates' correction for two-by-two tables when the expected
frequency of any cell is less than 5 or the total N is less than 50.
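A minimal sketch of Yates' correction for a 2x2 table: each cell's contribution uses |O - E| reduced by 0.5 before squaring. The example table below is made up purely for illustration:

```python
def yates_chi_square(table):
    """Chi-square for a 2x2 table with Yates' continuity correction:
    each cell contributes (|O - E| - 0.5)^2 / E."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    chi_sq = 0.0
    for i in range(2):
        for j in range(2):
            e = row[i] * col[j] / n  # expected count from the marginals
            chi_sq += (abs(table[i][j] - e) - 0.5) ** 2 / e
    return chi_sq

# Hypothetical 2x2 counts, just to exercise the formula.
print(round(yates_chi_square([[10, 20], [30, 40]]), 4))  # 0.4464
```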

If there is only one column or one row (a one-way chi-square test), the degrees of
freedom is the number of cells minus one. For a two-way chi-square, the degrees of
freedom is (the number of rows minus one) times (the number of columns minus one).
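The two degrees-of-freedom rules can be written out directly:

```python
def degrees_of_freedom(rows, cols):
    """One-way (a single row or column): cells - 1.
    Two-way: (rows - 1) * (cols - 1)."""
    if rows == 1 or cols == 1:
        return rows * cols - 1
    return (rows - 1) * (cols - 1)

print(degrees_of_freedom(1, 5))  # 4 (one-way test with five categories)
print(degrees_of_freedom(4, 3))  # 6 (two-way test, e.g. a 4 x 3 table)
```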

Using the chi-square statistic and its associated degrees of freedom, the software reports
the probability that the differences between the observed and expected frequencies
occurred by chance. Generally, a probability of .05 or less is considered to be a significant
difference.

A standard spreadsheet interface is used to enter the counts for each cell. After you've
finished entering the data, the program will print the chi-square, degrees of freedom and
probability of chance.

Use caution when interpreting the chi-square statistic if any of the expected cell
frequencies are less than five. Also, use caution when the total for all cells is less than 50.

Example

A drug manufacturing company conducted a survey of customers. The research question
is: Is there a significant relationship between packaging preference (size of the bottle
purchased) and economic status? There were four packaging sizes: small, medium, large,
and jumbo. Economic status was: lower, middle, and upper. The following data were
collected.

         Lower  Middle  Upper
Small     24     22      18
Medium    23     28      19
Large     18     27      29
Jumbo     16     21      33

Chi-square statistic = 9.743
Degrees of freedom = 6
Probability of chance = .1359
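The reported statistic can be reproduced by hand: each expected count is (row total x column total) / N, and the (O - E)²/E contributions are summed over all twelve cells:

```python
# Reproduces the packaging-preference example above.

table = [[24, 22, 18],   # Small
         [23, 28, 19],   # Medium
         [18, 27, 29],   # Large
         [16, 21, 33]]   # Jumbo

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

chi_sq = sum((table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
             / (row_totals[i] * col_totals[j] / n)
             for i in range(4) for j in range(3))
df = (4 - 1) * (3 - 1)
print(round(chi_sq, 3), df)  # 9.743 6
```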

1. Exploratory Data Analysis
1.3. EDA Techniques
1.3.5. Quantitative Techniques

1.3.5.15. Chi-Square Goodness-of-Fit Test
Purpose: Test for distributional adequacy. The chi-square test (Snedecor and Cochran, 1989) is used to test if
a sample of data came from a population with a specific distribution.

An attractive feature of the chi-square goodness-of-fit test is that it can be
applied to any univariate distribution for which you can calculate the cumulative
distribution function. The chi-square goodness-of-fit test is applied to binned
data (i.e., data put into classes). This is not actually a restriction, since for non-
binned data you can simply calculate a histogram or frequency table before
generating the chi-square test. However, the value of the chi-square test statistic
is dependent on how the data are binned. Another disadvantage of the chi-square
test is that it requires a sufficient sample size in order for the chi-square
approximation to be valid.

The chi-square test is an alternative to the Anderson-Darling and Kolmogorov-Smirnov
goodness-of-fit tests. The chi-square goodness-of-fit test can be applied
to discrete distributions such as the binomial and the Poisson. The Kolmogorov-
Smirnov and Anderson-Darling tests are restricted to continuous distributions.

Additional discussion of the chi-square goodness-of-fit test is contained in the
product and process comparisons chapter (chapter 7).

Definition: The chi-square test is defined for the hypotheses:

H0: The data follow a specified distribution.
Ha: The data do not follow the specified distribution.

Test Statistic: For the chi-square goodness-of-fit computation, the data are
divided into k bins and the test statistic is defined as

X² = Σ_{i=1}^{k} (O_i - E_i)² / E_i

where O_i is the observed frequency for bin i and E_i is the expected
frequency for bin i. The expected frequency is calculated by

E_i = N (F(Y_u) - F(Y_l))

where F is the cumulative distribution function for the
distribution being tested, Y_u is the upper limit for class i, Y_l is the
lower limit for class i, and N is the sample size.
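For a concrete case of the E_i formula, the expected frequency of a bin under a normal distribution can be computed from the CDF, here built from math.erf via the standard identity:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """F(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_frequency(n, lower, upper, cdf=normal_cdf):
    """E_i = N * (F(Y_u) - F(Y_l)) for one bin [lower, upper)."""
    return n * (cdf(upper) - cdf(lower))

# With N = 1000 standard-normal observations, the bin [-1, 1) should
# expect about 683 counts (the familiar 68.3% rule).
print(round(expected_frequency(1000, -1.0, 1.0)))  # 683
```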

This test is sensitive to the choice of bins. There is no optimal
choice for the bin width (since the optimal bin width depends on
the distribution). Most reasonable choices should produce similar,
but not identical, results. Dataplot uses 0.3*s, where s is the
sample standard deviation, for the class width. The lower and
upper bins are at the sample mean minus and plus 6.0*s,
respectively. For the chi-square approximation to be valid, the
expected frequency should be at least 5. This test is not valid for
small samples, and if some of the counts are less than five, you
may need to combine some bins in the tails.

Significance Level: α

Critical Region: The test statistic follows, approximately, a chi-square distribution
with (k - c) degrees of freedom, where k is the number of non-
empty cells and c = the number of estimated parameters
(including location and scale parameters and shape parameters)
for the distribution + 1. For example, for a 3-parameter Weibull
distribution, c = 4.

Therefore, the hypothesis that the data are from a population with
the specified distribution is rejected if

X² > X²_{1-α, k-c}

where X²_{1-α, k-c} is the chi-square percent point function with k - c
degrees of freedom and a significance level of α.

In the above formula for the critical region, the Handbook
follows the convention that X²_{1-α} is the upper critical value
from the chi-square distribution and X²_{α} is the lower
critical value from the chi-square distribution. Note that this is the
opposite of what is used in some texts and software programs. In
particular, Dataplot uses the opposite convention.

Sample Output: Dataplot generated the following output for the chi-square test where 1,000
random numbers were generated for the normal, double exponential, t with 3
degrees of freedom, and lognormal distributions. In all cases, the chi-square test
was applied to test for a normal distribution. The test statistics show the
characteristics of the test; when the data are from a normal distribution, the test
statistic is small and the hypothesis is accepted; when the data are from the
double exponential, t, and lognormal distributions, the statistics are significant
and the hypothesis of an underlying normal distribution is rejected at significance
levels of 0.10, 0.05, and 0.01.

The normal random numbers were stored in the variable Y1, the double
exponential random numbers were stored in the variable Y2, the t random
numbers were stored in the variable Y3, and the lognormal random numbers
were stored in the variable Y4.

*************************************************
** normal chi-square goodness of fit test y1 **
*************************************************

CHI-SQUARED GOODNESS-OF-FIT TEST

NULL HYPOTHESIS H0: DISTRIBUTION FITS THE DATA


ALTERNATE HYPOTHESIS HA: DISTRIBUTION DOES NOT FIT THE DATA
DISTRIBUTION: NORMAL

SAMPLE:
NUMBER OF OBSERVATIONS = 1000
NUMBER OF NON-EMPTY CELLS = 24
NUMBER OF PARAMETERS USED = 0

TEST:
CHI-SQUARED TEST STATISTIC = 17.52155
DEGREES OF FREEDOM = 23
CHI-SQUARED CDF VALUE = 0.217101

ALPHA LEVEL CUTOFF CONCLUSION


10% 32.00690 ACCEPT H0
5% 35.17246 ACCEPT H0
1% 41.63840 ACCEPT H0

CELL NUMBER, BIN MIDPOINT, OBSERVED FREQUENCY,


AND EXPECTED FREQUENCY
WRITTEN TO FILE DPST1F.DAT

*************************************************
** normal chi-square goodness of fit test y2 **
*************************************************

CHI-SQUARED GOODNESS-OF-FIT TEST

NULL HYPOTHESIS H0: DISTRIBUTION FITS THE DATA


ALTERNATE HYPOTHESIS HA: DISTRIBUTION DOES NOT FIT THE DATA
DISTRIBUTION: NORMAL

SAMPLE:
NUMBER OF OBSERVATIONS = 1000
NUMBER OF NON-EMPTY CELLS = 26
NUMBER OF PARAMETERS USED = 0

TEST:
CHI-SQUARED TEST STATISTIC = 2030.784
DEGREES OF FREEDOM = 25
CHI-SQUARED CDF VALUE = 1.000000

ALPHA LEVEL CUTOFF CONCLUSION


10% 34.38158 REJECT H0
5% 37.65248 REJECT H0
1% 44.31411 REJECT H0

CELL NUMBER, BIN MIDPOINT, OBSERVED FREQUENCY,


AND EXPECTED FREQUENCY
WRITTEN TO FILE DPST1F.DAT

*************************************************
** normal chi-square goodness of fit test y3 **
*************************************************

CHI-SQUARED GOODNESS-OF-FIT TEST

NULL HYPOTHESIS H0: DISTRIBUTION FITS THE DATA


ALTERNATE HYPOTHESIS HA: DISTRIBUTION DOES NOT FIT THE DATA
DISTRIBUTION: NORMAL

SAMPLE:
NUMBER OF OBSERVATIONS = 1000
NUMBER OF NON-EMPTY CELLS = 25
NUMBER OF PARAMETERS USED = 0

TEST:
CHI-SQUARED TEST STATISTIC = 103165.4
DEGREES OF FREEDOM = 24
CHI-SQUARED CDF VALUE = 1.000000

ALPHA LEVEL CUTOFF CONCLUSION


10% 33.19624 REJECT H0
5% 36.41503 REJECT H0
1% 42.97982 REJECT H0

CELL NUMBER, BIN MIDPOINT, OBSERVED FREQUENCY,


AND EXPECTED FREQUENCY
WRITTEN TO FILE DPST1F.DAT

*************************************************
** normal chi-square goodness of fit test y4 **
*************************************************

CHI-SQUARED GOODNESS-OF-FIT TEST

NULL HYPOTHESIS H0: DISTRIBUTION FITS THE DATA


ALTERNATE HYPOTHESIS HA: DISTRIBUTION DOES NOT FIT THE DATA
DISTRIBUTION: NORMAL

SAMPLE:
NUMBER OF OBSERVATIONS = 1000
NUMBER OF NON-EMPTY CELLS = 10
NUMBER OF PARAMETERS USED = 0

TEST:
CHI-SQUARED TEST STATISTIC = 1162098.
DEGREES OF FREEDOM = 9
CHI-SQUARED CDF VALUE = 1.000000

ALPHA LEVEL CUTOFF CONCLUSION


10% 14.68366 REJECT H0
5% 16.91898 REJECT H0
1% 21.66600 REJECT H0
CELL NUMBER, BIN MIDPOINT, OBSERVED FREQUENCY,
AND EXPECTED FREQUENCY
WRITTEN TO FILE DPST1F.DAT

As we would hope, the chi-square test does not reject the normality hypothesis
for the normal distribution data set and rejects it for the three non-normal cases.
Questions: The chi-square test can be used to answer the following types of questions:

• Are the data from a normal distribution?


• Are the data from a log-normal distribution?
• Are the data from a Weibull distribution?
• Are the data from an exponential distribution?
• Are the data from a logistic distribution?
• Are the data from a binomial distribution?

Importance: Many statistical tests and procedures are based on specific distributional
assumptions. The assumption of normality is particularly common in classical
statistical tests. Much reliability modeling is based on the assumption that the
distribution of the data follows a Weibull distribution.

There are many non-parametric and robust techniques that are not based on
strong distributional assumptions. By non-parametric, we mean a technique, such
as the sign test, that is not based on a specific distributional assumption. By
robust, we mean a statistical technique that performs well under a wide range of
distributional assumptions. However, techniques based on specific distributional
assumptions are in general more powerful than these non-parametric and robust
techniques. By power, we mean the ability to detect a difference when that
difference actually exists. Therefore, if the distributional assumption can be
confirmed, the parametric techniques are generally preferred.

If you are using a technique that makes a normality (or some other type of
distributional) assumption, it is important to confirm that this assumption is in
fact justified. If it is, the more powerful parametric techniques can be used. If the
distributional assumption is not justified, a non-parametric or robust technique
may be required.

Related Techniques:
Anderson-Darling Goodness-of-Fit Test
Kolmogorov-Smirnov Test
Shapiro-Wilk Normality Test
Probability Plots
Probability Plot Correlation Coefficient Plot

Case Study: Airplane glass failure times data.

Software: Some general purpose statistical software programs, including Dataplot, provide
a chi-square goodness-of-fit test for at least some of the common distributions.
Chi-Square Examples
Vartanian: SW 131
In the sample given below, there are 300 females and 200 males. Is there a significant difference
between males and females in their likelihood of being in poverty?

                Females        Males          Total
In poverty      150 (cell a)   50 (cell b)    200
Out of poverty  150 (cell c)   150 (cell d)   300
Total           300            200            500
Answer:
Expected in poverty: 200/500 = .40 or 40%
Expected out of poverty: 300/500 = .60 or 60%
Expected # in each cell:
cell a: 40% of 300 = 120
cell b: 40% of 200 = 80
cell c: 60% of 300=180
cell d: 60% of 200=120
Or this can be determined by multiplying the marginals and dividing by total N.
cell a: 200*300/500=120
cell b: 200*200/500=80
cell c: 300*300/500=180
cell d: 300*200/500 = 120
Cell   Observed   Expected   Difference   Difference²   (O - E)²/E
a      150        120         30           900          900/120 = 7.50
b       50         80        -30           900          900/80 = 11.25
c      150        180        -30           900          900/180 = 5.00
d      150        120         30           900          900/120 = 7.50

X² = 7.5 + 11.25 + 5 + 7.5 = 31.25
At 1 df, the critical value at the .05 level is 3.84. Since the chi-square value (31.25) is greater than the critical value, reject the null hypothesis.
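The whole calculation fits in a few lines of Python, with the expected counts taken from the marginals as above:

```python
# The poverty example: expected counts via (row total * column total) / N,
# then the chi-square sum over the four cells.

table = [[150, 50],    # in poverty: females, males
         [150, 150]]   # out of poverty: females, males
row = [sum(r) for r in table]           # [200, 300]
col = [sum(c) for c in zip(*table)]     # [300, 200]
n = sum(row)                            # 500

chi_sq = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
             for i in range(2) for j in range(2))
print(chi_sq)  # 31.25
```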
#2. Is method A better at helping those with depression than method B? Test this at the 5% level
of significance.

          Success  Failure  Total
Method A   25       75      100
Method B   40      160      200
Total      65      235      300
Answer:
Calculating the chi-square value:

Cell   Observed value   Expected value           Difference
a      25               21.67 = 100*65/300        3.33
b      75               78.33 = 235*100/300      -3.33
c      40               43.33 = 65*200/300       -3.33
d      160              156.67 = 235*200/300      3.33
For
cell a: (3.33)²/21.67 = 0.5117
cell b: (-3.33)²/78.33 = 0.1416
cell c: (-3.33)²/43.33 = 0.2559
cell d: (3.33)²/156.67 = 0.0708

Add these up: 0.98. Since we need a value of 2.71 to find significance at the 5% level (for a 1-tailed
test), we again find that there is not a relationship between success and the method of treatment used.
We thus fail to reject the null hypothesis.
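The same marginal calculation reproduces this total (0.982 before rounding; the 0.98 above sums rounded contributions):

```python
# Example #2: chi-square for the treatment table, expected counts from
# (row total * column total) / N.

table = [[25, 75],    # Method A: success, failure
         [40, 160]]   # Method B: success, failure
row = [sum(r) for r in table]        # [100, 200]
col = [sum(c) for c in zip(*table)]  # [65, 235]
n = sum(row)                         # 300

chi_sq = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
             for i in range(2) for j in range(2))
print(round(chi_sq, 2))  # 0.98
```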
