
Type I Error and Type II Error

The null hypothesis H0 is accepted or rejected on the basis of the value of the test-statistic, which is a function of the sample. The test-statistic may land in the acceptance region or the rejection region. If the calculated value of the test-statistic, say Z, is small (insignificant), i.e., close to zero, or lies between the two critical values -z(α/2) and z(α/2) when H1 is a two-sided alternative, the hypothesis H0 is accepted. If the calculated value of the test-statistic is large (significant), H0 is rejected and H1 is accepted. In this rejection or acceptance plan there is the possibility of making one of two errors, which are called Type I and Type II errors.
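As a concrete illustration of this decision rule (a hypothetical two-sided z-test sketched here for clarity; the function name and defaults are not from the original text):

```python
from statistics import NormalDist

def z_test_two_sided(z_stat, alpha=0.05):
    """Two-sided z-test decision rule: reject H0 when |Z| exceeds
    the critical value z(alpha/2) of the standard normal."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    return "reject H0" if abs(z_stat) > z_crit else "accept H0"
```

A small calculated value such as Z = 0.3 falls inside the acceptance region, while Z = 2.5 falls in the rejection region.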

Type-I Error:
The null hypothesis H0 may be true but it may be rejected. This is an error and is called a Type-I error. When H0 is true, the test-statistic, say Z, can take any value from -∞ to +∞. But we reject H0 when Z lies in the rejection region, and the rejection region is also contained in the interval -∞ to +∞. With a two-sided alternative (like H1: μ ≠ μ0), the hypothesis H0 is rejected when Z is less than -z(α/2) or greater than z(α/2). When H0 is true, Z can fall in the rejection region with probability equal to the size of the rejection region, α. Thus it is possible that H0 is rejected while H0 is true. This is called a Type-I error. The probability is 1 - α that H0 is accepted when H0 is true; this is called a correct decision. We can say that a Type-I error has been committed when:

- an intelligent student is not promoted to the next class;
- a good player is not allowed to play the match;
- an innocent person is punished;
- a driver is punished for no fault of his;
- a good worker is not paid his salary in time.

These are examples from practical life, quoted to make the point clear to students.
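The claim that a true H0 is rejected with probability equal to the size of the rejection region can be checked by simulation. The sketch below is a hypothetical Monte Carlo check (not from the original text), assuming a two-sided z-test on the mean of a normal population with known σ; repeated samples are drawn with H0 true and the fraction of rejections is counted:

```python
import math
import random
from statistics import NormalDist

def type_one_error_rate(mu0=0.0, sigma=1.0, n=25, alpha=0.05, trials=20000):
    """Draw samples with H0 true (true mean equals mu0) and return the
    observed fraction of rejections; it should be close to alpha."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    rejections = 0
    for _ in range(trials):
        sample = [rng.gauss(mu0, sigma) for _ in range(n)]
        z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials
```

With α = 0.05, the observed rejection rate comes out near 5%, even though H0 is true in every trial.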

Alpha:
The probability of making a Type-I error is denoted by α (alpha). When a null hypothesis is rejected, we may be wrong or we may be right in rejecting it. We do not know whether H0 is true or false. Whatever our decision, it will have the support of probability. A true hypothesis has some probability of rejection, and this probability, α, is also called the size of the Type-I error.

Type-II Error:
The null hypothesis H0 may be false but it may be accepted. This is an error and is called a Type-II error. The value of the test-statistic may fall in the acceptance region when H0 is in fact false. Suppose the hypothesis being tested is H0: μ = μ0, and H0 is false, the true value of μ being μ1. If the difference between μ0 and μ1 is very large, then the chance is very small that μ0 (wrong) will be accepted. In this case the true sampling distribution of the statistic lies quite far from the sampling distribution under H0, and there will be hardly any value of the test-statistic that falls in the acceptance region of H0. When the true distribution of the test-statistic overlaps the acceptance region of H0, then H0 is accepted though it is false. If the difference between μ0 and μ1 is small, then there is a high chance of accepting H0. This action is an error of Type-II.

Beta:
The probability of making a Type-II error is denoted by β (beta). A Type-II error is committed when H0 is accepted while H1 is true. The value of β can be calculated only when we happen to know the true value of the population parameter being tested.
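As the text notes, β can be evaluated only for an assumed true parameter value. A minimal sketch (hypothetical, assuming a one-sample two-sided z-test of H0: μ = μ0 with known σ) that computes β for a given true mean μ1:

```python
import math
from statistics import NormalDist

def beta(mu0, mu1, sigma, n, alpha=0.05):
    """Probability of a Type II error: accepting H0: mu == mu0 when
    the true mean is mu1, for a two-sided z-test with known sigma."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    se = sigma / math.sqrt(n)
    shift = (mu1 - mu0) / se
    # H0 is accepted when the z statistic falls in (-z_crit, z_crit);
    # under the true mean mu1 the statistic is normal with mean `shift`.
    return nd.cdf(z_crit - shift) - nd.cdf(-z_crit - shift)
```

This matches the discussion above: β shrinks as the difference between μ0 and μ1 grows, and the power of the test is 1 - β.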

Type I error
A type I error, also known as an error of the first kind, is the wrong decision that is made when a test rejects a true null hypothesis (H0). A type I error may be compared with a so-called false positive in other test situations. A type I error can be viewed as the error of excessive credulity. In terms of folk tales, an investigator may be "crying wolf" (raising a false alarm) without a wolf in sight (H0: no wolf). The rate of the type I error is called the size of the test and is denoted by the Greek letter α (alpha). It usually equals the significance level of the test. In the case of a simple null hypothesis, α is the probability of a type I error. If the null hypothesis is composite, α is the maximum (supremum) of the possible probabilities of a type I error.

Type II error

A type II error, also known as an error of the second kind, is the wrong decision that is made when a test accepts a false null hypothesis. A type II error may be compared with a so-called false negative in other test situations. A type II error can be viewed as the error of excessive skepticism. In terms of folk tales, an investigator may fail to see the wolf ("failing to raise an alarm"; see Aesop's story of The Boy Who Cried Wolf). Again, H0: no wolf. The rate of the type II error is denoted by the Greek letter β (beta) and is related to the power of a test (which equals 1 - β). What we actually call a type I or type II error depends directly on the null hypothesis: negating the null hypothesis causes type I and type II errors to switch roles. The goal of the test is to determine whether the null hypothesis can be rejected. A statistical test can either reject (prove false) or fail to reject (fail to prove false) a null hypothesis, but it can never prove it true (i.e., failing to reject a null hypothesis does not prove it true).

Example

As it is conjectured that adding fluoride to toothpaste protects against cavities, the null hypothesis of no effect is tested. When the null hypothesis is true (i.e., there is indeed no effect, but the data give rise to rejection of this hypothesis, falsely suggesting that adding fluoride is effective against cavities), a type I error has occurred. A type II error occurs when the null hypothesis is false (i.e., adding fluoride is actually effective against cavities, but the data are such that the null hypothesis cannot be rejected, failing to prove the existing effect). In colloquial usage type I error can be thought of as "convicting an innocent person" and type II error "letting a guilty person go free". Tabularised relations between truth/falseness of the null hypothesis and outcomes of the test:

                                Null hypothesis (H0)     Null hypothesis (H0)
                                is true                  is false

Reject null hypothesis          Type I error             Correct outcome
                                (false positive)         (true positive)

Fail to reject null             Correct outcome          Type II error
hypothesis                      (true negative)          (false negative)
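The four cells of this table can also be tallied empirically. The sketch below is a hypothetical simulation (not from the original text) in which experiments are run both with H0 true and with H0 false, using a two-sided z-test, and each decision is classified into one of the four cells:

```python
import math
import random
from statistics import NormalDist

def tally_outcomes(mu0=0.0, mu1=1.0, sigma=1.0, n=25, alpha=0.05, trials=4000):
    """Run `trials` experiments with H0 true and `trials` with H0 false,
    classifying each decision into the four cells of the table above."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rng = random.Random(0)  # fixed seed for reproducibility
    counts = {"false positive": 0, "true positive": 0,
              "true negative": 0, "false negative": 0}
    for h0_true in (True, False):
        true_mean = mu0 if h0_true else mu1
        for _ in range(trials):
            sample = [rng.gauss(true_mean, sigma) for _ in range(n)]
            z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
            reject = abs(z) > z_crit
            if h0_true:
                counts["false positive" if reject else "true negative"] += 1
            else:
                counts["true positive" if reject else "false negative"] += 1
    return counts
```

The false-positive count settles near α times the number of true-H0 trials, while the split between true positives and false negatives depends on how far μ1 lies from μ0.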

Understanding Type I and Type II errors

From the Bayesian point of view, a type I error is one in which information that should not substantially change one's prior estimate of probability does so, while a type II error is one in which information that should change one's estimate does not. (The null hypothesis is not quite the same thing as one's prior estimate; it is, rather, one's pro forma prior estimate.)

Hypothesis testing is the art of testing whether a variation between two sample distributions can be explained by chance or not. In many practical applications type I errors are more delicate than type II errors, and in these cases care is usually focused on minimizing the occurrence of this statistical error. Suppose the probability of a type I error is 1%; then, when the null hypothesis is true, there is a 1% chance of wrongly declaring the observed variation real. This probability is called the level of significance and is denoted by the Greek letter α (alpha). While 1% might be an acceptable level of significance for one application, a different application can require a very different level. For example, the standard goal of Six Sigma is to achieve precision to 4.5 standard deviations above or below the mean, which means that only 3.4 parts per million are allowed to be deficient in a normally distributed process.
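The 3.4 parts-per-million figure can be reproduced from the one-sided tail probability of the standard normal at 4.5 standard deviations (a minimal check using only the standard library; P(Z > x) equals erfc(x/√2)/2):

```python
import math

def normal_upper_tail(x):
    """P(Z > x) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

defects_ppm = normal_upper_tail(4.5) * 1e6  # roughly 3.4 parts per million
```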
