Error Types: What Can Go Wrong? Lesson
Even though care is taken at every step of a significance test, things can still go wrong! Decision makers hope to reach correct decisions through careful statistical procedures, but errors are always possible. Our goal in this chapter is generally to reject the null hypothesis H0 in favor of the alternative hypothesis Ha. We may do the calculations correctly, try to minimize bias in the sample, and use a sample large enough to be meaningful, BUT errors can still occur.
There are two types of errors:
We could REJECT H0 when it is actually TRUE. This is called a TYPE I error.
We might FAIL TO REJECT H0 when it is really false. This is called a TYPE II error.
A Type II error occurs if we fail to reject the null hypothesis when it is in fact false. Be cautious with the wording: saying we FAIL TO REJECT is not the same as saying we ACCEPT. We are not trying to prove the null hypothesis. We assume the null hypothesis is true until we determine that the statistic of interest falls inside the rejection region.
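The four possible combinations of decision and reality can be laid out in a short sketch. The snippet below is only an illustration and is not part of the lesson; the Python function name classify_outcome is hypothetical, and it simply maps each decision/reality pair to the labels defined above.

```python
# Minimal sketch: map each (decision, reality) pair to the outcome defined above.
# The function name classify_outcome is hypothetical, used only for illustration.
def classify_outcome(reject_h0: bool, h0_true: bool) -> str:
    if reject_h0 and h0_true:
        return "Type I error (rejected a true H0)"
    if not reject_h0 and not h0_true:
        return "Type II error (failed to reject a false H0)"
    return "Correct decision"

# Print all four combinations as a small truth table.
for reject in (True, False):
    for null_is_true in (True, False):
        print(f"reject H0: {reject!s:<5}  H0 true: {null_is_true!s:<5}  ->  "
              f"{classify_outcome(reject, null_is_true)}")
```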
Using the legal system to make analogies:
Since defendants are considered innocent until proven guilty, the null hypothesis would be that the person is innocent.
Committing a TYPE I error is analogous to convicting an innocent defendant. Type I is ONLY possible when we reject the null hypothesis.
Committing a TYPE II error is analogous to freeing a guilty defendant. Type II is ONLY possible when we fail to reject the null hypothesis.
In the real world these error types have potentially serious consequences.
Using the medical system to make analogies:
If the null hypothesis for an individual is that the person is healthy and we commit a TYPE I error, we falsely reject the null hypothesis. In context, we would conclude that the individual is NOT healthy and erroneously needs some medication or treatment. In the medical world this is known as a false positive. On the other hand, if we commit a TYPE II error, we fail to reject the null hypothesis, concluding that the individual is healthy when in fact the person is not and should be given treatment. This is called a false negative. Both errors have serious implications since a patient's health is at stake.
Different errors result in different outcomes. For example, in business, accepting or rejecting raw materials used to produce a product is critical to the quality of the product distributed. Suppose the null hypothesis is that a batch of raw materials does NOT meet quality standards and should be rejected. Putting the error types into context in this quality control setting:
TYPE I error: accepting a product batch that should have been rejected upsets customers.
TYPE II error: rejecting a product batch that was really good hurts the company and costs money.
Neither outcome would be acceptable to the company CEO.
The Type I error rate (probability) is controlled by the α level: the lower the α level, the lower the Type I error rate. It might seem that α is simply the probability of a Type I error; more accurately, α is the probability of a Type I error given that the null hypothesis is true. If the null hypothesis is false, then it is NOT possible to make a Type I error. Which error type is possible is determined by the nature of your conclusion.
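To see this relationship concretely, here is a small simulation sketch. It is not part of the original lesson, and the sample size, mean, and standard deviation are made-up values; it shows that when the null hypothesis is true, the long-run proportion of rejections comes out close to the chosen alpha level.

```python
# Simulation sketch: when H0 is true, the rejection rate is roughly alpha.
# All numeric values below are illustrative assumptions, not lesson data.
import random
from statistics import NormalDist, mean

random.seed(1)
ALPHA = 0.05           # significance level
N = 30                 # sample size in each simulated study
TRIALS = 10_000        # number of simulated studies
MU_0 = 100             # hypothesized mean; here H0 is actually true
SIGMA = 15             # known population standard deviation

z_crit = NormalDist().inv_cdf(1 - ALPHA)             # one-sided critical value

rejections = 0
for _ in range(TRIALS):
    sample = [random.gauss(MU_0, SIGMA) for _ in range(N)]
    z = (mean(sample) - MU_0) / (SIGMA / N ** 0.5)    # one-sided z statistic
    if z > z_crit:                                    # statistic falls in the rejection region
        rejections += 1

print(f"Observed Type I error rate: {rejections / TRIALS:.3f}  (alpha = {ALPHA})")
```

With alpha = .05, roughly 5% of the simulated studies reject the true null hypothesis, which is exactly the Type I error rate described above.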
Unlike a Type I error, a Type II error is, in a sense, not really an error. When a statistical test is not significant (the P-value is too high), it means that the data do not provide strong evidence that the null hypothesis is false. Lack of significance does NOT support the conclusion that the null hypothesis is true; rather, there is simply a LACK of evidence. We are NEVER attempting to prove the null hypothesis true. Therefore, a researcher would not make the mistake of incorrectly concluding that the null hypothesis is true when a statistical test was not significant; instead, the researcher would consider the test inconclusive. Contrast this with a Type I error, where the researcher erroneously concludes that the null hypothesis is false when, in fact, it is true.
A Type II error can only occur if the null hypothesis is false. When the null hypothesis is false, the probability of a Type II error is called β. NOTE: we would only know after the fact that our conclusion was incorrect, once the TRUE parameter value is determined. The probability of correctly rejecting a false null hypothesis equals 1 − β and is called POWER. More precisely, error-free conclusions would mean rejecting H0 when it is false and failing to reject it when it is true. Of course, being correct is the goal every time, and a numerical value (POWER) can be attached to this ability to be correct.
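As a concrete illustration of β and power, the sketch below uses a one-sided z-test of H0: μ = 100 with a known standard deviation and assumes the true mean is actually 106. None of these numbers come from the lesson; they are made up so that β and 1 − β can be computed.

```python
# Sketch: compute beta and power = 1 - beta for one assumed true mean.
# H0: mu = 100; the true mean is taken to be 106 (illustrative values only).
from statistics import NormalDist

ALPHA = 0.05
N = 30
MU_0 = 100            # value stated in the null hypothesis
MU_TRUE = 106         # assumed true mean, so H0 is false
SIGMA = 15            # known population standard deviation

se = SIGMA / N ** 0.5                                   # standard error of the sample mean
cutoff = MU_0 + NormalDist().inv_cdf(1 - ALPHA) * se    # reject H0 for sample means above this

beta = NormalDist(MU_TRUE, se).cdf(cutoff)   # P(fail to reject H0 | H0 false)
power = 1 - beta                             # P(correctly reject a false H0)
print(f"beta = {beta:.3f}, power = 1 - beta = {power:.3f}")
```

With these assumed numbers the test misses the false null hypothesis roughly 29% of the time (β ≈ 0.29), giving power of about 0.71.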
In order to have high power we attempt to reject ALL false null hypotheses. That would be accomplished by having a LARGE REJECTION REGION, that is, a large alpha level. The AREA of the rejection region for alpha = .05 is greater than the AREA for alpha = .02, so a higher alpha level makes it EASIER to reject a false null hypothesis and increases the power. Keep in mind, though, that raising the alpha level also raises the probability of a Type I error when the null hypothesis is true, so this extra power comes at a cost.
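Continuing the same made-up example, the following sketch compares the power at the two alpha levels mentioned above (.02 and .05); the larger rejection region at alpha = .05 does yield the higher power.

```python
# Sketch: power increases as alpha (the rejection region) grows.
# Same illustrative numbers as the previous sketch.
from statistics import NormalDist

N, MU_0, MU_TRUE, SIGMA = 30, 100, 106, 15
se = SIGMA / N ** 0.5

for alpha in (0.02, 0.05):
    cutoff = MU_0 + NormalDist().inv_cdf(1 - alpha) * se
    power = 1 - NormalDist(MU_TRUE, se).cdf(cutoff)
    print(f"alpha = {alpha:.2f}  ->  power = {power:.3f}")
```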