- Type I Error
- What is a Type I Error?
- How to Avoid a Type I Error?
- Example of a Type I Error
- Type one error probability
- What are Type I Errors?
- Causes of Type I Error
- Risk Factor and Probability of Type I Error
- Consequences of a Type I Error
- What are Type II Errors?
- Causes of Type II Error
- Probability of Type II Error
- Consequences of a Type II Error
- How to Avoid Type I and II errors
- How to Detect Type I and Type II Errors in Data
- Key Differences between Type I & II Errors
- Examples of Type I & II errors
- Type I error examples
- Type II error examples
- Frequently Asked Questions about Type I and II Errors
- Conclusion

## Type I Error

A "false positive" error

## What is a Type I Error?

In statistical hypothesis testing, a Type I error is the rejection of a null hypothesis that is actually true. The Type I error is also known as the false positive error: it falsely infers the existence of a phenomenon that does not exist.

Note that a Type I error is defined by the erroneous rejection of the null hypothesis, not by the acceptance of the alternative hypothesis of an experiment.

The probability of committing a Type I error is measured by the significance level (α) of a hypothesis test. The significance level indicates the probability of erroneously rejecting a true null hypothesis. For instance, a significance level of 0.05 means that there is a 5% probability of rejecting a true null hypothesis.
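This can be illustrated with a quick simulation. The sketch below is minimal and uses only the Python standard library; it assumes a two-sided z-test with known standard deviation and simulated data. Repeating an experiment many times while the null hypothesis is actually true rejects H0 in roughly 5% of runs when α = 0.05:

```python
import random
import statistics

random.seed(0)

def z_test_rejects(sample, mu0=0.0, sigma=1.0, z_crit=1.96):
    # Two-sided z-test with known sigma; z_crit = 1.96 corresponds to alpha = 0.05
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

# Simulate experiments in which H0 is actually true (the population mean really is 0)
trials = 20_000
false_positives = sum(
    z_test_rejects([random.gauss(0.0, 1.0) for _ in range(30)])
    for _ in range(trials)
)
print(false_positives / trials)  # fraction of Type I errors, close to 0.05
```

Every rejection here is, by construction, a Type I error, and their long-run frequency matches the chosen significance level.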

### How to Avoid a Type I Error?

It is not possible to completely eliminate the probability of a Type I error in hypothesis testing. However, the researcher can minimize the risk of obtaining results that contain a Type I error.

One of the most common approaches to minimizing the probability of a false positive is to lower the significance level of the hypothesis test. Since the significance level is chosen by the researcher, it can be changed. For example, the significance level can be lowered to 1% (0.01), which means there is only a 1% probability of incorrectly rejecting a true null hypothesis.

However, lowering the significance level makes the test more conservative, so the results of the hypothesis test may fail to capture a true parameter or a true difference that actually exists.

### Example of a Type I Error

Sam is a financial analyst. He runs a hypothesis test to discover whether there is a difference in the average price changes for large-cap and small-cap stocks.

In the test, Sam assumes that the null hypothesis is that there is no difference in the average price changes between large-cap and small-cap stocks. Thus, his alternative hypothesis states that the difference between the average price changes does exist.

For the significance level, Sam chooses 5%. This means that there is a 5% probability that his test will reject the null hypothesis when it is actually true.

If Sam’s test incurs a Type I error, its results will indicate that a difference in the average price changes between large-cap and small-cap stocks exists when, in fact, there is no significant difference between the groups.
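Sam’s setup can be sketched as a two-sample test. The code below is a minimal illustration using only the Python standard library; the price changes, sample sizes, and normality assumption are all hypothetical, not real market figures, and a large-sample z statistic stands in for whatever test Sam would actually run:

```python
import random
import statistics

random.seed(42)

# Hypothetical weekly price changes (%); both groups share the same true mean,
# so the null hypothesis of "no difference" is actually true here.
large_cap = [random.gauss(0.5, 2.0) for _ in range(200)]
small_cap = [random.gauss(0.5, 2.0) for _ in range(200)]

def two_sample_z(a, b):
    # Large-sample z statistic for the difference in means
    var_a = statistics.variance(a) / len(a)
    var_b = statistics.variance(b) / len(b)
    return (statistics.fmean(a) - statistics.fmean(b)) / (var_a + var_b) ** 0.5

z = two_sample_z(large_cap, small_cap)
# At the 5% significance level, Sam rejects H0 when |z| > 1.96.
# Because H0 is true by construction, rejecting here would be a Type I error.
print(abs(z) > 1.96)
```

Because the two groups were generated with the same true mean, any rejection this test produces is exactly the false positive described above.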

## Type one error probability

There are two common types of errors, Type I and Type II, that you’ll likely encounter when testing a statistical hypothesis. The mistaken rejection of a true null hypothesis is known as a Type I error; in other words, a Type I error is the false-positive finding in hypothesis testing. A Type II error, on the other hand, is the false-negative finding in hypothesis testing.

To better understand the two types of errors, here’s an example:

Let’s assume you notice some flu-like symptoms and decide to go to a hospital to get tested for the presence of malaria. There is a possibility of two errors occurring:

- Type I error (false positive): the test result shows that you have malaria, but you actually don’t.
- Type II error (false negative): the test result indicates that you don’t have malaria when in fact you do.

Type I and Type II errors are used extensively in areas such as computer science, engineering, and statistics.

The probability of committing a Type I error is known as alpha (α), while the probability of committing a Type II error is known as beta (β). If you carefully plan your study design, you can minimize the probability of committing either error.

## What are Type I Errors?

A Type I error is the mistake that happens when a null hypothesis is rejected during hypothesis testing even though it is true and should not have been rejected. In short, if a null hypothesis is erroneously rejected when it is true, that is a Type I error.

What this means is that the results are concluded to be significant when, in actual fact, they were obtained by chance.

When conducting hypothesis testing, a null hypothesis is determined before carrying out the actual test. The null hypothesis typically presumes that there is no relationship between the variables being tested that could produce the observed outcome.

When a null hypothesis is rejected, a relationship is concluded to exist between the variables being tested. When that conclusion is a false alarm, or false positive, the test has produced a Type I error.

It is worth noting that the statistical outcome of every test involves uncertainty, so making errors while performing hypothesis tests is unavoidable. A Type I error may be considered an error of commission, in the sense that the researcher mistakenly concludes that a false outcome is true.

#### Causes of Type I Error

- A factor other than the variables under study affects the outcome, producing a result that supports the decision to reject the null hypothesis.
- The result of the hypothesis test is caused by chance rather than a real effect.
- Because the null hypothesis and the significance level are decided before conducting the test, there is always a chance, equal to the significance level, that a true null hypothesis will be rejected.

#### Risk Factor and Probability of Type I Error

- The risk and probability of a Type I error are mostly set in advance, since the level of significance of the hypothesis test is chosen by the researcher.
- The level of significance in a test is represented by α, and it signifies the probability of a Type I error.
- It is possible to reduce the rate of Type I errors by lowering the significance level. The consequence, however, is that the possibility of a Type II error occurring in the test will increase.
- In a case where the Type I error rate is set at 5 percent, it means that 5 in 100 true null hypotheses (H0) will nonetheless be rejected.
- Another risk factor is that Type I and Type II errors cannot both be reduced simultaneously. Reducing the possibility of one error increases the possibility of the other, so changing one error rate inherently affects the other.
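This trade-off can be demonstrated with a simulation. The sketch below is minimal (Python standard library only) and assumes a two-sided z-test with known standard deviation and a hypothetical true effect of 0.5 standard deviations: tightening the rejection cutoff lowers α but raises β:

```python
import random
import statistics

random.seed(1)

def rejects(sample, z_crit, mu0=0.0, sigma=1.0):
    # Two-sided z-test with known sigma
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

def error_rates(true_mean, z_crit, n=25, trials=5_000):
    # alpha: how often H0 is rejected when it is true (population mean is 0)
    alpha = sum(rejects([random.gauss(0.0, 1.0) for _ in range(n)], z_crit)
                for _ in range(trials)) / trials
    # beta: how often H0 is NOT rejected when it is false (mean is true_mean)
    beta = sum(not rejects([random.gauss(true_mean, 1.0) for _ in range(n)], z_crit)
               for _ in range(trials)) / trials
    return alpha, beta

# z_crit = 1.96 corresponds to alpha ~ 0.05; z_crit = 2.58 to alpha ~ 0.01
for z_crit in (1.96, 2.58):
    print(z_crit, error_rates(true_mean=0.5, z_crit=z_crit))
```

The stricter cutoff cuts the simulated Type I rate roughly from 5% to 1%, while the simulated Type II rate climbs, showing why the two error rates cannot be pushed down together.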

#### Consequences of a Type I Error

A Type I error results in a false alarm: the outcome of the hypothesis test is a false positive. This implies that the researcher concluded that the result of a hypothesis test is real when, in fact, it is not.

For a sales group, the consequences of a Type I error may include losing a potential market and missing out on probable sales because the findings of a test are faulty.

## What are Type II Errors?

A Type II error occurs when a researcher fails to reject the null hypothesis when it is in fact false. This does not mean the null hypothesis is accepted as true, since hypothesis testing only indicates whether a null hypothesis should be rejected.

A Type II error means the effect of the test was not detected even though an effect truly existed. By convention, a test is considered adequately powered to detect a real effect when its statistical power is 80% or more.

This implies that the statistical power of a test determines the risk of a Type II error: the higher the statistical power, the lower the probability of a Type II error.

Note: The null hypothesis is represented as (H0) and the alternative hypothesis as (H1).

#### Causes of Type II Error

- A Type II error is mainly caused by the statistical power of a test being too low: a Type II error will occur if the statistical test is not powerful enough.
- A small sample size can also lead to a Type II error, because it reduces the test’s ability to detect a significant effect in the items being tested.
- Another cause of Type II error is a researcher dismissing the actual outcome of a hypothesis test even when it is correct.

#### Probability of Type II Error

- The probability of a Type II error is obtained by subtracting the power of the test from 1: β = 1 − power.
- The probability of a Type II error in a test is represented by β.
- It is possible to reduce the rate of Type II errors by increasing the significance level of the test.
- In a case where the Type II error rate is set at 5 percent, it means that in 5 out of 100 tests the null hypothesis (H0) will fail to be rejected even though it is false.
- Type I and Type II errors are connected: reducing the possibility of one type of error increases the possibility of the other.
- It is therefore important to decide which error has the lesser consequences for the test.
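The relationship between power and the Type II error rate can be checked with a small computation. This sketch assumes a one-sided z-test with known standard deviation and a hypothetical effect size of 0.5; `statistics.NormalDist` from the Python standard library supplies the normal distribution:

```python
from statistics import NormalDist

def z_test_power(effect, sigma, n, alpha=0.05):
    # Power of a one-sided z-test of H0: mu = 0 against H1: mu = effect > 0
    z_crit = NormalDist().inv_cdf(1 - alpha)   # rejection cutoff under H0
    shift = effect * n ** 0.5 / sigma          # standardized true effect
    return 1 - NormalDist().cdf(z_crit - shift)

power = z_test_power(effect=0.5, sigma=1.0, n=30, alpha=0.05)
beta = 1 - power  # probability of a Type II error
print(round(power, 3), round(beta, 3))
```

With these illustrative numbers the test clears the conventional 80% power benchmark, and β follows directly as 1 minus the power.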

#### Consequences of a Type II Error

Type II errors can also result in a wrong decision that affects the outcome of a test and has real-life consequences.

Note that even when a test is conducted carefully, a real effect may go undetected and unintentionally invalidate the outcome. This turn of events can be discouraging, hence the need to be extra careful when conducting hypothesis testing.

## How to Avoid Type I and II errors

Type I and Type II errors cannot be entirely avoided in hypothesis testing, but the researcher can reduce the probability of their occurring.

To reduce Type I errors, lower the significance level; since the significance level is chosen by the researcher, it can be set as strictly as the study requires.

To avoid Type II errors, ensure the test has high statistical power: the higher the statistical power, the higher the chance of detecting a real effect. A common practice is to design the test for a statistical power of 80% or more.

Increasing the sample size of the hypothesis test also raises its statistical power.

A Type II error can likewise be made less likely by choosing a higher significance level, at the cost of a greater risk of a Type I error.
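The effect of sample size on power can be sketched with a short calculation. The numbers here are illustrative: a one-sided z-test with known standard deviation and an assumed effect size of 0.3, using `statistics.NormalDist` from the standard library:

```python
from statistics import NormalDist

def power(effect, sigma, n, alpha=0.05):
    # Power of a one-sided z-test of H0: mu = 0 against H1: mu = effect > 0
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return 1 - NormalDist().cdf(z_crit - effect * n ** 0.5 / sigma)

# Larger samples push the power up, shrinking the Type II error rate (1 - power)
for n in (10, 40, 160):
    print(n, round(power(0.3, 1.0, n), 3))
```

Holding the effect size and significance level fixed, each quadrupling of the sample size substantially raises the power, which is why increasing the sample is a standard remedy for Type II errors.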

## How to Detect Type I and Type II Errors in Data

After completing a study, the researcher can conduct any of the available statistical tests to reject the default (null) hypothesis in favor of its alternative. If the study is free of bias, there are four possible outcomes, summarized below:

| | H0 is true | H0 is false |
| --- | --- | --- |
| Reject H0 | Type I error (false positive) | Correct inference (true positive) |
| Fail to reject H0 | Correct inference (true negative) | Type II error (false negative) |

If the findings in the sample and reality in the population match, the researchers’ inferences will be correct. However, if in any of the situations a type I or II error has been made, the inference will be incorrect.

## Key Differences between Type I & II Errors

- In statistical hypothesis testing, a Type I error is caused by rejecting a null hypothesis that is actually true, while in contrast, a Type II error occurs when the null hypothesis is not rejected even though it is false.
- A Type I error is also known as a false alarm or false positive, while a Type II error is also referred to as a false negative.
- The probability of a Type I error is represented by α, while the probability of a Type II error is represented by β.
- The level of significance determines the probability of a Type I error, while the probability of a Type II error equals 1 minus the power of the test.
- You can decrease the possibility of a Type I error by reducing the level of significance; in the same way, you can reduce the probability of a Type II error by increasing the significance level of the test.
- A Type I error occurs when you reject a true null hypothesis; in contrast, a Type II error occurs when you fail to reject a false null hypothesis.

## Examples of Type I & II errors

#### Type I error examples

To understand the statistical significance of Type I error, let us look at this example.

In this hypothesis, a driver wants to determine whether there is a relationship between getting a new driving wheel and the number of passengers he carries in a week.

If the number of passengers he carries in a week increases after he gets the new driving wheel, the driver might conclude that there is a relationship between the new wheel and the increase in passengers and support the alternative hypothesis.

However, the increase in the number of passengers might have been caused by chance and not by the new wheel, which results in a Type I error.

In that case, the driver should have retained the null hypothesis, because the increase in his passengers may have been due to chance rather than a real effect.

#### Type II error examples

For Type II errors and statistical power, let us assume a farmer who rears birds hypothesizes that none of his birds have bird flu. He observes his birds for four days to look for symptoms of the flu.

If after four days the farmer sees no symptoms of the flu in his birds, he might conclude that his birds are indeed free of bird flu, whereas the flu may have already infected his birds and the symptoms only become obvious on the sixth day.

By this indication, the farmer concludes that no flu exists in his birds. This leads to a Type II error: he fails to reject the null hypothesis when it is in fact false.

## Frequently Asked Questions about Type I and II Errors

**Is a Type I or Type II error worse?**

Either a Type I or a Type II error can be worse, depending on the type of research being conducted.

A Type I error means a finding is accepted as real when it is not, so other valid alternatives may be dismissed in favor of a false conclusion. A Type II error means a false null hypothesis was not rejected, so a potentially significant outcome yields no benefit in reality.

It is difficult to decide which of the errors is categorically worse, but both types of errors can do enough damage to your research.

**Does sample size affect type 1 error?**

Sample size does not affect the Type I error rate: the probability of a Type I error is fixed by the chosen significance level, regardless of how small or large the sample is.

Sample size does affect Type II errors, however. A small sample reduces the statistical power of the test and therefore increases the risk of a Type II error.

This can lead the researcher to a false conclusion and discredit the outcome of the hypothesis testing.

**What is statistical power as it relates to Type I or Type II errors**

Statistical power is the probability that a test detects an effect that truly exists, so it is directly related to the Type II error rate: β equals 1 minus the power of the test. Random measurement errors reduce the statistical power of hypothesis testing, while larger effect sizes are easier to detect.

The statistical power of a test increases when the level of significance increases, and it also increases when a larger sample size is tested. To reduce the risk of a Type II error, increase the significance level of the test, the sample size, or both.

**What is statistical significance as it relates to Type I or Type II errors**

Statistical significance relates to Type I error. Researchers sometimes conclude that the outcome of a test is statistically significant when it is not, and reject the null hypothesis on that basis; in fact, the outcome might have occurred due to chance.

The probability of a Type I error decreases when a lower significance level is set.

A statistically significant outcome is most meaningful when the test also has adequate power relative to the significance level; an underpowered test can produce significant-looking results that do not reflect a real effect.

### Conclusion

In this article, we have extensively discussed Type I and Type II errors, including their causes, the probabilities of their occurrence, and how to avoid them. We have also seen that the two types of errors trade off against each other. The best approach as a researcher is to know which error matters more for a given study and design the test accordingly.