Used in hypothesis testing, alpha (α) is the maximum acceptable level of risk for rejecting a true null hypothesis (a type I error) and is expressed as a probability ranging between 0 and 1. Alpha is frequently referred to as the level of significance. You should set α before beginning the analysis and then compare p-values to α to determine significance: if the p-value is less than or equal to α, reject the null hypothesis; if the p-value is greater than α, fail to reject it.
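To make the decision rule concrete, here is a minimal sketch using a one-sample t-test in Python with SciPy; the data, sample size, and α value are illustrative assumptions, not taken from the text above.

```python
# Sketch of the alpha decision rule, assuming a one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05                      # chosen before looking at the data
sample = rng.normal(loc=0.3, scale=1.0, size=30)  # illustrative data

# Test H0: the population mean equals 0 (two-sided alternative).
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

if p_value <= alpha:
    print(f"p = {p_value:.4f} <= {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} > {alpha}: fail to reject the null hypothesis")
```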
The most commonly used α-level is 0.05. At this level, your chance of finding an effect that does not really exist is only 5%. The smaller the α value, the less likely you are to incorrectly reject the null hypothesis. However, a smaller value for α also means a decreased chance of detecting an effect if one truly exists (lower power).
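One way to see what "a 5% chance of finding an effect that does not exist" means is a quick simulation: generate many datasets where the null hypothesis really is true and count how often a test rejects at α = 0.05. The following sketch does this with a one-sample t-test; the sample size and number of trials are arbitrary choices for illustration.

```python
# Simulate the type I error rate: H0 is true in every trial, so the
# fraction of rejections should be close to alpha (about 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_trials = 10_000
rejections = 0

for _ in range(n_trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)   # true mean is 0
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value <= alpha:
        rejections += 1

print(f"Observed false-positive rate: {rejections / n_trials:.3f}")
```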
Sometimes it may be better to choose a smaller value for α. For example, suppose you are testing samples from a new milling machine to decide whether to purchase a dozen for your plant. If the new machine is more accurate, you stand to save a large amount of money through fewer defective products. However, the cost of purchasing and installing a dozen machines is very high. You want to be sure that the new machine is more accurate before making the purchase. In this case, you might want to select a lower value for α, such as 0.001. That way, you have only a 0.1% chance of concluding that the new machine is more accurate when in fact no difference exists.
On the other hand, sometimes choosing a larger value for α is better. For example, suppose you are a jet engine manufacturer and you are testing the strength of cheaper ball bearings. Saving a small amount of money does not outweigh the potentially disastrous effects if the bearings are weaker. Therefore, you might want to select a higher value for α, such as 0.1. Although this means that you will be more likely to reject a true null hypothesis, more importantly, you will also be more likely to detect a real weakness in the cheaper bearings, as the sketch below illustrates.
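The trade-off in both scenarios can be quantified by estimating power at different α levels. The sketch below simulates data with a real effect and counts how often a t-test detects it at α = 0.001, 0.05, and 0.1; the effect size (0.5 standard deviations) and sample size are invented for illustration. The estimated power should rise as α rises, which is why a manufacturer worried about missing a real weakness might accept a larger α.

```python
# Estimate power at several alpha levels when a real effect exists:
# each trial draws data whose true mean is 0.5, not 0, so every
# rejection is a correct detection of the effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 5_000

for alpha in (0.001, 0.05, 0.1):
    rejections = 0
    for _ in range(n_trials):
        sample = rng.normal(loc=0.5, scale=1.0, size=30)  # effect exists
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        if p_value <= alpha:
            rejections += 1
    # Power = probability of detecting the real effect at this alpha.
    print(f"alpha = {alpha:>5}: estimated power = {rejections / n_trials:.3f}")
```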