In a hypothesis test, power is the likelihood that the test will detect a significant effect or difference when one truly exists; that is, the probability that you will correctly reject the null hypothesis when it is false.
A number of factors affect power:
- Sample size: larger samples provide more information about the population, increasing power.
- Effect size: larger differences or effects are easier to detect.
- Variability: lower error variance increases power.
- Significance level: a higher α increases power, at the cost of a higher Type I error rate.
You can calculate power before you collect data (a prospective study) to ensure that your hypothesis test will detect significant differences or effects. For example, a pharmaceutical company wants to know how much power their hypothesis test has to detect differences among three diabetes treatments. To increase power, they can increase the sample size to get more information about the population of diabetes patients using these medications. They can also try to decrease error variance by following good sampling practices.
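The effect of sample size on power can be sketched for a simpler case than the three-treatment comparison above: a two-sided two-sample z-test with a hypothetical true mean difference of 1.0 and a common standard deviation of 2.0 (both values are illustrative assumptions, not from the example). The formula ignores the negligible far tail of the rejection region.

```python
from statistics import NormalDist

def power_two_sample(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a true
    mean difference `delta`, common standard deviation `sigma`, and
    `n` observations per group (the negligible far tail is ignored)."""
    z = NormalDist()
    se = sigma * (2 / n) ** 0.5        # standard error of the difference
    z_crit = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    return z.cdf(delta / se - z_crit)

# Power grows as the sample size per group grows.
for n in (10, 30, 100):
    print(f"n = {n:3d}  power = {power_two_sample(1.0, 2.0, n):.3f}")
```

With these assumed values, power climbs from roughly 0.2 at n = 10 to above 0.9 at n = 100, which is why increasing the sample size is the most direct way to raise power.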
You can also calculate power to understand the power of tests that you have already conducted (a retrospective study). For example, an automobile parts manufacturer performs an experiment comparing the weight of two steel formulations, and the results are not statistically significant. Using Minitab, the manufacturer can calculate power based on the minimum difference that they would like to detect. If the power to detect this difference is low, they may want to modify the experimental design to increase the power and continue to evaluate the same problem. However, if the power is high, they may conclude that the two steel formulations are not different and discontinue further experimentation.
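When a retrospective power check comes back low, the natural follow-up question is how large the samples would need to be. Under the same two-sample z-test approximation, the required per-group sample size can be solved in closed form; the minimum difference (1.0) and standard deviation (2.0) below are illustrative assumptions, not values from the steel example.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, target_power=0.9):
    """Smallest per-group sample size for a two-sided two-sample z-test
    to detect a true mean difference `delta` (common SD `sigma`) with
    at least `target_power`."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_pow = z.inv_cdf(target_power)     # quantile for the target power
    # Invert the power formula: delta/se >= z_crit + z_pow,
    # with se = sigma * sqrt(2/n), then round up to a whole observation.
    return ceil(2 * ((z_crit + z_pow) * sigma / delta) ** 2)

print(n_per_group(delta=1.0, sigma=2.0))
```

Halving the minimum difference of interest roughly quadruples the required sample size, since `delta` enters the formula squared.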
Power equals 1 − β, where β is the probability of making a Type II error (failing to reject the null hypothesis when it is false). As α (the level of significance) increases, β decreases. Therefore, as α increases, power also increases. Keep in mind that increasing α also increases the probability of a Type I error (rejecting the null hypothesis when it is true).
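The trade-off between α and β can be made concrete with the same two-sample z-test approximation used above (the difference of 1.0, standard deviation of 2.0, and n = 30 per group are illustrative assumptions): raising α lowers β, so power = 1 − β rises.

```python
from statistics import NormalDist

def beta_two_sample(delta, sigma, n, alpha):
    """Type II error rate (beta) of a two-sided two-sample z-test for a
    true mean difference `delta`, common SD `sigma`, and `n` per group.
    Power is 1 - beta."""
    z = NormalDist()
    se = sigma * (2 / n) ** 0.5        # standard error of the difference
    z_crit = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    return 1 - z.cdf(delta / se - z_crit)

# A larger alpha shrinks the critical value, so beta falls and power rises,
# but the chance of a Type I error (alpha itself) grows in step.
for alpha in (0.01, 0.05, 0.10):
    beta = beta_two_sample(1.0, 2.0, 30, alpha)
    print(f"alpha = {alpha:.2f}  beta = {beta:.3f}  power = {1 - beta:.3f}")
```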