The difference between a sample statistic and a hypothesized parameter
value is statistically significant if a hypothesis test suggests it is
unlikely to have occurred by chance alone. You can assess statistical significance
by looking at a test's p-value, which is the probability of obtaining
a test statistic at least as extreme as the one you actually calculated
from your sample, assuming the null hypothesis is true. If the p-value is below
a specified significance level (the α-level), you declare the result
statistically significant and reject the null hypothesis.
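A minimal sketch of this calculation in Python, assuming a right-tailed test whose statistic follows a t distribution under the null hypothesis; the test statistic and degrees of freedom below are made-up values, and scipy is assumed to be available:

    from scipy import stats

    t_statistic = 2.5   # hypothetical observed test statistic
    df = 19             # degrees of freedom (e.g., a sample of 20)
    alpha = 0.05        # chosen significance level

    # The p-value is the probability, under the null hypothesis, of a
    # statistic at least as extreme as the observed one. For a
    # right-tailed test, that is the upper-tail area.
    p_value = stats.t.sf(t_statistic, df)
    print(f"p-value = {p_value:.4f}")  # roughly 0.011 here

    if p_value < alpha:
        print("Statistically significant: reject the null hypothesis")
    else:
        print("Not significant: fail to reject the null hypothesis")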
For example, suppose you want to determine whether the thickness of car windshields exceeds 4 mm, as required by safety rules. You take a sample of windshields and conduct a 1-sample t-test with an α-level of 0.05 and the following hypotheses: H₀: μ = 4 and H₁: μ > 4. If the test produces a p-value of 0.001, you declare statistical significance and reject the null hypothesis because the p-value is less than your chosen α-level. You conclude in favor of the alternative hypothesis: that the windshield thickness does exceed 4 mm.
But if the p-value equals 0.50, you cannot claim statistical significance because the p-value is greater than your chosen α-level. Therefore, you fail to reject the null hypothesis; you do not have enough evidence to claim the average windshield thickness exceeds 4 mm.
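The windshield test can be sketched in Python with scipy's one-sample t-test. The thickness measurements below are made-up illustrative data, and the alternative keyword assumes scipy 1.6 or later:

    import numpy as np
    from scipy import stats

    # Hypothetical thickness measurements, in mm
    thickness = np.array([4.13, 4.21, 4.08, 4.17, 4.25, 4.11, 4.19, 4.15])

    # 1-sample t-test of H0: mu = 4 against H1: mu > 4 (right-tailed)
    result = stats.ttest_1samp(thickness, popmean=4, alternative="greater")
    print(f"t = {result.statistic:.2f}, p-value = {result.pvalue:.4f}")

    alpha = 0.05
    if result.pvalue < alpha:
        print("Reject H0: evidence that mean thickness exceeds 4 mm")
    else:
        print("Fail to reject H0: insufficient evidence")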
Statistical significance does not necessarily imply practical significance. A test with extremely high power can declare the slightest difference from the hypothesized value to be statistically significant, even though such a small difference may be meaningless in practice. For example, suppose a mixed nuts company claims its jars contain no more than 50% peanuts. If you sample 100,000,000 jars and observe 50.01% peanuts, a hypothesis test will declare this meaningless difference statistically significant, solely because of the massive sample size. Therefore, use your specialized knowledge in conjunction with hypothesis tests to draw meaningful conclusions.
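To see the sample-size effect numerically, here is a rough sketch of a one-proportion z-test mirroring the peanut example; it simplifies by treating the observations as 100,000,000 independent yes/no trials:

    from math import sqrt
    from scipy import stats

    n = 100_000_000   # observations, treated as independent Bernoulli trials
    p_hat = 0.5001    # observed proportion of peanuts
    p0 = 0.50         # hypothesized proportion (H0: p = 0.50, H1: p > 0.50)

    # Standard z statistic for a one-proportion test
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = stats.norm.sf(z)  # right-tail probability

    print(f"z = {z:.1f}, p-value = {p_value:.3f}")
    # z = 2.0, p-value = 0.023: significant at alpha = 0.05, even though a
    # 0.01 percentage-point difference is practically meaningless.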