Least squares estimates are calculated by fitting a regression line to the points on a probability plot. The line is obtained by regressing time to failure or log(time to failure) (X) on the transformed percent (Y).
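As an illustrative sketch (not Minitab's exact implementation), the probability-plot regression can be written out for a two-parameter Weibull fit on complete data. Plotting positions here use Benard's median-rank approximation, and X is regressed on Y as described above; the function name and plotting-position choice are assumptions for this example.

```python
import math

def lse_weibull(times):
    """Least-squares Weibull estimates from a probability plot (sketch).

    Regresses log(time to failure) (X) on the transformed percent (Y),
    then reads the shape and scale off the fitted line. Uncensored data only.
    """
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(sorted(times), start=1):
        f = (i - 0.3) / (n + 0.4)              # Benard's median rank (plotting position)
        ys.append(math.log(-math.log(1 - f)))  # transformed percent (Y) for a Weibull plot
        xs.append(math.log(t))                 # log time to failure (X)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Ordinary least squares of X on Y: X = a + b*Y
    b = sum((y - my) * (x - mx) for x, y in zip(xs, ys)) / sum((y - my) ** 2 for y in ys)
    a = mx - b * my
    # On a Weibull plot, ln(t) = ln(scale) + (1/shape) * Y
    return 1.0 / b, math.exp(a)  # (shape, scale)
```

With roughly Weibull-distributed failure times, the returned shape and scale track the slope and the 63rd-percentile crossing of the plotted line.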
Maximum likelihood estimates are calculated by maximizing the likelihood function. The likelihood function measures, for each candidate set of distribution parameters, how probable the observed sample is under a distribution with those parameters.
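The idea can be sketched for a two-parameter Weibull on complete data. The log-likelihood below scores each candidate (shape, scale) pair against the sample; a toy grid search then picks the highest-scoring pair. Real software uses Newton-type optimizers and extends the likelihood to handle censoring, so the grid search and function names here are illustrative assumptions only.

```python
import math

def weibull_loglik(shape, scale, times):
    """Weibull log-likelihood for complete (uncensored) data.

    Sums the log density over the sample; higher values mean the sample
    is more probable under Weibull(shape, scale).
    """
    return sum(
        math.log(shape / scale)
        + (shape - 1) * math.log(t / scale)
        - (t / scale) ** shape
        for t in times
    )

def mle_weibull(times, shapes, scales):
    """Toy maximizer: exhaustive search over a parameter grid (illustration only)."""
    return max(
        ((s, c) for s in shapes for c in scales),
        key=lambda p: weibull_loglik(p[0], p[1], times),
    )
```

By construction, the pair the grid search returns scores at least as high as any other grid point.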
Here are the major advantages of each method:
When possible, try both methods; if the results are consistent, your conclusions have stronger support. If they differ, you may want to use the more conservative estimates, or weigh the advantages of each approach and choose the one better suited to your problem.
Note: For some data, the likelihood function is unbounded and therefore yields inconsistent estimates for the three-parameter models. In such cases, the usual maximum likelihood estimation method can break down. When this happens, Minitab assumes a fixed threshold parameter, chosen with a bias-correction algorithm, and finds the maximum likelihood estimates of the other two parameters. See [16], [17], [18], and [19] for more information.
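The unboundedness can be demonstrated numerically for a three-parameter Weibull: when the shape is below 1, pushing the threshold toward the smallest observation drives the log-likelihood to infinity, so no interior maximum over all three parameters exists. The function and the specific numbers below are illustrative assumptions, not Minitab's algorithm.

```python
import math

def weibull3_loglik(shape, scale, thresh, times):
    """Three-parameter Weibull log-likelihood (complete data, thresh < min(times))."""
    return sum(
        math.log(shape / scale)
        + (shape - 1) * math.log((t - thresh) / scale)
        - ((t - thresh) / scale) ** shape
        for t in times
    )

times = [1.0, 2.0, 4.0, 8.0]
# With shape < 1, the log-likelihood keeps growing as the threshold
# approaches the smallest observation (1.0), instead of leveling off.
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, weibull3_loglik(0.5, 3.0, 1.0 - eps, times))
```

This is the breakdown the note describes, and why fixing the threshold before maximizing over the remaining two parameters restores a well-posed problem.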