Least squares estimates versus maximum likelihood estimates

Least squares estimates are calculated by fitting a regression line to the points on a probability plot. The line is formed by regressing the time to failure or log(time to failure) (X) on the transformed percent (Y).
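
For illustration, here is a minimal sketch of the LSXY calculation in Python for a 2-parameter Weibull distribution with complete (uncensored) data. It assumes median ranks (Benard's approximation) as the plotting positions; the function name and sample data are hypothetical, and this is not Minitab's implementation.

    import numpy as np

    def lsxy_weibull(times):
        # Least squares (LSXY) estimates for a 2-parameter Weibull,
        # regressing log(time to failure) (X) on the transformed percent (Y).
        t = np.sort(np.asarray(times, dtype=float))
        n = len(t)
        ranks = np.arange(1, n + 1)
        # Median ranks via Benard's approximation (assumed plotting position).
        F = (ranks - 0.3) / (n + 0.4)
        x = np.log(t)                    # X: log(time to failure)
        y = np.log(-np.log(1.0 - F))     # Y: transformed percent
        # Fit X = intercept + slope * Y by ordinary least squares.
        slope, intercept = np.polyfit(y, x, 1)
        shape = 1.0 / slope              # Weibull shape (beta)
        scale = np.exp(intercept)        # Weibull scale (eta)
        return shape, scale

    # Hypothetical failure times in hours.
    print(lsxy_weibull([55, 187, 216, 240, 244, 335, 361, 373, 375, 386]))

Because X is regressed on Y, the Weibull shape is the reciprocal of the fitted slope and the scale is the exponential of the intercept.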

Maximum likelihood estimates are calculated by maximizing the likelihood function. The likelihood function measures, for each candidate set of distribution parameters, how likely the observed sample is under a distribution with those parameters.
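
The same Weibull model can be fit by maximizing the log-likelihood numerically. Here is a minimal sketch under the same assumptions as above (2-parameter Weibull, complete data, hypothetical names and values), using a general-purpose optimizer rather than Minitab's algorithm.

    import numpy as np
    from scipy.optimize import minimize

    def mle_weibull(times):
        # Maximum likelihood estimates for a 2-parameter Weibull,
        # found by minimizing the negative log-likelihood.
        t = np.asarray(times, dtype=float)

        def neg_log_likelihood(params):
            shape, scale = params
            if shape <= 0 or scale <= 0:
                return np.inf            # keep the search in the valid region
            z = t / scale
            # log f(t) = log(shape/scale) + (shape - 1)*log(t/scale) - (t/scale)**shape
            logpdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape
            return -np.sum(logpdf)

        result = minimize(neg_log_likelihood, x0=[1.0, float(np.mean(t))],
                          method="Nelder-Mead")
        return result.x                  # [shape, scale]

    print(mle_weibull([55, 187, 216, 240, 244, 335, 361, 373, 375, 386]))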

Here are the major advantages of each method:

Least squares (LSXY)

·    Gives a better graphical match to the probability plot, because the line is fitted directly to the plotted points.

·    For samples with little censoring, LSXY is more accurate than MLE, especially when the sample is small.

Maximum likelihood (MLE)

·    Distribution parameter estimates are more precise than least squares (LSXY) estimates.

·    For samples with heavy censoring, MLE is more accurate than LSXY.

·    MLE allows you to perform an analysis even when there are no failures. When there is only one failure and some right-censored observations, maximum likelihood parameter estimates may still exist for a Weibull distribution (see the censored-data sketch after this list).

·    The maximum likelihood estimation method has attractive statistical properties: under fairly general conditions, the estimates are consistent, asymptotically efficient, and asymptotically normally distributed.
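
As noted in the list above, the likelihood function extends naturally to censored samples: each failure contributes its log-density, and each right-censored unit contributes its log-survival probability. A minimal sketch, again assuming a 2-parameter Weibull with hypothetical data:

    import numpy as np
    from scipy.optimize import minimize

    def mle_weibull_censored(times, failed):
        # Weibull MLE with right censoring: failures contribute log f(t),
        # censored units contribute log S(t) = -(t/scale)**shape.
        t = np.asarray(times, dtype=float)
        d = np.asarray(failed, dtype=bool)   # True = failure, False = censored

        def neg_log_likelihood(params):
            shape, scale = params
            if shape <= 0 or scale <= 0:
                return np.inf
            z = t / scale
            loglik = np.where(
                d,
                np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape,
                -z**shape,
            )
            return -np.sum(loglik)

        result = minimize(neg_log_likelihood, x0=[1.0, float(np.mean(t))],
                          method="Nelder-Mead")
        return result.x

    # Three hypothetical failures plus three units still running at 500 hours.
    times = [55, 187, 216, 500, 500, 500]
    failed = [True, True, True, False, False, False]
    print(mle_weibull_censored(times, failed))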

When possible, try both methods; if the results are consistent, there is more support for your conclusions. Otherwise, you may want to use the more conservative estimates, or weigh the advantages of each approach and choose the one better suited to your problem.

Note

For some data, the likelihood function is unbounded and therefore yields inconsistent estimates for the three-parameter models. In such cases, the usual maximum likelihood estimation method can break down. When this happens, Minitab assumes a fixed threshold parameter using a bias correction algorithm and finds the maximum likelihood estimates of the other two parameters. See [16], [17], [18], and [19] for more information.