Multiple regression models are used in applied agricultural economics research for two main
purposes: forecasting and making statistical inferences about the effects of exogenous variables on the
dependent variable. Efficient estimation of the model coefficients is important in both cases. Slope
parameter estimators with lower standard errors represent more precise measurements of the magnitude of
the impacts of the exogenous variables on the dependent variable, and produce more reliable predictions.
Ordinary least squares (OLS) is widely used in empirical work because if the model’s error term is
normally, independently and identically distributed (n.i.i.d.), OLS yields the most efficient unbiased
estimators for the model’s coefficients, i.e. no other technique can produce unbiased slope parameter
estimators with lower standard errors. Maximum likelihood (ML) based on the n.i.i.d. assumption is
equivalent to OLS. Generalized least squares (GLS) can be used to improve estimation efficiency relative
to OLS when the error term is heteroskedastic, autocorrelated or both. Even more efficient slope parameter
estimators can be obtained in this case through ML, by specifying a likelihood function in which the error
term is assumed normal, but not i.i.d. (Judge et al.). This is commonly known as “correcting” for
heteroskedasticity or autocorrelation.
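For concreteness, the setup being described can be sketched as y = Xβ + ε with error covariance matrix Ω; this notation is introduced here only to illustrate the discussion and is not taken from the sources cited. The GLS estimator and the normal log-likelihood referred to above are then

\[
\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y,
\qquad
\ln L(\beta,\Omega) = -\tfrac{n}{2}\ln(2\pi) - \tfrac{1}{2}\ln|\Omega| - \tfrac{1}{2}(y - X\beta)'\Omega^{-1}(y - X\beta),
\]

where OLS corresponds to the special case Ω = σ²I. Maximizing the normal log-likelihood jointly over β and the parameters of Ω is what is meant by "correcting" for heteroskedasticity or autocorrelation.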
If the dependent variable, and thus the error term, is continuous but not normally distributed,
however, OLS (in the i.i.d.-error case) or normal-error ML (in the non-i.i.d.-error case) is not the
most efficient way of estimating the slope parameters of a multiple regression model (Judge et al.). Since
non-normal dependent variables are not uncommon in applied modeling work, Goldfeld and Quandt argue
vehemently against the continued reliance on the assumption of error term normality for estimating
regressions. Three approaches are currently available for estimating multiple regression models under non-
normality: robust, partially adaptive, and adaptive estimation (McDonald and White).
Robust estimators are based on minimizing some function of a scale-adjusted error term that gives
less weight to large error values. They can be asymptotically more efficient than OLS when the tails of the
underlying error term distribution are “thicker” than the normal (McDonald and Newey). Least Absolute
Deviation (LAD) is an example of a robust estimator that is asymptotically more efficient than OLS for