and LAD estimators. Further, they encompass several well-known robust
methods, including least median of squares (LMS) and least trimmed squares
(LTS): whereas the former defines the scale statistic s²{η(z, b)} as the median
of the squared residuals η(z, b), the latter uses the scale defined by the sum
of the h smallest squared residuals. In order to appreciate the difference from
M-estimators, it is worth pausing for a moment to present LMS, the most
prominent representative of S-estimators, in the location case:
\[
\arg\min_{\theta}\ \operatorname{med}\{(x_1 - \theta)^2, \ldots, (x_n - \theta)^2\}.
\]
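To make the defining criteria concrete, the following minimal sketch (in Python with NumPy; the brute-force grid search and the names lms_location and lts_location are illustrative choices, not part of any standard package) approximates the LMS location estimate by minimizing the median of squared residuals over a grid of candidate values, together with the analogous LTS estimate based on the sum of the h smallest squared residuals:

\begin{verbatim}
import numpy as np

def lms_location(x, grid_size=2001):
    # LMS: minimize the median of squared residuals over a grid of
    # candidate location values (brute force, purely for illustration).
    x = np.asarray(x, dtype=float)
    theta = np.linspace(x.min(), x.max(), grid_size)
    crit = np.median((x[None, :] - theta[:, None]) ** 2, axis=1)
    return theta[np.argmin(crit)]

def lts_location(x, h=None, grid_size=2001):
    # LTS: minimize the sum of the h smallest squared residuals;
    # h of roughly n/2 + 1 gives the highest breakdown point.
    x = np.asarray(x, dtype=float)
    n = x.size
    h = (n // 2) + 1 if h is None else h
    theta = np.linspace(x.min(), x.max(), grid_size)
    sq = (x[None, :] - theta[:, None]) ** 2
    crit = np.sort(sq, axis=1)[:, :h].sum(axis=1)
    return theta[np.argmin(crit)]

# Sample with 40% gross outliers: both estimates stay close to 0.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(50.0, 1.0, 40)])
print(lms_location(x), lts_location(x))
\end{verbatim}

The grid search merely illustrates the criteria; exact and much faster algorithms exist (in the location case, for instance, the LMS estimate is the midpoint of the shortest half of the ordered sample).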
Due to their definition, S-estimators have the same influence function
as the M-estimator constructed from the same function ρ. Contrary to M-
estimators, they can achieve the highest possible breakdown point ε* = 0.5;
this is the case, for example, of LMS and LTS. For Gaussian data, the most
efficient S-estimator with ε* = 0.5 (in the sense of ARE (7)) is, however,
the one corresponding to K = 1.548 and ρ being the Tukey biweight
function; see Table 1. Given the HBP of S-estimators, their maximum-bias
behavior is of interest too. Although it depends on the function ρ and con-
stant K (Berrendero and Zamar, 2001), Yohai and Zamar (1993) proved that
LMS minimizes maximum bias among a large class of (residual admissible)
estimators, which includes most robust methods.
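For concreteness, a minimal sketch of the Tukey biweight ρ mentioned above, together with the M-scale it defines, could look as follows (Python with NumPy; the bisection routine and the default b = ρ(K)/2, which corresponds to breakdown point 0.5, are illustrative additions rather than definitions taken from the text):

\begin{verbatim}
import numpy as np

def rho_biweight(u, K=1.548):
    # Tukey biweight rho: smooth and bounded, equal to K^2/6 for |u| > K.
    u = np.asarray(u, dtype=float)
    out = np.full(u.shape, K**2 / 6.0)
    inside = np.abs(u) <= K
    out[inside] = (K**2 / 6.0) * (1.0 - (1.0 - (u[inside] / K) ** 2) ** 3)
    return out

def s_scale(resid, K=1.548, b=None, tol=1e-10):
    # M-scale s solving (1/n) * sum_i rho(r_i / s) = b by bisection;
    # the default b = rho(K)/2 = K^2/12 corresponds to breakdown point 0.5.
    resid = np.asarray(resid, dtype=float)
    b = (K**2 / 12.0) if b is None else b
    lo, hi = 1e-12, max(np.max(np.abs(resid)), 1e-12)
    while np.mean(rho_biweight(resid / hi, K)) > b:
        hi *= 2.0                     # enlarge until the mean falls below b
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if np.mean(rho_biweight(resid / mid, K)) > b:
            lo = mid                  # scale still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}

The corresponding S-estimator of regression can then be written as the minimizer of s_scale(y - X @ beta) over beta.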
An important shortcoming of HBP S-estimation is, however, its low ARE:
under Gaussian data, the efficiency relative to LS varies from 0% to 27%. Thus,
S-estimators are often used as initial estimators for other, more efficient