1. Introduction
Situations abound in which economists, decision makers, and other interested parties
desire a ranking of some set according to a chosen metric. Academic departments
are ranked according to research output, perceived quality of faculty, and/or reputation.
Hospitals are ranked according to mortality rates (often adjusted for the severity of the
injuries they treat). Firms are ranked relative to intra-industry competitors on the basis
of technical efficiency. In all these situations, in addition to the desired ranking, it would
be beneficial to provide information on the precision of the rankings. In layman's terms,
can we truly differentiate the units of observation, or are we, more accurately, only
separating them into groups? In extreme cases, a set of firms might be ranked by efficiency,
yet the most and least efficient firms might not truly be distinguishable due to a lack of
statistical precision. In such a case, the ranking would be best suppressed.
A huge literature exists on measuring the relative efficiency of a set of firms, in both
allocative and technical senses. A segment of this literature uses data envelopment analysis
(DEA), creating relative efficiency rankings that are nonstochastic and thus cannot be
evaluated for precision. A parallel literature uses econometric
techniques, such as stochastic production frontiers or estimation of distance functions,
providing at least the possibility of computing the precision of the resulting efficiency
rankings. Recently, Horrace and Schmidt (2000) have applied the sampling-theoretic statistical
techniques known as multiple comparisons with a control (MCC) and multiple comparisons
with the best (MCB) to the issue of measuring the precision of efficiency rankings. These
techniques allow researchers and users of such rankings to discover the precision with
which certain firms can be ranked above others, along with discovering sets of firms that