Sargent: Different people mean and do different things by calibration. Some people mean ‘use an extraneous estimator’. Take estimates from some previous study and pretend that they are known numbers. An obvious difficulty of this procedure is that often those extraneous estimates were prepared with an econometric specification that contradicts your model. Treating those extraneous parameters as known ignores the clouds of uncertainty around them: clouds associated with the estimation uncertainty conveyed by the original researcher, and clouds from the ‘specification risk’ of putting your faith in the econometric specification that another researcher used to prepare his estimates.
Other people, for example Larry Christiano and Marty Eichenbaum, mean by calibration GMM estimates that use a subset of the moment conditions for the model and data set at hand. Presumably, they impose only a subset of the moment conditions because they trust some aspects of their model more than others. This is a type of robustness argument that has been pushed furthest by those now doing semiparametric GMM. There are ways to calculate standard errors that account for vaguely specified or distrusted aspects of the model. By the way, these ways of computing standard errors have a min-max flavor that reminds one of the robust control theory that Lars Hansen and I are using.
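[The idea of imposing only the trusted subset of moment conditions can be illustrated with a toy sketch. The model, numbers, and moment choices below are purely hypothetical, not from the interview: suppose a model implies both E[x] = θ and E[x²] = θ² + 1, but the researcher trusts only the first-moment restriction and so estimates θ by matching only that moment.]

```python
# Toy GMM sketch: impose only a trusted subset of moment conditions.
# Hypothetical model: E[x] = theta and E[x^2] = theta^2 + 1, but we
# trust only the first moment, so only E[x - theta] = 0 is imposed.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
# Simulated data whose variance (1.5^2) violates the model's distrusted
# second-moment restriction (which would require variance 1).
x = rng.normal(loc=2.0, scale=1.5, size=5000)

def g(theta):
    # Sample analogue of the single trusted moment condition E[x - theta] = 0.
    return np.mean(x - theta)

def objective(theta):
    # GMM criterion g(theta)' W g(theta); with one moment and one
    # parameter the weighting matrix W is just a scalar, taken as 1.
    return g(theta) ** 2

theta_hat = minimize_scalar(objective, bounds=(-10.0, 10.0), method="bounded").x
# Exactly identified case: theta_hat coincides with the sample mean of x,
# even though the model's second-moment implication is false in the data.
```

Because only the trusted moment is imposed, the estimate is unaffected by the failure of the distrusted second-moment restriction, which is exactly the robustness motivation described above.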
Evans and Honkapohja: We know what question maximum likelihood estimation answers, and the circumstances under which maximum likelihood estimates, or Bayesian counterparts to them, have good properties. What question is calibration the answer to?
Sargent: The best answer I know is contained in work by Hansen and
others on GMM. They show the sense in which GMM is the best way to estimate
trusted features of a less than fully trusted model.
Evans and Honkapohja: Do you think calibration in macroeconomics
was an advance?
Sargent: In many ways, yes. I view it as a constructive response to Bob’s remark that ‘your likelihood ratio tests are rejecting too many good models’. In those days, the rational expectations approach to macroeconomics was still being challenged by influential people. There was a danger that skeptics and opponents would misread those likelihood ratio tests as rejections of an entire class of models, which of course they were not. (The internal logic of the likelihood function as a complete model should have made that clear, but apparently it wasn’t at the time!) The unstated case for calibration was that it was a way to continue the process of acquiring experience in matching rational expectations models to data by lowering our standards relative to maximum likelihood, and emphasizing those features of the data that our models could capture. Instead of trumpeting their failures in terms of dismal likelihood ratio statistics, celebrate the features that they could capture and focus attention on the next unexplained feature that ought to be explained. One can argue that this was a sensible response to those likelihood ratio tests. It was also a response to the