Evans and Honkapohja: Why did you get interested in non-rational
learning theories in macroeconomics?
Sargent: Initially, to strengthen the case for and extend our understand-
ing of rational expectations. In the 1970s, rational expectations was severely
criticized because, it was claimed, it endowed people with too much knowledge
about the economy. It was fun to be doing rational expectations macro in the
mid 70s because there was lots of skepticism, even hostility, toward rational
expectations. Critics claimed that an equilibrium concept in which everyone
shared ‘God’s model’ was incredible. To help meet that criticism, I enlisted in
Margaret Bray’s and David Kreps’s research program. Their idea was to push
agents’ beliefs away from a rational expectations equilibrium, then endow them
with learning algorithms and histories of data. Let them adapt their behavior
in a way that David Kreps later called ‘anticipated utility’ behavior: here you
optimize, taking your latest estimate of the transition equation as though it were
permanent; update your transition equation; optimize again; update again; and
so on. (This is something like ‘fictitious play’ in game theory. Kreps argues that
while it is ‘irrational’, it can be a smart way to proceed in contexts in which it
is difficult to figure out what it means to be rational. Kreps’s Schwartz lecture
has some fascinating games that convince you that his anticipated utility view is
attractive.) Margaret Bray, Albert Marcet, Mike Woodford, you two, Xiaohong
Chen and Hal White, and the rest of us wanted to know whether such a system
of adaptive agents would converge to a rational expectations equilibrium. To-
gether, we discovered a broad set of conditions on the environment under which
beliefs converge. Something like a rational expectations equilibrium is the only
possible limit point for a system with adaptive agents. Analogous results prevail
in evolutionary and adaptive theories of games.
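The anticipated-utility loop Sargent describes can be sketched in a few lines. The following is an illustrative toy model, not anything from the interview: in a cobweb-style economy the actual price depends on the agents' forecast, agents treat their latest estimate as permanent when acting, then update it by recursive least squares (which, for a constant-only model, reduces to a running mean). All parameter values are invented for illustration.

```python
import random

random.seed(0)
# Toy economy (illustrative assumption): actual price depends on beliefs,
#   p_t = mu + alpha * belief_t + noise_t,  with alpha < 1.
mu, alpha = 2.0, 0.5
belief, n = 0.0, 0            # agents start from a wrong belief

for t in range(20000):
    # act on the current estimate as though it were permanent ...
    price = mu + alpha * belief + random.gauss(0.0, 0.1)
    # ... then update the estimate (recursive least squares on a constant)
    n += 1
    belief += (price - belief) / n

ree = mu / (1.0 - alpha)      # rational expectations equilibrium price
print(belief, ree)            # belief ends up close to the REE value
```

With these parameters the unique rational expectations equilibrium is mu / (1 - alpha) = 4.0, and the adaptive system converges toward it, which is the Bray-style convergence result the answer refers to.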
Evans and Honkapohja: What do you mean ‘something like’?
Sargent: The limit point depends on how much prompting you give agents
in terms of functional forms and conditioning variables. Early work in the
least squares learning literature endowed agents with wrong coefficients,
but with correct functional forms and correct conditioning variables. With
those endowments, the systems typically converged to a rational expectations
equilibrium. Subsequent work by you two, and by Albert Marcet and me,
withheld some pertinent conditioning variables from agents, e.g., by prematurely
truncating pertinent histories. We found convergence to objects that could
be thought of as ‘rational expectations equilibria with people conditioning on
restricted information sets’. Chen and White studied situations in which agents
permanently have wrong functional forms. Their adaptive systems converge to
a kind of equilibrium in which agents’ forecasts are optimal within the class of
information filtrations that can be supported by the functional forms to which
they have restricted agents.
Evans and Honkapohja: How different are these equilibria with subtly
misspecified expectations from rational expectations equilibria?