However, while it is common to use the unconditional loss when assessing timeless perspective
policy performance (Jensen and McCallum, 2002), there are good reasons for not doing so.
Aside from the most obvious point, which is that the discretionary problem, the optimal
commitment problem, and the timeless perspective problem are all explicitly conditioned on
an observed known initial state, x_t, it is well known that ignoring transition dynamics and
evaluating policies according to their asymptotic behavior can lead to spurious welfare reversals
(Kim, Kim, Schaumburg, and Sims, 2005).
Although Figure 1A shows that discretion can be superior to timeless perspective policymaking,
neither equation (14) nor its unconditional expectation seems entirely satisfactory for
quantifying timeless perspective policy performance: the former depends on auxiliary state
variables, here y_{t-1}, while the latter ignores initial conditions and transition dynamics.
To address this issue, in the next section I develop a measure of performance suitable for
evaluating timeless perspective policies.
4.2 Evaluating policy performance
For the general linear-quadratic control problem described by equations (12) through (14), the
three policy approaches examined above have equilibria that can be written in the form
s_{t+1} = M_s s_t + N ε_{t+1},   (29)
y_t = H_s s_t,   (30)
u_t = F_s s_t,   (31)

where s_t ≡ [x_t'  q_t']'. For the discretionary policy q_t is the null vector, for the optimal
commitment policy q_t = p_t, and for the timeless perspective policy q_t = [x_t'  d_{t-1}']'.
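To make the state-space form in equations (29) through (31) concrete, the following is a minimal numpy sketch that simulates the three equations. The matrices M_s, N, H_s, F_s and the shock covariance Σ are purely illustrative placeholders, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative solution matrices (placeholders, not from the paper):
    # s_t stacks x_t and q_t; here dim(s_t) = 2, with one endogenous
    # variable y_t, one policy instrument u_t, and one shock.
    M_s = np.array([[0.9, 0.1],
                    [0.0, 0.5]])   # state transition, equation (29)
    N = np.array([[1.0],
                  [0.3]])          # shock loading, equation (29)
    H_s = np.array([[1.0, 0.5]])   # maps s_t into y_t, equation (30)
    F_s = np.array([[-0.2, 0.4]])  # maps s_t into u_t, equation (31)
    Sigma = np.array([[0.01]])     # shock covariance

    T = 100
    s = np.zeros((2, 1))  # initial state s_0
    for t in range(T):
        y = H_s @ s  # equation (30)
        u = F_s @ s  # equation (31)
        eps = rng.multivariate_normal(np.zeros(1), Sigma).reshape(-1, 1)
        s = M_s @ s + N @ eps  # equation (29)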
Now, for an arbitrary period t, equations (29) through (31) allow the loss function, conditional
on s_t, to be expressed as
L_t = s_t' P s_t + (β/(1 − β)) tr[N' P N Σ],   (32)
where
P = W̃ + β M_s' P M_s,   (33)
W̃ ≡ H_s' W H_s + H_s' U F_s + F_s' U' H_s + F_s' R F_s.   (34)
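Equation (33) is a discounted Lyapunov equation, so P can be computed by simple fixed-point iteration, which converges when β ∈ (0, 1) and M_s is stable; equation (32) then delivers the conditional loss directly. The following is a minimal numpy sketch of this computation, again using illustrative placeholder matrices rather than values from the paper.

    import numpy as np

    def solve_P(W_tilde, M_s, beta, tol=1e-12, max_iter=100_000):
        # Fixed-point iteration on P = W_tilde + beta * M_s' P M_s, equation (33).
        P = np.zeros_like(W_tilde)
        for _ in range(max_iter):
            P_next = W_tilde + beta * M_s.T @ P @ M_s
            if np.max(np.abs(P_next - P)) < tol:
                return P_next
            P = P_next
        raise RuntimeError("P iteration did not converge")

    def conditional_loss(s_t, P, N, Sigma, beta):
        # Equation (32): L_t = s_t' P s_t + beta/(1 - beta) * tr[N' P N Sigma].
        quad = (s_t.T @ P @ s_t).item()
        stochastic = beta / (1.0 - beta) * np.trace(N.T @ P @ N @ Sigma)
        return quad + stochastic

    # Illustrative inputs (placeholders, not from the paper):
    H_s = np.array([[1.0, 0.5]])
    F_s = np.array([[-0.2, 0.4]])
    W = np.array([[1.0]])   # weight on y_t
    U = np.array([[0.0]])   # cross weight on y_t and u_t
    R = np.array([[0.1]])   # weight on u_t
    # W_tilde built from equation (34):
    W_tilde = (H_s.T @ W @ H_s + H_s.T @ U @ F_s
               + F_s.T @ U.T @ H_s + F_s.T @ R @ F_s)
    M_s = np.array([[0.9, 0.1],
                    [0.0, 0.5]])
    N = np.array([[1.0],
                  [0.3]])
    Sigma = np.array([[0.01]])
    beta = 0.99

    P = solve_P(W_tilde, M_s, beta)
    L_t = conditional_loss(np.array([[1.0], [0.0]]), P, N, Sigma, beta)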