Define the following sequence of transforming functions $T_k(\cdot)$:³

$$T_k(Z) = e^{kZ}$$
Since $T_k(\cdot)$ is increasingly convex as $k$ increases, a Bayesian with objective
$T_k(Z)$ will become increasingly risk averse in terms of the coefficient of absolute
risk aversion. As a result, the value of the transformed loss $T_k(Z)$ increases
disproportionately with the size of the loss $Z$.
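To make the risk-aversion claim concrete: assuming the relevant measure is the Arrow–Pratt-type coefficient $T_k''/T_k'$ applied to the convex loss transform (my reading, not a derivation given in the text), it equals $k$ and therefore grows without bound:

$$\frac{T_k''(Z)}{T_k'(Z)} \;=\; \frac{k^2 e^{kZ}}{k\, e^{kZ}} \;=\; k \;\longrightarrow\; \infty \quad \text{as } k \to \infty.$$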
Intuitively, this implies that the largest of the losses $Z$ associated with a given
action $x$ receives increasing relative weight. This should move the solution of the
Bayesian decision problem ever closer to the robust solution. Proposition
1 below confirms this intuition:
Proposition 1 Let $x_k$ denote the solution to the transformed Bayesian decision
problem (6) with prior probabilities $p_i > 0$ ($i = 1, \ldots, n$). Let $x^r$ denote the
solution to the robust decision problem (f). Then

$$\lim_{k \to \infty} \left\| x_k - x^r \right\| = 0$$
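For concreteness, reading the transformed Bayesian problem (6) and the robust problem in the standard way (their exact formulations appear earlier in the paper and are not reproduced in this section), the two solutions being compared are presumably

$$x_k = \arg\min_x \sum_{i=1}^{n} p_i\, e^{k Z(x, s_i)}, \qquad x^r = \arg\min_x \max_{i=1,\ldots,n} Z(x, s_i),$$

where $s_1, \ldots, s_n$ index the states over which the robust decision maker cannot form priors.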
The proof of Proposition 1 can be found in the appendix. Proposition 1
shows that robust decisions can be interpreted as decisions of a Bayesian with
an infinite degree of risk aversion and arbitrary strictly positive priors over the
domain to which the robust decision maker cannot assign prior probabilities.
In Bayesian terms, the desire for robustness amounts to the choice of a particular
objective function, one with the property that optimal decisions are robust to
the assignment of prior probabilities.
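As a purely numerical illustration of this limit (a sketch, not part of the paper's analysis), the snippet below minimises the presumed transformed Bayesian objective $\sum_i p_i\, e^{kZ(x,s_i)}$ over a grid of policies and compares the minimiser with the minimax policy. It anticipates the quadratic loss of the next subsection; the states, priors, inflation target and policy grid are hypothetical values chosen only for the illustration.

```python
import numpy as np

# Hypothetical finite-state setup: states s_i with strictly positive
# priors p_i, quadratic loss Z(x, s) = (s*x - pi_star)^2 as in Section 3.1.
states = np.array([0.5, 1.0, 1.5])    # hypothetical values of s
priors = np.array([0.2, 0.5, 0.3])    # arbitrary strictly positive priors
pi_star = 2.0                         # hypothetical inflation target
x_grid = np.linspace(0.0, 8.0, 4001)  # grid of candidate policies x

# Loss matrix: rows index policies, columns index states.
Z = (x_grid[:, None] * states[None, :] - pi_star) ** 2

# Robust (minimax) policy: minimise the worst-case loss over states.
x_robust = x_grid[np.argmin(Z.max(axis=1))]

# Transformed Bayesian policies: minimise sum_i p_i * exp(k * Z(x, s_i)).
# The objective is handled in logs (log-sum-exp) to avoid overflow for large k.
for k in [1, 10, 100, 1000]:
    log_terms = np.log(priors)[None, :] + k * Z
    log_objective = np.logaddexp.reduce(log_terms, axis=1)
    x_k = x_grid[np.argmin(log_objective)]
    print(f"k = {k:5d}:  x_k = {x_k:.3f}")

print(f"robust    :  x^r = {x_robust:.3f}")
```

With these hypothetical numbers the minimax policy is $x^r = 2$, and the printed $x_k$ values move toward it as $k$ grows, as Proposition 1 predicts; the grid search and the log-sum-exp trick are implementation conveniences, not part of the argument.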
The next subsection illustrates Proposition 1 using a univariate example.
3.1 An Example
Consider the following simple loss function, which has been considered, amongst
others, by Brainard (1967) and Onatski (2000):

$$Z(x, s) = (sx - \pi^*)^2$$
The variable $\pi^*$ denotes an inflation target pursued by the central bank, while
$sx$ denotes the inflation rate that results when the decision maker chooses policy
³The particular sequence $T_k$ is chosen merely for convenience; other sequences might give
the same result.