Example 2 Let the optimal robust decision be given by $x^r = (x^r_0, x^r_1, \dots)$, and let $s_1$ be the state $s_i$ ($i = 1, 2$) that maximizes the loss for this and any neighboring decisions. Next consider the decision
$$x = x^r + (d, 0, 0, 0, \dots)$$
which is equal to $x^r$ except in the first period. Suppose that altering the decision from $x^r$ to $x$ causes the loss in period zero to increase by $\gamma_1 > 0$ units in state $s_1$. This makes $x$ suboptimal for the robust decision maker.
Next, consider a Bayesian decision maker with objective (8) who considers a deviation from $x^r$ to $x$. The change $\Delta_k$ in the first-period loss is given by
$$\Delta_k = \left( e^{k\left(l(x^r_0, s_1) + \gamma_1\right)} - e^{k\, l(x^r_0, s_1)} \right) p_1 + \left( e^{k\left(l(x^r_0, s_2) + \gamma_2\right)} - e^{k\, l(x^r_0, s_2)} \right) p_2 \qquad (9)$$
where $\gamma_2 = l(x^r_0 + d, s_2) - l(x^r_0, s_2)$. Suppose $\gamma_2 < 0$ and $l(x^r_0, s_2) > l(x^r_0, s_1) + \gamma_1 > 0$, which cannot be excluded; then
$$\lim_{k \to \infty} \Delta_k = -\infty,$$
which indicates that a Bayesian with objective function (8) will prefer $x$ to $x^r$ for all sufficiently large $k$.
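A quick numeric illustration of this limit, using hypothetical loss values chosen to satisfy the stated conditions ($\gamma_2 < 0$ and $l(x^r_0, s_2) > l(x^r_0, s_1) + \gamma_1 > 0$; all numbers below are illustrative, not from the text):

```python
import math

# Hypothetical values satisfying the conditions of Example 2.
l_s1, gamma1 = 1.0, 0.5    # period-zero loss in s1 and its increase under x
l_s2, gamma2 = 2.0, -0.3   # (larger) loss in s2 and its negative change
p1, p2 = 0.5, 0.5          # illustrative prior probabilities

def delta(k):
    # Change in the transformed objective, as in equation (9)
    return ((math.exp(k * (l_s1 + gamma1)) - math.exp(k * l_s1)) * p1
            + (math.exp(k * (l_s2 + gamma2)) - math.exp(k * l_s2)) * p2)

for k in (0.5, 1.0, 5.0, 10.0):
    print(k, delta(k))
```

For small $k$ the positive $s_1$ term dominates, but because $l(x^r_0, s_2)$ exceeds $l(x^r_0, s_1) + \gamma_1$, the negative $s_2$ term grows at a faster exponential rate and drives $\Delta_k$ to $-\infty$.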
To obtain a convergence result similar to the one in section 3, one has to define the transformed loss function as
$$T_k(L(x, s)) = e^{k \left( \sum_{t=0}^{\infty} \beta^t l(x_t, s_t) \right)} \qquad (10)$$
and let the Bayesian minimize
$$\min_{\{x_t \mid x_t \in \Omega_t\}} \; \sum_{i=1}^{I} e^{k \left( \sum_{t=0}^{\infty} \beta^t l(x_t, s^i_t) \right)} p_i \qquad (11)$$
where $p_i$ are prior probabilities.
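The mechanism behind the convergence can be sketched in a stripped-down one-period, two-state version of problem (11). The quadratic losses, priors, and grid search below are all hypothetical choices for illustration; the log-sum-exp trick only rescales the objective monotonically, so it preserves the minimizer while avoiding overflow for large $k$:

```python
import math

# Hypothetical one-period setup: two states with quadratic losses.
def l1(x): return (x - 1.0) ** 2    # loss in state s1
def l2(x): return (x + 2.0) ** 2    # loss in state s2

p1, p2 = 0.8, 0.2                   # illustrative prior probabilities

def bayes_objective(x, k):
    # log of p1*exp(k*l1) + p2*exp(k*l2), computed stably (log-sum-exp);
    # the log is monotone, so the minimizer is unchanged.
    a = k * l1(x) + math.log(p1)
    b = k * l2(x) + math.log(p2)
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def argmin_on_grid(f, lo=-3.0, hi=2.0, step=0.01):
    n = round((hi - lo) / step)
    return min((lo + i * step for i in range(n + 1)), key=f)

# Robust (minimax) solution: equate the two losses, x = -0.5.
x_robust = argmin_on_grid(lambda x: max(l1(x), l2(x)))
# Bayesian solution for large k approaches the robust one
# regardless of the (non-degenerate) priors.
x_k = argmin_on_grid(lambda x: bayes_objective(x, k=50.0))
```

For large $k$ the exponential weighting makes the worst-case state dominate the sum, so the Bayesian minimizer is pulled toward the minimax point; the prior only contributes an $O(1/k)$ offset.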
Proposition 3 below shows that, as $k$ increases without bound, the Bayesian solution to problem (11) converges to the robust solution in terms of the following vector norm:
$$\|x\|_\beta = \sum_{t=0}^{\infty} \beta^t x_t' x_t \qquad (12)$$
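As a small numeric check on (12): for $\beta \in (0,1)$ and a bounded sequence the discounted sum converges, so it can be evaluated by truncation. For a constant scalar sequence $x_t = c$ the series sums to $c^2/(1-\beta)$ (the values below are illustrative):

```python
# Truncated evaluation of the discounted norm in (12) for scalar x_t.
def beta_norm(x_seq, beta):
    return sum(beta ** t * x * x for t, x in enumerate(x_seq))

# Constant sequence x_t = c: the infinite sum equals c^2 / (1 - beta).
beta, c, T = 0.95, 2.0, 2000
approx = beta_norm([c] * T, beta)
exact = c * c / (1 - beta)
```

With $\beta^T$ astronomically small at $T = 2000$, the truncation error is negligible.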