3 Linking Bayesian and Robust Decision Problems
The objective of this section is to establish a link between the Bayesian and the
robust decision problems described in the previous section. The main idea is to
change the objective function of the Bayesian decision problem so that the
Bayesian's objective has the same minimum as the robust objective.
Since the Bayesian's loss function depends on the action x, altering the loss
function is a back door through which one can cause the Bayesian to behave as
if her priors were changing across actions. In particular, if the Bayesian were to
minimize a transformed loss function T(L(x,s)) with the property that
\[
T(L(x,s)) = L(x,s)\cdot\frac{1(x,s)}{p_s} \qquad (5)
\]
where $p_s$ is the prior probability of state $s$, then the Bayesian problem would
be identical to the robust decision problem:
\[
\begin{aligned}
\min_{x\in\Omega_x} E\left[T(L(x,s))\right]
&= \min_{x\in\Omega_x} \sum_{i=1}^{n} L(x,s_i)\,\frac{1(x,s_i)}{p_i}\,p_i \\
&= \min_{x\in\Omega_x} \sum_{i=1}^{n} L(x,s_i)\,1(x,s_i) \\
&= \min_{x\in\Omega_x} R(x),
\end{aligned}
\]
where the second equality holds because the prior probabilities cancel.
Of course, such a transformed 'loss function' is not a loss function in the strict
sense since it depends on prior probabilities.
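As a quick numerical check of this equivalence, the sketch below compares the expected transformed loss under the prior with the worst-case loss on a small randomly generated example. The loss table, the prior, and the reading of 1(x,s) as an indicator of the worst-case state for action x (so that R(x) = max_s L(x,s)) are illustrative assumptions, not objects taken from the text.

```python
import numpy as np

# Hypothetical example: 3 actions, 4 states, an arbitrary loss table and prior.
rng = np.random.default_rng(0)
L = rng.uniform(0.0, 10.0, size=(3, 4))   # L[x, s] = loss of action x in state s
p = np.array([0.1, 0.2, 0.3, 0.4])        # strictly positive prior over states

# Assumed reading of 1(x, s): 1 if s is the worst-case state for action x, else 0.
worst = L.argmax(axis=1)
ind = np.zeros_like(L)
ind[np.arange(L.shape[0]), worst] = 1.0

# Transformed loss T(L(x, s)) = L(x, s) * 1(x, s) / p_s, as in equation (5).
T = L * ind / p                            # broadcasting divides column s by p_s

# Expected transformed loss under the prior vs. the worst-case loss R(x).
bayes_objective = (T * p).sum(axis=1)      # E[T(L(x, s))]; the priors cancel
robust_objective = L.max(axis=1)           # R(x) = max_s L(x, s) (assumed)

assert np.allclose(bayes_objective, robust_objective)
print(bayes_objective.argmin() == robust_objective.argmin())  # same minimizer
```

Because the prior cancels against the 1/p_s factor, the two objectives agree exactly, and so do their minimizers.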
Given that direct equivalence between the two problems requires a Bayesian
loss that depends on priors, the strategy is to construct a sequence of transformed
loss functions $T_k(L(x,s))$ for the Bayesian problem with the property
that these transformed loss functions are independent of the prior. At the same
time, the solution to
\[
\min_{x\in\Omega_x} E\left[T_k(L(x,s))\right] \qquad (6)
\]
which is denoted by $x_k^*$, should converge to the robust solution $x^*$ as $k$ increases
without bound, i.e.
\[
\lim_{k\to\infty} \left\| x_k^* - x^* \right\| = 0 .
\]
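The text has not yet specified how the sequence $T_k$ is constructed. Purely as an illustration of the convergence behaviour required above, the sketch below uses the power transformation T_k(L) = L^k, which is prior-free; this choice is an assumption made here for illustration and is not claimed to be the paper's construction. On a finite action grid it checks that the minimizer of (6) approaches the robust solution as k grows.

```python
import numpy as np

# Illustrative (assumed) construction: T_k(L) = L**k for strictly positive losses.
# It is prior-free, and since (E[L(x, s)**k])**(1/k) -> max_s L(x, s) as k grows,
# the minimizer of E[T_k(L(x, s))] approaches the robust (minimax) minimizer.
rng = np.random.default_rng(1)
L = rng.uniform(1.0, 10.0, size=(50, 6))   # 50 candidate actions, 6 states
p = np.full(6, 1.0 / 6.0)                  # any strictly positive prior works

x_star = L.max(axis=1).argmin()            # robust solution x*

for k in [1, 2, 5, 10, 50, 200]:
    x_k = (L**k @ p).argmin()              # minimizer of E[T_k(L(x, s))], eq. (6)
    print(f"k={k:4d}  x_k={x_k:2d}  x*={x_star:2d}")
# For large k the Bayesian choice x_k typically settles on the robust action x*.
```

Because the transformation acts on the loss alone, it does not depend on the prior, which is exactly the property the sequence $T_k$ is required to have.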