Constraints. The constraints have to be dealt with “manually”. The local instantaneous constraints on the controls u_1(t) and u_2(t), see (27), can be immediately expressed in discrete time as

0 ≤ u_{1,ℓ} ≤ 1  and  u_{2,ℓ} ≥ 0.   (33)
The portfolio admissibility condition, which in the continuous version amounts to x(t) ≥ 0 for all t ∈ [0, T], see [4], cannot however be directly replaced by x_ℓ ≥ 0. This is (mainly) because x_ℓ is expressible in terms of u_{1,ℓ−1} and u_{2,ℓ−1}, yet we need a condition valid for time ℓ. It appears (from [10]) that the best discrete-time counterpart of x(t) ≥ 0 is

x_ℓ (1 + δ(r + u_{1,ℓ}(a − r))) − δ u_{2,ℓ} ≥ 0   (34)
where δ is the time discretisation step (see Section
2.2). It has to be borne in mind that (34) is
an approximation to the portfolio admissibility
condition and that it depends on the time dis-
cretisation step.
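As an illustration of how conditions (33) and (34) might be enforced in a numerical routine, the sketch below checks a candidate control pair for feasibility; the names r, a, delta and the example values are assumptions for the illustration, not taken from the paper's code.

```python
# Sketch: feasibility check for the discrete-time constraints (33)-(34).
# The parameter names (r, a, delta) and values are illustrative assumptions.

def is_feasible(x_l, u1_l, u2_l, r, a, delta):
    """Return True if (u1_l, u2_l) satisfies (33) and (34) at state x_l."""
    # (33): local instantaneous constraints on the controls
    if not (0.0 <= u1_l <= 1.0):
        return False
    if u2_l < 0.0:
        return False
    # (34): discrete-time counterpart of the admissibility condition x(t) >= 0;
    # the post-transition wealth must remain non-negative
    return x_l * (1.0 + delta * (r + u1_l * (a - r))) - delta * u2_l >= 0.0

# Example call with assumed values delta = 0.02, r = 0.05, a = 0.11
print(is_feasible(x_l=100.0, u1_l=0.4, u2_l=10.0, r=0.05, a=0.11, delta=0.02))
```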
Technical hints. A cautionary remark about numerical optimisation is in order. Most optimisation methods work (much) more efficiently if the solution vector components are of comparable magnitudes. This is not the case for the control variables u_1 and u_2. Indeed, u_1 is bounded between 0 and 1, but u_2 is practically unbounded from above, see Figure 2. This caused some (not insuperable) difficulties in [6] in obtaining accurate approximating solutions. In this paper, such difficulties were avoided by re-scaling the model. It follows from (32) that u_2(t) is linear in the state x(t). Every occurrence of u_2(t) was therefore replaced by u_2(t)x(t) in the optimisation problem max (28) subject to (26), so that the new control is expressed per unit of wealth. Consequently, the numerical routines were searching for a u_2(t) that was not greater than 12 in most of the cases solved. Moreover, because of this transformation the strategy graphs will no longer be linear, as in the lower panel of Figure 2, but horizontal (as in Figure 7).
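A minimal sketch of this re-scaling idea, under assumed names and a toy one-step criterion (not the paper's optimisation code): the optimiser searches over v = u_2/x instead of u_2, so both components of the decision vector are of comparable magnitude.

```python
# Sketch of the re-scaling idea with assumed parameter values and a toy
# one-step objective; it is not the paper's actual optimisation code.
import numpy as np
from scipy.optimize import minimize

r, a, delta, x_l = 0.05, 0.11, 0.02, 100.0   # assumed values

def neg_objective(z):
    u1, v = z                      # v = u2 / x: second control per unit of wealth
    u2 = v * x_l                   # recover the original (unscaled) control
    x_next = x_l * (1.0 + delta * (r + u1 * (a - r))) - delta * u2
    return -(delta * np.sqrt(max(u2, 0.0)) + x_next)   # toy utility-like criterion

# Both decision variables now have comparable magnitudes:
# u1 in [0, 1] and v bounded by roughly 1/delta, instead of u2 in [0, x/delta].
res = minimize(neg_objective, x0=[0.5, 0.1],
               bounds=[(0.0, 1.0), (0.0, 1.0 / delta)])
print(res.x)
```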
Important software control parameters are the
time discretisation step δ and the state space
grid width h. To get an idea of the range of values needed for an accurate approximation, a deterministic portfolio control problem (T = 2) was solved: first analytically, then the discretised
model solutions were computed. Figure 5 shows
the results.
The plot coordinates are the time discretisation
step δ and a utility measure. The horizon is T = 2;
the remaining model parameters are as in Section
4.1.
Fig. 5. Discretised model utility realisations: the utility measure plotted against the time discretisation step δ, for state space grid widths h = 500 and h = 20000, with the continuous-time model optimal utility marked for reference (T = 2).
The point denoted “*”, at (0, 422), is the continuous model optimal utility. The discrete-time model utility values converge toward this point as δ → 0. Notice that they are greater than the continuous model utility. This is because (in the rectangular method) the integration error grows with δ. The
points denoted “+” correspond to utility reali-
sations of a model discretised both in time and
space (Markov chain). It is clear from the figure
that reasonable utility approximations can only
be obtained for δ < .1 and h < 500.
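The over-estimation noted above can be reproduced in a few lines. The sketch below is an illustration only (β and T are assumed values, and the integrand is a generic discount factor, not the paper's utility): a left-endpoint rectangular sum of a decreasing discounted integrand exceeds the exact integral, with the excess growing in δ.

```python
# Minimal numerical illustration (not the paper's computation) of why the
# rectangular (left-endpoint) rule over-estimates a discounted integral,
# with the error growing in delta. beta and T are assumed values.
import numpy as np

beta, T = 0.11, 2.0
exact = (1.0 - np.exp(-beta * T)) / beta        # integral of exp(-beta*t) over [0, T]

for delta in [0.02, 0.1, 0.5, 1.0]:
    t = np.arange(0.0, T, delta)                # left endpoints of the subintervals
    rectangular = delta * np.sum(np.exp(-beta * t))
    print(f"delta={delta:4.2f}  rectangular={rectangular:.4f}  "
          f"excess over exact={rectangular - exact:+.4f}")
```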
The impact of the length of the time step δ on the
solution accuracy in a stochastic model is shown
in Figure 6.
Consider time ℓ (ℓ = 0, 1, ..., N − 1) and u_{1,ℓ} to be applied at this time. Assuming that the choice of u_{2,ℓ} is made optimally,
u_{1,ℓ} = arg max ( δ γ √u_{2,ℓ} + e^{−βℓ} g(ℓ + δ) E_{x_ℓ} x_{ℓ+δ} ) = arg max ( E_{x_ℓ} x_{ℓ+δ} ),   (35)
see (29). The expected value in (35) was computed using a (second-order) Taylor series expansion and is presented as a function of the strategy u_1 in Figure 6. The strategy domain was “extended” beyond the feasible range [0, 1] to show the shapes of the utility measures. Notice that, for the feasible u_1 ∈ [0, 1], the utility measures would all look flat.
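For illustration only, the sketch below shows one common way such a second-order Taylor approximation of an expected value of next-period wealth can be coded: expand a smooth function about the mean of x_{ℓ+δ} and correct with its variance. The Euler dynamics, the function f, and all parameter values are assumptions, not the paper's model.

```python
# Illustrative second-order Taylor approximation of E[f(x_{l+delta})] about the
# mean of x_{l+delta} under simple Euler wealth dynamics (assumed dynamics and
# parameter values; not the paper's model).
import numpy as np

r, a, sigma, delta = 0.05, 0.11, 0.4, 0.02   # assumed market / step parameters

def next_wealth_moments(x, u1, u2):
    """Mean and variance of x_{l+delta} under an assumed Euler step."""
    mean = x * (1.0 + delta * (r + u1 * (a - r))) - delta * u2
    var = (u1 * sigma * x) ** 2 * delta
    return mean, var

def taylor2_expectation(f, d2f, x, u1, u2):
    """E[f(X)] ~= f(mu) + 0.5 * f''(mu) * Var(X): second-order expansion about the mean."""
    mu, var = next_wealth_moments(x, u1, u2)
    return f(mu) + 0.5 * d2f(mu) * var

# Example: approximate expected square root of next-period wealth as u1 varies
f, d2f = np.sqrt, lambda y: -0.25 * y ** (-1.5)
for u1 in np.linspace(0.0, 1.0, 5):
    print(f"u1={u1:4.2f}  E[sqrt(x_next)] ~= "
          f"{taylor2_expectation(f, d2f, x=100.0, u1=u1, u2=10.0):.4f}")
```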