


preserves the first two moments of the original distribution, so that the overall discretisation scheme is weakly consistent in the sense of [9]⁶.
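To illustrate the moment-matching property numerically, here is a minimal sketch in Python (the symmetric two-point approximation of the Gaussian increment is one common construction; the paper's exact scheme may differ, and all names below are illustrative):

import numpy as np

# Approximate the Brownian increment ΔW ~ N(0, delta) by the two-point
# distribution taking ±sqrt(delta) with probability 1/2 each. Then
#   E[ΔW]   = 0        (first moment matched)
#   Var[ΔW] = delta    (second moment matched)
delta = 0.01
rng = np.random.default_rng(0)
samples = rng.choice([-np.sqrt(delta), np.sqrt(delta)], size=200_000)

print(f"sample mean:     {samples.mean():+.5f}  (target 0)")
print(f"sample variance: {samples.var():.5f}   (target {delta})")

Matching the first two moments of the increment is the property the overall discretisation scheme relies on for weak consistency.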

Constraints. It has to be borne in mind that the discretisation of a constraint is always sensitive to the choice of the discretisation steps δ and h, and has to be dealt with carefully. E.g., a local constraint w(t) > a for t ∈ [t1, t2] can be meaningfully translated as v_ℓ > a only if both δ is small in comparison to [t1, t2] and the state grid step h < |a|. See inequality (34) for an example of how a state variable constraint can be represented in discrete time.
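As a rough illustration (a hypothetical helper, not taken from the paper), the two conditions can be encoded as a feasibility check before the grid is built:

def constraint_representable(delta, h, a, t1, t2, min_steps=10):
    # The local constraint w(t) > a on [t1, t2] is meaningfully
    # imposed grid-point-wise (v_l > a) only if:
    #   1. delta is small relative to the interval length, so that
    #      several time-grid points fall inside [t1, t2] (min_steps
    #      is an illustrative tolerance, not a value from the paper);
    #   2. the state grid step resolves the level a, i.e. h < |a|.
    return (t2 - t1) >= min_steps * delta and h < abs(a)

# Example: a one-year window with a weekly time step and a fine grid.
print(constraint_representable(delta=1/52, h=0.05, a=0.2, t1=0.0, t2=1.0))  # True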

Transition Rewards. Let the control strategy be Markovian (2) and the action at stage ℓ be computed as

u_ℓ = μ(ℓ, Y_ℓ),  Y_ℓ ∈ X_ℓ,  ℓ = 0, 1, ..., N − 1.   (22)

Recalling (4), note that for the approximating problem the decision maker receives a reward that depends on the state at stage ℓ and on the action:

l(Y_ℓ, u_ℓ, ℓ) = δ g(Y_ℓ, u_ℓ, t_ℓ),   (23)

ℓ = 0, 1, 2, ..., N − 1. The overall reward for the Markov decision chain Y, starting from Y_0 = x_0 ∈ X_0 and controlled by u = {u_0, u_1, ..., u_{N−1}}, can be determined as

J(0, x_0; u) = E[ Σ_{ℓ=0}^{N−1} l(Y_ℓ, u_ℓ, ℓ) + s(Y_N) | Y_0 = x_0 ].   (24)
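To make (22)-(24) concrete, the following sketch estimates J(0, x_0; u) by Monte Carlo (Python; f, b, g, s and the policy μ are passed in as callables, the two-point increment above stands in for the discretised noise, and the projection onto the grid X_ℓ is omitted for brevity, so this is an illustration rather than the paper's algorithm):

import numpy as np

def path_reward(x0, mu, f, b, g, s, delta, N, rng):
    # One sample path of the controlled chain under the Markovian
    # policy u_l = mu(l, y) of (22); accumulates the stage rewards
    # l(Y_l, u_l, l) = delta * g(Y_l, u_l, t_l) of (23) and adds
    # the terminal reward s(Y_N), as in (24).
    y, total = x0, 0.0
    for l in range(N):
        t = l * delta
        u = mu(l, y)
        total += delta * g(y, u, t)
        dW = rng.choice([-1.0, 1.0]) * np.sqrt(delta)  # two-point ΔW
        y = y + delta * f(y, u, t) + b(y, u, t) * dW
    return total + s(y)

def estimate_J(x0, mu, f, b, g, s, delta, N, n_paths=10_000, seed=0):
    # Averaging over many sample paths gives a Monte Carlo
    # estimate of the expectation in (24).
    rng = np.random.default_rng(seed)
    return np.mean([path_reward(x0, mu, f, b, g, s, delta, N, rng)
                    for _ in range(n_paths)])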


Finally, the problem

    max  J(0, x_0; u)
    subject to                               (25)
    Y_{ℓ+1} = Y_ℓ + δ f_ℓ + b_ℓ ΔW_ℓ,
with the transition probabilities defined as above, is the Markov decision chain approximating the original continuous-time optimisation problem of Section 2.1. For convenience, we use the notation J(δ, X) ≡ J(0, x_0; û), where û is a maximiser in the above optimisation.

⁶ See [9], page 1002. Conditions "1" and "2" (about continuity of the Markov chain expected value and variance) are easy to prove. However, because in this discretisation method the state step is independent of the time step, condition "3" (about continuity of the Markov chain increments) can only be satisfied if the grid X_ℓ is allowed to become denser. Moreover, consistency fails along the boundary of the discrete state space, so the scheme is locally weakly consistent. This is not surprising, since it would be impossible for a system constrained to lie within a finite space to follow the behaviour of a system which is not similarly constrained at the points where the constraints become active. However, this feature is common to all approximation schemes of this kind.

2.3 Computational Complexity

There are two crucial parameters for the solution method outlined above: the number of states and the number of time steps. One expects that increasing these numbers would improve the solution's accuracy. However, the computation time also increases. Recent papers [13] and [14] report mitigating the curse of dimensionality for a certain subclass of Markov decision chains through use of randomisation. The Markovian approximation defined in this paper leads to a similar conclusion. Following [7], a result of this nature will be proved.

Claim 1. The computation time needed for the solution of the Markov decision chain (25) increases approximately linearly in both the number of states and the number of time steps.

Proof. Suppose a solution is computed by backward induction for a state in stage ℓ and that the solution from stage ℓ + 1 onwards has already been determined. The time required to compute the optimal decision for the current state is largely independent of both the number of time steps and the number of states. Its independence of the number of states is a consequence of the approximation scheme scanning only the adjacent states in the next stage. Doubling the number of states means that twice the time is taken for each stage, and the computation time doubles. Doubling the number of time steps leaves the computation time for each stage fixed but doubles the number of stages, and hence the computation time doubles. This exact linear relationship, reported in [7] (see Figure 1), is spoiled by the vagaries of the computation time of the numerical maximisation, as well as the load dependence of the computer performance (see Figure 1, right panel). □
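The linear-scaling argument can be read directly off a backward-induction implementation. Below is a minimal sketch (Python; the three-neighbour transition function trans_probs is a hypothetical stand-in for the transition probabilities of (25)). The inner work per state touches only the adjacent states in the next stage, so doubling the number of states doubles the work per stage, and doubling the number of time steps doubles the number of stages; either way the run time doubles, as in Claim 1.

import numpy as np

def backward_induction(n_states, N, actions, stage_reward,
                       terminal_reward, trans_probs):
    # V[i] holds the optimal reward-to-go from state i.
    V = np.array([terminal_reward(i) for i in range(n_states)], float)
    for l in range(N - 1, -1, -1):           # N stages ...
        V_next, V = V, np.empty(n_states)
        for i in range(n_states):            # ... times n_states states
            best = -np.inf
            for u in actions:                # fixed-size action scan
                # Only the adjacent states i-1, i, i+1 in the next
                # stage are consulted, so the cost of this step is
                # independent of both n_states and N.
                p_dn, p_st, p_up = trans_probs(i, u)
                ev = (p_dn * V_next[max(i - 1, 0)]
                      + p_st * V_next[i]
                      + p_up * V_next[min(i + 1, n_states - 1)])
                best = max(best, stage_reward(i, u, l) + ev)
            V[i] = best
    return V  # V[i] approximates the optimal value from state i at stage 0

# Toy usage: symmetric random walk, zero running reward.
V = backward_induction(n_states=100, N=50, actions=[0],
                       stage_reward=lambda i, u, l: 0.0,
                       terminal_reward=lambda i: float(i),
                       trans_probs=lambda i, u: (0.5, 0.0, 0.5))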

An array of test problems was solved in [7] (and [8]) using a similar Markovian approximation method⁷. The approximating solutions closely

⁷ The noise approximation method used in [7] and [8] was different from the weak Taylor approximation used here, so only the solutions of deterministic problems reported there are directly relevant to this paper.


