(i) Monotonicity: If m(x) ≤ n(x) ∀x ∈ R+, then
\[
\begin{aligned}
T(m) &= \max_{l}\Big\{ kl - X^{2}
+ \delta \int_{-\infty}^{X_{crit}-c(X,l)} \int_{-\infty}^{\infty} m\big(c(X,l)+v+u\big)\, f_{1}(u)\,du\; g(v)\,dv \\
&\qquad\qquad + \delta \int_{X_{crit}-c(X,l)}^{\infty} \int_{-\infty}^{\infty} m\big(c(X,l)+r+v+u\big)\, f_{2}(u)\,du\; g(v)\,dv \Big\} \\
&\le \max_{l}\Big\{ kl - X^{2}
+ \delta \int_{-\infty}^{X_{crit}-c(X,l)} \int_{-\infty}^{\infty} n\big(c(X,l)+v+u\big)\, f_{1}(u)\,du\; g(v)\,dv \\
&\qquad\qquad + \delta \int_{X_{crit}-c(X,l)}^{\infty} \int_{-\infty}^{\infty} n\big(c(X,l)+r+v+u\big)\, f_{2}(u)\,du\; g(v)\,dv \Big\} \\
&= T(n),
\end{aligned}
\]
since m(x) ≤ n(x) pointwise, integration against the non-negative densities preserves the inequality, and so does taking the maximum over l.
(ii) Discounting: For all a ≥ 0 there exists δ < 1 with
\[
\begin{aligned}
T(m+a) &= \max_{l}\Big\{ kl - X^{2}
+ \delta \int_{-\infty}^{X_{crit}-c(X,l)} \int_{-\infty}^{\infty} \big[m\big(c(X,l)+v+u\big)+a\big]\, f_{1}(u)\,du\; g(v)\,dv \\
&\qquad\qquad + \delta \int_{X_{crit}-c(X,l)}^{\infty} \int_{-\infty}^{\infty} \big[m\big(c(X,l)+r+v+u\big)+a\big]\, f_{2}(u)\,du\; g(v)\,dv \Big\} \\
&= \max_{l}\Big\{ kl - X^{2}
+ \delta \int_{-\infty}^{X_{crit}-c(X,l)} \int_{-\infty}^{\infty} m\big(c(X,l)+v+u\big)\, f_{1}(u)\,du\; g(v)\,dv \\
&\qquad\qquad + \delta \int_{X_{crit}-c(X,l)}^{\infty} \int_{-\infty}^{\infty} m\big(c(X,l)+r+v+u\big)\, f_{2}(u)\,du\; g(v)\,dv \Big\} + \delta a \\
&= T(m) + \delta a.
\end{aligned}
\]
The second equality follows because the densities f1(u) and f2(u) each integrate to one, and the two regions of integration over v together cover the support of g(v).
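Spelled out with the bounds as written above, the terms involving the constant a collect to
\[
\delta \int_{-\infty}^{X_{crit}-c(X,l)} \int_{-\infty}^{\infty} a\, f_{1}(u)\,du\; g(v)\,dv
+ \delta \int_{X_{crit}-c(X,l)}^{\infty} \int_{-\infty}^{\infty} a\, f_{2}(u)\,du\; g(v)\,dv
= \delta a \int_{-\infty}^{\infty} g(v)\,dv = \delta a .
\]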
Points (i) and (ii) are sufficient to show that T is a contraction mapping. This implies that we can start with an arbitrary function m(·), and repeated application of T converges to the unique fixed point, the true value function.
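To make this iteration concrete, the following is a minimal Python sketch of value-function iteration with a Monte Carlo approximation of the operator T. The functional form of c(X, l), the shock densities f1, f2 and g, and the constants k, r, X_crit and δ are illustrative placeholders, not the model's actual specification.

```python
import numpy as np

# Illustrative stand-ins for the model primitives (assumptions, not the paper's
# calibration): c(X, l), the densities f1, f2, g, and the constants below.
rng = np.random.default_rng(0)
delta, k, r, X_crit = 0.9, 1.0, 2.0, 1.0
X_grid = np.linspace(0.0, 5.0, 41)     # grid for the state X
l_grid = np.linspace(0.0, 2.0, 11)     # candidate loadings l
U1 = rng.normal(-1.0, 0.5, 100)        # draws of u under f1 (assumed normal)
U2 = rng.normal(-1.5, 0.7, 100)        # draws of u under f2 (assumed normal)
V = rng.normal(0.0, 1.0, 100)          # draws of v under g (assumed normal)

def c(X, l):
    """Placeholder for the paper's c(X, l)."""
    return X + l

def apply_T(m):
    """One application of the operator T; the double integrals are replaced by
    Monte Carlo averages, and m is evaluated by linear interpolation on X_grid
    (values outside the grid are clamped to the endpoints)."""
    Tm = np.empty_like(X_grid)
    for i, X in enumerate(X_grid):
        candidates = []
        for l in l_grid:
            cXl = c(X, l)
            low = V <= X_crit - cXl          # v-region of the first integral
            term1 = term2 = 0.0
            if low.any():                    # branch without recapitalisation, u ~ f1
                m1 = np.interp(cXl + V[low][:, None] + U1[None, :], X_grid, m)
                term1 = low.mean() * m1.mean()
            if (~low).any():                 # branch with recapitalisation r, u ~ f2
                m2 = np.interp(cXl + r + V[~low][:, None] + U2[None, :], X_grid, m)
                term2 = (1.0 - low.mean()) * m2.mean()
            candidates.append(k * l - X**2 + delta * (term1 + term2))
        Tm[i] = max(candidates)
    return Tm

# Start from an arbitrary m and iterate T; the contraction property guarantees
# convergence to the unique fixed point (the value function).
m = np.zeros_like(X_grid)
for _ in range(300):
    m_new = apply_T(m)
    if np.max(np.abs(m_new - m)) < 1e-6:
        break
    m = m_new
```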
We will next show that T maps concave functions into concave functions. Hence, if we start with a concave function m and repeatedly apply T, every resulting function is concave. Since concavity is preserved under pointwise limits, the unique attractor, the true value function, is concave as well.
Concavity: ∀X1, X2 ∈ R+ define the optimal loadings as l1 and l2, respectively. Note that for the convex combination X3 = θX1 + (1 − θ)X2, where θ ∈ (0, 1), the convex combination