$$E\big[g(\beta,\tilde{y}) \,/\, y\big] \approx \frac{\sum_{i=1}^{N} g(\beta_i,\tilde{y}_i)\, l(\varphi_i / y)\, p(\varphi_i)}{\sum_{i=1}^{N} l(\varphi_i / y)\, p(\varphi_i)} \qquad (3.8)$$
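Equation (3.8) is an importance-sampling ratio estimator. A minimal sketch of how it can be computed, assuming hypothetical `log_like` and `log_prior` callables and a set of draws (φ_i, β_i, ỹ_i) from the importance function (these names and the data layout are illustrative, not from the paper):

```python
import numpy as np

def ratio_estimate(g, log_like, log_prior, draws):
    """Importance-sampling ratio estimate of E[g(beta, y~) / y], eq. (3.8).

    `draws` holds tuples (phi_i, beta_i, ytilde_i) generated from the
    importance function; each draw is weighted by l(phi_i / y) p(phi_i).
    Working on the log scale guards against numerical underflow.
    """
    log_w = np.array([log_like(phi) + log_prior(phi) for phi, _, _ in draws])
    w = np.exp(log_w - log_w.max())          # rescale for numerical stability
    g_vals = np.array([g(beta, yt) for _, beta, yt in draws])
    return float(np.sum(w * g_vals) / np.sum(w))
```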
Using this Monte Carlo composition method we obtain an empirical i.i.d. sample from the joint density over parameters and predicted values with all the neoclassical restrictions imposed. Furthermore, this large sample can be used to approximate the posterior probability that the properties hold.5 Applying Monte Carlo integration as shown above, I derive:
$$P\big[(\beta,\tilde{y})\in(\Phi\times\Omega)_R \,/\, y\big] = \int_{(\Phi\times\Omega)_R} f(\tilde{y},\beta \,/\, y,\tilde{x})\, d\beta\, d\tilde{y}$$

$$= \int_{\Phi\times\Omega} I(\beta,\tilde{y})\, f(\tilde{y},\beta \,/\, y,\tilde{x})\, d\beta\, d\tilde{y} \qquad (3.9)$$

$$\approx \frac{1}{N}\sum_{i=1}^{N} I\big(\beta_i,\tilde{y}_i\big)$$
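A minimal sketch of the estimator in (3.9), assuming an i.i.d. posterior sample of (β_i, ỹ_i) pairs from the composition step and a hypothetical `indicator` function that returns True when concavity and monotonicity hold at a draw (both names are assumptions for illustration):

```python
import numpy as np

def restriction_probability(indicator, sample):
    """Monte Carlo approximation of P[(beta, y~) in (Phi x Omega)_R / y], eq. (3.9).

    `sample` is the empirical i.i.d. sample of (beta_i, ytilde_i) pairs from
    the Monte Carlo composition method; the share of draws satisfying the
    restrictions estimates the posterior probability.
    """
    hits = np.fromiter((indicator(b, yt) for b, yt in sample), dtype=float)
    return float(hits.mean())
```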
This Monte Carlo integration with importance sampling was also suggested by Chalfant and Wallace (1992). The probability in (3.9) also yields the (approximate) posterior odds, since obviously P[(β,ỹ)∈(Φ×Ω)/y] = 1, so the probability of the unrestricted complement is simply 1 − P[(β,ỹ)∈(Φ×Ω)_R / y]. Using (2.35) we can state the condition that must be satisfied to accept concavity and monotonicity in a decision-theoretic framework with a piecewise continuous loss function. Let l_R be the loss incurred by incorrectly accepting the restrictions and l_U the loss incurred by incorrectly rejecting them.6 An optimal decision minimizes expected loss: accepting carries expected loss l_R(1 − P) and rejecting carries l_U P, where P = P[(β,ỹ)∈(Φ×Ω)_R / y]; hence we reject the restrictions if:
$$P\big[(\beta,\tilde{y})\in(\Phi\times\Omega)_R \,/\, y\big] < \frac{l_R}{l_R + l_U} \qquad (3.10)$$
As 0 ≤ P[(β,ỹ)∈(Φ×Ω)_R / y] ≤ 1, the loss function dictates the critical value above which the restrictions hold. If we assume a symmetric loss function, i.e. l_R = l_U, we accept the restrictions if the posterior probability is larger than 0.5.
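The decision rule (3.10) reduces to comparing the posterior probability against the loss-ratio critical value; a sketch with hypothetical argument names:

```python
def accept_restrictions(post_prob, loss_accept_wrong, loss_reject_wrong):
    """Accept the restrictions unless rule (3.10) signals rejection.

    post_prob         : P[(beta, y~) in (Phi x Omega)_R / y] from eq. (3.9)
    loss_accept_wrong : l_R, loss from accepting restrictions that are false
    loss_reject_wrong : l_U, loss from rejecting restrictions that are true
    """
    critical_value = loss_accept_wrong / (loss_accept_wrong + loss_reject_wrong)
    return post_prob >= critical_value

# With a symmetric loss (l_R = l_U) the critical value is 0.5:
# accept_restrictions(0.62, 1.0, 1.0)  -> True
```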
4. Data and empirical findings
5 Note that the restrictions are imposed at a point. However, the approach can be straightforwardly extended to impose them on a lattice.
6 For the sake of simplicity I have assumed that the losses are independent of the parameters and predicted values.