Furthermore,
$$C_n(\hat\theta_n) \xrightarrow{\,a.c.\,} C(\theta^*)$$
(proof: see the Appendix), where
$$C(\theta^*) = A(\theta^*)^{-1} B(\theta^*) A(\theta^*)^{-1} \tag{12}$$
and
$$A_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} \frac{\partial^2 \log f(\theta, x_i)}{\partial\theta_i\,\partial\theta_j}\,\hat f_n(x_i) \tag{13}$$
$$B_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} \frac{\partial \log f(\theta, x_i)}{\partial\theta_i}\,\frac{\partial \log f(\theta, x_i)}{\partial\theta_j}\,\hat f_n(x_i) \tag{14}$$
$$A(\theta) = E\!\left[\frac{\partial^2 \log f(\theta, x_i)}{\partial\theta_i\,\partial\theta_j}\,g(x_i)\right] \tag{15}$$
$$B(\theta) = E\!\left[\frac{\partial \log f(\theta, x_i)}{\partial\theta_i}\,\frac{\partial \log f(\theta, x_i)}{\partial\theta_j}\,g(x_i)\right] \tag{16}$$
It is important to point out that in this framework too, as in White (1982) in the context of the QMLE, in the presence of misspecification the covariance matrix $C(\theta^*)$ no longer equals the inverse of Fisher's Information (FI).
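To make the sandwich form (12) concrete, the following minimal sketch (not part of the paper) computes $C_n(\hat\theta_n)$ for a Gaussian model fitted by MLE to data drawn from a normal mixture, so the model is deliberately misspecified. The mixture example, the kernel estimator used for $\hat f_n$, and all names in the code are assumptions of the illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# True structure g: a two-component normal mixture; the fitted normal model is misspecified.
n = 5000
x = np.where(rng.random(n) < 0.3,
             rng.normal(-2.0, 1.0, n),
             rng.normal(1.0, 0.5, n))

# MLE of the normal model f(theta, x) with theta = (mu, sigma^2).
mu, s2 = x.mean(), x.var()
z = x - mu

# Per-observation score and Hessian of log f(theta, x_i) for the normal model.
score = np.column_stack([z / s2,                            # d log f / d mu
                         -0.5 / s2 + z**2 / (2 * s2**2)])   # d log f / d sigma^2
H = np.empty((n, 2, 2))
H[:, 0, 0] = -1.0 / s2
H[:, 0, 1] = H[:, 1, 0] = -z / s2**2
H[:, 1, 1] = 0.5 / s2**2 - z**2 / s2**3

# Nonparametric estimate fhat_n evaluated at the sample points.
fhat = gaussian_kde(x)(x)

# fhat_n-weighted sample averages, as in (13)-(14).
A_n = (H * fhat[:, None, None]).mean(axis=0)
B_n = (score[:, :, None] * score[:, None, :] * fhat[:, None, None]).mean(axis=0)

# Sandwich covariance (12): C_n = A_n^{-1} B_n A_n^{-1}.
A_inv = np.linalg.inv(A_n)
C_n = A_inv @ B_n @ A_inv
print(C_n)
```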
4.2 Asymptotic distribution of KI: heuristic approach
In order to obtain the weights in the model combination, as indicated by formula (11), we need to derive the asymptotic distribution of $\widehat{KI}_j$, the random variable that measures the ignorance about the true structure.
The purpose of this section is to provide a sketch of the proof (developed in the Appendix), in order to give the main intuition and to convey two main pieces of information: first, the effect of estimating the true model $g$ by $f_j(\hat\theta, x)$ on the limiting distribution of $\widehat{KI}_j$; second, how and which of the different components of the KI affect the mean and variance of the asymptotic distribution.
To simplify the notation, I drop the index $j$ and write $f_j(\hat\theta, x) = f_{\hat\theta}$, $\hat f_n(x) = \hat f_n$ and $g(x) = g$; then $\widehat{KI}$ is given by the following formula:
$$\widehat{KI} = KI(\hat f_n, f_{\hat\theta}) = \int_x (\ln \hat f_n - \ln f_{\hat\theta})\,\hat f_n\,dx = \int_x (\ln \hat f_n - \ln g)\,d\hat F_n - \int_x (\ln f_{\hat\theta} - \ln g)\,d\hat F_n = \widehat{KI}_1 - \widehat{KI}_2, \tag{17}$$
where the definitions of $\widehat{KI}_1$ and $\widehat{KI}_2$ are clear from the previous expression.
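To fix ideas, the following sketch computes $\widehat{KI}$ as the sample average in (17) and, since the true $g$ is known inside a simulation, verifies that the decomposition $\widehat{KI}_1 - \widehat{KI}_2$ returns the same number (the $\ln g$ terms cancel). The mixture choice for $g$, the kernel estimator for $\hat f_n$, and the Gaussian candidate model are assumptions of this illustration, not the paper's setup.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)

# Draw from an assumed true density g: a two-component normal mixture.
n = 5000
x = np.where(rng.random(n) < 0.3,
             rng.normal(-2.0, 1.0, n),
             rng.normal(1.0, 0.5, n))

# Candidate model f(theta_hat, x): a normal density with MLE parameters.
log_f_theta = norm.logpdf(x, loc=x.mean(), scale=x.std())

# Nonparametric estimate fhat_n of g (kernel estimator assumed).
log_fhat = np.log(gaussian_kde(x)(x))

# KI_hat: the integral in (17) against dF_n is a sample average.
KI_hat = np.mean(log_fhat - log_f_theta)

# In a simulation g is known, so KI_1 and KI_2 can be computed separately;
# the ln g terms cancel and the decomposition reproduces KI_hat.
log_g = np.log(0.3 * norm.pdf(x, -2.0, 1.0) + 0.7 * norm.pdf(x, 1.0, 0.5))
KI_1 = np.mean(log_fhat - log_g)      # ignorance of the nonparametric estimate
KI_2 = np.mean(log_f_theta - log_g)   # ignorance of the fitted parametric model
print(KI_hat, KI_1 - KI_2)            # identical up to floating-point error
```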
1) $\widehat{KI}_1$ can be approximated in the following way$^{14}$:
$^{14}$This can easily be seen by rewriting $\hat f_n$ in the following way:
$$\frac{\hat f_n}{g} = \frac{\hat f_n - g + g}{g} = 1 + \frac{\hat f_n - g}{g} = 1 + \gamma, \qquad \text{so that } \ln(1 + \gamma) \simeq \gamma - \frac{\gamma^2}{2}.$$
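A quick numerical check of the second-order expansion used in the footnote (purely illustrative, with arbitrary values of $\gamma$):

```python
import numpy as np

# ln(1 + gamma) vs. its second-order expansion gamma - gamma^2/2,
# where gamma = (fhat_n - g)/g in the footnote.
gamma = np.array([0.10, 0.05, 0.01])
print(np.log1p(gamma) - (gamma - gamma**2 / 2))   # residuals are O(gamma^3)
```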