bear regime, where values very close to zero are obtained for almost all considered values of risk aversion, which range from 1 to 20.
I now report the values of the optimal share obtained using the combination of double Gamma and
Gaussian distributions described in Section 6.4. Since the expected utility is given by a linear combination
of the MGF of each single distribution, the optimal share is given by a linear combination of the shares
found under each candidate model included in M (θMj).
Table X: Optimal share a* from the model combination, by sample and coefficient of risk aversion (R.A.)

| Sample      | R.A. = 2 | R.A. = 6 | R.A. = 10 |
|-------------|----------|----------|-----------|
| N = 7282    | Л        | 0.6792   | 0.4204    |
| E, N = 5921 | Л        | 0.8794   | 0.6546    |
| C, N = 1361 | 0        | 0        | 0         |
This implies that the investor, who fears misspecification and accounts for it explicitly through the model
combination, invests more in the risky asset than he would have using the double Gamma distribution as
his unique probabilistic belief. The reason is that, under the similarity-weighted distribution, the investor
no longer assigns probability one to the Gamma mixture, and hence does not overestimate the precision of
the forecast provided by that model.
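To make the combination step concrete, here is a minimal sketch assuming CARA utility (consistent with the MGF representation of expected utility above), with two hypothetical Gaussian beliefs standing in for the double Gamma and Gaussian candidates; the means, volatilities, and similarity weights are illustrative values, not estimates from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian_mgf(mu, sigma):
    """MGF of a N(mu, sigma^2) belief about the excess return R."""
    return lambda t: np.exp(mu * t + 0.5 * (sigma * t) ** 2)

# Hypothetical candidate beliefs (parameters and weights are made up).
models = [gaussian_mgf(0.05, 0.15), gaussian_mgf(0.02, 0.25)]
weights = np.array([0.6, 0.4])    # similarity weights
gamma = 6.0                       # coefficient of risk aversion (R.A.)

def optimal_share(mgf, gamma):
    # CARA investor: maximizing E[-exp(-gamma * a * R)] amounts to
    # minimizing MGF(-gamma * a) over the admissible shares a in [0, 1].
    return minimize_scalar(lambda a: mgf(-gamma * a),
                           bounds=(0.0, 1.0), method="bounded").x

# Per-model optima, then the linear combination described in the text.
shares = np.array([optimal_share(m, gamma) for m in models])
a_star = weights @ shares
print(f"per-model shares: {shares}, combined a*: {a_star:.4f}")
```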
8 Conclusions
This paper proposes a method to estimate the probability density of a random variable of interest in the
presence of model ambiguity. The first step consists in estimating and ranking the candidate parametric
models by minimizing the Kullback-Leibler ‘distance’ (KLD) between the nonparametric fit and the parametric
fit. In the second step, the information content of the KLD is used to determine the weights in the model
combination, even when the true structure does not necessarily belong to the set of candidate models.
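As a rough illustration of the first step, the following sketch fits two hypothetical parametric candidates to simulated data and ranks them by a Monte Carlo estimate of their KLD from a kernel density estimate; the Student-t data and the Normal/logistic candidates are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=2000)     # data whose true law is "unknown"

# Nonparametric fit and two hypothetical parametric candidates.
kde = stats.gaussian_kde(x)
candidates = {"normal": stats.norm, "logistic": stats.logistic}

def kld_to_kde(dist, x, kde):
    params = dist.fit(x)                # quasi-ML fit of the candidate
    s = kde.resample(5000, seed=1).ravel()
    # Monte Carlo estimate of KL(f_hat || g_theta) with draws from the KDE
    return np.mean(np.log(kde(s)) - dist.logpdf(s, *params))

klds = {name: kld_to_kde(d, x, kde) for name, d in candidates.items()}
ranking = sorted(klds, key=klds.get)    # smallest KLD first
print(klds, ranking)
```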
This approach has the following features. First, it provides an explicit representation of model uncertainty
by exploiting the models’ misspecification. Second, it removes the need for a specific prior over the
set of models and over the parameters of each model under consideration. Finally, it is
computationally very simple.
The NPQMLE estimator obtained in the first step is root-n consistent and asymptotically normally
distributed; it thus preserves the asymptotic properties of a fully parametric estimator. Furthermore,
when the model is misspecified, it delivers ‘better’ finite-sample performance than the QMLE. However,
it is important to bear in mind that this result is driven entirely by the choice of the smoothing parameter.
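The sensitivity to the smoothing parameter can be seen in a toy experiment; the bandwidth factors and the N(0,1) target below are arbitrary choices used only to show how the fit of the nonparametric step varies with the bandwidth.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal(500)

# Compare the KDE's fit to the true N(0,1) density under two bandwidths.
grid = np.linspace(-4.0, 4.0, 400)
dx = grid[1] - grid[0]
for bw in (0.1, 1.0):                   # under- vs over-smoothing factors
    kde = stats.gaussian_kde(x, bw_method=bw)
    ise = np.sum((kde(grid) - stats.norm.pdf(grid)) ** 2) * dx
    print(f"bandwidth factor {bw}: integrated squared error {ise:.5f}")
```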
To implement the model combination, using the technical machinery provided by previous studies on
nonparametric entropy-based testing, I derive the asymptotic distribution of the Kullback-Leibler information
between the nonparametric density and each candidate parametric model. Since the approximation error
affects the asymptotic mean of the KLD’s distribution, the latter varies with the underlying parametric model.
To obtain the same distribution for all candidate models, I employ an assumption technically
equivalent to a Pitman alternative and center the resulting Normal on the average performance of all plausible
models. Consequently, the weights in the model combination are determined by the probability of obtaining a
performance worse than that actually achieved, relative to that attained on average by the competing
models.
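A minimal sketch of this weighting rule follows, assuming illustrative KLD values and a stand-in value for the asymptotic standard deviation of the KLD statistic.

```python
import numpy as np
from scipy.stats import norm

klds = np.array([0.012, 0.031, 0.054])  # illustrative KLD values
sigma = 0.02                            # stand-in asymptotic std. deviation

# Normal centered on the average KLD; each model's raw weight is the
# probability of a performance *worse* (larger KLD) than the one observed.
survival = norm.sf(klds, loc=klds.mean(), scale=sigma)
weights = survival / survival.sum()     # normalize to a probability vector
print(weights)                          # smaller KLD -> larger weight
```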