1 Introduction
“Prediction may be regarded as a special type of decision making under uncertainty: the acts
available to the predictor are the possible predictions, and the possible outcomes are success (for
a correct prediction) and failure (for a wrong one). In a more general model, one may also rank
predictions on a continuous scale, measuring the proximity of the prediction to the eventuality
that actually transpires, allow set-valued predictions, probabilistic predictions, and so forth.”1
Econometric models are built to deal with uncertainty and to guide decisions. Nevertheless, econometric
models are very often developed without any reference to the “uncertainty about the model” that characterizes
the decision context. Because of the complexity of the decision setting and the level of approximation
embodied in a simple model, I contemplate the presence of model ambiguity. In other words, instead of
specifying a unique statistical structure and treating it as the true model, I consider a set of competing models.
Empirical models are based on the idea that the occurrence of events (i.e., the data) reveals information.
Typically, although the available data are not sufficient to single out a unique well-defined model, they still
provide relevant knowledge that can be used to differentiate among priors. In this study, a pilot nonparametric
density, summarizing all the information contained in the data, is used to estimate and rank the candidate
parametric models.
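To fix ideas, the following is a minimal sketch of that step, assuming a Gaussian kernel density estimate as the pilot and ranking a few illustrative parametric candidates by their Kullback-Leibler divergence from it; the data, candidate families, and libraries here are placeholders for exposition, not the estimator developed later in the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.standard_t(df=5, size=500)        # observed data (illustrative)

# Pilot nonparametric density: a Gaussian kernel density estimate.
pilot = stats.gaussian_kde(y)

# Candidate parametric models, each fitted to the same data.
candidates = {
    "normal": stats.norm(*stats.norm.fit(y)),
    "student_t": stats.t(*stats.t.fit(y)),
    "laplace": stats.laplace(*stats.laplace.fit(y)),
}

# Rank candidates by Kullback-Leibler divergence from the pilot,
# KL(pilot || candidate), approximated on an equally spaced grid.
grid = np.linspace(y.min(), y.max(), 2000)
dx = grid[1] - grid[0]
p = pilot(grid)
p /= p.sum() * dx                         # renormalize the pilot on the grid

divergences = {}
for name, model in candidates.items():
    q = np.clip(model.pdf(grid), 1e-300, None)
    divergences[name] = np.sum(p * (np.log(p) - np.log(q))) * dx

# Smaller divergence = candidate closer to the pilot density.
for name, d in sorted(divergences.items(), key=lambda kv: kv[1]):
    print(f"{name}: KL from pilot = {d:.4f}")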
Furthermore, since the model classes can be large when uncertainty is high, it is necessary to develop a
tool that combines the different models into a weighted predictive distribution, where the weights are determined
by the ignorance about the true structure. This model combination provides an explicit representation of
uncertainty across models and makes it possible to extract information from ‘all’ plausible ones.
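In generic notation (not necessarily that adopted later in the paper), with candidate densities $f_1, \dots, f_K$, fitted parameters $\hat{\theta}_k$, and weights $w_k$ reflecting how plausible each model is in light of the data, the combined predictive takes the mixture form
\[
p(y \mid \text{data}) \;=\; \sum_{k=1}^{K} w_k \, f_k\!\left(y \mid \hat{\theta}_k\right),
\qquad w_k \ge 0, \qquad \sum_{k=1}^{K} w_k = 1,
\]
so that weights concentrated on one candidate collapse the combination onto a single model, while more diffuse weights express greater ignorance about the true structure.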
It is sensible to think that, since we do not know the true model and we approximate it by choosing among
a set of candidate models, the most we can aspire to is estimating its best approximation. Because parsimony
and computational simplicity are desirable characteristics of an econometric model, the set of competing
models typically consists of simple parametric alternatives, even when a better infinite-dimensional approximation
is available. This implies that, most likely, the true model does not even belong to the set of candidates and
that more than one model can perform fairly well, so that it can be hard to discard any of them. In these
cases, the model combination can provide a better hedge against the lack of knowledge of the correct
structure and outperform each competing model, including the best one.
This modelling approach makes it possible to study and exploit model misspecification, defined as the
discrepancy between the candidate and the actual model. Since probabilistic models are often used as the
beliefs of an “expected utility maximizer”, ignoring this misspecification increases the risk associated with the
optimal decision. For this reason, this study focuses on the formation of an econometric model as a general-purpose
tool: to quantify the plausibility of different probabilistic models, to combine them into a single distribution,
and to explore the impact of the latter on the derivation of the optimal choice under uncertainty.
The selection of parametric candidate models, in combination with the simple device developed to determine
their probability of being correct, provides a closed-form solution for the optimal choice even when the
predictive density is the model combination. This simplicity has little cost in terms of information, since
through the model weights we are still able to account for model misspecification and to extract information
from a nonparametric estimate.
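To see why the combination need not destroy tractability, note that under an expected-utility criterion (again in generic notation, with utility $u(a, y)$ over actions $a$ and outcomes $y$) the objective evaluated at the mixture is simply the weighted sum of the model-specific objectives,
\[
\max_{a} \int u(a, y)\, p(y \mid \text{data})\, dy
\;=\; \max_{a} \sum_{k=1}^{K} w_k \int u(a, y)\, f_k\!\left(y \mid \hat{\theta}_k\right) dy,
\]
so whenever each model-specific expectation admits a closed form, so does the combined problem.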
1 Gilboa, I. and D. Schmeidler, “A Theory of Case-Based Decisions”, 2001, pp. 59-60.