summed up for the models that contain the variable of interest to obtain the posterior inclusion
probability of this variable.
P(X_m \mid y) \;=\; \sum_{j=1}^{n} \frac{P(M_j)\, T^{-k_j/2}\, SSE_j^{-T/2}}{\sum_{i=1}^{2^K} P(M_i)\, T^{-k_i/2}\, SSE_i^{-T/2}} \qquad (2b)
where P(X_m | y) is the posterior inclusion probability of a given variable, j indexes the models
that include variable X_m, and n equals 2^K/2. If the posterior inclusion probability is higher than
the prior inclusion probability, one can conclude that the specific variable should be included in
the estimated models. Since here all possible combinations of the explanatory variables are
estimated, the prior inclusion probability is 0.50.
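The mechanics of equation (2b) can be sketched as follows. This is an illustrative implementation only: the toy data, the flat model prior, and all variable names are assumptions made for the example, not taken from the paper.

```python
# Sketch of equation (2b): enumerate all 2^K models, weight each by
# P(M_j) * T^{-k_j/2} * SSE_j^{-T/2}, and sum the normalized weights of
# the 2^K / 2 models that contain the variable of interest.
import itertools
import numpy as np

rng = np.random.default_rng(0)
T, K = 60, 3                              # observations, candidate regressors
X = rng.normal(size=(T, K))
y = 1.5 * X[:, 0] + rng.normal(size=T)    # only X_0 truly matters (toy DGP)

subsets = list(itertools.chain.from_iterable(
    itertools.combinations(range(K), r) for r in range(K + 1)))

logw = []
for subset in subsets:
    Z = np.column_stack([np.ones(T)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    sse = np.sum((y - Z @ beta) ** 2)
    # log of P(M_j) T^{-k_j/2} SSE_j^{-T/2} under a flat model prior
    logw.append(-0.5 * Z.shape[1] * np.log(T) - 0.5 * T * np.log(sse))

w = np.exp(np.array(logw) - max(logw))
w /= w.sum()                              # posterior model probabilities P(M_j | y)

# posterior inclusion probability of each X_m, equation (2b)
pip = np.array([w[[m in s for s in subsets]].sum() for m in range(K)])
```

With a prior inclusion probability of 0.50, the relevant regressor (here X_0) should come out with a posterior inclusion probability well above 0.50, while the noise regressors typically fall below it.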
The posterior mean of β conditional on inclusion is the average of the individual OLS
estimates weighted by P(M_j | y). The unconditional posterior mean considers all regressions,
even those without the variable of interest. Hence, the unconditional posterior mean of any given
variable can be derived as the product of the conditional posterior mean and the posterior
inclusion probability. The posterior variance of β, Var(β | y), can be calculated as follows:
Var(\beta \mid y) \;=\; \sum_{j=1}^{2^K} P(M_j \mid y)\, Var(\beta \mid y, M_j) \;+\; \sum_{j=1}^{2^K} P(M_j \mid y)\,\bigl(\hat{\beta}_j - E(\beta \mid y)\bigr)^2 \qquad (3)
The posterior mean and the square root of the variance (standard error) conditional on inclusion
can be used to obtain t-statistics and to determine the significance of the individual variables upon
inclusion.
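The posterior moments and the resulting t-statistic can be illustrated with a small sketch. Again, the data-generating process and all names are assumptions introduced purely to demonstrate the mechanics of equation (3) and of the conditional/unconditional decomposition described above.

```python
# Sketch: posterior mean and variance of one coefficient under BMA.
# The first variance term averages the within-model variances; the second
# captures the spread of the model-specific estimates (equation (3)).
import itertools
import numpy as np

rng = np.random.default_rng(0)
T, K = 60, 3
X = rng.normal(size=(T, K))
y = 1.5 * X[:, 0] + rng.normal(size=T)    # toy DGP, an assumption

m = 0                                     # variable of interest, X_m
subsets = list(itertools.chain.from_iterable(
    itertools.combinations(range(K), r) for r in range(K + 1)))

logw, b, v = [], [], []
for subset in subsets:
    Z = np.column_stack([np.ones(T)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    u = y - Z @ beta
    sse = u @ u
    k = Z.shape[1]
    logw.append(-0.5 * k * np.log(T) - 0.5 * T * np.log(sse))
    if m in subset:
        pos = 1 + subset.index(m)         # column of X_m, after the constant
        cov = (sse / (T - k)) * np.linalg.inv(Z.T @ Z)
        b.append(beta[pos]); v.append(cov[pos, pos])
    else:
        b.append(0.0); v.append(0.0)      # beta_m is exactly 0 when excluded

logw, b, v = np.array(logw), np.array(b), np.array(v)
w = np.exp(logw - logw.max()); w /= w.sum()
inc = np.array([m in s for s in subsets])

pip = w[inc].sum()                        # posterior inclusion probability
cond_mean = (w * b)[inc].sum() / pip      # posterior mean upon inclusion
uncond_mean = w @ b                       # equals pip * cond_mean
cond_var = (w * (v + (b - cond_mean) ** 2))[inc].sum() / pip
t_stat = cond_mean / np.sqrt(cond_var)    # significance upon inclusion
```

The identity `uncond_mean == pip * cond_mean` reproduces the statement above that the unconditional posterior mean is the product of the conditional posterior mean and the posterior inclusion probability.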
Model averaging is vulnerable to the violation of the basic assumption of homoscedasticity and to
the presence of outliers (Doppelhofer and Weeks, 2008). Thus, White's heteroscedasticity-
corrected standard errors are used not only for the full sample but also for subsamples that
exclude one country at a time. This makes it possible to evaluate the impact of individual
countries on the robustness of the results and to eliminate potential outliers.
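A minimal sketch of this robustness exercise, assuming a pooled panel layout; the country structure, the heteroscedastic toy data, and the HC0 flavor of White's estimator are illustrative assumptions, not details taken from the paper.

```python
# Sketch: White (HC0) standard errors for an OLS slope, re-estimated
# with one hypothetical country excluded at a time.
import numpy as np

rng = np.random.default_rng(1)
countries = np.repeat(np.arange(10), 8)   # 10 countries x 8 years (assumed)
n = countries.size
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n) * (1 + 0.5 * np.abs(x))  # heteroscedastic

def ols_white(X, y):
    """OLS coefficients with White's heteroscedasticity-robust SEs (HC0)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (u ** 2)[:, None])  # sum_i u_i^2 x_i x_i'
    cov = bread @ meat @ bread            # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

X = np.column_stack([np.ones(n), x])
t_stats = {}
beta, se = ols_white(X, y)
t_stats["full"] = beta[1] / se[1]
for c in range(10):
    keep = countries != c                 # drop one country at a time
    beta, se = ols_white(X[keep], y[keep])
    t_stats[f"drop {c}"] = beta[1] / se[1]
```

Comparing the t-statistics across the leave-one-out subsamples shows whether any single country drives the significance of a variable.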