determine the predicted rating. Table 15 presents an overall summary of the prediction
errors, for the three agencies and for the several methods using the respective restricted
specifications.
[Insert Table 15 here]
The first conclusion is that the random effects model including the estimated country
effect is the method with the best fit. On average for the three agencies, it correctly
predicts 70 per cent of all observations and more than 95 per cent of the predicted
ratings lie within one notch (99 per cent within two notches). This is not surprising: the
country errors capture factors such as political risk, geopolitical uncertainty and social
tensions that are likely to affect the ratings systematically, so this term acts as
a correction for those factors.
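As a rough sketch of how such a prediction could be formed, the fitted linear part of the model can be combined with the estimated country effect and rounded to the nearest notch on a numeric rating scale. The scale (1 to 17, with 17 as AAA/Aaa), the variable names, and the numbers below are all illustrative assumptions, not the paper's actual estimates.

```python
import numpy as np

# Hypothetical numeric rating scale: 1 = lowest notch, 17 = AAA/Aaa.
# x_beta: linear predictions from the random effects regression;
# alpha_i: estimated country-specific effects (all values made up).
x_beta = np.array([15.6, 9.2, 12.4, 16.8])
alpha_i = np.array([0.7, -0.4, 0.2, 0.3])

# Add the estimated country effect, round to the nearest notch,
# and clip to the admissible range of the rating scale.
predicted = np.clip(np.rint(x_beta + alpha_i), 1, 17).astype(int)
print(predicted.tolist())  # -> [16, 9, 13, 17]
```

Rounding to the nearest notch is one simple mapping from the continuous prediction to a rating category; the clipping step keeps predictions from exceeding the top of the scale.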
This additional information from the random effects estimation with the country-specific
effect can be very useful when working with countries that belong to our sample. For
out-of-sample predictions, however, it is unavailable. In that case, only the random
effects estimation excluding the country error
is comparable to the OLS specification, to the ordered probit and to the random effects
ordered probit. We can see that in general both ordered probit and random effects
ordered probit have a better fit than the pooled OLS and random effects for all three
agencies, though not as clearly for Fitch. Overall, the simple ordered probit seems the
best method as far as prediction in levels is concerned, as it correctly predicts around 45
per cent of all observations and more than 80 per cent within one notch.
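The accuracy measures used throughout this comparison (exact hits and hits within one or two notches) reduce to simple counts over the absolute prediction error on the numeric notch scale. The sketch below shows the computation on made-up ratings; the real inputs would be the panel of actual and predicted ratings for each agency and method.

```python
import numpy as np

# Illustrative actual vs predicted ratings on a numeric notch scale
# (values are invented for the example, not taken from the sample).
actual    = np.array([17, 15, 12, 9, 7, 14, 16, 10])
predicted = np.array([17, 14, 12, 10, 7, 16, 16, 9])

diff = np.abs(predicted - actual)
exact      = np.mean(diff == 0)  # share predicted exactly
within_one = np.mean(diff <= 1)  # share within one notch
within_two = np.mean(diff <= 2)  # share within two notches
print(f"{exact:.1%} exact, {within_one:.1%} within 1, {within_two:.1%} within 2")
# -> 50.0% exact, 87.5% within 1, 100.0% within 2
```

The same three numbers, computed per agency and per estimation method, are what a summary such as Table 15 would collect.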
Another interesting aspect to notice is that the OLS and the random effects
specifications are biased downward, while the ordered probit and random effects
ordered probit ones are slightly biased upward. The explanation turns out to be
simple if we look at Figures 3 to 5, where we present a map of predicted versus actual
rating for every category using the four estimation methods. We can see that both the
OLS and the random effects specifications tend to under-predict actual AAA’s (Aaa),
while both the ordered probit and random effects ordered probit tend to over-predict
the actual rating in the top categories, attributing many AAA’s (Aaa) to countries whose
actual rating is lower. At the bottom end of the rating scale the opposite happens: OLS and