This procedure is repeated throughout the sample, moving ahead one semester at a time, which yields a forecast evaluation period of 20 observations.
Furthermore, for each new sample (which increases by one observation in each iteration) the model specification (e.g. the number of lags and/or of principal components) is selected through the BIC. When using stochastic simulation to produce probability forecasts, for each of the 20 periods we carry out a Monte Carlo stochastic simulation of the ARDL model specification described in (??) in order to generate the alternative scenarios corresponding to the model chosen by the BIC criterion. The probability forecasts are obtained by counting the number of times the prediction given by any of the forecasting models employed is equal to or above a specific threshold. The resulting number is then divided by the total number of
scenarios (e.g. 10000). The probability forecasts obtained via Probit modelling are simply given by computing $\Phi\left(\sum_{i=1}^{r} \hat{\gamma}_i f_{i,t}\right)$, where $\Phi$ is the cumulative Gaussian distribution function and the $\hat{\gamma}_i$ are the coefficients estimated by maximising the log-likelihood given in (??).
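As an illustration of this recursive exercise, the sketch below assembles the two kinds of probability forecasts in Python. It is a minimal sketch only: the helpers select_ardl_by_bic and simulate_ardl_paths are hypothetical placeholders standing in for the ARDL estimation and stochastic-simulation steps described above, and the names y, X, gamma_hat and factors_t are assumptions rather than objects defined in the text.

# A minimal sketch of the recursive probability-forecast exercise, assuming
# hypothetical helpers select_ardl_by_bic() and simulate_ardl_paths() that
# stand in for the ARDL estimation and stochastic-simulation steps.
import numpy as np
from scipy.stats import norm

N_SCENARIOS = 10_000  # total number of Monte Carlo scenarios per forecast origin


def simulation_prob_forecasts(y, X, crisis_threshold, n_eval=20):
    """Expanding-window probability forecasts from ARDL stochastic simulation."""
    T = len(y)
    probs = []
    for origin in range(T - n_eval, T):
        # Re-select the specification (lags and/or principal components) by BIC
        # on the sample available up to the forecast origin (hypothetical helper).
        spec = select_ardl_by_bic(y[:origin], X[:origin])
        # Simulate alternative scenarios from the chosen specification
        # (hypothetical helper returning an array of simulated predictions).
        scenarios = simulate_ardl_paths(spec, n_draws=N_SCENARIOS)
        # Probability forecast: share of scenarios at or above the threshold.
        probs.append(np.mean(scenarios >= crisis_threshold))
    return np.array(probs)


def probit_prob_forecast(gamma_hat, factors_t):
    """Probit probability forecast: Phi(sum_i gamma_hat_i * f_{i,t})."""
    return norm.cdf(np.dot(gamma_hat, factors_t))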
In order to evaluate the accuracy of the probability forecasts, we employ the Kuipers
Score (see Granger and Pesaran, 2000) based on the definition of two states as two different
indications given by the model: currency crisis and no currency crisis. We assume that the
model signals a crisis when the predicted probability is larger than 0.5. Therefore, one can
calculate event forecasts $E_t$: $E_t = 1$ when $P_t > 0.5$ and $E_t = 0$ when $P_t \leq 0.5$. Comparing these event forecasts with the actual outcomes $R_t$, the following contingency matrix can be written:
Forecasts / Outcomes | crisis ($R_t = 1$) | no crisis ($R_t = 0$)
crisis               | Hits               | False Alarms
no crisis            | Misses             | Correct Rejections
The Kuipers score is defined as the difference between the proportion of crises that were correctly forecasted, $H = \text{hits}/(\text{hits} + \text{misses})$, and the proportion of no-crisis periods that were incorrectly forecasted as crises, $FA = \text{false alarms}/(\text{false alarms} + \text{correct rejections})$:

$KS = H - FA$    (11)
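As a concrete illustration, a minimal Python sketch of this calculation is given below; the arrays P and R holding the predicted probabilities and the realised crisis outcomes over the evaluation period are assumed inputs, not objects defined in the text.

# A minimal sketch of the Kuipers Score (KS = H - FA) computed from predicted
# probabilities P and realised outcomes R (0/1 arrays of equal length).
import numpy as np


def kuipers_score(P, R, threshold=0.5):
    """Return KS = H - FA using the contingency matrix defined above."""
    E = (np.asarray(P) > threshold).astype(int)  # event forecasts: 1 = crisis signalled
    R = np.asarray(R).astype(int)
    hits = np.sum((E == 1) & (R == 1))
    misses = np.sum((E == 0) & (R == 1))
    false_alarms = np.sum((E == 1) & (R == 0))
    correct_rejections = np.sum((E == 0) & (R == 0))
    H = hits / (hits + misses)                                # crises correctly forecasted
    FA = false_alarms / (false_alarms + correct_rejections)  # no-crisis periods incorrectly signalled
    return H - FA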
Positive values of the KS score imply that at least one crisis event is correctly signalled and that the model generates proportionally more hits than false alarms. For the purpose of evaluating the accuracy of the probability forecasts, we also use the Matthews (1975) correlation coefficient, which is widely applied in biology: