five candidate indicators of inflation sentiment, $s_t^j$, $j = \mathrm{Med}, \mathrm{Diff}, \mathrm{Mom}, \mathrm{WMed}, \mathrm{tr20}$, which are described in section 2. Thus, equation 3.1 changes to

$$\pi^{h}_{t+h} - \pi_t = \phi^{j} + \beta^{j}(L)\,\Delta x_t + \delta^{j}(L)\,s^{j}_t + \varepsilon^{j}_{t+h}. \qquad (3.2)$$
In all estimates the lag length is chosen to minimize the Schwarz information criterion. We rely on this criterion for model selection because our simulations indicate that a parsimonious specification with a relatively small lag length produces the smallest out-of-sample forecast errors. The Schwarz criterion penalizes additional coefficients more heavily than, for instance, the Akaike information criterion.
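As a minimal sketch of this lag-length selection, assume quarterly pandas Series y (the dependent variable of (3.2)), dx (the change in the activity variable) and s (one sentiment indicator); the variable names, the maximum lag of 4 and the use of statsmodels are illustrative assumptions rather than the authors' actual setup.

```python
import pandas as pd
import statsmodels.api as sm

def build_regressors(dx, s, p, q):
    """Current value and lags of the activity variable (p terms) and the
    sentiment indicator (q terms), as in the distributed-lag terms of (3.2)."""
    cols = {f"dx_l{i}": dx.shift(i) for i in range(p)}
    cols.update({f"s_l{i}": s.shift(i) for i in range(q)})
    return pd.DataFrame(cols)

def select_lags_bic(y, dx, s, max_lag=4):
    """Return the (p, q) pair that minimises the Schwarz (Bayesian) criterion."""
    best = None
    for p in range(1, max_lag + 1):
        for q in range(1, max_lag + 1):
            X = sm.add_constant(build_regressors(dx, s, p, q))
            data = pd.concat([y.rename("y"), X], axis=1).dropna()
            # note: for a strict comparison one would hold the estimation
            # sample fixed across lag choices instead of dropping NaN rows
            res = sm.OLS(data["y"], data.drop(columns="y")).fit()
            if best is None or res.bic < best[0]:
                best = (res.bic, p, q)
    return best[1], best[2]
```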
In-sample fit is not necessarily a good indicator of predictive power. Therefore, we evaluate the alternative specifications (3.2) on the basis of their out-of-sample forecast accuracy, following Stock and Watson (1999). For that purpose, we generate a series of out-of-sample forecasts by estimating our equations over an expanding sample and, for each of these samples, forecasting the average change in inflation over the next h periods, with h ranging from 1 to 8 for the US and from 1 to 4 for Germany.
Thus, every prediction uses only the data available at the start of the respective forecast period. For instance, our first estimation for the US uses the sample 1978:1 to 1984:4 and forecasts inflation for h quarters starting with 1985:1. For the second estimate the sample is extended to 1978:1 to 1985:1 and a forecast of the average change in the annualized inflation rate is constructed for h quarters starting in 1985:2. For Germany the initial sample is 1985:1 to 1991:4 for West Germany and 1993:1 to 1998:4 for reunified Germany.
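The sketch below illustrates this expanding-window scheme, reusing the hypothetical y and regressor matrix X from the previous sketch (y is assumed to contain only realised values of $\pi^{h}_{t+h}-\pi_t$, so its last entry is dated h periods before the end of the raw data); the index of the first forecast origin, e.g. the observation corresponding to 1984:4 for the US, is passed as first_origin. This is an illustrative scheme, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def expanding_forecasts(y, X, first_origin, h):
    """Re-estimate the model on an expanding sample and forecast the
    h-period-ahead average change in inflation from each origin, using only
    observations whose target is already realised at that origin."""
    forecasts, errors = [], []
    for origin in range(first_origin, len(y)):
        # the last usable dependent-variable observation is dated origin - h,
        # because y_{origin-h} requires inflation data only up to the origin
        train = slice(0, origin - h + 1)
        res = sm.OLS(y.iloc[train], X.iloc[train], missing="drop").fit()
        fcst = float(np.asarray(res.predict(X.iloc[[origin]]))[0])
        forecasts.append(fcst)
        errors.append(y.iloc[origin] - fcst)      # realised minus forecast
    return np.asarray(forecasts), np.asarray(errors)
```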
To evaluate the forecasts, three tests are used. First, we calculate the root mean squared forecast error (RMSFE) and use the Diebold-Mariano test to check whether the differences in forecast accuracy across the various specifications are significant. Second, we employ an encompassing test to verify whether the forecast generated by one specification adds information to the forecast generated by another. Third, we test for a forecast breakdown, probing whether the out-of-sample accuracy differs significantly from the in-sample fit.
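As a concrete illustration of the first two evaluation tools, the sketch below computes the RMSFE of a forecast-error series, a Diebold-Mariano statistic for equal squared-error loss between two competing specifications, and a simple forecast-encompassing regression. The function names, the rectangular lag window of length h-1 (reflecting the overlap of h-step-ahead errors) and the HAC standard errors are illustrative textbook choices and need not match the authors' exact implementation.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def rmsfe(errors):
    """Root mean squared forecast error, as in equation (3.3)."""
    e = np.asarray(errors)
    return np.sqrt(np.mean(e ** 2))

def diebold_mariano(e1, e2, h=1):
    """DM statistic and two-sided p-value for equal squared-error loss."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n = d.size
    d_bar = d.mean()
    lrv = np.sum((d - d_bar) ** 2) / n              # autocovariance at lag 0
    for k in range(1, h):                           # add lags 1 .. h-1
        lrv += 2.0 * np.sum((d[k:] - d_bar) * (d[:-k] - d_bar)) / n
    dm = d_bar / np.sqrt(lrv / n)
    return dm, 2.0 * (1.0 - stats.norm.cdf(abs(dm)))

def encompassing_test(e1, f1, f2, h=1):
    """Regress the error of forecast 1 on (f2 - f1); a significant slope
    indicates that forecast 2 adds information not contained in forecast 1."""
    X = sm.add_constant(np.asarray(f2) - np.asarray(f1))
    res = sm.OLS(np.asarray(e1), X).fit(cov_type="HAC",
                                        cov_kwds={"maxlags": max(h - 1, 1)})
    return res.params[1], res.pvalues[1]
```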
Differences in forecast accuracy
The RMSFE for each forecast, $\hat{\pi}^{jh}_{t+h}$, is defined as:

$$\mathrm{RMSFE}^{jh} = \sqrt{\frac{1}{N}\sum_{t}\left(\pi^{h}_{t+h} - \hat{\pi}^{jh}_{t+h}\right)^{2}} = \sqrt{\frac{1}{N}\sum_{t}\left(e^{jh}_{t+h}\right)^{2}}, \qquad (3.3)$$