Development of the test
Consider the standard linear regression model
\[
\begin{aligned}
y_t &= x_t'\beta + e_t, \\
E(e_t \mid x_t) &= 0, \\
E(e_t^2) &= \sigma_t^2, \\
\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \sigma_t^2 &= \sigma^2, \qquad t = 1, \ldots, T
\end{aligned}
\]
where x_t is a K × 1 vector containing the t-th observation on K regressors. The null hypothesis is that the model’s parameters,
β and σ^2, are constant.
The first-order conditions for the least squares estimates of the parameters are
\[
\sum_{t=1}^{T} f_{it} = 0, \qquad i = 1, \ldots, K+1
\]
where
\[
f_{it} =
\begin{cases}
x_{it} \hat{e}_t, & i = 1, \ldots, K \\
\hat{e}_t^2 - \hat{\sigma}^2, & i = K + 1.
\end{cases}
\]
The first K first-order conditions determine β̂. The (K + 1)-st equation defines σ̂^2 to be the maximum-likelihood (rather than
the unbiased) estimator of the error variance.
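Summing the (K + 1)-st condition over the sample and solving for σ̂^2 makes this explicit:
\[
\sum_{t=1}^{T} \left( \hat{e}_t^2 - \hat{\sigma}^2 \right) = 0
\quad \Longrightarrow \quad
\hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^{T} \hat{e}_t^2 .
\]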
Hansen’s test statistic for the ith parameter is
\[
L_i = \frac{1}{T V_i} \sum_{t=1}^{T} S_{it}^2,
\]
where S_it is the cumulative first-order condition through period t for the ith parameter, that is,
\[
S_{it} = \sum_{j=1}^{t} f_{ij},
\]
and where
\[
V_i = \sum_{t=1}^{T} f_{it}^2.
\]
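The pieces of L_i can be assembled by hand in a short do-file. The sketch below is purely illustrative (it is not the code distributed with this insert); it assumes a hypothetical regression of y on a single regressor x, with the data sorted in time order and no missing values:

        quietly regress y x
        predict double ehat, residuals
        generate double f = x*ehat         // f_it for the slope: x_it times the residual
        generate double S = sum(f)         // cumulative first-order condition S_it
        generate double fsq = f^2
        generate double Ssq = S^2
        quietly summarize fsq
        scalar Vi = r(sum)                 // V_i = sum over t of f_it^2
        quietly summarize Ssq
        scalar Li = r(sum)/(e(N)*Vi)       // L_i = (1/(T V_i)) * sum over t of S_it^2
        display "L_i for the coefficient on x = " scalar(Li)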
The joint test statistic for a change in the model’s parameters is
\[
L_c = \frac{1}{T} \sum_{t=1}^{T} S_t' V^{-1} S_t,
\]
where
\[
f_t = (f_{1t}, \ldots, f_{K+1,t})', \qquad
S_t = (S_{1t}, \ldots, S_{K+1,t})', \qquad \text{and} \qquad
V = \sum_{t=1}^{T} f_t f_t'.
\]
Note that the V_i above are just the diagonal elements of the matrix V.
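Continuing the illustrative do-file above, the joint statistic can be built from all K + 1 = 3 first-order-condition series for that hypothetical regression (slope, constant, and error variance), with V accumulated by matrix accum:

        generate double fcons = ehat               // f_it for the constant (its regressor is 1)
        generate double e2 = ehat^2
        quietly summarize e2
        generate double fsig = e2 - r(mean)        // f_it for sigma^2; r(mean) is the ML variance estimate
        matrix accum V = f fcons fsig, noconstant  // V = sum over t of f_t f_t'
        matrix Vinv = invsym(V)
        rename S S1                                // cumulative conditions S_1t, S_2t, S_3t
        generate double S2 = sum(fcons)
        generate double S3 = sum(fsig)
        generate double quad = 0                   // S_t' V^{-1} S_t, accumulated element by element
        forvalues i = 1/3 {
            forvalues j = 1/3 {
                quietly replace quad = quad + S`i'*Vinv[`i',`j']*S`j'
            }
        }
        quietly summarize quad
        scalar Lc = r(sum)/e(N)                    // L_c = (1/T) * sum over t of S_t' V^{-1} S_t
        display "Joint statistic L_c = " scalar(Lc)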
Parameter stability is rejected if the test statistics L_i and L_c are large. The test statistics are basically averages of the
squared cumulative first-order conditions, S_it. The cumulative first-order condition for the entire sample, S_iT, equals zero by
construction. Intuitively, the cumulative first-order conditions for subsamples ending in period j, j < T, should wander around