1.5 Estimation of parameters
three iterations are needed. The speed of convergence, however, depends
on the initial estimates and the convergence criterion.
The regression method is based on the following observations concerning the
characteristic function φ(t). First, from (1.2) we can easily derive:
\[
\ln\bigl(-\ln|\phi(t)|^2\bigr) = \ln(2\sigma^\alpha) + \alpha\ln|t|. \tag{1.13}
\]
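To make the step from (1.2) explicit, here is a short derivation; it assumes the standard classical form with modulus exp(−|σt|^α), which is consistent with the real and imaginary parts given below:
\[
|\phi(t)|^2 = \exp\bigl(-2|\sigma t|^\alpha\bigr)
\;\Longrightarrow\;
-\ln|\phi(t)|^2 = 2\sigma^\alpha|t|^\alpha
\;\Longrightarrow\;
\ln\bigl(-\ln|\phi(t)|^2\bigr) = \ln(2\sigma^\alpha) + \alpha\ln|t|.
\]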
The real and imaginary parts of φ(t) are, for α ≠ 1, given by
\[
\Re\{\phi(t)\} = \exp(-|\sigma t|^\alpha)\cos\Bigl[\mu t + |\sigma t|^\alpha \beta\,\mathrm{sign}(t)\tan\frac{\pi\alpha}{2}\Bigr],
\]
and
\[
\Im\{\phi(t)\} = \exp(-|\sigma t|^\alpha)\sin\Bigl[\mu t + |\sigma t|^\alpha \beta\,\mathrm{sign}(t)\tan\frac{\pi\alpha}{2}\Bigr].
\]
The last two equations lead, apart from considerations of principal values, to
\[
\arctan\left(\frac{\Im\{\phi(t)\}}{\Re\{\phi(t)\}}\right) = \mu t + \beta\sigma^\alpha \tan\frac{\pi\alpha}{2}\,\mathrm{sign}(t)\,|t|^\alpha. \tag{1.14}
\]
Equation (1.13) depends only on α and σ and suggests that we estimate these
parameters by regressing y = ln(−ln|φ̂_n(t)|²) on w = ln|t| in the model
\[
y_k = m + \alpha w_k + \epsilon_k, \qquad k = 1, 2, \ldots, K, \tag{1.15}
\]
where t_k is an appropriate set of real numbers, m = ln(2σ^α), and ε_k denotes an
error term. Koutrouvelis (1980) proposed to use t_k = πk/25, k = 1, 2, ..., K, with
K ranging between 9 and 134 for different estimates of α and sample sizes.
Once α̂ and σ̂ have been obtained and α and σ have been fixed at these values,
estimates of β and μ can be obtained using (1.14). Next, the regressions are
repeated with α̂, σ̂, β̂ and μ̂ as the initial parameters. The iterations continue
until a prespecified convergence criterion is satisfied.
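As an illustration, the following is a minimal single-pass sketch of this regression scheme in Python using only NumPy. It assumes the classical parameterization underlying (1.13)-(1.14), uses the Koutrouvelis grid t_k = πk/25 with a fixed K, and a second illustrative frequency grid for the (1.14) step; the iteration loop, the K tables of Koutrouvelis (1980), and the treatment of principal values are all omitted, and the function names are illustrative rather than taken from any library.

```python
import numpy as np

def sample_cf(x, t):
    """Sample characteristic function phi_n(t) = mean(exp(i t X_j))."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def regression_estimates(x, K=10):
    """One pass of the regression method: alpha, sigma from (1.15),
    then beta, mu from (1.14). Returns (alpha, sigma, beta, mu)."""
    x = np.asarray(x, dtype=float)

    # --- Step 1: estimate alpha and sigma via (1.13)/(1.15) ---
    t = np.pi * np.arange(1, K + 1) / 25.0        # Koutrouvelis grid t_k = pi*k/25
    phi = sample_cf(x, t)
    y = np.log(-np.log(np.abs(phi) ** 2))         # y_k = ln(-ln |phi_n(t_k)|^2)
    w = np.log(t)                                 # w_k = ln t_k
    alpha, m = np.polyfit(w, y, 1)                # slope = alpha, intercept = m
    sigma = (np.exp(m) / 2.0) ** (1.0 / alpha)    # m = ln(2 sigma^alpha)

    # --- Step 2: estimate beta and mu via (1.14) ---
    # Illustrative second grid; principal-value wrapping of the phase is ignored.
    u = np.pi * np.arange(1, 11) / 50.0
    psi = sample_cf(x, u)
    g = np.arctan2(psi.imag, psi.real)            # arctan(Im phi / Re phi)
    # Regress g on [u, sign(u)|u|^alpha]: coefficients are mu and
    # beta * sigma^alpha * tan(pi*alpha/2).  Note tan(pi*alpha/2) blows up
    # near alpha = 1, which needs special handling in a real implementation.
    A = np.column_stack([u, np.sign(u) * np.abs(u) ** alpha])
    mu, c = np.linalg.lstsq(A, g, rcond=None)[0]
    beta = c / (sigma ** alpha * np.tan(np.pi * alpha / 2.0))

    return alpha, sigma, beta, mu
```

A full implementation would repeat both regressions with these estimates as the initial parameters, as described above, until the chosen convergence criterion is met, with K selected according to the current estimate of α and the sample size.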
Kogon and Williams (1998) eliminated this iteration procedure and simplified
the regression method. For initial estimation they applied McCulloch’s (1986)
method, worked with the continuous representation (1.3) of the characteristic
function instead of the classical one (1.2) and used a fixed set of only 10 equally
spaced frequency points t_k. In terms of computational speed their method
compares favorably to the original method of Koutrouvelis (1980). It has a
significantly better performance near α = 1 and β ≠ 0 due to the elimination
of the discontinuity of the characteristic function. However, it returns slightly worse
results for very small α.
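To contrast with the iterative scheme above, here is a minimal single-pass sketch of the α, σ step with a fixed frequency grid, in the spirit of Kogon and Williams (1998). The grid of 10 equally spaced points on [0.1, 1.0] is an assumption (a commonly cited choice for their method); the initial centering and scaling via McCulloch's (1986) quantile estimates and the β, μ₀ phase regression under the continuous representation (1.3) are omitted. Since |φ(t)| is the same under (1.2) and (1.3), this step does not depend on the choice of parameterization.

```python
import numpy as np

def fixed_grid_alpha_sigma(x, t=None):
    """Single-pass estimate of alpha and sigma from the regression (1.15)
    on a fixed frequency grid (no iteration).

    The default grid of 10 equally spaced points on [0.1, 1.0] is an
    assumed choice in the spirit of Kogon and Williams (1998).
    """
    x = np.asarray(x, dtype=float)
    if t is None:
        t = np.linspace(0.1, 1.0, 10)
    phi = np.exp(1j * np.outer(t, x)).mean(axis=1)   # sample characteristic function
    y = np.log(-np.log(np.abs(phi) ** 2))            # ln(-ln |phi_n(t_k)|^2)
    w = np.log(t)
    alpha, m = np.polyfit(w, y, 1)
    sigma = (np.exp(m) / 2.0) ** (1.0 / alpha)
    return alpha, sigma
```

In the full method the data would first be centered and scaled using the McCulloch (1986) initial estimates, after which β and the location parameter are obtained from a single phase regression under (1.3), with no further iteration.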