5.2 Comparison of denoising functions
We next compare DSS schemes based on source-variance estimates to the clas-
sical tanh-based approach in symmetrical separation of artificial signals. The
signals were generated as follows. First, six signals were generated by modulating Gaussian noise with a slowly changing envelope. Then the signals were divided
into two subspaces, three signals in each. In each of the subspaces, the signals
were modulated by another envelope common to all the signals in the subspace.
The common envelopes of the subspaces were stronger than the individual en-
velopes of the sources. Finally, the unit-variance signals were mixed linearly
(with M = N). The mixing coefficients were sampled from a normal distribution, and Gaussian noise with variance σν² = 0.09 was added.
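The generation procedure above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code: the number of samples T, the envelope smoothing length, and the factor by which the common subspace envelopes dominate the individual ones are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 6, 4096                       # six sources; T samples (assumed value)

def smooth_envelope(T, width=100):
    """Slowly changing positive envelope: low-pass filtered noise, rectified.
    The smoothing width is an illustrative assumption."""
    e = np.convolve(rng.standard_normal(T), np.ones(width) / width, mode="same")
    return np.abs(e) + 0.1

# Six Gaussian-noise signals, each modulated by its own slow envelope.
s = np.array([smooth_envelope(T) * rng.standard_normal(T) for _ in range(N)])

# Two subspaces of three signals each; every signal in a subspace is also
# modulated by an envelope common to the subspace, made stronger (factor 2,
# an assumed value) than the individual envelopes.
for group in (slice(0, 3), slice(3, 6)):
    s[group] *= 2.0 * smooth_envelope(T)

s /= s.std(axis=1, keepdims=True)    # unit-variance sources

# Linear mixing with M = N, plus Gaussian noise of variance 0.09.
A = rng.standard_normal((N, N))
x = A @ s + np.sqrt(0.09) * rng.standard_normal((N, T))
```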
One hundred different data sets were generated and DSS was used to separate
the sources with three different denoising functions. Two methods were based
on a smoothed estimate of the source variance. Either the whitening scheme described in Sec. 4 or a tanh-based scheme was used to promote separation, the tanh-mask being 1 − tanh[σtot(t)]∕σtot(t). If σ²tot(t) = s²(t), this reduces to the popular tanh-nonlinearity. With these methods, the spectral shift was computed by assuming that the mask does not depend significantly on any individual source value, i.e. −β equals the average of the elements of the mask. The third method
was the popular tanh-nonlinearity with a FastICA-type spectral shift. The step size was adapted by the 179°-rule.
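The masking step and the accompanying spectral shift can be read concretely as follows. In this sketch (function names are my own), σtot(t) is passed in as a precomputed smoothed standard-deviation estimate, and the sanity check verifies the stated reduction to the tanh-nonlinearity when σ²tot(t) = s²(t):

```python
import numpy as np

def tanh_mask_denoise(s, sigma_tot):
    """Apply the mask m(t) = 1 - tanh(sigma_tot(t)) / sigma_tot(t) to the
    source estimate s(t), and compute the spectral shift under the
    assumption that the mask does not depend significantly on any
    individual source value, so that -beta is the mask average."""
    mask = 1.0 - np.tanh(sigma_tot) / sigma_tot
    s_denoised = mask * s
    beta = -mask.mean()
    return s_denoised, beta

# Reduction check: with sigma_tot(t) = |s(t)|, i.e. sigma_tot^2 = s^2,
# the masked signal equals s - tanh(s) (grid chosen to avoid s = 0).
s = np.linspace(-3.0, 3.0, 6)
s_denoised, beta = tanh_mask_denoise(s, np.abs(s))
assert np.allclose(s_denoised, s - np.tanh(s))
```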
As before, the algorithms were run until convergence. The average SNRs of
the separation over the one hundred runs are shown in Fig. 2b. Smoothing the
variance estimate clearly improves the SNR with the tanh-nonlinearity. Variance
whitening achieved a comparable SNR but required significantly fewer iterations.
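One plausible reading of the 179-rule used for step-size adaptation is an angle criterion on successive weight updates: when consecutive update directions nearly reverse (angle above 179°), the iteration is oscillating and the step size is reduced. The following sketch encodes that reading; the interpretation itself, the function name, and the shrink/grow factors are all assumptions, not values from the text.

```python
import numpy as np

def adapt_stepsize(gamma, dw_prev, dw_curr, shrink=0.5, grow=1.1):
    """Hypothetical 179-degree rule: shrink the step size gamma when the
    angle between successive weight updates exceeds 179 degrees
    (oscillation), otherwise grow it slightly. Factors are illustrative."""
    cos_angle = dw_prev @ dw_curr / (
        np.linalg.norm(dw_prev) * np.linalg.norm(dw_curr))
    if cos_angle < np.cos(np.deg2rad(179.0)):  # angle > 179 degrees
        return gamma * shrink
    return gamma * grow

# Opposite updates trigger shrinking; roughly aligned updates trigger growth.
g_osc = adapt_stepsize(1.0, np.array([1.0, 0.0]), np.array([-1.0, 0.0]))
g_ok = adapt_stepsize(1.0, np.array([1.0, 0.0]), np.array([1.0, 0.1]))
```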
5.3 MEG signal separation
Finally, we used the DSS algorithms and acceleration methods studied in the
previous sections to separate sources from rhythmic MEG data. The whole data
set (M = 122 and T = 65536) was used and 30 components were extracted using
the same denoising functions as in the previous section. Both the 179-rule and
the predictive rule (11) were tested. The number of iterations was taken to be
the limit where the projection vector w of the slowest converging component
reaches 0.1o of the final projection. Enhanced spectrograms of some interesting
components extracted by the variance-whitening DSS are depicted in Fig. 3a.
The tanh-nonlinearity with a smoothed variance estimate extracted similar components, but the usual tanh-nonlinearity without smoothing seemed to have trouble finding the weak steady frequencies shown in the bottom row of Fig. 3a. The
processing times of different denoising functions and different step size adapta-
tions are shown in Fig. 3b. Since the computational complexity of one iteration
depends on the denoising function, the total CPU-time is reported. Compared to
the variance-whitening DSS, the tanh-nonlinearities required more than twice as
much processing time, independent of the step-size adaptation. Compared to the
179°-rule, the adaptive γ reduced the total processing time by 20–50 %, depending