
fixed order¹ until the density of the output vector in f_out becomes M/N. Let us recall that bundles of shift permutive connections are used in algorithmic implementations.

4.4. Subtractive CDT procedure

Let us consider another version of the CDT procedure. Rather than masking z with the straight disjunction of permuted versions of z, as in additive thinning, let us mask it with the inverse of that disjunction:

\langle z \rangle = z \wedge \neg(\vee_k \tilde{z}_k) = z \wedge \neg(\vee_k P_k z).   (4.15)
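As an illustration only, here is a minimal Python sketch of equation 4.15. It assumes random permutations as stand-ins for the fixed (e.g., shift) permutations P_k used in the paper, and the demo sizes N, S, p, K are assumed values, not prescriptions:

import numpy as np

def subtractive_cdt(z: np.ndarray, K: int) -> np.ndarray:
    # Equation 4.15: <z> = z AND NOT (OR of K permuted copies of z).
    n = z.size
    mask = np.zeros(n, dtype=bool)
    for k in range(1, K + 1):
        perm = np.random.default_rng(k).permutation(n)  # stand-in for the fixed P_k
        mask |= z[perm]                                 # disjunction of z~_k
    return z & ~mask                                    # keep 1s of z missed by the mask

rng = np.random.default_rng(0)
N, S, p = 10_000, 3, 0.01                 # assumed demo sizes
x = rng.random((S, N)) < p                # S component codevectors of density p
z = x.any(axis=0)                         # their superposition (disjunction)
print(z.sum(), subtractive_cdt(z, K=36).sum())  # K = 36 from Table 4 (p = 0.01, S = 3)

With these values, z contains about 300 ones and the thinned vector retains roughly pN ≈ 100 of them, i.e. the density of a single component codevector.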

If we choose K to make the number of 1s in ⟨z⟩ equal to M, then this procedure will satisfy the requirements of section 3. Therefore the density of the superimposed permuted versions of z before inversion should be 1 - 1/S (compare to equation 4.13): masking z, which contains approximately SM 1s, with the inverse of a vector of density 1 - 1/S retains on average the fraction 1/S of its 1s, i.e. about M of them. Thus the number K of permuted vectors to be superimposed in order to obtain the required density (taking into account the "absorption" of 1s) is determined from:

1 - 1/S = 1 - (1 - pS)^K.   (4.16)

Then, for pS << 1,

K \approx \ln S/(pS).   (4.17)
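As a quick numeric check (the values below are chosen only as examples), the exact solution of equation 4.16 and the approximation 4.17 can be compared directly; this is a sketch, not part of the original procedure:

import math

def K_exact(p: float, S: int) -> float:
    # Exact solution of equation 4.16: (1 - p*S)**K = 1/S.
    return -math.log(S) / math.log(1.0 - p * S)

def K_approx(p: float, S: int) -> float:
    # Approximation 4.17, valid for p*S << 1.
    return math.log(S) / (p * S)

print(K_exact(0.001, 3), K_approx(0.001, 3))  # 365.7 vs 366.2; cf. Table 4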

Algorithmic implementations of this subtractive CDT procedure (Kussul, 1988; Kussul &
Baidyk, 1990; Kussul, 1992) are analogous to those presented for the additive CDT procedure in section
4.3.2. A neural-network implementation is shown in Figure 6.

Since the value of ln S/S is approximately the same for S = 2, 3, 4, 5 (Table 4), one can choose K for a specified density of component codevectors p(x) as:

K \approx 0.34/p(x).   (4.18)

At such K and S, p(⟨z⟩) ≈ p(x). Therefore the number K of permutive connection bundles in Figure 6 can be fixed, and their sequential activation is not needed. Thus each neuron of f_out may be considered as connected to an average of K randomly chosen neurons of f_in1 by inhibitory connections. More precise values of K (obtained as the exact solution of equation 4.16) for different values of p are presented in Table 4.

Table 4. The function ln S/S and the number K of permutations of an input codevector that produces the proper density of the thinned output codevector in the subtractive version of the Context-Dependent Thinning procedure. (K should be rounded to the nearest integer.)

            Number S of component codevectors in the input codevector
  p           S=2      S=3      S=4      S=5      S=6      S=7
  ln S/S     0.347    0.366    0.347    0.322    0.299    0.278
  0.001      346.2    365.7    345.9    321.1    297.7    277.0
  0.005       69.0     72.7     68.6     63.6     58.8     54.6
  0.010       34.3     36.1     34.0     31.4     29.0     26.8
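A hedged sketch of the fixed-K reading of Figure 6 described above: each f_out neuron is inhibited by about K randomly chosen f_in1 neurons. The wiring below (independent choices per output neuron, sampled with replacement) is an assumption standing in for the paper's permutive connection bundles:

import numpy as np

def fixed_inhibition_cdt(z: np.ndarray, K: int, seed: int = 0) -> np.ndarray:
    # Each output neuron copies z unless at least one of its K randomly
    # chosen inhibitory inputs from f_in1 is active.
    rng = np.random.default_rng(seed)
    n = z.size
    idx = rng.integers(0, n, size=(n, K))  # K random presynaptic neurons per output
    inhibited = z[idx].any(axis=1)
    return z & ~inhibited

With K from equation 4.18, an active neuron of z survives with probability about (1 - p(z))^K ≈ 1/S, so the output density stays near p(x), as stated above.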

This version of the CDT procedure was originally proposed under the name "normalization procedure" (Kussul, 1988; Kussul & Baidyk, 1990; Amosov et al., 1991). We have used it in the multilevel APNN for sequence processing (Rachkovskij, 1990b; Kussul & Rachkovskij, 1991). We have also used it for binding of sparse codes in perceptron-like classifiers (Kussul, Baidyk, Lukovich, &
¹ As noted by Kanerva (personal communication), all K_max bundles could be activated in parallel if the weight of the k-th bundle is set to 2^{-k} and the common threshold of f_int neurons is adjusted dynamically so that f_out has the desired density of 1s.
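A sketch of this parallel scheme, under assumptions: random permutations in place of the fixed bundles, and top-M selection standing in for the dynamically adjusted threshold. A neuron first reached by bundle k accumulates a weighted sum in [2^{-k}, 2^{-k+1}), so ranking by the sum reproduces the order of sequential bundle activation:

import numpy as np

def parallel_additive_cdt(z: np.ndarray, M: int, K_max: int) -> np.ndarray:
    # All K_max bundles act at once; bundle k contributes weight 2**-k.
    # Picking the M largest sums among active z-neurons plays the role of
    # a dynamically adjusted common threshold.
    n = z.size
    score = np.zeros(n)
    for k in range(1, K_max + 1):
        perm = np.random.default_rng(k).permutation(n)  # stand-in for P_k
        score += 2.0 ** -k * z[perm]
    score *= z                              # only neurons with z_i = 1 may fire
    out = np.zeros(n, dtype=bool)
    out[np.argsort(score)[-M:]] = True      # top-M scores; assumes >= M neurons are hit
    return out

Unlike the sequential version, which stops as soon as the count of 1s reaches M, this selects exactly M neurons, with ties within a bundle broken arbitrarily.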


