Table 3. The number K of permutations of an input codevector that produces the proper density of the
thinned output codevector in the additive version of Context-Dependent Thinning. (K should be rounded
to the nearest integer).
Density p of the       | Number S of component codevectors in the superposition
component codevectors  |    2       3       4       5      6      7
-----------------------+------------------------------------------------------
0.001                  |  346.2   135.0    71.8    44.5   30.3   21.9
0.005                  |   69.0    26.8    14.2     8.8    6.0    4.3
0.010                  |   34.3    13.3     7.0     4.4    2.9    2.1
4.3.1. Meeting the requirements for the CDT procedures
Since the configuration of each k-th permutation is fixed, the additive CDT procedure is deterministic
(3.1). The input vector z to be thinned is the superposition of component codevectors. The number of
these codevectors may vary, so the requirement of a variable number of inputs (3.2) holds.
The output vector is obtained by conjunction of z (or its reversible permutation) with the
independent vector ∨k(Pkz). Therefore the 1s of all codevectors superimposed in z are equally
represented in ⟨z⟩, and both the sampling-of-inputs and the proportional-sampling requirements (3.3 - 3.4)
hold.
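As a numerical illustration of (3.3) - (3.4), the following sketch (an assumption for illustration, not the paper's code; random permutations stand in for the fixed Pk) measures what fraction of each component's 1s survives thinning; the retention rate is roughly the same for every component, and the output density is close to p.

import numpy as np

rng = np.random.default_rng(0)
N, p, S = 100_000, 0.005, 3
K = 27                                   # from Table 3: p=0.005, S=3

# Random sparse component codevectors and their superposition z.
xs = [(rng.random(N) < p) for _ in range(S)]
z = np.logical_or.reduce(xs)

# Additive CDT: conjunct z with the disjunction of K permutations of z.
perm_or = np.zeros(N, dtype=bool)
for _ in range(K):
    perm_or |= z[rng.permutation(N)]     # random stand-in for a fixed Pk
thinned = z & perm_or

# Each component's 1s are sampled at about the same rate (3.3 - 3.4).
for i, x in enumerate(xs):
    print(f"x{i}: kept {np.count_nonzero(thinned & x) / np.count_nonzero(x):.3f}")
print("output density:", np.count_nonzero(thinned) / N)  # ~p = 0.005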
Density control of the output codevector for a variable number and density of component
codevectors is realized by varying K (Table 3). Therefore the density requirements (3.5 - 3.6) hold. Since
the sampling-of-inputs and proportional-sampling requirements (3.3 - 3.4) hold, the codevector ⟨z⟩ is
similar to all component codevectors xs, and the requirement of unstructured similarity (3.7) holds. The
more similar the components of one composite item are to those of another, the more similar their
superimposed codevectors z are. Therefore the vectors obtained as disjunctions of K fixed permutations
of z are also more similar, and more similar representations of each component codevector remain after
conjunction (equation 4.10) with z. Thus the similarity-of-subsets requirement (3.8) holds.
Characteristics of this similarity will be considered in more detail in section 7.
Since different combinations of component codevectors produce different z, and therefore
different disjunctions of K permutations of z, the representation of a given component codevector in the
thinned codevector will differ for different combinations of component items, and the binding
requirement (3.10) holds. The more similar the representations of each component in the output vector
are, the more similar the output codevectors are, so the requirement of structured similarity (3.9) also holds.
4.3.2. An algorithmic implementation
As mentioned before, a shift is an easily implementable permutation. Therefore an algorithmic
implementation of this CDT procedure may be as in Figure 4A. Another version of the procedure does
not require preliminary calculation of K (Figure 4B). In this version, conjunctions of the initial and
permuted vectors are superimposed until the number of 1s in the output vector reaches M.
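Since Figures 4A and 4B are not reproduced here, the following Python sketch gives one plausible reading of both variants, using cyclic shifts as the fixed permutations (the function names and the use of numpy are assumptions, not the paper's code).

import numpy as np

def cdt_additive_fixed_K(z: np.ndarray, K: int) -> np.ndarray:
    # Variant of Figure 4A (sketch): conjunct z with the disjunction of
    # K cyclic shifts of z, with K precomputed (e.g., from Table 3).
    acc = np.zeros_like(z)
    for k in range(1, K + 1):
        acc |= np.roll(z, k)            # k-th fixed permutation: shift by k
    return z & acc

def cdt_additive_until_M(z: np.ndarray, M: int) -> np.ndarray:
    # Variant of Figure 4B (sketch): superimpose conjunctions of z with
    # its shifts until the output has (at least) M ones, so no
    # preliminary calculation of K is needed.
    out = np.zeros_like(z)
    k = 0
    while np.count_nonzero(out) < M and k < z.size:
        k += 1
        out |= z & np.roll(z, k)        # add the next thinned "layer"
    return out

# Usage: thin the superposition of S = 3 components of density p = 0.005.
rng = np.random.default_rng(1)
N, p = 100_000, 0.005
z = np.logical_or.reduce([rng.random(N) < p for _ in range(3)])
print(np.count_nonzero(cdt_additive_fixed_K(z, 27)) / N)       # ~0.005
print(np.count_nonzero(cdt_additive_until_M(z, int(p * N))) / N)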
4.3.3. A neural-network implementation
A neural-network implementation of the first example of the additive CDT procedure (Figure 4A) is
shown in Figure 5.
To choose K depending on the density of z, the neural-network implementation should
incorporate some structures not shown in the figure. They should determine the density of the initial
pattern z and "activate" (turn on) K bundles of permutive connections out of their total number Kmax.
Alternatively, these structures should actuate the bundles of permutive connections one-by-one in the