statistically the same fraction of 1s is left from each component xs. Therefore the requirements of
sampling of inputs (3.3) and of proportional sampling (3.4) hold.
For S sparse codevectors of equal density p(x) << 1,
p(z) ≈ Sp(x), (4.8)
p(z') = p(z)p(z~) ≈ S²p²(x). (4.9)
To satisfy the requirements of density (3.5-3.6), p(z') = p(x) should hold for various S. This means that p(x)
should be equal to 1/S². Therefore, at a fixed density p(x) the density requirements (3.5-3.6) are not
satisfied for a variable number S of component items. The similarity and binding requirements (3.7-3.10)
hold. In particular, the requirement of similarity of subsets (3.8) holds because the higher the number of
identical items, the more identical conjunctions are superimposed in equation 4.7.
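This density behavior is easy to check numerically. The following Python sketch is an illustration only and not part of the original procedure; the dimension N = 1,000,000, the component density p(x) = 0.01, and the use of a cyclic shift as the fixed permutation are all assumed here. It superimposes S random codevectors, conjuncts the result with a shifted copy of itself as in equation 4.7, and prints p(z')/p(x), which grows roughly as S²p(x):

import numpy as np

rng = np.random.default_rng(0)
N, p_x = 1_000_000, 0.01               # assumed dimension and component density

for S in (1, 2, 5, 10):
    z = np.zeros(N, dtype=bool)
    for _ in range(S):                 # disjunctive superposition of S random components
        z |= rng.random(N) < p_x
    z_thin = z & np.roll(z, 1)         # conjunction with a permuted (shifted) copy, eq. 4.7
    print(S, z_thin.mean() / p_x)      # ≈ S²·p(x); equals 1 only when p(x) = 1/S²

The printed ratio is close to 1 only when S²p(x) ≈ 1, illustrating why the density requirements fail for a fixed p(x) and a variable S.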
A neural-network implementation of permutive conjunctive thinning is shown in Figure 3. In the
neural network terms, units are called "neurons", and their pools are called "neural fields". There are two
input fields fin1 and fin2 and one output field fout consisting of N binary neurons each. fin1 is connected to
fout by a bundle of N direct projective (1-to-1) connections. Each connection of this bundle connects
neurons of the same number. fin2 is connected to fout by the bundle of N permutive projective connections.
Each connection of this bundle connects neurons of different numbers. "Synapses" of the neurons of the
output field have weights of +1. The same pattern of superimposed component codevectors is activated
in both input fields. Each neuron of the output field sums the excitations of its two inputs (synapses).
The output is determined by comparing the excitation level with the threshold θ = 1.5. Therefore each
output neuron performs a conjunction of its inputs, and the activity pattern of the output field
corresponds to the bitwise conjunction of the pattern present in the input fields and its permutation.
Obviously, there are many possible configurations of permutive connections. Permutation by shift is
particularly attractive, as it is simple and fast to implement in computer simulations.
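For concreteness, a minimal software analogue of this network can be sketched as follows (an illustration only: the function name is ours, and the permutive bundle is assumed to realize a cyclic shift by one position). Each output "neuron" adds its two +1 synapses and fires when the sum exceeds θ = 1.5:

import numpy as np

def permutive_conjunctive_thinning(z, shift=1, theta=1.5):
    # z: binary (0/1) pattern activated in both input fields fin1 and fin2
    direct = z.astype(np.int32)                      # fin1 -> fout: N direct (1-to-1) connections
    permuted = np.roll(z, shift).astype(np.int32)    # fin2 -> fout: N permutive (shift) connections
    excitation = direct + permuted                   # all synaptic weights are +1
    return (excitation > theta).astype(np.uint8)     # thresholding at 1.5 implements conjunction

The result is identical to the bitwise conjunction of z with its shifted copy (in NumPy, z & np.roll(z, shift)); the threshold formulation merely mirrors the neural description.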
4.3. Additive CDT procedure
Though for permutive conjunctive thinning the density of the resulting codevector is closer to the density
of each component codevector than for direct conjunction, it still varies with the number and density of
the component codevectors.
Let us make a codevector z by the disjunction of S component codevectors xs, as in equation 4.4.
Since the density of the component codevectors is low and their number is small, the "absorption" of
common 1s is low and, according to equations 4.8 and 4.9, p(z') is approximately S²p(x) times p(x). For
example, if p(x) = 0.01 and S = 5, then S²p(x) = 1/4 and p(z') ≈ (1/4)p(x).
Therefore, to make the density of the thinned codevector equal to the density of its component
codevectors, let us superimpose an appropriate number K of independent vectors with the density p(z'):
⟨z⟩ = ∨k (z ∧ z~k) = z ∧ (∨k z~k). (4.10)
Here ⟨z⟩ is the thinned output codevector and z~k is a unique (independent stochastic) permutation of the
elements of vector z, fixed for each k. In vector-matrix notation, with Pk denoting the matrix of the k-th
permutation, we can write:
⟨z⟩ = ∨k (z ∧ Pkz) = z ∧ (∨k Pkz). (4.10a)
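A minimal Python sketch of this procedure is given below. It is an illustration under two assumptions: the function name is ours, and the K independent stochastic permutations Pk are replaced by cyclic shifts by k positions, the cheap substitute suggested by the shift permutation mentioned above.

import numpy as np

def additive_cdt(z, K):
    # <z> = z AND (OR_k P_k z), equations 4.10 / 4.10a, with P_k taken as a shift by k
    mask = np.zeros_like(z)
    for k in range(1, K + 1):
        mask |= np.roll(z, k)      # ∨k Pk z: disjunction of K permuted copies of z
    return z & mask                # z ∧ (∨k Pk z): thins the output down to roughly p(z)/S

The choice of K that brings the output density back to p(x) is derived next.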
The number K of vectors to be superimposed by disjunction can be determined as follows. If the
density of the superposition of permuted versions of z is made
p(∨k(Pkz)) = 1/S, (4.11)
then after conjunction with z (equations 4.10, 4.10a) we will get the needed density of ⟨z⟩:
p(⟨z⟩) = p(z)/S ≈ Sp(x)/S = p(x). (4.12)
Taking into account the "absorption" of 1s in disjunction of K permuted vectors z~, equation (4.11) can
be rewritten:
1/S = 1 - (1 - pS)^K, (4.13)
where p = p(x), so that pS ≈ p(z) is the density of each permuted copy of z. Then
K = ln(1 - 1/S) / ln(1 - pS). (4.14)
The dependence K(S) at different p is shown in Table 3.
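For reference, equation 4.14 is straightforward to evaluate; the following sketch prints K(S) for a few densities (the particular p and S values are illustrative and need not match those of Table 3; in practice K is rounded to an integer number of permuted vectors):

import math

for p in (0.001, 0.01):
    for S in (2, 3, 5, 10):
        K = math.log(1 - 1 / S) / math.log(1 - p * S)    # equation 4.14
        print(f"p = {p}:  S = {S:2d}  ->  K = {K:6.1f}")

Note that K = 1 exactly when p = 1/S², recovering the single-permutation case of permutive conjunctive thinning.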