
statistically the same fraction of 1s is left from each component x_s. Therefore the requirements of sampling of inputs (3.3) and of proportional sampling (3.4) hold.

For S sparse codevectors of equal density p(x) << 1:

p(z) ≈ Sp(x), (4.8)

p(z') = p(z)p(z~) ≈ S²p²(x). (4.9)

To satisfy the requirements of density (3.5-3.6), p(z') = p(x) should hold for various S. It means that p(x) should be equal to 1/S². Therefore at fixed density p(x) the density requirements (3.5-3.6) are not satisfied for a variable number S of component items. The similarity and binding requirements (3.7-3.10) hold. In particular, the requirement of similarity of subsets (3.8) holds because the higher the number of identical items, the more identical conjunctions are superimposed in equation 4.7.
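
As an illustrative check (a minimal numpy sketch; the dimensionality N, the seed, and the variable names are ours, not from the text), equations 4.8 and 4.9 can be verified by simulation:

    import numpy as np

    rng = np.random.default_rng(0)
    N, S, p = 100_000, 5, 0.01           # dimensionality, number of components, density p(x)

    # S sparse component codevectors of density p, superimposed by disjunction (eq. 4.4)
    x = (rng.random((S, N)) < p).astype(np.uint8)
    z = x.max(axis=0)

    # conjunction of z with a fixed random permutation of itself
    z_thinned = z & z[rng.permutation(N)]

    print(z.mean(), S * p)               # p(z)  ~ Sp(x)        (eq. 4.8)
    print(z_thinned.mean(), (S * p)**2)  # p(z') ~ S^2 p^2(x)   (eq. 4.9)

For these parameters the simulated densities should come out close to the predicted 0.05 and 0.0025 (slightly below, because of the absorption of common 1s in the disjunction).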

A neural-network implementation of permutive conjunctive thinning is shown in Figure 3. In neural-network terms, units are called "neurons" and their pools are called "neural fields". There are two input fields f_in1 and f_in2 and one output field f_out, each consisting of N binary neurons. f_in1 is connected to f_out by a bundle of N direct projective (1-to-1) connections; each connection of this bundle connects neurons of the same number. f_in2 is connected to f_out by a bundle of N permutive projective connections; each connection of this bundle connects neurons of different numbers. The "synapses" of the output-field neurons have weights of +1. The same pattern of superimposed component codevectors is activated in both input fields. Each neuron of the output field sums the excitations of its two inputs (synapses), and its output is determined by comparing the excitation level with the threshold θ = 1.5. Therefore each output neuron performs the conjunction of its inputs, and the activity pattern of the output field corresponds to the bitwise conjunction of the pattern present in the input fields and its permutation. Obviously, many different configurations of permutive connections are possible. Permutation by shift is particularly attractive, since it is simple and fast to implement in computer simulations.
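
For simulations, the shift variant reduces to a few lines of code. The sketch below is our illustration, not code from the paper; the function name and the default shift are arbitrary. It realizes the output field of Figure 3: summing the direct and the shift-permuted projections with weights +1 and thresholding at θ = 1.5 amounts to a bitwise conjunction.

    import numpy as np

    def permutive_conjunctive_thinning(z: np.ndarray, shift: int = 1) -> np.ndarray:
        # Permutive projective connections realized as a cyclic shift of the pattern.
        permuted = np.roll(z, shift)
        # Each output neuron sums its two synaptic inputs (weights +1) and fires
        # above the threshold 1.5, i.e. computes the conjunction of its inputs.
        return ((z + permuted) > 1.5).astype(np.uint8)

np.roll implements the shift with wrap-around, so no 1s are lost at the vector boundary.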

4.3. Additive CDT procedure

Although permutive conjunctive thinning brings the density of the resulting codevectors closer to the density of each component codevector than direct conjunction does, that density still varies with the number and density of the component codevectors.

Let us make a codevector z by the disjunction of S component codevectors x_s, as in equation 4.4. Since the density of the component codevectors is low and their number is small, the "absorption" of common 1s is low and, according to equations 4.8 and 4.9, p(z') is approximately S²p(x) times p(x). For example, if p(x) = 0.01 and S = 5, then p(z') ≈ (1/4)p(x).

Therefore, to make the density of the thinned codevector equal to the density of its component codevectors, let us superimpose an appropriate number K of independent vectors with the density p(z'):

⟨z⟩ = ∨_k (z ∧ z~_k) = z ∧ (∨_k z~_k). (4.10)

Here ⟨z⟩ is the thinned output vector, and z~_k is a unique (independent stochastic) permutation of the elements of vector z, fixed for each k. In vector-matrix notation, we can write:

⟨z⟩ = ∨_k (z ∧ P_k z) = z ∧ (∨_k P_k z). (4.10a)
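
A direct transcription of equation 4.10a (our sketch; drawing the fixed permutations P_k from a seeded generator is an implementation choice that keeps them identical across calls, as the definition requires):

    import numpy as np

    def additive_cdt(z: np.ndarray, K: int, seed: int = 0) -> np.ndarray:
        # <z> = z AND (OR_k P_k z): disjoin K fixed independent permutations
        # of z, then conjoin the union with z itself (eqs. 4.10, 4.10a).
        rng = np.random.default_rng(seed)  # fixed seed => the P_k are fixed for each k
        union = np.zeros_like(z)
        for _ in range(K):
            union |= z[rng.permutation(z.size)]
        return z & union

Since the permutations depend only on the seed and on k, the same input codevector always yields the same thinned codevector.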

The number K of vectors to be superimposed by disjunction can be determined as follows. If the density of the superposition of the permuted versions of z is made

p(∨_k P_k z) = 1/S, (4.11)

then after the conjunction with z (equations 4.10, 4.10a) we will get the needed density of ⟨z⟩:

p(⟨z⟩) = p(z)/S ≈ Sp(x)/S = p(x). (4.12)

Taking into account the "absorption" of 1s in the disjunction of K permuted vectors z~ (each of density p(z) ≈ pS, where p = p(x)), equation (4.11) can be rewritten as:

1/S = 1 - (1 - pS)^K. (4.13)

Then

K = ln(1 - 1/S) / ln(1 - pS). (4.14)

The dependence K(S) for different p is shown in Table 3.
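
The dependence is also easy to tabulate directly from equation 4.14 (a small sketch; the p values and the range of S are chosen for illustration only):

    import numpy as np

    def K_needed(S: int, p: float) -> float:
        # Equation 4.14: number of permuted copies of z (each of density ~ pS)
        # whose disjunction reaches the target density 1/S.
        return np.log(1 - 1 / S) / np.log(1 - p * S)

    for p in (0.001, 0.01):
        print(f"p = {p}:", {S: round(K_needed(S, p), 1) for S in range(2, 9)})

In practice K is rounded to an integer; note also that the formula presupposes pS < 1.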


