
combinations of distributedly encoded items by distinct distributed patterns. Such a representation of
bound items is generally described by tensor products (Smolensky, 1990) and requires an exponential
growth of the number of binding units with the number of bound items. However, as was already
mentioned, for recursive structures it is desirable that the dimensionality of the patterns representing
composite items be the same as that of the component items' patterns. Moreover, the most important property of
distributed representations - their similarity for similar structures - should be preserved.
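
As a minimal illustration of this dimensionality growth (our sketch, not taken from the cited works), binding two N-dimensional patterns by an outer (tensor) product already requires N*N binding units, and binding k items would require N**k, while the component patterns themselves remain N-dimensional:

    import numpy as np

    # Minimal sketch (illustrative only): tensor-product binding of two N-dimensional
    # binary patterns needs N*N binding units, whereas each component pattern has N units.
    N = 8
    a = np.random.randint(0, 2, N)      # first item's pattern (N units)
    b = np.random.randint(0, 2, N)      # second item's pattern (N units)

    bound = np.outer(a, b)              # tensor-product binding of the two patterns
    print(a.size, b.size, bound.size)   # 8 8 64: the bound code needs N**2 units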

The problem was discussed by Hinton (1990), and a number of mechanisms for the construction of
his reduced descriptions have been proposed. Hinton (1990), Pollack (1990), and Sperduti (1994) obtain the
reduced description of a composite pattern by training a multilayer perceptron with the back-propagation
algorithm. However, their patterns are low-dimensional. Plate (1991, 1995) binds high-dimensional
patterns with graded (real-valued) elements on the fly, without increasing the dimensionality, using the
operation of circular convolution. Kanerva (1996) uses bitwise XOR to bind binary vectors with equal
probabilities of 0s and 1s. Binary distributed representations are especially attractive because bitwise
operations suffice to handle them, providing the opportunity for significant simplification and
acceleration of algorithmic implementations.
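
The following sketch (an illustration under our own assumptions, not the cited authors' code) shows both fixed-dimensionality binding operations mentioned above: circular convolution for real-valued vectors and bitwise XOR for dense binary vectors.

    import numpy as np

    def circular_convolution(x, y):
        # Plate-style binding of real-valued vectors via FFT; the result stays N-dimensional.
        return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

    def xor_bind(u, v):
        # Kanerva-style binding of dense binary vectors (about 50% of 1s).
        return np.bitwise_xor(u, v)

    N = 1024
    x = np.random.randn(N) / np.sqrt(N)       # real-valued patterns
    y = np.random.randn(N) / np.sqrt(N)
    print(circular_convolution(x, y).shape)   # (1024,): no growth of dimensionality

    u = np.random.randint(0, 2, N)            # dense binary patterns
    v = np.random.randint(0, 2, N)
    print(xor_bind(u, v).mean())              # about 0.5: the bound vector is also dense binary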

Sparse binary representations (with a small fraction of 1s) are of special interest. The sparseness
of codevectors allows a high storage capacity of the distributed associative memories (Willshaw, Buneman,
& Longuet-Higgins, 1969; Palm, 1980; Lansner & Ekeberg, 1985; Amari, 1989) that can be used to store
them, and permits still further acceleration of software and hardware implementations (e.g. Palm &
Bonhoeffer, 1984; Kussul, Rachkovskij, & Baidyk, 1991a; Palm, 1993). Sparse encoding also has
neurophysiological correlates (Foldiak & Young, 1995). A procedure for binding sparse distributed
representations (the "normalization procedure") was proposed by Kussul as one of the features of the
Associative-Projective Neural Networks (Kussul, 1988, 1992; Kussul, Rachkovskij, & Baidyk, 1991). In
this paper, we describe various versions of this procedure and its possible neural-network
implementations, and provide examples of its use for the encoding of complex structures.
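
As a hedged aside on the implementation speed-ups mentioned above (our sketch, not part of the normalization procedure itself), a sparse binary codevector can be handled compactly as the set of indices of its 1s, so that bitwise AND and OR reduce to set intersection and union:

    # Illustrative index-set handling of sparse binary codevectors (assumed encoding).
    a = {3, 17, 52, 940}        # indices of the 1s of one sparse codevector
    b = {17, 52, 301, 4077}     # indices of the 1s of another

    overlap = a & b             # bitwise AND as set intersection
    superposition = a | b       # bitwise OR (superposition) as set union
    print(len(overlap), len(superposition))   # 2 6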

In section 2 we discuss representational problems encountered in the Associative-Projective
Neural Networks and an approach to their solution. In section 3 the requirements on the Context-
Dependent Thinning procedure for binding and normalization of binary sparse codes are formulated. In
section 4 several versions of the thinning procedure along with their algorithmic and neural-network
implementations are described. Some generalizations and notations are given in section 5. In section 6
retrieval of individual constituent codes from the composite item code is considered. In section 7 the
similarity characteristics of codes obtained by Context-Dependent Thinning procedures are examined. In
section 8 we show examples of encoding various structures using Context-Dependent Thinning. Related
work and general discussion are presented in section 9, and conclusions are given in section 10.

2. Representation of composite items in the APNN: the problems and the answer

2.1. Features of the APNN

The Associative-Projective Neural Networks (APNN) is the name of a neural-network architecture
proposed by Kussul in 1983 for AI problems that require efficient manipulation of hierarchical
data structures. Fragments of the architecture implemented in software and hardware have also been used
for the solution of pattern-recognition tasks (Kussul, 1992, 1993). The APNN features of interest here are as follows
(Kussul, 1992; Kussul, Rachkovskij, & Baidyk, 1991a; Amosov et al., 1991):

- Items of any complexity (an elementary feature, a relation, a complex structure, etc.) are represented by
stochastic distributed activity patterns over the neuron field (pool of units);

- The neurons are binary and therefore patterns of activity are binary vectors;

- Items of any complexity are represented over the neural fields of the same high dimensionality N;

- The number M of active neurons in the representations of items of various complexity is approximately
(statistically) the same and small compared to the field dimensionality N. However, M is large enough to
maintain its own statistical stability (see the sketch after this list).
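
The sketch below generates codevectors with these properties; the values of N and M are placeholders chosen for illustration, not those of the actual APNN implementations.

    import numpy as np

    # Illustrative APNN-style codevector: a binary pattern over a field of dimensionality N
    # with exactly M randomly placed active units, M << N (placeholder values).
    N = 100_000     # field dimensionality (assumed value)
    M = 1_000       # number of active neurons per item (assumed value)

    def random_codevector(rng, n=N, m=M):
        v = np.zeros(n, dtype=np.uint8)
        v[rng.choice(n, size=m, replace=False)] = 1
        return v

    rng = np.random.default_rng(0)
    item = random_codevector(rng)
    print(item.sum(), item.sum() / N)   # 1000 active units, i.e. a density of 1%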


