
Chunking. The problem of chunking remains one of the least developed issues in existing representation
schemes.

In HRRs and BSCs, chunks are normalized superpositions of stand-alone component
codevectors and their bindings. In turn, the codevector of a chunk can be used as one of the
components for binding. Thus, chunking allows structures of arbitrary nesting or composition level to be
built. Each chunk should be stored in a clean-up memory. When complex structures are decoded by
unbinding, noisy versions of chunk codevectors are obtained. They are used to retrieve pure versions
from the clean-up memory, which can in turn be decoded further.
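
For illustration, the following sketch outlines this chunking-and-clean-up cycle for dense binary codevectors, assuming XOR binding and majority-rule superposition in the style of Binary Spatter Codes; all names and parameters are illustrative rather than a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10000  # codevector dimensionality

def rand_vec():
    """Random dense binary codevector."""
    return rng.integers(0, 2, N, dtype=np.uint8)

def bind(x, y):
    """BSC-style binding: elementwise XOR (its own inverse)."""
    return np.bitwise_xor(x, y)

def superpose(vectors):
    """Normalized superposition: elementwise majority vote, ties broken randomly."""
    s = np.sum(vectors, axis=0)
    out = (2 * s > len(vectors)).astype(np.uint8)
    ties = (2 * s == len(vectors))
    out[ties] = rng.integers(0, 2, int(ties.sum()))
    return out

def cleanup(noisy, memory):
    """Return the name of the stored codevector nearest (in Hamming distance) to a noisy one."""
    names = list(memory)
    dists = [int(np.count_nonzero(noisy ^ memory[n])) for n in names]
    return names[int(np.argmin(dists))]

a, b, c = rand_vec(), rand_vec(), rand_vec()      # fillers
r1, r2, r3 = rand_vec(), rand_vec(), rand_vec()   # roles

# the chunk is a normalized superposition of role-filler bindings;
# it can itself serve as a filler at the next nesting level
chunk = superpose([bind(r1, a), bind(r2, b), bind(r3, c)])

# unbinding yields a noisy filler; the clean-up memory restores the pure codevector
memory = {"a": a, "b": b, "c": c}
noisy_a = bind(r1, chunk)
print(cleanup(noisy_a, memory))   # -> "a"
```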

In those schemes, the codevectors of chunks are not bound. Therefore they cannot be
superimposed without the risk of structure loss, as repeatedly noted in this paper. In the
APNN-CDT scheme, any composite codevector after thinning represents a chunk. Since the component
codevectors are bound in the chunk codevector, the latter can be operated on as a single whole (an entity)
without confusion of components belonging to different items.
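
As a schematic illustration of this "horizontal" binding, the sketch below uses one possible additive thinning variant in the spirit of section 4.2: the superposition is conjoined with fixed permutations of itself, so the thinned chunk stays a subset of the superposition (and hence similar to each component), while the surviving subset of each component's 1s depends on the other components. The particular variant and parameters are illustrative, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 10000, 1000   # dimensionality and number of 1s per sparse codevector

def sparse_vec():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, M, replace=False)] = 1
    return v

# fixed random permutations playing the role of thinning connections
perms = [rng.permutation(N) for _ in range(3)]

def thin(z):
    """Schematic additive thinning: keep a 1 of the superposition z only where
    at least one fixed permutation of z also has a 1."""
    mask = np.zeros(N, dtype=np.uint8)
    for p in perms:
        mask |= z[p]
    return z & mask

a, b, c = sparse_vec(), sparse_vec(), sparse_vec()
chunk = thin(a | b | c)   # thinned composite codevector = chunk

# the chunk remains similar to each of its components ...
print((chunk & a).sum(), (chunk & b).sum(), (chunk & c).sum())
# ... while the surviving 1s of a depend on which other components are present
print(((thin(a | b) & a) ^ (thin(a | c) & a)).sum())   # nonzero: context-dependent
```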

When a compositional structure is constructed using HRRs or BSCs, the chunk codevector is
usually the filler which becomes bound with some role codevector. In this case, unlike in the
APNN-CDT scheme, the components a, b, c of the chunk become bound with the role rather than with
each other:

role*(a + b + c) = role*a + role*b + role*c. (9.1)

Again, if the role is not unique, it cannot be determined to which chunk the binding role*a belongs.
Also, the role codevector must be known for unbinding and the subsequent retrieval of the chunk.
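
A small numerical check of (9.1) and of the resulting ambiguity, for HRRs where binding is circular convolution (the analogous identity holds for XOR binding in BSCs); parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
role, a, b, c = [rng.normal(0, 1 / np.sqrt(N), N) for _ in range(4)]

def cconv(x, y):
    """HRR binding: circular convolution computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

# binding distributes over superposition, as in (9.1)
lhs = cconv(role, a + b + c)
rhs = cconv(role, a) + cconv(role, b) + cconv(role, c)
print(np.allclose(lhs, rhs))   # True

# consequence: the term role*a carries no trace of b or c, so if the same role
# is reused in another chunk, role*a alone cannot indicate its chunk of origin,
# and the role codevector itself is needed to unbind and retrieve the chunk
```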

Thus, in the HRR and Binary Spatter Code representation schemes, each component
codevector belonging to a chunk binds with (role) codevectors of other hierarchical levels that do not belong
to that chunk. Such bindings may therefore be considered "vertical". In the APNN-CDT scheme, a
"horizontal" binding is essential: the codevectors of the chunk components are bound with each other.

In the schemes of Plate, Kanerva, and Gayler, the vertical binding chain role_upper_level *
(role_lower_level * filler) is indistinguishable from role_lower_level * (role_upper_level * filler),
because their binding operations are associative and commutative. For the CDT procedure, in
contrast, 2(1(a b) c) ≠ 2(a 1(b c)), and also ((a b) c) ≠ (a (b c)).
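
For contrast, a small check with illustrative parameters: XOR binding (as in BSCs) collapses the two vertical role chains, while a thinning-like grouping with level-specific permutations (the same schematic additive variant as sketched above, not the paper's exact procedure) keeps the grouping distinct.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 10000, 1000

# XOR binding is associative and commutative, so the two role chains coincide
r_up, r_low, f = [rng.integers(0, 2, N, dtype=np.uint8) for _ in range(3)]
print(np.array_equal(r_up ^ (r_low ^ f), r_low ^ (r_up ^ f)))   # True

# level-specific thinning is sensitive to grouping
perms = {k: [rng.permutation(N) for _ in range(3)] for k in (1, 2)}

def thin(z, level):
    mask = np.zeros(N, dtype=np.uint8)
    for p in perms[level]:
        mask |= z[p]
    return z & mask

def sparse_vec():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, M, replace=False)] = 1
    return v

a, b, c = sparse_vec(), sparse_vec(), sparse_vec()
left  = thin(thin(a | b, 1) | c, 2)   # 2(1(a b) c)
right = thin(a | thin(b | c, 1), 2)   # 2(a 1(b c))
print(np.array_equal(left, right))    # False: grouping is preserved
```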

Gayler (1998) proposes binding a chunk codevector with a permuted version of itself. This resembles the
version of the thinning procedure from section 4.2, but for real-valued codevectors. Different codevector
permutations for different nesting levels allow the components of chunks from different levels to be
distinguished, in a fashion similar to using different configurations of thinning connections in the CDT.
However, since the result of binding in Gayler's scheme, and in the other schemes considered (with
the exception of APNN-CDT), is not similar to the component codevectors, decoding a chunk codevector
created by binding with a permutation of itself will in those schemes generally require exhaustive search
over all combinations of component codevectors.
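
The following sketch, in the spirit of Gayler's multiply-add-permute coding (elementwise multiplication as binding, sign of the sum as normalized superposition; the exact published scheme may differ), illustrates why such a chunk codevector is dissimilar to its components and therefore hard to decode directly.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10000

def rand_vec():
    """Real-valued codevector with +/-1 elements."""
    return rng.choice([-1.0, 1.0], N)

def sim(x, y):
    """Normalized dot-product similarity."""
    return float(x @ y) / N

perm = rng.permutation(N)        # one fixed permutation per nesting level

a, b, c = rand_vec(), rand_vec(), rand_vec()
chunk = np.sign(a + b + c)       # normalized superposition of the components
sealed = chunk * chunk[perm]     # bind the chunk with its own permuted version

# the plain superposition is similar to its components, the sealed chunk is not
print(round(sim(chunk, a), 2))   # about 0.5
print(round(sim(sealed, a), 2))  # about 0.0
# decoding `sealed` therefore requires unbinding against candidate chunk
# hypotheses, i.e. trying combinations of component codevectors
```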

This problem with the vertical binding schemes of Plate, Kanerva, and Gayler can be rectified
by a binding operation that permutes its left and right arguments differently before applying the
conventional binding operation (as discussed on p. 84 of Plate (1994)).
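
One way such a binding operation might be realized, sketched here for HRR-style circular convolution with two distinct fixed permutations applied to the left and right arguments (names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1024
P_left, P_right = rng.permutation(N), rng.permutation(N)

def cconv(x, y):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def bind(x, y):
    """Convolution binding with differently permuted arguments."""
    return cconv(x[P_left], y[P_right])

r_up, r_low, f = [rng.normal(0, 1 / np.sqrt(N), N) for _ in range(3)]

# the two vertical binding chains are no longer indistinguishable
u = bind(r_up, bind(r_low, f))
v = bind(r_low, bind(r_up, f))
print(round(float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)), 2))  # near 0
```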

The obvious problem of the Tensor Product representation is the growth of dimensionality of the
pattern obtained by the binding of components. If it is not solved, the dimensionality grows
exponentially with the nesting depth. Halford, Wilson, & Phillips (in press) consider chunking as a
means to reduce the rank of the tensor representation. To realize chunking, they propose to use the
operations of convolution, concatenation, and superposition, as well as some special function that associates
the outer product with a codevector of lower dimension. However, the first three operations do not rule
out confusion of the grouping or ordering of arguments inside a chunk (i.e., different composite items may
produce identical chunks), and the special function (and its inverse) requires a concrete definition.
It could probably be implemented using associative memory, e.g. of the sigma-pi type proposed by Plate (1998).
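
The dimensionality growth itself is easy to see numerically (sizes illustrative): each extra nesting level of the outer product multiplies the number of stored values by the codevector dimension.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
a, b, c = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)

ab  = np.tensordot(a, b, axes=0)    # rank-2 tensor: n**2 values
abc = np.tensordot(ab, c, axes=0)   # rank-3 tensor: n**3 values
print(ab.shape, abc.shape)          # (100, 100) (100, 100, 100)
```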

In (L)RAAMs, the chunks of different nesting levels are encoded in the same weight matrix of
connections between the input layer and the hidden layer of a multilayer perceptron. This may be one of the
reasons for poor generalization. Probably if additional multilayer perceptrons are introduced for each


