Context-Dependent Thinning 9
Table 2. The density p of K codevectors to be bound by conjunction, chosen to provide the specified density p' of the resulting codevector (p = p'^(1/K)).

 K | p' = 0.001 | p' = 0.010 | p' = 0.015
 1 |   0.001    |   0.010    |   0.015
 2 |   0.032    |   0.100    |   0.122
 3 |   0.100    |   0.215    |   0.247
 4 |   0.178    |   0.316    |   0.350
 5 |   0.251    |   0.398    |   0.432
 6 |   0.316    |   0.464    |   0.497
 7 |   0.373    |   0.518    |   0.549
 8 |   0.422    |   0.562    |   0.592
 9 |   0.464    |   0.599    |   0.627
10 |   0.501    |   0.631    |   0.657
11 |   0.534    |   0.658    |   0.683
12 |   0.562    |   0.681    |   0.705
To thin more than two codevectors, it is natural to generalize equation 4.1:
z = ∧s xs, (4.3)
where s = 1, ..., S and S is the number of codevectors to be thinned. Though this operation allows binding of
two or more codevectors, a single vector cannot be thinned. The density of the resulting codevector z
depends on the densities of the xs and on their number S. Therefore, to meet the requirement of uniform low
density (3.5), the densities of the xs must be chosen depending on the number of thinned codevectors.
Also, the requirement of density control (3.6) is not satisfied.
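As a minimal sketch (not from the paper; N, S, and the density are illustrative choices), direct conjunctive thinning of equation 4.3 can be written with NumPy as:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000   # codevector dimensionality (illustrative)
S = 2         # number of codevectors to be thinned
p = 0.316     # component density chosen so that p**S is about 0.1

# S independent random binary codevectors of density p
xs = [(rng.random(N) < p).astype(np.uint8) for _ in range(S)]

# Equation 4.3: elementwise conjunction over all components
z = xs[0]
for x in xs[1:]:
    z = z & x

# The resulting density is about p**S, so it depends on both p and S,
# illustrating why the density-control requirement (3.6) is not met.
print(z.mean())
```

With p = 0.316 and S = 2 the printed density is close to 0.1; changing S while keeping p fixed changes the resulting density, which is the problem noted above.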
We applied this version of direct conjunctive thinning to encode positions of visual features on a
two-dimensional retina. Three codevectors were bound (S=3): the codevector of a feature, the codevector
of its X-coordinate, and the codevector of its Y-coordinate (unpublished work of 1991-1992 on
recognition of handwritten digits, letters, and words in collaboration with WACOM Co., Japan). Also,
this technique was used to encode words and word combinations for text processing (Rachkovskij,
1996). In so doing, the codevectors of the letters comprising a word were bound (S > 10). The density of the
codevectors to be bound by thinning was chosen so as to provide a specified density of the resulting
codevector (Table 2, K = 3...12).
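The entries of Table 2 follow from the fact that the conjunction of K independent codevectors of density p has expected density p^K, so the required component density is p = p'^(1/K). A brief check (function name is ours, for illustration):

```python
# Component density p needed so that the conjunction of K independent
# binary codevectors of density p has expected density p' = p**K.
def required_density(p_prime: float, K: int) -> float:
    return p_prime ** (1.0 / K)

# Reproduce two entries of the p' = 0.001 column of Table 2
print(round(required_density(0.001, 3), 3))   # 0.1
print(round(required_density(0.001, 12), 3))  # 0.562
```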
Neural-network implementations of direct conjunctive thinning procedures are rather
straightforward and will not be considered here.
4.2. Permutive conjunctive thinning
The codevectors to be bound by direct conjunctive thinning are not superimposed. Let us consider the
case where S codevectors are superimposed by disjunction:
z = ∨s xs. (4.4)
Conjunction of a vector with itself produces the same vector: z ∧ z = z. So let us modify z by
permuting all its elements and take the conjunction with the initial vector:
z' = z ∧ z~. (4.5)
Here, z~ is the permuted vector. In vector-matrix notation, it can be rewritten as:
z' = z ∧ Pz, (4.5a)
where P is an N × N permutation matrix (each row and each column of P has a single 1, and the rest of
P is 0; multiplying a vector by a permutation matrix permutes the elements of the vector).
Proper permutations are those that produce a permuted vector independent of the initial
vector, e.g. random permutations or shifts. Then the density of the result is
p(z') = p(z)p(z~) = p(z) p(Pz). (4.6)
Let us consider the composition of the resulting vector:
z' = z ∧ z~ = (x1 ∨ ... ∨ xS) ∧ z~
= (x1 ∨ ... ∨ xS) ∧ (x1~ ∨ ... ∨ xS~)
= (x1 ∧ (x1~ ∨ ... ∨ xS~)) ∨ ... ∨ (xS ∧ (x1~ ∨ ... ∨ xS~))
= (x1 ∧ x1~) ∨ ... ∨ (x1 ∧ xS~) ∨ ... ∨ (xS ∧ x1~) ∨ ... ∨ (xS ∧ xS~). (4.7)
Thus the resulting codevector is the superposition of all possible pairwise bitwise conjunctions,
where each pair comprises a component codevector and a permuted component codevector.
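A minimal sketch of permutive conjunctive thinning (equations 4.4 and 4.5), using a cyclic shift as the proper permutation; N, S, and p are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000   # codevector dimensionality (illustrative)
S = 4         # number of superimposed codevectors
p = 0.05      # density of each component codevector

xs = [(rng.random(N) < p).astype(np.uint8) for _ in range(S)]

# Equation 4.4: superimpose the components by disjunction
z = xs[0]
for x in xs[1:]:
    z = z | x

# Equation 4.5a: conjunction of z with a permuted copy of itself; a
# cyclic shift is a proper permutation here, since the shifted vector
# is elementwise independent of the original
z_thinned = z & np.roll(z, 1)

# Per equation 4.6, p(z') is approximately p(z)**2, and the thinned
# vector is a subset of the 1s of z
print(z.mean(), z_thinned.mean())
```

Note that, unlike direct conjunctive thinning, the components enter through a single disjunction z, so the same procedure works for any number of inputs S.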
Because of the initial disjunction of the component codevectors, this procedure meets more of the
requirements on CDT procedures than direct conjunctive thinning does. The requirement of a variable
number of inputs (3.2) is now fully satisfied. As follows from equation 4.7, each component codevector
xs is thinned by conjunction with one and the same stochastic, independent vector z~. Therefore