[Figure 2 graphic: the two-node network and the responses of both nodes to each of the eight input patterns, 000 through 111.]
Figure 2: Representing overlapping input patterns. A network consisting of two nodes and three inputs
(‘a’, ‘b’, and ‘c’) is wired up so that the first node receives input from ‘a’ and ‘b’ (with weight 1 from
each) and the second node receives input from all three sources (with weight 1 from each). The response
of the network to each possible pattern of inputs is shown. Pre-integration lateral inhibition (lateral weights
have been omitted from the figures) enables each node to respond exclusively to its preferred pattern: i.e.,
either ‘ab’ (110) or ‘abc’ (111). Other input patterns cause a weaker response from the node whose preferred
pattern most closely matches the input.
pattern responds since these partial patterns are more similar to ‘ab’ than to ‘abc’. When the input is ‘c’
this partially matches only one of the training patterns and hence the node representing ‘abc’ responds.
Similarly, patterns ‘bc’ and ‘ac’ most strongly resemble ‘abc’ and hence cause activation of that node.
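To make this behaviour concrete, the following is a minimal numerical sketch of the two-node network in Figure 2. The particular activation rule, the inhibition strength alpha, the relaxation factor eta, and the iteration scheme are simplifying assumptions made here for illustration, not the exact formulation of the original model; the feedforward weights are normalized so that each node's weights sum to one.

import numpy as np

# Feedforward weights of the two nodes in Figure 2, normalized so that each
# node's weights sum to one: node 0 prefers 'ab', node 1 prefers 'abc'.
W = np.array([[0.5, 0.5, 0.0],
              [1/3, 1/3, 1/3]])

def pre_integration_response(x, W, alpha=2.0, eta=0.5, n_iter=100):
    """Simplified pre-integration lateral inhibition: each node inhibits the
    inputs to rival nodes, in proportion to its own (rescaled) weight for
    that input, before those inputs are integrated."""
    n_nodes = W.shape[0]
    Wn = W / W.max(axis=1, keepdims=True)   # weights rescaled to a maximum of 1
    y = np.zeros(n_nodes)
    for _ in range(n_iter):
        y_new = np.empty(n_nodes)
        for j in range(n_nodes):
            rivals = np.delete(np.arange(n_nodes), j)
            # strongest inhibition applied to each input line of node j
            inhibition = (Wn[rivals] * y[rivals, None]).max(axis=0)
            y_new[j] = W[j] @ np.maximum(0.0, x - alpha * inhibition)
        y = (1 - eta) * y + eta * y_new     # relaxation for stable convergence
    return y

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            x = np.array([a, b, c], dtype=float)
            print(f"{a}{b}{c}", np.round(pre_integration_response(x, W), 2))

Under these assumed settings the sketch reproduces the behaviour described above: ‘110’ and ‘111’ each drive only their preferred node, partial patterns such as ‘100’ and ‘010’ produce a weaker response from the ‘ab’ node, and ‘001’, ‘011’ and ‘101’ produce a weaker response from the ‘abc’ node.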
Multiplicity
While it is sufficient in certain circumstances for a single node to represent the input (local coding), it is
desirable in many other situations to have multiple nodes providing a factorial or distributed representation.
As an extremely simple example consider three inputs (‘a’, ‘b’ and ‘c’) each of which is represented by
one of three nodes. Any pattern of inputs can be represented by having zero, one or multiple nodes active.
In this particular case the input to the network provides just as good a representation as the output, so there
is little to be gained. However, this example captures the essence of other, more realistic, tasks in which
multiple nodes, each of which represents multiple inputs, may need to be active.
Post-integration lateral inhibition can be modified to enable multiple nodes to be active (Földiák, 1990;
Marshall, 1995) by weakening the strength of the competition between those pairs of nodes that need to
be coactive (the lateral weights need to reach a compromise strength which provides sufficient competition
for distinct patterns while allowing multiple nodes to respond to multiple patterns). This either
requires a priori knowledge of which nodes will be coactive or the ability to learn appropriate lateral
weights. However, information locally available at a synapse is insufficient to determine if the correct
compromise weights have been reached (Spratling, 1999) and it is thus necessary to add further constraints
to derive a learning rule. The proposed constraints require that all input patterns occur with equal prob-
ability and that pairs of nodes are coactive with equal frequency (Földiák, 1990; Marshall, 1995). These
constraints severely restrict the class of problems that can be successfully represented to those in which
all input patterns are mutually exclusive or in which all pairs of input patterns occur simultaneously with
equal frequency. As an example of a case for which these networks would fail, consider using a single
network to represent the color and shape of an object. At any given time only one node (or group of nodes)
representing a single color and one node (or group of nodes) representing a single shape should be active.
There thus needs to be strong inhibition between nodes representing properties within the same class, and
weak inhibition between nodes representing different properties. This task fails to match the requirements
implicitly defined in the learning rules, and application of those rules would lead to weakening of lateral
inhibition within each class until multiple color nodes and multiple shape nodes were coactive with equal
frequency. Hence, post-integration lateral inhibition, implemented using explicit lateral weights, fails to
provide factorial coding except in the special case in which all pairs of patterns co-occur with equal
frequency, or in which external knowledge is available to set appropriate lateral weights.
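To illustrate why a single compromise strength is problematic, the following sketch implements conventional post-integration lateral inhibition with explicit lateral weights. The colour/shape nodes, the input values, and the particular weight settings are hypothetical choices made for illustration only.

import numpy as np

def post_integration_response(x, L, eta=0.2, n_iter=200):
    """Conventional post-integration lateral inhibition: each node integrates
    its input first, and the outputs of the other nodes are then subtracted
    via the explicit lateral weight matrix L (L[j, k] is the strength with
    which node k inhibits node j; the diagonal is zero)."""
    y = np.zeros(len(x))
    for _ in range(n_iter):
        y = (1 - eta) * y + eta * np.maximum(0.0, x - L @ y)
    return y

def uniform_lateral(c, n=4):
    """The same lateral weight c between every pair of distinct nodes."""
    return c * (np.ones((n, n)) - np.eye(n))

# Hypothetical task: nodes 0 and 1 code colours (red, green), nodes 2 and 3
# code shapes (square, circle). Input: a red square with weak green evidence.
x = np.array([1.0, 0.6, 1.0, 0.0])

# A priori lateral weights: strong inhibition within a class, none between classes.
class_specific = np.zeros((4, 4))
class_specific[0, 1] = class_specific[1, 0] = 1.0   # red <-> green
class_specific[2, 3] = class_specific[3, 2] = 1.0   # square <-> circle

print("weak uniform  :", np.round(post_integration_response(x, uniform_lateral(0.2)), 2))
print("strong uniform:", np.round(post_integration_response(x, uniform_lateral(1.0)), 2))
print("class-specific:", np.round(post_integration_response(x, class_specific), 2))

In this sketch, weak uniform lateral weights leave the spurious ‘green’ node active, strong uniform weights make the ‘red’ and ‘square’ nodes suppress one another even though they should be coactive, and only the class-specific weights, which presuppose knowledge of which nodes may be coactive, yield the intended representation.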
Networks in which competition is implemented using a selection mechanism can also be modified to
allow multiple nodes to be simultaneously active (e.g., k-winners-take-all). However, these networks also
restrict the types of task that can be successfully represented to those in which a pre-defined number of
nodes needs to be active in response to every pattern of stimuli.
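For comparison, a k-winners-take-all selection step can be sketched in a few lines (the function and the activation values below are illustrative only); the point is that exactly k nodes are selected in response to every stimulus, whatever the input.

import numpy as np

def k_winners_take_all(activations, k):
    """Selection mechanism: keep the k most active nodes and silence the rest."""
    y = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]   # indices of the k largest responses
    y[winners] = activations[winners]
    return y

# Exactly k nodes are selected whatever the stimulus, even when fewer (or more)
# active nodes would give a better representation of the current input.
a = np.array([0.9, 0.1, 0.7, 0.3])
print(k_winners_take_all(a, k=2))   # nodes 0 and 2 remain active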
In contrast, pre-integration lateral inhibition places no restrictions on the number of active nodes, nor