Figure 1: A network competing through pre-integration lateral inhibition. Nodes are shown as large circles,
excitatory synapses as small open circles and inhibitory synapses as small filled circles.
contribute to this enhancement (Koch et al., 1983; Koch and Segev, 2000; Segev and Rall, 1998). However,
the role of dendritic inhibition in competition between cells and its subsequent effect on neural coding and
receptive field properties has not previously been investigated.
We introduce a neural network model which demonstrates that competition via dendritic inhibition
significantly enhances the computational properties of networks of neurons. As with models of post-
integration inhibition, we simplify reality by combining the action of inhibitory interneurons into direct
inhibitory connections between nodes. Furthermore, we group all the synapses contributing to a dendritic
compartment together as a single input. Dendritic inhibition is then modeled as (linear) inhibition of this
input. The algorithm is described fully in the Methods section, but essentially, it operates by causing
each node to attempt to ‘block’ its preferred inputs from activating other nodes. It is thus described as
‘pre-integration inhibition’.
We illustrate the advantages of this form of competition with the aid of a few simple tasks which have
been used previously to demonstrate the pattern recognition abilities required by models of the human
perceptual system (Marshall, 1995; Marshall and Gupta, 1998; Nigrin, 1993). Although these tasks appear
to be trivial, succeeding in all of them is beyond the abilities of single-layer neural networks using
post-integration inhibition. These tasks demonstrate that pre-integration inhibition (in contrast to
post-integration inhibition) enables a neural network to respond simultaneously to multiple stimuli, to
distinguish overlapping stimuli, and to deal correctly with incomplete and ambiguous stimuli.
Methods
A simple, two-node, neural network in which there is pre-integration inhibition is shown in figure 1. The
essential idea is that each node inhibits other nodes from responding to the same inputs. Hence, if a node
is active and it has a strong synaptic weight to a certain input then it should inhibit other nodes from
responding to that input. A simple implementation of this idea for a two-node network would be:
y_1 = \sum_{i=1}^{m} \left( w_{i1} x_i - \alpha w_{i2} y_2 \right)^+

y_2 = \sum_{i=1}^{m} \left( w_{i2} x_i - \alpha w_{i1} y_1 \right)^+ ,

where y_j is the activation of node j, w_{ij} is the synaptic weight from input i to node j, x_i is the activation
of input i, α is a scale factor controlling the strength of lateral inhibition, and (v)^+ = v if v ≥ 0, (v)^+ = 0
otherwise.
otherwise. These simultaneous equations are solved iteratively, with the value of α gradually increasing
at each iteration, from an initial value of zero. Hence, initially each node responds independently to the
stimulus, but as α increases the node activations are modified by competition. Steady-state activity is
reached (at large α) when each individual input contributes to the activation of (at most) a single node.
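The iterative procedure above can be sketched for the two-node case as follows. This is a minimal illustration, not the paper's implementation: the function name, the linear ramp of α (here from 0 to 10 over 100 steps), and the synchronous update of both activations are assumptions for the sketch; the paper states only that α increases gradually from zero at each iteration.

```python
import numpy as np

def pre_integration_inhibition(x, W, alpha_max=10.0, steps=100):
    """Iteratively solve the two-node pre-integration inhibition equations,
    ramping alpha from 0 to alpha_max.

    x : input activations, shape (m,)
    W : synaptic weights, shape (m, 2); W[i, j] is the weight from input i to node j
    Returns the steady-state node activations y, shape (2,).
    """
    y = np.zeros(2)
    for alpha in np.linspace(0.0, alpha_max, steps):
        # Each input's contribution is inhibited (per input, before summation)
        # by the other node's weight to that input, scaled by its activity.
        y1 = np.sum(np.maximum(W[:, 0] * x - alpha * W[:, 1] * y[1], 0.0))
        y2 = np.sum(np.maximum(W[:, 1] * x - alpha * W[:, 0] * y[0], 0.0))
        y = np.array([y1, y2])  # synchronous update from the previous activations
    return y

# Hypothetical example: node 1 prefers inputs {0, 1}, node 2 prefers inputs {1, 2}.
# Presenting node 1's preferred pattern lets node 1 block node 2's access to the
# shared input, so node 2 is silenced at large alpha.
W = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
x = np.array([1.0, 1.0, 0.0])
y = pre_integration_inhibition(x, W)  # approaches [2.0, 0.0]
```

At α = 0 both nodes respond (y ≈ [2, 1]); as α grows, node 1's hold on the shared input drives node 2's rectified term to zero, so each input ends up contributing to at most one node, as described above.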
In order to apply pre-integration lateral inhibition to larger networks a more complex formulation was
used which is suitable for networks containing an arbitrary number of nodes (n) and receiving an arbitrary