Discussion
The above examples have shown that pre-integration lateral inhibition provides useful computational
capacities that cannot be generated using post-integration lateral inhibition. A network of neurons competing
through pre-integration lateral inhibition is thus capable of generating correct representations based on the
‘knowledge’ stored in the synaptic weights of the neural network. Specifically, it is capable of generating a
local encoding of individual input patterns as well as responding simultaneously to multiple patterns, when
they are present, in order to generate a factorial or distributed encoding. It can produce an appropriate
representation even when patterns overlap. It is able to respond to partial patterns such that the response is
proportional to how well that input matches the stored pattern, and it can detect ambiguities and suppress
responses to them.
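As a concrete illustration of this behaviour, the following sketch (in Python/NumPy) iterates a simplified
form of pre-integration lateral inhibition in which each input line to a node is attenuated, before being
integrated, by the most strongly weighted activity of any competing node. The particular update rule, the
inhibition strength alpha, the weight normalisation, and the example patterns are illustrative assumptions,
not the exact formulation used in the preceding sections.

import numpy as np

def pre_integration_inhibition(W, x, alpha=2.0, n_iter=30):
    # W is the (nodes x inputs) matrix of afferent weights; as in the text, the
    # lateral inhibitory weights are taken to be identical to the afferent
    # weights. alpha (inhibition strength) and the iteration count are
    # illustrative choices.
    n_nodes = W.shape[0]
    y = np.zeros(n_nodes)
    for _ in range(n_iter):
        y_new = np.empty(n_nodes)
        for j in range(n_nodes):
            others = np.delete(np.arange(n_nodes), j)
            # Inhibition arriving at each input line of node j: the most
            # strongly weighted activity of any competing node on that line.
            inhibition = (W[others] * y[others, None]).max(axis=0) if others.size else np.zeros_like(x)
            # Each input is attenuated on the dendrite before integration.
            effective_input = np.clip(x - alpha * inhibition, 0.0, None)
            y_new[j] = W[j] @ effective_input
        y = y_new
    return y

# Two stored patterns that share input line 1.
W = np.array([[1.0, 1.0, 0.0, 0.0],     # node 0 represents pattern A = lines {0, 1}
              [0.0, 1.0, 1.0, 1.0]])    # node 1 represents pattern B = lines {1, 2, 3}
W = W / W.sum(axis=1, keepdims=True)    # normalise each node's weights (an assumption)

print(pre_integration_inhibition(W, np.array([1.0, 1.0, 0.0, 0.0])))  # pattern A alone
print(pre_integration_inhibition(W, np.array([1.0, 1.0, 1.0, 1.0])))  # A and B together
print(pre_integration_inhibition(W, np.array([1.0, 0.0, 0.0, 0.0])))  # half of pattern A

With these weights, the first input (a single stored pattern) drives only the node that represents it; the
superimposed input drives both nodes simultaneously, each somewhat reduced because the patterns overlap;
and the partial pattern produces a response of roughly half the maximum, in proportion to the match.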
Our algorithm simplifies reality by assuming that the role of inhibitory cells can be approximated by
direct inhibitory weights from excitatory cells, and that these lateral weights have the same strength as
corresponding afferent weights. The latter simplification can be justified since a lateral weight and its
corresponding afferent weight experience identical pre- and post-synaptic activation values, and hence the
two could be learnt independently while remaining equal (as sketched below). Such a learning mechanism
would require inhibitory synapses contacting the dendrite to be modified as a function of the local dendritic
activity rather than the output activity of the inhibited cell. More complex models, which include a separate
inhibitory cell population and use multi-compartmental models of dendritic processes, could relate our
proposal more directly to physiology. We hope that our demonstration of the computational and
representational advantages that could arise from dendritic inhibition will serve to stimulate more detailed
studies of this kind.
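The argument that corresponding afferent and lateral weights could be learnt independently yet remain
equal can be made concrete with a small numerical sketch. The simple Hebbian product rule, the decay
term, and the activity statistics used below are illustrative assumptions rather than the learning rule
proposed here; the point is only that both synapses are driven by the same pair of activity values, so
independent application of the same local rule keeps them equal.

import numpy as np

rng = np.random.default_rng(0)
eta = 0.05      # learning rate; value chosen only for illustration
decay = 0.01    # weak decay, included only to keep the weights bounded

# w_afferent is a synapse from input line i onto node k; w_lateral is the
# inhibitory synapse that node k makes onto the matching dendritic site of a
# competing node. They start equal here, and the loop shows that identical
# local signals keep independently updated weights equal.
w_afferent = 0.3
w_lateral = 0.3

for _ in range(1000):
    x_i = rng.random()   # activity on input line i
    y_k = rng.random()   # output activity of node k
    # Afferent synapse: driven by its pre-synaptic input and the output of
    # the cell it contacts.
    w_afferent += eta * x_i * y_k - decay * w_afferent
    # Dendritic inhibitory synapse: driven by its own pre-synaptic activity
    # (y_k) and the local dendritic activity at the site it contacts (x_i),
    # not by the output of the inhibited cell.
    w_lateral += eta * y_k * x_i - decay * w_lateral

print(w_afferent, w_lateral)   # the two weights remain identical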
Computational considerations have led us to suggest that competition via dendritic inhibition could
significantly enhance the information processing capacities of networks of cortical neurons. This claim
is anatomically plausible since it has been shown that cortical pyramidal cells innervate inhibitory cell
types which in turn form synapses on the dendrites of pyramidal cells (Buhl et al., 1997; Tamas et al.,
1997). However, determining the functional role of these connections will require further experimental
evidence. Our model predicts that it should be possible to find pairs of cortical pyramidal cells for which
action potentials generated by one cell induce inhibitory post-synaptic potentials within the dendrites of the
other. Independently of such experimental support, the algorithm we have presented could have immediate
advantages for a wide range of neural network applications across many fields.
Acknowledgements
This work was funded by MRC Research Fellowship number G81/512.
Correspondence should be addressed to M. W. Spratling, Centre for Brain and Cognitive Development,
Birkbeck College, 32 Torrington Square, London WC1E 7JL, UK ([email protected]).
References
Borg-Graham LT, Monier C, Fregnac Y (1998) Visual input evokes transient and strong shunting inhibition
in visual cortical neurons. Nature 393(6683):369-373.
Buhl EH, Tamas G, Szilagyi T, Stricker C, Paulsen O, Somogyi P (1997) Effect, number and location
of synapses made by single pyramidal cells onto aspiny interneurones of cat visual cortex. J Physiol
500(3):689-713.
Cohen MA, Grossberg S (1987) Masking fields: a massively parallel neural architecture for learning,
recognizing, and predicting multiple groupings of patterned data. Appl Optics 26:1866-1891.
Foldiak P (1989) Adaptive network for optimal linear feature extraction. In: Proceedings of the IEEE/INNS
International Joint Conference on Neural Networks, volume 1, pp. 401-405. New York: IEEE Press.
Foldiak P (1990) Forming sparse representations by local anti-Hebbian learning. Biol Cybern 64:165-170.
Foldiak P (1991) Learning invariance from transformation sequences. Neural Comput 3:194-200.