Dendritic Inhibition Enhances Neural Coding Properties



The activation of each node, $y_j$, is calculated from the input activations ($x_i$) and the synaptic weights ($w_{ij}$), where $\alpha$ is a scale factor controlling the strength of the lateral inhibition, $n$ is the number of nodes, and $m$ is the number of inputs:

$$y_j = \sum_{i=1}^{m} x_i w_{ij} \left( 1 - \alpha \max_{\substack{k=1 \\ (k \neq j)}}^{n} \left\{ \frac{w_{ik}\, y_k}{\max_{l=1}^{m}\{w_{lk}\} \; \max_{l=1}^{n}\{y_l\}} \right\} \right)^{+}$$

where $(v)^{+} = v$ for $v > 0$ and $0$ otherwise (positive rectification).


This formulation was used to produce all the results presented in this paper. Synaptic weights were normalized such that $\sum_{i=1}^{m} w_{ij} = 1$. The value of α was increased from zero to ten in steps of 0.25. Activation values reached a steady state at a lower value of α (≈ 2) and remained constant from then on. The step size was found to be immaterial to the final steady-state activation values provided it was less than 0.5.
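To make this procedure concrete, the following is a minimal NumPy sketch of the update scheme, not the authors' code: the function name, the inner iteration count (n_iter), and the small constant guarding against division by zero are assumptions, while the α schedule (zero to ten in steps of 0.25, iterating to a steady state at each step) follows the text above.

```python
import numpy as np

def pre_integration_inhibition(W, x, alpha_max=10.0, alpha_step=0.25, n_iter=50):
    """Steady-state activations under pre-integration lateral inhibition.

    W has shape (m, n): W[i, j] is the weight from input i to node j.
    x has shape (m,): the input activations.
    """
    m, n = W.shape
    y = W.T @ x  # initial, uninhibited responses
    for alpha in np.arange(0.0, alpha_max + alpha_step, alpha_step):
        for _ in range(n_iter):  # iterate to steady state at this alpha
            # inhibition[i, k] = w_ik * y_k, normalized by the largest
            # weight to node k and the largest activation in the network
            denom = W.max(axis=0) * max(y.max(), 1e-12)
            inhibition = (W * y) / denom  # shape (m, n)
            y_new = np.empty(n)
            for j in range(n):
                others = np.delete(inhibition, j, axis=1)  # nodes k != j
                # rectified inhibition factor applied to each input i
                factor = np.maximum(0.0, 1.0 - alpha * others.max(axis=1))
                y_new[j] = np.sum(x * W[:, j] * factor)
            y = y_new
    return y
```

For the two-node example discussed below, W would be the 3×2 matrix of weights from the inputs 'a', 'b' and 'c' to the two nodes.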

For the simulation shown in figure 5, a bias was added to the activation of one node. This was implemented by adding 0.1 to the activation of that node during competition. Experiments showed that this bias could occur at any time (and for any duration) prior to α reaching a value of 1.5 to generate the same result.
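In the sketch above, this bias could be injected as follows; the node index and the exact injection point within the loop are assumptions (the text only requires that the bias be applied at some point before α reaches 1.5).

```python
BIASED = 0  # index of the biased node (hypothetical choice)

# ...inside the alpha loop of pre_integration_inhibition(),
# immediately after y_new is computed:
if alpha < 1.5:             # any time/duration before alpha reaches 1.5
    y_new[BIASED] += 0.1    # additive bias during competition
```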

Although results are not shown here, this method is not restricted to working with binary encodings of input patterns; it works equally well with analog encodings.

Results

Overlap

In many situations, distinct sensory events will share many features. If such situations are to be distinguished, it is necessary for different sets of neurons to respond despite this overlap in input features.
As a simple example, consider the task of representing two overlapping patterns: ‘ab’ and ‘abc’. A network
consisting of two nodes receiving input from three sources (labelled ‘a’, ‘b’ and ‘c’) should be sufficient.
However, because these input patterns overlap, when the pattern ‘ab’ is presented the node representing
‘abc’ will be partially activated, while when the pattern ‘abc’ is presented the node representing ‘ab’ will
be fully activated.

When the synaptic weights have certain values, both nodes will respond with equal strength to the same pattern. For example, when the weights are all equal, both nodes will respond to pattern ‘ab’ with equal strength (Marshall, 1995). Similarly, when the total synaptic weight from each input is normalized (‘post-synaptic normalization’), both nodes will respond equally to pattern ‘ab’ (Marshall, 1995). When the total synaptic weight to each node is normalized (‘pre-synaptic normalization’), both nodes will respond to pattern ‘abc’ with equal activation (Marshall, 1995). Under all these conditions the response fails to distinguish between distinct input patterns, and post-integration inhibition can do nothing to resolve the situation (in general, a node chosen at random will win the competition).
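A worked example makes these ties concrete (the weight values follow directly from the stated normalizations, and are illustrative rather than taken from the paper). Under pre-synaptic normalization ($\sum_{i} w_{ij} = 1$), the node representing ‘ab’ has weights $(\frac{1}{2}, \frac{1}{2}, 0)$ and the node representing ‘abc’ has weights $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$; presenting ‘abc’ gives $\frac{1}{2}+\frac{1}{2}+0 = 1$ and $\frac{1}{3}+\frac{1}{3}+\frac{1}{3} = 1$, a tie. Under post-synaptic normalization (the weight from each input sums to one across nodes), the weights are $(\frac{1}{2}, \frac{1}{2}, 0)$ and $(\frac{1}{2}, \frac{1}{2}, 1)$; presenting ‘ab’ gives $\frac{1}{2}+\frac{1}{2} = 1$ for both nodes, again a tie.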

Several solutions to this problem have been suggested. Some require adjusting the activations using a function of the total synaptic weight received by the node (e.g., using the Weber law (Marshall, 1995) or a masking field (Cohen and Grossberg, 1987; Marshall, 1995)). These solutions scale badly with the number of overlapping inputs, and do not work when (as is common practice in many neural network models) the total synaptic weight to each node is normalized. Other suggestions have involved tailoring the lateral weights to ensure that the correct node wins the competition (Foldiak, 1990; Marshall, 1995). These methods work well (Marshall, 1995) but fail to meet other criteria, as discussed below.

The most obvious, but most overlooked, solution would be to remove the constraints placed on allowable values for synaptic weights (e.g., normalization), which serve to prevent the input patterns from being distinguished in weight space. It is simple to invent sets of weights which unambiguously classify the two overlapping patterns (e.g., if both weights to the node representing ‘ab’ are 0.5 and each weight to the node representing ‘abc’ is 0.4, then each node responds most strongly to its preferred pattern and could then successfully inhibit the activation of the other node).
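Checking the arithmetic for these illustrative weights: presenting ‘ab’ gives $0.5+0.5 = 1.0$ for the ‘ab’ node and $0.4+0.4 = 0.8$ for the ‘abc’ node, so the ‘ab’ node responds most strongly; presenting ‘abc’ gives $0.5+0.5 = 1.0$ and $0.4+0.4+0.4 = 1.2$, so the ‘abc’ node responds most strongly.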

Using pre-integration lateral inhibition, overlapping patterns can be successfully distinguished even when normalization is used (either pre- or post-synaptic normalization). Figure 2 shows the response of such a network to all possible input patterns. The two networks on the right show that the correct response is generated to input patterns ‘ab’ and ‘abc’. The other networks show that when partial input patterns are presented, the node which represents the most similar pattern is activated in proportion to the degree of overlap between the partial pattern and the preferred input of that node. Hence, when the input is ‘a’ or ‘b’, which partially matches both of the training patterns, then the node representing the smallest pattern (‘ab’) is the more strongly activated.


