model of such networks. Simple models of competitive
networks with spiking neurons have been created to ex-
plain such psychological processes as access to conscious-
ness (cf. [8]). Realistic simulations of the attractor dynam-
ics of microcolumns, giving results comparable with exper-
iment, should be possible, although they have not been done
yet. In any case, possible attractor states of neurodynamics
should be identified, their basins of attraction outlined, and transition probabilities between different attractors found. In the
olfactory system it was experimentally found [9] that the dy-
namics is chaotic and reaches a cyclic attractor only when a
proper external input is given as a cue. The same may be
expected for the dynamics of a microcolumn. Specific ex-
ternal input provides a proper combination of features acti-
vating a microcolumn that partially codes a category. From
the neurodynamical point of view external inputs push the
system into a basin of one of the attractors.
A good approach connecting neurodynamics with men-
tal events in higher cognition tasks should start from ana-
lysis of neural dynamics, find invariants (attractors) of this
dynamics and represent the basins of attractors in P-space.
Behavioral data may also be used to set a topography of psy-
chological space. In the first step neural responses should be
mapped to stimulus spaces. This may be done by population
analysis or Bayesian analysis of multielectrode responses
[10]. Conditional probabilities of responses P(r_i|s), i = 1..N, are computed from multi-electrode measurements. The posterior probability P(s|r) = P(stimulus|response) is computed from the Bayes law:

P(s|r) = P(s|r_1, ..., r_N) = P(s) ∏_{i=1}^{N} P(r_i|s) / Σ_{s'} P(s') ∏_{i=1}^{N} P(r_i|s')
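The computation above can be sketched numerically. In the following minimal example the stimulus prior P(s) and the per-electrode likelihood values P(r_i|s) are invented numbers, standing in for quantities estimated from multi-electrode measurements:

```python
import numpy as np

# A minimal sketch of Bayesian decoding of multi-electrode responses.
# The prior P(s) and the likelihoods P(r_i|s) of the observed responses
# r_1..r_N are made-up values for three hypothetical stimuli.

prior = np.array([0.5, 0.3, 0.2])       # P(s)

# One row per electrode i: the likelihood P(r_i|s) of the response
# actually observed on that electrode, for each stimulus s.
likelihood = np.array([
    [0.8, 0.1, 0.1],
    [0.6, 0.3, 0.1],
    [0.7, 0.2, 0.1],
])

# Bayes law: P(s|r) is proportional to P(s) * prod_i P(r_i|s),
# normalized over all stimuli.
unnorm = prior * likelihood.prod(axis=0)
posterior = unnorm / unnorm.sum()

print(posterior)   # the decoded stimulus is the argmax of P(s|r)
```

With these numbers the first stimulus dominates the posterior; in practice the likelihoods would be fitted from recorded response distributions.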
Representing P(s|r) probabilities in psychological spaces based on the features of the stimuli, a number of “objects” representing recognized categories are created. Psychological research on categorization may provide additional behavioral
data and both types of data may be used in one model.
It would be ideal to construct models of neurodynam-
ics based on experimental data, describing how groups of
neurons learn to categorize, and then to reduce these mod-
els to simplified, canonical dynamics (i.e. the simplest dy-
namics equivalent to the original neurodynamics) in the
low-dimensional psychological space. So far there are no
good neurodynamical spiking neuron models of the cate-
gory learning process, but it is possible to create simple attractor network models based on Hopfield networks
and use these models to understand some aspects of cate-
gory learning in monkeys (cf. [4]). The internal state of
these models is described by the activity of a large number
of neurons. Since the input information O(X) is uniquely
determined by a point X in the psychological space it is
possible to investigate the category that the attractor model
A(O(X)) will assign to each point in the psychological
space. Thus an image of the basins of attractor dynamics in
the psychological space may be formed. Attractors do not
have to be point-like, as long as a procedure to assign cat-
egories, or probability of different categories, to a specific
behavior of the attractor network is defined. To characterize
the attractor dynamics in greater detail, probabilities P_i(X) may be defined on the P-space. In a K-category problem there are K-1 independent probabilities. Other functions
that one may define on P-space may measure the time the
dynamical system needs to reach the asymptotic categoriza-
tion probability value. Functions on P-spaces may be mod-
eled using conventional feedforward neural networks.
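The mapping A(O(X)) from inputs to categories can be illustrated with a toy Hopfield network: two prototype patterns are stored, any input pattern is relaxed to an attractor, and the attractor reached defines the assigned category. The prototype patterns and the encoding below are assumptions made purely for this sketch:

```python
import numpy as np

# Toy sketch of A(O(X)): a Hopfield network stores two category
# prototypes; an input pattern O(X) is relaxed to the nearest attractor,
# and the attractor reached defines the category assigned to X.

prototypes = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],    # category 0
    [-1, -1, -1, -1, 1, 1, 1, 1],    # category 1
])

n = prototypes.shape[1]
W = (prototypes.T @ prototypes) / n   # Hebbian weights
np.fill_diagonal(W, 0.0)              # no self-connections

def categorize(x, steps=20):
    """Relax pattern x to an attractor; return the matching category."""
    s = np.sign(x).astype(float)
    for _ in range(steps):            # synchronous updates
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0
        if np.array_equal(s_new, s):
            break                     # fixed point (attractor) reached
        s = s_new
    overlaps = prototypes @ s / n     # overlap with each stored pattern
    return int(np.argmax(overlaps))

# A noisy version of prototype 0 is pulled into its basin of attraction.
noisy = prototypes[0].astype(float).copy()
noisy[0] *= -1                        # flip one bit
print(categorize(noisy))
```

Scanning `categorize` over a grid of points X in a low-dimensional feature space would produce exactly the kind of image of attractor basins in the psychological space described above.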
More detailed models of this kind, which I have called
previously [7] “Platonic models” (Plato thought that mind
events are a shadow of ideal reality, here probability max-
ima representing categories of input objects are shadows of
neurodynamics), should also preserve similarities between
categories learned by neural systems. Similarity of cate-
gories represented in feature spaces by peaks of high prob-
ability clusters should be proportional to some measure of
distance between them. In neural dynamics this is deter-
mined by transition probability between different attractor
states, determining how “easy” it is to go from one category
to the other. However, there is no reason why such transition
probabilities should be symmetric. As a consequence the distance d(A, B) between two objects A and B in the feature space should be different from the distance d(B, A). Euclidean geometry cannot be used in such a case. A natural generalization of distance is given by the action integral in Finsler spaces [11]:
s(A, B) = min ∫_A^B L(X(t), dX(t)/dt) dt
where L(·) is a Lagrangian function. Attractor basins correspond to regions of high values of probability densities in
P-spaces. The dynamics in P-spaces is represented by the
movement of a point called the state S, going from one cat-
egory to another, following the underlying neurodynamics.
Dynamics should slow down or stabilize around probabil-
ity peaks, corresponding to the time that the mind spends
in each category coded by an attractor state of neurodynam-
ics. Only a small part of the overall neurodynamics of the
brain is modeled, the rest acting as a source of noise. Point,
cyclic and strange attractors may be interpreted as recog-
nition of categories. Point attractors correspond to infinite time spent on one category. The distance between such
categories should in this case grow infinitely - if interac-
tions with other parts of the brain are neglected point attrac-
tors behave like “black holes”, trapping the mind state for-
ever.
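The asymmetry d(A, B) ≠ d(B, A) can be demonstrated with a discretized version of the action integral above. The one-dimensional density p(x) and the direction-dependent Lagrangian below are invented for illustration, and for simplicity the action is evaluated along a single straight path rather than minimized over all paths:

```python
import numpy as np

# Numerical sketch of a Finsler-style action "distance" s(A, B).
# The Lagrangian depends on the direction of motion, so the resulting
# distance is asymmetric, mimicking asymmetric transition probabilities
# between attractors. Both p(x) and the form of L are assumptions.

def p(x):
    """Hypothetical 1-D probability density over the P-space."""
    return np.exp(-(x - 1.0) ** 2)

def dpdx(x, h=1e-5):
    """Numerical derivative of the density."""
    return (p(x + h) - p(x - h)) / (2.0 * h)

def action(a, b, n=1001):
    """Trapezoid-rule integral of L(x, dx/dt) along the straight path a->b."""
    t = np.linspace(0.0, 1.0, n)
    x = a + (b - a) * t
    v = b - a                                  # constant velocity
    # Direction-dependent cost: moving up the density gradient is cheaper.
    L = abs(v) * np.exp(-np.sign(v) * dpdx(x))
    dt = t[1] - t[0]
    return float(np.sum((L[:-1] + L[1:]) / 2.0) * dt)

d_ab = action(0.0, 1.0)    # toward the density peak at x = 1
d_ba = action(1.0, 0.0)    # away from the peak
print(d_ab, d_ba)          # d_ab < d_ba: the "distance" is asymmetric
```

Moving toward the probability peak costs less than moving away from it, so the two directions of the same path yield different values of the action, as required for a non-symmetric distance.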
The model of forming mental representations proposed
here assumes that categorization is initially done by the
brain using many collaborating microcolumns in the asso-
ciative areas of the cortex or in the hippocampus. This pro-
cess should be described by an attractor network. Catego-
rization, or interpretation of the states of this network, is
done by distal projections to cortical motor areas. Longer
learning leads to development of a specialized feedforward
neural network that matches higher-level complex features
with categorization decisions of the attractor network. Men-