and, for much of this century at least, unpopular theory of mental representation. In a nutshell, if
H&T’s dynamical characterisation of connectionism is on the right track, then connectionists
must embrace a resemblance theory of mental content.
This is a conclusion for which most philosophers of cognitive science are quite
unprepared. As H&T observe, it has been a working assumption in this field that the issue of
intentionality and its naturalisation are orthogonal to debates about cognitive architecture:
[T]hroughout this book we have taken representation for granted. In doing so, we simply follow
Connectionist (and classicist) practice.... It is assumed that natural cognitive systems have
intentional states with content that is not derived from the interpreting activity of other
intentional agents. But it is not the business of either classical cognitive science or connectionist
cognitive science to say where underived intentionality “comes from” (although each may place
certain constraints on an answer to this question). (1996, p.13)
But it will be the argumentative burden of this paper to show that, along with the many things
that change with connectionism, this assumption no longer holds good. Connectionism adds to
the traditional focus in cognitive science on the processes by which mental representations are
computationally manipulated, a new focus: one on the vehicles of mental representation, on the
entities that carry content through the mind. Indeed, the relationship between computational
processes and representational vehicles is so intimate within connectionist cognitive science that
the story about the former implies one about the latter. As a consequence, while classicism is
compatible with a range of different theories of mental content, connectionism, in virtue of its
dynamical style of computation, doesn’t have this luxury.
To see why all of this is so, however, we will need to go back to the beginning. And in
the beginning there was computation.
2. What is Computation?
Jerry Fodor is fond of remarking that there is only one important idea about how the mind
works that anybody has ever had. This idea he attributes to Alan Turing:
[G]iven the methodological commitment to materialism, the question arises, how a machine
could be rational?... Forty years or so ago, the great logician Alan Turing proposed an answer to
this question.... Turing noticed that it isn’t strictly true that states of mind are the only
semantically evaluable material things. The other kind of material thing that is semantically
evaluable is symbols.... Having noticed this parallelism between thoughts and symbols, Turing
went on to have the following perfectly stunning idea. “I’ll bet”, Turing (more or less) said, “that
one could build a symbol manipulating machine whose changes of state are driven by the material
properties of the symbols on which they operate (for example, by their weight, or their shape, or
their electrical conductivity). And I’ll bet one could so arrange things that these state changes are
rational in the sense that, given a true symbol to play with, the machine will reliably convert it
into other symbols that are also true.” (Fodor, 1992, p.6)
The rest, as they say, is history. Turing’s idea very soon led to the development of digital
computers, and the theory of digital computation, when applied to the mind, leads to classicism.
All of this is old hat. But in the subsequent discussion of Turing’s insight and its implications for
cognitive science, one detail is sometimes overlooked. This is that there are actually two ideas
embodied in this aspect of Turing’s work: two ideas that can and should be distinguished. One is
a general idea about computation; the other is a more specific idea about how to construct a
computational machine.
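Though nothing in what follows turns on the engineering details, Turing’s conception of a symbol
manipulating machine can be made concrete with a toy illustration. The Python sketch below is my
own gloss, not Fodor’s or Turing’s: a transition table is consulted purely on the basis of the
machine’s state and the shape of the scanned symbol, yet the machine reliably converts a binary
numeral into the numeral for its successor, so that purely syntactic manipulation tracks an
arithmetical truth.

    # A minimal sketch of a symbol manipulating machine in Turing's sense: state
    # changes are driven entirely by the shape (identity) of the scanned symbol.
    # The particular machine chosen here simply increments a binary numeral.

    # Keys are (state, scanned symbol); values are (new state, symbol to write, head move).
    RULES = {
        ("inc", "1"): ("inc", "0", -1),   # carry: rewrite 1 as 0 and move left
        ("inc", "0"): ("halt", "1", 0),   # absorb the carry and stop
        ("inc", "_"): ("halt", "1", 0),   # fell off the left edge: write a new leading 1
    }

    def run(tape):
        """Run the increment machine on a binary string, head starting at the rightmost cell."""
        cells = ["_"] + list(tape)        # "_" is the blank symbol padding the left edge
        state, head = "inc", len(cells) - 1
        while state != "halt":
            state, cells[head], move = RULES[(state, cells[head])]
            head += move
        return "".join(cells).lstrip("_")

    print(run("1011"))  # eleven in binary -> "1100", i.e. twelve
    print(run("111"))   # seven in binary  -> "1000", i.e. eight

The point of the toy is just Fodor’s: the rules mention only the shapes ‘0’, ‘1’ and ‘_’, yet
because they are arranged as they are, a tape that denotes the number n is dependably transformed
into one that denotes n + 1.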
In 1936, when the perfectly stunning idea alluded to by Fodor was taking shape in
Turing’s mind, the word ‘computer’ meant nothing more than “a person doing mathematical
calculations” (see Hodges, 1983, chp.2). What Turing did, as everyone knows, was to try to
imagine a computational machine: a machine that could perform such calculations