exclusively natural era and the era of the artificially natural mind. This I call the
communication-understanding principle (CUP). The next section accounts for the
additional characteristics of the latter era.
2.2. The era of the artificially natural human mind
This section identifies the key characteristic of the artificially natural human
mind and the subsequently derived uniqueness of very modern humans (i.e., modern
humans living no earlier than seven thousand years ago; symbol: H7kya). Specifically, I
propose that written human language (WHuLa) is the key characteristic of H7kya. It is
the novel result of the combination of two considerably earlier human traits: speech
and external representations. WHuLa is the necessary breakthrough for the
consequent dawn of the era of the cumulatively artificial (i.e., the ever-increasing
totality of human-made systems and structures).
2.2.1 The nature of human representations
The literature on ‘representation’ is daunting and controversial.39 Dietrich
(2007, p. 1) remarks that “no scientist knows how mental representations represent”.
Worse still, what counts as a representation remains unclear (Boden 1994).
Moreover, the literature on representation conflates human
thinking and mental representations with knowledge representation (KR) schemes.
The result is stalled progress on both. This section identifies both important
similarities and the key difference between mental and external representations. The
identified key difference is partly responsible for the uniqueness of the human mind
within the continuity of animal evolution.
All approaches to ‘representation’ draw upon or combine, in varying degrees,
two fundamental ideas: (i) aspects of Peircian semiotics; and (ii) the mathematical
notion of isomorphism.40 Von Eckardt (1992) made explicit the common view of the
nature of representation adopted by the majority of cognitive scientists. That view is a
simplification of Peirce’s theory that essentially identifies his notion of “interpretant”
with a “thought or series of thoughts in the mind of the interpreter” (ibid). Computational
formalisms, whether logic- or graph-based, try to explicitly describe the
“interpretant”. Within the theory outlined in 2.1.1.1, the latter is usually a concept and
sometimes a skepsis. To refer to either of these, I use the symbol C (cf. Figure 1).
The connectionists’ representational tools (vector spaces) and the dynamicists’
differential equations share exactly the same objective. Newell (1990) added to this
Peircian view the notion of mathematical isomorphism that he calls “the
representation law”. This picture is simple and useful, but it is not the whole one. It is
a conception that avoids Peircian complexities but is biased towards KR schemes and
tool-based reasoning. This bias has contributed to relatively powerful computational
models but to impoverished conceptual interaction with other fields. The next few
paragraphs clarify the above points.
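Newell's "representation law" amounts to a commutativity requirement: an internal medium represents an external domain if there is an encoding under which an internal transformation mirrors the corresponding external one. A minimal sketch, using a hypothetical calendar encoding chosen purely for illustration (none of these functions come from Newell's text):

```python
# Sketch of the representation law as a commuting square (illustrative only):
#   decode(internal_op(encode(x))) == external_op(x)  for every external x.

# External domain: dates as (year, month); external operation: advance one month.
def external_op(date):
    year, month = date
    return (year + month // 12, month % 12 + 1)

# Internal medium: a single integer counting months from year 0.
def encode(date):
    year, month = date
    return year * 12 + (month - 1)

def decode(code):
    return (code // 12, code % 12 + 1)

# The internal transform that is isomorphic to external_op.
def internal_op(code):
    return code + 1

# The law holds when the square commutes for every input tried:
for date in [(2023, 1), (1999, 12), (0, 6)]:
    assert decode(internal_op(encode(date))) == external_op(date)
```

On this reading, logic expressions, vector spaces, and differential equations all aim to supply such an internal medium and transform; they differ only in what plays the role of `encode` and `internal_op`.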
Consider the following ubiquitous examples of external human ‘representations’
and try to think of the underlying processes that created them:
a) Designs of all sorts and small scale models like those used to re-present a major
urban development, a spacecraft or a teddy bear.
b) Logic expressions, camera images, geometrical diagrams, computer programs,
equations.