(Dennett 1969) and the associated discussions of levels of description and of the
relation between psychology and physiology (e.g., Anderson 1987; Broadbent 1985;
Changeux & Dehaene 1989; Rumelhart & McClelland 1985).
Nevertheless, such an interface may be usefully employed only for AI systems.
As Newell (1990, p. 86), from a slightly different perspective, put it: “[in] any
analysis of the architectures of natural systems, the concepts available from computer
architectures are useful, although they hardly do the whole job.” Currently, cognitive
architectures are inadequate as tools for a UTM because they are hardly comparable
to either the human nervous system or the individual human architecture at large.
Recent attempts to utilise data from imaging research (e.g., Anderson 2007b;
Anderson et al. 2004) are highly commendable. Nevertheless, they do not alter the
fact that cognitive architectures remain inadequate as the key tool for constructing
unified theories, for as long as their specifications fall short of accommodating design
constraints such as evolutionary compatibility and the full temporal scale of human
action (rather than focusing on the cognitive and rational bands alone).
With respect to abstraction, neither cognitive architectures nor, more generally,
mathematical modelling is adequate for the current level of development of cognitive
science. It is true that precision, completeness, and self-consistency are the key
advantages of computational modelling and indeed, as Abbott (2008) remarks, of
equations. Nonetheless, a language-based system of time-dependent definitions can
have the same characteristics while also being enhanced by the vagueness of human
language. As Werner Heisenberg (1959, p. 188) wrote:
"one of the most important features of the development and the analysis of
modern physics is the experience that the concepts of natural language,
vaguely defined as they are, seem to be more stable in the expansion of
knowledge than the precise terms of scientific language, derived as an
idealization from only limited groups of phenomena."
This position should not be read as opposing the use of mathematics or
computational modelling; it opposes only their premature use. We first need to sort
out, and most likely expand and extend, our concepts before formalising them. The
human conceptual system is far richer than human language, which in turn is far
richer than our formal systems. We cannot put the cart before the horse.
Seeking a UTM across the full temporal scale of human action demands
interdisciplinarity, which in turn demands field-wide theoretical constructs.
Such constructs provide a common reference frame for discussion, facilitate criticism
and the identification of gaps or inconsistencies, and minimize potential misunderstandings. We do
not currently have even a partially complete system of such theoretical constructs for
a UTM. The ones proposed in section 2.1 are illustrative of the posits required for
bridging biology and sociology.
Evolutionary compatibility demands variability both in the space of mental
phenomena and in particular mental phenomena themselves. For instance, written
language was not in the space of mental phenomena of Homo habilis, and key
phenomena like thinking have been modified in the course of Homo evolution. In
addition, the rate of human evolutionary change is, in some important respects,
different from that of other animal species. Still, other phenomena like
‘representation’ go back hundreds of millions of years and therefore have to be
seen in the light of their successive transformations through evolution (cf. Table 3).
Furthermore, transformations of different phenomena influence each other by means
of multiple feedback loops throughout their evolutionary existence. For a UTM, such
phenomena include: ‘representation’, ‘thinking’, ‘communication’, and ‘language’