should start thinking about how to justify this: not
only to validate our belief, but to understand how this
was possible, and to be able to reproduce it. How can
reasoning evolve and develop from adaptive behaviour?
We believe that this can be studied by simulating
reasoned behaviours from a BBS perspective. This
would not be a unification of BBS and KBS, but a
bridge closing the gap between them: such systems
would be behaviour-based knowledge systems (BBKS).
In other words, a BBKS should exhibit knowledge that
has been developed, not directly implemented. In this
way, a BBKS would be able to model the displays of
intelligence modelled by both BBS and KBS, also
illustrating the relationships between these two types
of intelligence: adaptive and cognitive. Moreover,
BBKS are compatible with the Epigenetic Robotics
approach (Balkenius et al., 2001; Zlatev, 2001).
We believe that this is a promising line of research,
and argue that it is the most viable path for
understanding most levels of behaviour, and therefore
of intelligence, both natural and artificial. As the
MacGregor-Lewis stratification of neuroscience notes,
“the models which relate several strata (levels) are
most broadly significant” (MacGregor, 1987, quoted in
Cliff, 1991).
Figure 2 shows a graphical representation of the ideas
expressed above.
But is it possible to simulate knowledge from
adaptive behaviour? We believe it is, and in the
following section we describe how we might attempt to
achieve this.
3. Knowledge from Behaviour
As we stated in the previous section, reasoned
behaviours require abstract representations, or
concepts, of the perceived world, and the accurate
manipulation of these in order to produce a specific
behaviour. How can these abstract representations and
concepts be acquired? It seems that they are learned
from regularities in the perceptions of objects and
events. We believe that this is how concepts are
created. We define a concept as a generalization of
perception(s) or other concept(s)1. This definition
presupposes embodiment and situatedness (Clark,
1997). This means that if we
intend to simulate these abstractions, our artificial
creatures should be embodied and situated, or at least
virtually embodied and situated (i.e. in simulations). We
should note that in animals concepts are not physical
structures (if we open a brain we will not find any
concept): they emerge from the interactions of the
nervous system with the rest of the body and the
environment. We can see them as a metaphor, and
could say that they lie in the eye of the beholder. The
same holds for other types of representation. And since
these are necessary elements of knowledge, things will
be clearer if we remark that knowledge is not a
physical structure or element either, but an emergent
property of a cognitive system (i.e. an observer needs
to perceive the knowledge).

1 This is not the classical notion of concept in the
philosophy of mind literature (e.g. Peacocke, 1992), and it is
not restricted to humans. It is compatible with the use of
Gärdenfors (2000).
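As an illustrative sketch (ours, not part of any existing implementation), the definition of a concept as a generalization of perception(s) can be operationalized minimally: if perceptions are represented, purely by assumption, as feature vectors, then a concept can be modelled as a running average, a prototype that each new perception nudges. All names and representations here are hypothetical.

```python
# Sketch: a "concept" as an incremental generalization of perceptions.
# Perceptions are modelled (hypothetically) as feature vectors; the
# concept is simply their running mean, i.e. a prototype.

class Concept:
    def __init__(self, first_perception):
        self.prototype = list(first_perception)  # first perception seeds the concept
        self.n = 1                               # perceptions generalized so far

    def generalize(self, perception):
        """Fold a new perception into the concept (incremental mean)."""
        self.n += 1
        for i, x in enumerate(perception):
            self.prototype[i] += (x - self.prototype[i]) / self.n

    def similarity(self, perception):
        """Negative squared distance: higher means more familiar."""
        return -sum((x - p) ** 2 for x, p in zip(perception, self.prototype))

# A "pen" concept shaped by three slightly different pens (the two
# features could stand for, say, elongation and graspability).
pen = Concept([0.9, 0.8])
pen.generalize([1.0, 0.7])
pen.generalize([0.8, 0.9])
print(pen.prototype)  # the prototype lies between the perceived instances
```

Nothing in the sketch is stored per instance: only the generalization survives, which is the point of the definition above.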
As an example of how we use the term, a person
begins to develop a concept of “pen” from the moment
she perceives a pen. Then, as she perceives different
instantiations of pens and their uses, the regularities
among them will determine her concept of “pen”. We
believe that animals also have such concepts, shaped
in the same way. A kitten might play with a ball of
paper to explore what can be done with it. Once the
kitten has experienced the possibilities of sensation,
perception and use, a concept representing the ball of
paper will have been created, so that the animal will
behave accordingly on future encounters with balls of
paper. Of course, we have a different concept of “ball
of paper” from the kitten’s, because our perceptions
(and the “hardware” we process them with) are
different.
But the fact that a creature has concepts different
from ours does not mean that it has no concepts. The
popular “problem” of the frog not having a concept of
a fly, because frogs confuse other objects with flies
(they try to eat them), is a bizarre
anthropomorphization of the mind (based on the
classical experiments by Lettvin et al. (1959)). Frogs
do have a concept, just not of a fly: their perceptual
system simply does not allow them to distinguish flies
from similar objects, and they do not need to in order
to survive in their ecological niche. We observe a
similar situation
with fiddler crabs. They do not have a concept of
“predator”. They just have a concept of “something
taller than me”, and they run away from it (Layne,
Land, and Zeil, 1997). This is because animals develop
their intelligence to cope with their environment, not
with ours. And even in humans, recent research (e.g.
O’Regan and Noe, 2001; Clark, in press) shows that
our visual perceptions are not as complete as they
seem to us. We need to be aware of this while studying,
and judging, animal and human intelligence. Concepts
are necessary because remembering each particular
object, and acting accordingly, would carry a huge
computational cost. Generalizations allow the cognitive
system to produce similar actions in similar situations
at a low computational cost.
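The cost argument can be made concrete with a small, hypothetical comparison of our own (again assuming vector-like perceptions): acting from memorized exemplars means matching a new perception against every past experience, while acting from concepts means matching against one prototype per kind.

```python
# Sketch (our own illustration): deciding from raw memories vs. from
# concepts. With E experiences of K kinds of object, exemplar matching
# costs O(E) comparisons per decision; prototype matching costs O(K).

def nearest(query, items):
    """Return the index of the item closest to the query (squared distance)."""
    dists = [sum((q - x) ** 2 for q, x in zip(query, item)) for item in items]
    return dists.index(min(dists))

# 1000 remembered experiences, but of only 2 kinds of object
# (exemplar i is of kind i % 2)...
exemplars = [[0.1 * (i % 2), 1.0 - 0.1 * (i % 2)] for i in range(1000)]
# ...which generalize into just 2 concept prototypes.
prototypes = [[0.0, 1.0], [0.1, 0.9]]

query = [0.02, 0.97]                     # a new perception
by_memory = nearest(query, exemplars)    # 1000 comparisons
by_concept = nearest(query, prototypes)  # 2 comparisons
print(by_memory % 2 == by_concept)       # → True: same kind, far cheaper
```

Both routes classify the perception the same way; the concept route is cheaper by a factor of len(exemplars) / len(prototypes), which is the computational saving the paragraph above appeals to.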
Strictly speaking, though, all humans also have
different concepts for the same objects, since we have
had different experiences of them. It is only because of
language that we can communicate by referring to the
same classes of objects, even when the mechanisms
which determine those concepts in our brains might
differ greatly from person to person.