3.3 Internal Representation
Autonomous generation of internal representation is
central to mental development. Traditional AI sys-
tems use symbolic representation for internal rep-
resentation and decision making. Is symbolic rep-
resentation suited for a developmental robot? In
AI research, the issue of representation has not
been sufficiently investigated, mainly due to the tra-
ditional manual development paradigm. There has
been confusion among concepts in representation,
especially between reality and the observations made
by the agent. To be precise, we first define some terms.
A world concept is a concept about objects in the
external environment of the agent, which includes
both the environment external to the robot and the
physical body of the robot. The mind concept1 is in-
ternal with respect to the nervous system (including
the brain).
Definition 2 A world centered representation is
such that every item in the representation corre-
sponds to a world concept. A body centered rep-
resentation is such that every item in the represen-
tation corresponds to a mind concept.
A mind concept is related to phenomena observable
from the real world, but it does not necessarily reflect
the reality correctly. It can be an illusion or totally
false.
Definition 3 A symbolic representation is about a
concept in the world and, thus, it is world centered.
It is in the form A = (v1, v2, ..., vn) where A (op-
tional) is the name token of the object and v1, v2,
..., vn are the unique set of attributes of the object
with predefined symbolic meanings.
For example, Apple = (weight, color) is a sym-
bolic representation of a class of objects called ap-
ple. Apple-1 = (0.25 kg, red) is a symbolic represen-
tation of a concrete object called Apple-1. The set
of attributes is unique in the sense that the object’s
weight is given by the unique entry v1. Of course,
other attributes such as confidence of the weight can
be used. A typical symbolic representation has the
following characteristics:
1. Each component in the representation has a pre-
defined meaning about the object in the external
world.
2. Each attribute is represented by a unique variable
in the representation.
3. The representation is unique for a single corre-
sponding physical object in the external environ-
ment.
1 The term “mind” is used for ease of understanding. We
do not claim that it is similar to the human mind.
World centered symbolic representation has been
widely used in symbolic knowledge representation,
databases, expert systems, and traditional AI sys-
tems.
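As a hypothetical sketch (the class name, field names, and units are illustrative, not from the original text), the world centered symbolic representation of the Apple example might be coded as follows. Each field has a predefined meaning, each attribute occupies a unique variable, and each physical object gets one unique representation:

```python
from dataclasses import dataclass

# Symbolic, world-centered representation: each component has a
# predefined meaning about an object in the external world.
@dataclass
class Apple:
    weight_kg: float  # the unique entry for the object's weight
    color: str        # the unique entry for the object's color

# A concrete object: one unique representation per physical object.
apple_1 = Apple(weight_kg=0.25, color="red")
print(apple_1)
```

Other attributes, such as a confidence value for the weight, would simply add further predefined fields to the class.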
Another type of representation is motivated by the
distributed representation in the biological brain:
Definition 4 A distributed representation is not
necessarily about any particular object in the envi-
ronment. It is body centered, grown from the body’s
sensors and effectors. It is in a vector form A =
(v1, v2, ..., vn), where A (optional) denotes the vector
and vi, i = 1, 2, ..., n corresponds to either a sensory
element (e.g., pixel or receptor) in the sensory input,
a motor control terminal in the action output, or a
function of them.
For example, suppose that an image produced by
a digital camera is denoted by a column vector I,
whose dimension is equal to the number of pixels in
the digital image. Then I is a distributed represen-
tation, and so is f(I) where f is any function. A
distributed representation of dimension n can repre-
sent the response of n neurons.
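A minimal sketch of this example, assuming a tiny hypothetical camera and randomly weighted simulated neurons: the components of the vector correspond to sensory elements (pixels) or functions of them, not to predefined attributes of any particular external object.

```python
import numpy as np

# Distributed, body-centered representation: each component v_i
# corresponds to a sensory element (here, one pixel).
h, w = 4, 4                    # a tiny hypothetical camera
image = np.random.rand(h, w)   # raw sensory input
I = image.reshape(-1)          # vector whose dimension = number of pixels

# Any function f of I is also a distributed representation,
# e.g. the responses of n simulated neurons with random weights.
n = 8
W = np.random.rand(n, I.size)
f_of_I = np.tanh(W @ I)        # response of n neurons

assert I.shape == (h * w,)     # I is a distributed representation
assert f_of_I.shape == (n,)    # and so is f(I)
```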
The world centered and body centered representa-
tions are the same only in the trivial case where the
entire external world is the only single object for
cognition, so that there is no need to recognize
different objects in the world. A thermostat is an
example: the complex world around it is nothing
more than a temperature to it. Since cognition must
include discrimination, cognition itself is not needed
in such a trivial case. Otherwise, a body centered
representation is very different from a world
centered representation.
In a mature developmental robot, some later body
centered representations (later in the processing
steps) can correspond more closely to a world
concept, but they will never be identical. For example,
the representation generated by a view of a red ap-
ple is distributed over many cortical areas and, thus,
is not the same as a human designed atomic, world
centered symbolic representation.
A developmental program is designed after the
robot body has been designed. Thus, the sensors
and effectors of the robot are known, and so are their
signal formats. Therefore, the sensors and effectors
are two major sources of information for generating
distributed representation.
Another source of information is the internal sen-
sors and effectors which may grow or die according
to the autonomously generated or deleted representa-
tion. Examples of internal effectors include attention
effectors in a sensory cortex and rehearsal effectors
in a premotor cortex. Internal attention effectors
are used for turning certain signal lines on or off
for, e.g., internal visual attention. Rehearsal
effectors are useful for planning before an action is
actually released to the motors. The internal sen-
sors include those that sense internal effectors. In
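One hypothetical way to picture an internal attention effector is as a gate over sensory signal lines; the sketch below (illustrative only, not the paper's mechanism) turns some lines on and others off before later processing sees them:

```python
import numpy as np

# Hypothetical sketch: an internal attention effector gates
# signal lines; only the attended lines reach later areas.
sensory_input = np.array([0.9, 0.1, 0.7, 0.3, 0.5])
attention_gate = np.array([1, 0, 1, 0, 0])  # effector output: on/off per line

attended = sensory_input * attention_gate   # gated signal
print(attended)  # the first and third lines pass; the rest are suppressed
```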