more complex anticipations” (p. 3, ms.). It is thought that
a developmental algorithm incorporating these three
mechanisms could be successively applied to move an
agent from a discovery of initial behaviors (“reflexes”) to
more complex behaviors.
Weng (Weng, 2004; Weng et al., 2001) also
emphasizes the need for robots to autonomously generate
their own task-specific representations in order to cope
with dynamic, unknown, or uncontrolled environments.
“A developmental program for robots must be able to
generate automatically representations for unknown
knowledge and skills” (Weng et al., 2001) so as to adapt
to these environmental variations. An agent with the
capacity to construct its own representations has the
potential to understand these representations.
“Without understanding, an agent is not able to select
rules when new situations arise, e.g. in uncontrolled
environments” (p. 205, Weng, 2004). These processes are
viewed as open-ended and cumulative. “A robot cannot
learn complex skills successfully without first learning
necessary simpler skills, e.g., without learning how to
hold a pen, the robot will not be able to learn how to
write” (Weng et al., 2001).
Grupen (2003) is similarly concerned with
enabling robots to solve “open tasks in unstructured
environments” (p. 2, ms.). The approach he advocates is
to use “developmental processes [that] construct
increasingly complex mental representations from a
sequence of tractable incremental learning tasks” (p. 1,
ms.). He proposes “computational mechanisms whereby a
robot can acquire hierarchies of physical schemata”
(Grupen, 2003, p. 1, ms.). Physical schemata provide
parameterized, and in that sense reusable, sensorimotor
control knowledge.
Dominey and Boucher (2005) model linguistic
grammar acquisition, basing their approach on visual and
auditory pre-processing of sensory inputs and on
connectionist models.
The authors use the developmental theory of Mandler
(1999), who “suggested that the infant begins to construct
meaning from ... scene[s] based on the extraction of
perceptual primitives. From simple representations such
as contact, support, and attachment ... the infant [may]
construct progressively more elaborate representations of
visuospatial meaning” (p. 244, Dominey & Boucher,
2005).
2.2. Synthesis
From this earlier thinking, we wish to synthesize a picture
of what we refer to as ongoing emergence. We propose
six defining criteria for ongoing emergence (see Table 1).
Our first two criteria are: (1) An agent creates new skills
by utilizing its current environmental resources, internal
state, physical resources, and by integrating current skills
from the agent’s repertoire, and (2) These new skills are
incorporated into the agent’s existing skill repertoire and
form the basis from which further development can
proceed. By “skills” we include overt behaviors,
perceptual abilities, and internal representational
schemes.
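To make criteria (1) and (2) more concrete, consider the
following minimal sketch in Python. It is our own
illustration, not drawn from any of the systems reviewed
above, and every name in it is hypothetical: a repertoire
holds skills, and new skills composed from current ones
are added back as material for further development.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Skill:
    # A "skill" may be an overt behavior, a perceptual
    # ability, or an internal representational scheme.
    name: str
    behavior: Callable[..., object]

@dataclass
class Repertoire:
    skills: Dict[str, Skill] = field(default_factory=dict)

    def add(self, skill: Skill) -> None:
        # Criterion (2): new skills join the existing
        # repertoire and become the basis for further
        # development.
        self.skills[skill.name] = skill

    def compose(self, name: str, *parts: str) -> Skill:
        # Criterion (1): a new skill is created by
        # integrating current skills from the repertoire.
        fns = [self.skills[p].behavior for p in parts]
        def combined(*args, **kwargs):
            return [fn(*args, **kwargs) for fn in fns]
        new_skill = Skill(name, combined)
        self.add(new_skill)
        return new_skill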
These first two criteria express the notion that when
we view agents as developing systems, with certain skills
in their repertoire, they have the potential to develop
related skills. For example, under this view a
developmental robot that can learn to kick a ball might
then later develop skills for playing soccer. Ongoing
emergence thus has the property of developmental
systematicity [1]. In developmental systematicity, if an agent
demonstrates skill aRb, then we also expect competence
with directly related skills, such as bRa (i.e., systematicity; Fodor
& Pylyshyn, 1988). Furthermore, we expect the
emergence of developmentally related skills such as f(a)
and g(aRb), where f(x) and g(y) are developmental
processes producing emergent skills in the agent's
repertoire over time. This process would in part be based
on earlier skills (e.g., x and y in f(x) and g(y) above). For
example, if a robot exhibits a range of object tracking
behaviors (aRb, bRa) through the composition of a blob
tracking skill (a) and a motion finding skill (b), and the
robot is a developing agent, we would have further,
developmental expectations about its future behaviors
such as facial tracking and gaze following (e.g., f(a),
g(aRb)).
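The tracking example can be rendered as a hypothetical
code sketch; the skill names and the developmental
processes f and g below are placeholders of our own
devising, not implementations from the literature.

def blob_tracking(frame):          # skill a
    return {"blobs": []}           # placeholder percept

def motion_finding(frame):         # skill b
    return {"motion": []}          # placeholder percept

def object_tracking(frame):        # composed skill aRb
    return {**blob_tracking(frame), **motion_finding(frame)}

def develop_f(skill):
    # f(a): a developmental process specializing an
    # existing skill, e.g., deriving facial tracking
    # from blob tracking.
    def facial_tracking(frame):
        return {"faces": skill(frame)["blobs"]}
    return facial_tracking

def develop_g(composed):
    # g(aRb): builds gaze following on top of the
    # composed object tracker.
    def gaze_following(frame):
        return {"gaze_target": composed(frame)}
    return gaze_following

facial_tracking = develop_f(blob_tracking)    # f(a)
gaze_following = develop_g(object_tracking)   # g(aRb)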
Another central notion in the work described in
Section 2.1 is that of autonomy: avoidance of
specification of task goals, autonomous generation of
task-specific representations, and the ability to solve open
tasks. We include this as a third criterion for ongoing
emergence: (3) An agent that exhibits ongoing emergence
autonomously develops adaptive skills on the basis of
having its own values (e.g., see Sporns, 2005; Sporns &
Alexander, 2002) and goals, with these values and goals
being developed by the system in a manner similar to its
skills. If an agent develops its own values and goals, it
can use these for self-supervision and to determine the
tasks that need to be solved. In brief, the agent needs
some way to evaluate its own behaviors, and determine
when a particular skill is useful [2]. This is true in both the
short and long term. For example, in the short term, a
robotic agent might trade off energy output for the gain of
information, while long-term goals might include
improving communication amongst the robot’s cohorts.
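As an illustration only, the kind of self-evaluation
criterion (3) envisions might take the form of a value
function weighing energy cost against information gain;
the weights and function names below are arbitrary
assumptions on our part.

def behavior_value(info_gain: float, energy_cost: float,
                   comm_benefit: float = 0.0,
                   w_info: float = 1.0, w_energy: float = 0.5,
                   w_comm: float = 0.2) -> float:
    # Short-term tradeoff: information gained versus
    # energy spent; long-term goals (e.g., improved
    # communication) enter as a weighted bonus.
    return (w_info * info_gain - w_energy * energy_cost
            + w_comm * comm_benefit)

# The agent could prefer, and retain, behaviors that
# score highest under its own values.
candidates = {"explore": behavior_value(info_gain=0.8, energy_cost=0.6),
              "rest":    behavior_value(info_gain=0.0, energy_cost=0.1)}
best = max(candidates, key=candidates.get)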
To these initial three criteria for ongoing emergence
we add three additional criteria: (4) bootstrapping (when
the system starts, some skills rapidly become available),
(5) stability (skills persist over an interval of time), and
(6) reproducibility (the same system started in similar
initial states and in similar environments also displays
ongoing emergence).
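Criteria (4) through (6) could in principle be
operationalized as empirical checks on a running system.
The sketch below assumes a hypothetical harness
run(steps, seed) that returns an object exposing the
agent's current skill names; the interface and thresholds
are our own assumptions.

def check_bootstrapping(run, t_early=10):
    # Criterion (4): some skills become available soon
    # after the system starts.
    return len(run(steps=t_early, seed=0).skills) > 0

def check_stability(run, t=100, window=20):
    # Criterion (5): skills acquired earlier persist
    # over an interval of time.
    earlier = set(run(steps=t - window, seed=0).skills)
    later = set(run(steps=t, seed=0).skills)
    return earlier <= later

def check_reproducibility(run, t=100):
    # Criterion (6): similar initial states and
    # environments yield substantially overlapping
    # skill repertoires.
    a = set(run(steps=t, seed=1).skills)
    b = set(run(steps=t, seed=2).skills)
    return len(a & b) / max(len(a | b), 1) > 0.8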
[1] We introduce the concept of developmental systematicity to avoid
viewing behavior generation as an infinite domain. This is analogous to the
way that Fodor & Pylyshyn (1988) introduced systematicity to avoid
viewing language generation as an infinite domain.
[2] We refrain from adopting the idea that skills that emerge in
development should necessarily be more complex (i.e., be more
powerful in some sense) than prior skills. From our view, this criterion
is too strong for several reasons. First, strictly increasing adaptation is
violated in some instances of child development (e.g., the “U” shaped
curves of child performance on various tasks over time; see Siegler,
2004). Second, a view of strictly increasing complexity of skills does
not allow for escape (“detours”) from local maxima, where behavior
needs to get worse before it can get better. Third, strictly increasing skill
complexity may remove the possibility of discovering simpler means to
achieve the same (or similar) ends as existing skills—as in evolution,
“different” is sometimes at least as good as “better.” Relatedly, a strict
view of building complexity does not seem to allow for the loss (e.g.,
forgetting) of some skills over time.