size and shape (Slater & Morison, 1985), remember
objects over time (Slater, Morison, & Rose, 1982), and
perceive similarities and differences between visual
stimuli (Slater, Morison, & Rose, 1984). Newborn infants
are also able to track a moving target with eye and head
movements (albeit in a jerky fashion, e.g., Aslin, 1981;
von Hofsten 1982), and can recognize the constancy of an
object’s identity across transformations in orientation and
movement (Slater, Morison, Town & Rose, 1985).
While these initial skills constitute a perhaps
surprisingly robust repertoire, developing and
incorporating them into more complex behaviors takes
time. For example, it
is not until 4 months of age that an infant’s muscular
control and object understanding have matured to the
point of allowing an infant to successfully reach for and
grasp an object (e.g., von Hofsten, 1989). Also at 4
months, infants begin to perceive (measured via looking-
time) a partially occluded object as a single unified object
(Kellman & Spelke, 1983; Johnson & Nanez, 1995).
However, it is not until about 6 months of age that infants
combine their object tracking skills, their understanding
of object unity, and their reaching skills to reach for an
object that has been partially obscured from view by an
occluding object (von Hofsten & Lindhagen, 1979;
Shinskey & Munakata, 2001).
Also in the realm of visual-object skills is object
permanence, which relates to the child’s understanding
that an object continues to exist even when the object
cannot be seen. It has been shown that 3.5-month-old
infants show recognition of an impossible object event
(i.e., a violation of object permanence), such as a
drawbridge closing despite a solid object appearing to
have been blocking its path (Baillargeon, 1987, 1993,
1995). However, this sort of “perceptual object
permanence” is not manifested as a behavior indicating
an understanding of physical (i.e., more conventional)
object permanence until much later, when 8- to 10-
month-old infants will begin to search for an object that
has been hidden from view (Piaget, 1954). Still, infant
searching behavior at this age is not free of difficulties
and is subject to the “A-not-B error” (the infant searches
for a hidden object at location A when the object was
initially uncovered at location A but subsequently hidden
at location B). Infants perseverate in this error until
roughly 12 months of age (at which time infants will
correctly search for the hidden object at location B; e.g.,
see Wellman, Cross, & Bartsch, 1986; Newcombe &
Huttenlocher, 2000).
In developing from initial skills of being able to
identify and track objects (birth), to perceptually
distinguishing impossible object events (3.5 months), to
being able to maintain perception of object unity despite
an occlusion (4 months), to successfully reaching for an
object (4 months), to successfully reaching for an object
despite an occlusion (6 months), to searching for a hidden
object (8-10 months), to searching for a hidden object
without displaying the A-not-B error (12 months), infants
demonstrate an ongoing emergence of behavior. Changes
occurring in the visual, conceptual, and motor systems of
the infants interact to produce unique, observable
behaviors at multiple points along the developmental path
of these visual object skills, with each developed skill
being incorporated into, and contributing to, the
emergence of subsequent skills.
5. Designing For Ongoing Emergence
Beyond a theoretical understanding of ongoing emergence,
our most burning question was well-expressed by one of
the anonymous referees of this paper: How can we design
robots so that the behaviors exhibited by the robot
continue to be adaptive and open to further development
throughout their duration of use (e.g., either as models of
infants, or deployed in some industrial environment)?
That is, how do we design robots that exhibit ongoing
emergence? Our thinking here divides broadly into two
possibilities. The first possibility we address is that of
designing robots that exhibit ongoing emergence where
the bootstrapping components (see Criterion 4) of the
system are not generated by ongoing emergence.
Effectively, this corresponds to basing the design of the
robots on existing research (e.g., the robotic systems in
Table 2). The second possibility we address is that of
designing robots that exhibit ongoing emergence where
the bootstrapping components themselves are generated
by processes of ongoing emergence. This corresponds to
discovering a different way of approaching the design of
the initial components of a robotic system.
5.1. Bootstrapping Ongoing Emergence
Without Primitive Ongoing Emergence
Ongoing emergence in humans results in part from the
dynamic integration of multiple skills with the
environment (i.e., Criterion 1). One way to achieve an
analog of this in robots may be to add an integration layer
on top of an existing system or systems (see Table 2),
providing soft-assembly of the component skills. For
example, we might combine robotic behaviors across
several systems, such as the perceptual object
permanence behavior of Cheng and Weng (2004), the
joint attention of Nagai et al. (2003), and the social skills
of Breazeal and Scassellati (2000). For this integration
layer to satisfy Criterion 1, these skills, in interaction
with the environment, would need to produce new, adaptive,
emergent skills. For example,
given the integration of the above three prior research
projects, the integrated system might express surprise
towards a caregiver when an object permanence situation
was violated.
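As a minimal sketch of what such an integration layer might look like, the following Python code soft-assembles three component skills into a composite behavior. All class and method names here are hypothetical stand-ins invented for illustration; they do not come from the cited systems, and the component internals are reduced to trivial stubs so that only the integration logic is shown.

```python
# Hypothetical sketch of a soft-assembly integration layer.
# The component classes are illustrative stand-ins for the cited
# systems; none of these names appear in the original papers.

class ObjectPermanenceTracker:
    """Stand-in for perceptual object permanence (cf. Cheng & Weng, 2004)."""
    def update(self, percept):
        # Report whether the current percept violates object permanence
        # (stubbed: real systems would track occluded objects over time).
        return percept.get("permanence_violated", False)

class JointAttention:
    """Stand-in for joint attention (cf. Nagai et al., 2003)."""
    def update(self, percept):
        # Report whether attention is currently shared with a caregiver.
        return percept.get("caregiver_attending", False)

class SocialExpression:
    """Stand-in for social expression skills (cf. Breazeal & Scassellati, 2000)."""
    def express(self, emotion):
        return f"expressing {emotion} toward caregiver"

class IntegrationLayer:
    """Soft-assembles the component skills: their joint activity in a
    given context can yield a behavior none of them produces alone."""
    def __init__(self):
        self.permanence = ObjectPermanenceTracker()
        self.attention = JointAttention()
        self.social = SocialExpression()

    def step(self, percept):
        violated = self.permanence.update(percept)
        shared = self.attention.update(percept)
        # Emergent composite behavior: surprise directed at the
        # caregiver when an object-permanence violation occurs
        # during an episode of shared attention.
        if violated and shared:
            return self.social.express("surprise")
        return None

robot = IntegrationLayer()
robot.step({"permanence_violated": True, "caregiver_attending": True})
```

The design point is that the "surprise toward the caregiver" behavior lives in no single component; it emerges only from their combination in a particular environmental context.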
An approach that might be useful for this integration
layer was given by Cheng, Nagakubo, and Kuniyoshi
(2001). These authors proposed an integration
mechanism to combine components in a humanoid
robotic system, integrating the results of various
component mechanisms, which themselves show
adaptation over time. Combining components involves
weighting the components for their relative contributions,
and such contributions may vary according to factors
such as learning and context. The authors use a sensory-
cue competition approach to integration and to generating
motor outputs. They define the motor output of a robot as
the vector Ui(t), expressed by equation [1].
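Equation [1] itself is not reproduced in this excerpt, so the following is only a rough illustration of the weighted-combination idea described above, not the authors' actual formulation. It sketches the motor output as a normalized, context-dependent weighted sum of component motor suggestions, where the weights stand in for the learned, context-varying contributions:

```python
import numpy as np

def motor_output(component_outputs, weights):
    """Context-weighted combination of component motor suggestions.

    component_outputs: one motor vector u_j(t) per component
    weights: relative contributions w_j(t), assumed to be adapted
             by learning and context

    Illustrative weighted sum only -- not the actual equation [1]
    of Cheng, Nagakubo, and Kuniyoshi (2001).
    """
    u = np.asarray(component_outputs, dtype=float)  # shape (J, dims)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # normalize relative contributions
    return w @ u               # U(t) = sum_j w_j(t) * u_j(t)

# Two components suggest different 2-D motor vectors; context
# weights favor the first component 3-to-1.
U = motor_output([[1.0, 0.0], [0.0, 1.0]], [3.0, 1.0])
```

Under this sketch, varying the weights over time (e.g., as learning reweights a component's reliability in the current context) shifts the blended motor output without changing any individual component.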