Developmental Robots - A New Paradigm
Juyang Weng, Yilu Zhang
Department of Computer Science and Engineering
Michigan State University
East Lansing, MI 48824
{weng, zhangyil}@cse.msu.edu
Abstract
It has proved extremely challenging for humans to program a robot thoroughly enough that it acts properly in a typical, unknown human environment. This is especially true for a humanoid robot, due to the very large number of redundant degrees of freedom and the large number of sensors required for a humanoid to work safely and effectively in the human environment. How can we address this fundamental problem? Motivated by human mental development from infancy to adulthood, we present a theory, an architecture, and experimental results showing how to enable a robot to develop its mind automatically, through online, real-time interactions with its environment. Humans mentally “raise” the robot through “robot sitting” and “robot schools” instead of task-specific robot programming.
1. Introduction
The conventional mode of development for a robot is not automatic: a human designer is in the loop. A typical process goes like this: given a robotic task, the human designer analyzes and understands the task. Based on his understanding, he comes up with a representation, chooses a computational method, and writes a program that implements his method for the robot. The representation thus largely reflects the human designer’s understanding of the robot task. Some machine learning might be used during this process, in which some parameters are adjusted according to the collected data; however, these parameters are defined by the human designer’s representation for the given task. The resulting program is for this task only, not for any other task. If the robotic task is complex, the capability to handle environmental variation is severely limited by the human-designed, task-specific representation. This manual development paradigm has met tremendous difficulties for tasks that require complex cognitive and behavioral capabilities, such as the many sensing and behavioral skills a humanoid must have in order to execute high-level human commands, including autonomous navigation, object manipulation, object delivery, target finding, and human-robot interaction through gestures in unknown environments. The many degrees of freedom, the redundant manipulators, and the large number of effectors that a humanoid has, plus the multimodal sensing capabilities required to work with humans, further increase these difficulties. The complex and changing nature of the human environment has made the issue of autonomous mental development of robots, the way the human mind develops, more important than ever.
Many robotics researchers may believe that the human brain has an innate representation for the tasks that humans generally perform. However, recent studies of brain plasticity have shown that our brain is not as task-specific as commonly believed. Neuroscience offers rich studies of brain plasticity, from varying the extent of sensory input, redirecting input, and transplanting cortex to lesion studies and sensitive periods. Input redirection is especially illuminating in revealing how task-specific our brain really is. For example, Mriganka Sur and his coworkers rewired visual input to the auditory cortex of ferrets early in life. The target tissue in the auditory cortex, which normally takes on auditory representation, was found to take on visual representation instead (Sur et al., 1999). Furthermore, they successfully trained the animals to perform visual tasks using the rewired auditory cortex (von Melchner et al., 2000). Why are the self-organization schemes that guide development in our brain so general that they can deal with either speech or vision, depending on the input received during development? Why do robots programmed with human-designed, task-specific representations not do well in complex, changing, partially unknown, or totally unknown environments? What self-organization schemes can robots use to autonomously develop their mental skills through interactions with the environment? Is it more advantageous to enable robots to autonomously develop their mental skills than to program robots using human-specified, task-specific representations?