in Berthouze, L., Kozima, H., Prince, C. G., Sandini, G., Stojanov, G., Metta, G., and Balkenius, C. (Eds.)
Proceedings of the Fourth International Workshop on Epigenetic Robotics
Lund University Cognitive Studies, 117, ISBN 91-974741-3-4
Sharing Meaning with Machines
Claire D’Este
School of Computer Science and Engineering
University of New South Wales
Sydney, NSW 2052 Australia
[email protected]
Abstract
Communication can be described as the act
of sharing meaning, so if humans and ma-
chines are to communicate, how is this to
be achieved between such different creatures?
This paper examines what else the communi-
cators need to share in order to share mean-
ing, including perception, categorisation, at-
tention, sociability and consciousness. It com-
pares and takes inspiration from communi-
cations with others with different perception
and categorisation, including the deaf-blind,
the autistic and animals.
1. Introduction
As machines become larger parts of our everyday
lives and begin to share our environment with us,
it has become increasingly desirable to be able to
communicate with them in a natural, open-ended
way. So if communication is the act of sharing mean-
ing (Tubbs and Moss, 1994), then how can this be
achieved between creatures as different as humans
and machines?
There is little doubt that a machine can pro-
cess sensory information, and use some mathematical
technique to sort and generalise this into higher-level
categories. Steels and Kaplan (2002) have demonstrated
successful bootstrapping of shared meaning
between robotic agents. However, the agents created
their own language grounded in their specific and
identical perceptual and categorisation systems. We
are still left with the question of how to make machines
learn our language when they are made of such different stuff.
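As an illustrative aside, one mathematical technique of the kind mentioned above is clustering, which sorts raw sensory vectors into higher-level categories by proximity. The paper does not specify an algorithm, so the following is only a minimal k-means sketch in Python, with made-up two-dimensional "sensor readings":

```python
def kmeans(points, k, iterations=20):
    """Group sensory vectors into k categories by nearest-centroid assignment."""
    # Deterministic initialisation: use the first k points as starting centroids.
    centroids = [points[i] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assign each point to its closest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            distances = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Move each centroid to the mean of its assigned points.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids, clusters

# Two loose "categories" of simulated sensor readings.
readings = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25),
            (0.9, 0.8), (0.85, 0.9), (0.95, 0.85)]
centroids, clusters = kmeans(readings, k=2)
```

The point of the sketch is only that an unsupervised grouping of this kind gives the machine categories of its own; nothing guarantees that they line up with the categories a human partner uses, which is exactly the problem raised above.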
This problem is not limited to humans and ma-
chines. We also want to communicate with humans
with extreme sensory or mental impairment, and
with animals, despite dissimilar sensory and nervous
systems. This paper will examine how shared mean-
ing is established with these very different conver-
sation partners, and how it highlights the necessary
and sufficient abilities for a communicating machine.
2. Sharing Perception
To give us something to talk about, we need to share
an experience. The focus for machines has been on
the visual experience, as sight appears to be the pri-
mary sense of humans. However, many animals pro-
vide evidence that sight is not necessary for concep-
tualisation, and sensory-impaired children, although
they require a more active and complex teaching pro-
cess, are eventually able to communicate fluently us-
ing the same language.
Teachers of deaf-blind children begin with a
limited set of words and do not give the child
more until the child has learned them in various contexts
(Witt, 2004). Machines are also usually taught
in a restricted world with a restricted vocabulary
(Steels and Kaplan, 2001). Deaf-blind children can
be taught to speak by reading the vibrations on peo-
ple’s lips and feeling their own vibrations as they
speak. But small children are taught gesticulation
and finger spelling first to learn the meaning of the
word before they are permitted to learn how it is
spoken. Without previous knowledge of the word,
the child may have to repeat the utterance up to
ten thousand times before understanding it. Deaf-blind
children have the disadvantage of not being
“bathed in words” from the moment of their birth
(Colligan, 1962). This makes spoken words far more
distant from their meanings than gestures, which are
close to the actions children perform on objects every
day, or to shapes that resemble the objects themselves.
The gestures are then easily mapped to spoken words.
If touch alone can ground language, it is unfortunate
that the technology remains limited and expensive,
as it would be interesting to clothe a robot in an
artificial skin and guide its sensing. Vision alone
is perhaps too passive.
3. Sharing Concepts
Once the body has sensed things in its environment,
the next step is to categorise them into concepts.
Meanings can be defined as concepts alone
(Horwich, 2003). Unfortunately, it is not possible to
directly observe the conceptual systems of humans,