
In Berthouze, L., Kozima, H., Prince, C. G., Sandini, G., Stojanov, G., Metta, G., and Balkenius, C. (Eds.)
Proceedings of the Fourth International Workshop on Epigenetic Robotics

Lund University Cognitive Studies, 117, ISBN 91-974741-3-4

Sharing Meaning with Machines

Claire D’Este

School of Computer Science and Engineering
University of New South Wales
Sydney, NSW 2052 Australia
[email protected]

Abstract

Communication can be described as the act
of sharing meaning, so if humans and ma-
chines are to communicate, how is this to
be achieved between such different creatures?
This paper examines what else the communi-
cators need to share in order to share mean-
ing, including perception, categorisation, at-
tention, sociability and consciousness. It com-
pares and takes inspiration from communication
with others who have different perception and
categorisation, including the deaf-blind, the
autistic and animals.

1. Introduction

As machines become larger parts of our everyday
lives and begin to share our environment with us,
it has become increasingly desirable to be able to
communicate with them in a natural, open-ended
way. So if communication is the act of sharing mean-
ing (Tubbs and Moss, 1994), then how can this be
achieved between creatures as different as humans
and machines?

There is little doubt that a machine can pro-
cess sensory information, and use some mathematical
technique to sort and generalise this into higher-level
categories. Steels and Kaplan (2002) have demonstrated
successful bootstrapping of shared meaning
between robotic agents. However, the agents created
their own language, grounded in their specific and
identical perceptual and categorisation systems. We
are still left with the issue of how to make machines
learn our language when they are made of different stuff.
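As one illustration of the kind of mathematical technique this could involve (the choice of k-means clustering and the colour/size feature layout below are assumptions for the sketch, not taken from the cited work), a machine might group raw sensory feature vectors into a small number of higher-level categories:

    # Minimal sketch: cluster sensory feature vectors into candidate categories.
    # The three features per observation (hue, saturation, size) are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans

    # Pretend sensor readings: each row is one observed object.
    observations = np.array([
        [0.95, 0.80, 0.30],   # small red-ish object
        [0.90, 0.75, 0.35],
        [0.10, 0.85, 0.90],   # large blue-ish object
        [0.12, 0.80, 0.95],
        [0.50, 0.20, 0.60],   # dull green-ish object
        [0.55, 0.25, 0.55],
    ])

    # Sort the readings into three higher-level categories.
    categories = KMeans(n_clusters=3, n_init=10, random_state=0).fit(observations)
    print(categories.labels_)                      # e.g. [0 0 1 1 2 2]

    # A new sensation is assigned to the nearest learned category.
    print(categories.predict([[0.93, 0.78, 0.32]]))

The point of the sketch is only that sorting and generalising sensory data is the easy part; the clusters it produces are private to the machine and carry no shared meaning by themselves.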

This problem is not limited to humans and ma-
chines. We also want to communicate with humans
with extreme sensory or mental impairment, and
with animals, despite dissimilar sensory and nervous
systems. This paper will examine how shared mean-
ing is established with these very different conver-
sation partners, and how it highlights the necessary
and sufficient abilities for a communicating machine.

2. Sharing Perception

To give us something to talk about, we need to share
an experience. The focus for machines has been on
the visual experience, as sight appears to be the pri-
mary sense of humans. However, many animals pro-
vide evidence that sight is not necessary for concep-
tualisation, and sensory-impaired children, although
they require a more active and complex teaching pro-
cess, are eventually able to communicate fluently us-
ing the same language.

Teachers of deaf-blind children begin with a
limited set of words and do not give the child
more until they learn them in various contexts
(Witt, 2004). Machines are also usually taught
in a restricted world with a restricted vocabulary
(Steels and Kaplan, 2001). Deaf-blind children can
be taught to speak by reading the vibrations on peo-
ple’s lips and feeling their own vibrations as they
speak. But small children are taught gesticulation
and finger spelling first to learn the meaning of the
word before they are permitted to learn how it is
spoken. Without previous knowledge of the word
the child may have to repeat the utterance up to
ten thousand times before understanding it. Deaf-blind
children have the disadvantage of not being
"bathed in words" from the moment of their birth
(Colligan, 1962). This makes words far more distant
from their meanings than gestures are, because gestures
resemble the actions the children perform on objects
every day, or the shapes of the objects themselves. The
gestures are then easily mapped to spoken words.
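A minimal sketch of this restricted-vocabulary strategy (hypothetical, not from the cited work; the word and category names and the evidence margin are invented) is to pair each heard word with everything perceived in that context, and only treat the word as learned once evidence across several contexts singles out one category:

    # Minimal sketch of cross-situational learning of a small vocabulary.
    from collections import defaultdict

    co_occurrence = defaultdict(lambda: defaultdict(int))

    def observe(word, perceived_categories):
        # Record that `word` was heard while these categories were perceived.
        for category in perceived_categories:
            co_occurrence[word][category] += 1

    def meaning_of(word, margin=2):
        # Return the best-supported category, or None while still ambiguous.
        votes = co_occurrence[word]
        if not votes:
            return None
        ranked = sorted(votes.items(), key=lambda item: item[1], reverse=True)
        if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
            return ranked[0][0]
        return None

    # The same word heard in several different contexts.
    observe("ball", {"ball", "table"})
    observe("ball", {"ball", "cup"})
    observe("ball", {"ball", "hand"})
    print(meaning_of("ball"))   # -> 'ball' once the evidence converges

Like the teacher who withholds new words, such a learner would only be given further vocabulary once the existing words have been disambiguated across contexts.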

If touch alone can ground language, it is unfortu-
nate that the technology is still limited and expen-
sive, as it would be interesting to clothe a robot in an
artificial skin and guide its sensing. Vision alone
is perhaps too passive.

3. Sharing Concepts

Once the body has sensed things in its environment,
the next step is to categorise them into concepts.
Meanings can be defined as being concepts alone
(Horwich, 2003). Unfortunately, it is not possible to
directly observe the conceptual systems of humans,




