or to compare and copy them. Philosophers, psychologists and neurologists have yet to agree on how humans store their experience: as pictures in the head, attribute-value pairs, causal indexes, or abstract
symbols (Peirce, 1897). So, unfortunately, we cannot copy representations directly from ourselves and be sure they are the same. We can only make inferences from the usage of a concept: for example, if you run from a tiger, we can infer that its fangs, teeth and ferocity are bound up in your concept of it.
When we use language, we are learning to "classify the world in a shared and modifiable way" (Harnad, 1996). This suggests that the classification used for language is not necessarily the same as our personal and subjective categorisation. This can be
biologically determined. The term 'Umwelt' is used to describe the species-specific objective world of a creature (Deely, 2001), or, in our case, of a machine.
The physical needs of different animals cause them
to concentrate on different parts of their environ-
ment through different sensory capabilities. To establish shared meaning, those involved need a sufficient overlap of Umwelt so that they can discuss shared experiences. For example, humans and dogs are able to communicate about items that are part of both species' Umwelten, such as squeezy toys and walks, but you may find it more difficult to discuss your investment portfolio, or the subtleties of urine scent discrimination. So it is perhaps acceptable to begin
by speaking of things that machines find easy to notice, such as brightly coloured objects that are easily segmented (which invariably happens in robotic experimentation), just as the first words taught to apes are almost always things like 'banana'.
Even the concepts of two different humans with
fully functioning sensory and cognitive apparatus
can be entirely different depending on their experi-
ence. Someone who has seen a platypus lay eggs will
quickly update the ’bears live young’ part of their
concept of what a mammal is. If their conversation partner is unaware of this phenomenon, the two can still talk at length about mammals as long as it is never necessary to make this distinction. It may be said that they do not truly understand one another, as they are not technically talking about exactly the same thing, but can it be said that the second person understands nothing the first is saying? They know enough to carry out a meaningful exchange.
So it is possible that we may have machines whose concepts are incomplete, but which can still converse with us meaningfully. As children we create
concepts coarsely until proven otherwise. All fluffy
things are first ’dog’, then ’dog’ and ’kitty’, and as
we experience more, we create more concepts and
discriminations. Our ability to continuously modify
and update our concepts as we have new experiences
makes us an open-ended solution for meaning. Al-
though the concepts are not immediately correct in
the shared use of them (a tiger is not a kitty), it
does not mean that these first utterances are with-
out meaning.
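The coarse-to-fine refinement described above can be sketched computationally. The following is a minimal, hypothetical illustration (not a system from the cited literature): percepts are feature vectors, each concept is a running-average prototype, and a new concept is only split off when a percept is too dissimilar from every existing prototype, so "dog" initially absorbs cats until a sufficiently different percept forces a new discrimination.

```python
# Illustrative sketch: coarse-to-fine concept formation as
# prototype-based incremental categorisation. All names and the
# distance threshold are assumptions for the example.
import math

class ConceptLearner:
    def __init__(self, threshold):
        self.threshold = threshold
        self.prototypes = []  # list of (label, prototype vector, count)

    def categorise(self, label_hint, features):
        """Assign `features` to the nearest concept, or create a new one."""
        if self.prototypes:
            best = min(self.prototypes,
                       key=lambda p: math.dist(p[1], features))
            if math.dist(best[1], features) <= self.threshold:
                # Assimilate: nudge the prototype towards the new percept.
                label, proto, n = best
                updated = [(x * n + y) / (n + 1)
                           for x, y in zip(proto, features)]
                self.prototypes.remove(best)
                self.prototypes.append((label, updated, n + 1))
                return label
        # Too dissimilar to everything known: a new discrimination.
        self.prototypes.append((label_hint, list(features), 1))
        return label_hint

learner = ConceptLearner(threshold=1.0)
print(learner.categorise("dog", [1.0, 1.0]))    # first fluffy thing -> dog
print(learner.categorise("kitty", [1.2, 0.9]))  # close to "dog" -> still dog
print(learner.categorise("kitty", [5.0, 5.0]))  # dissimilar -> new concept kitty
```

The point of the sketch is only that early mislabellings (a cat called "dog") are not meaningless: the learner is still making a systematic, revisable classification that later experience refines.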
4. Sharing Attention
Another perspective is that the personal percep-
tive and conceptual experience does not need to be
shared, as long as you can establish joint atten-
tion. If we can be sure that we are both talking
about the same thing, it is not important exactly
how the distinctions are being made. Most autistic
children are unable to establish shared visual atten-
tion and this seriously impedes their verbal and non-
verbal communication (Baron-Cohen, 1995). This
can lead to production competence without obvious
meaning. For example, in answer to the question
’How are you?’ they might answer ’How are you?’
(Lovaas, 1977).
Pepperberg (1999) found her parrots were unable
to determine what the word was about if the two
humans involved in the model-rival technique (ex-
plained below) did not share visual attention on the
object. Similarly, in Steels and Kaplan's (2001) robotic experiments, communication failure occurred most often because the object of discussion was simply not in the robot's line of sight. When the
input vision was limited to examples that humans
were able to categorise, the success rate improved
dramatically.
We need to be sure the correct association is being made between symbol and sensation, and to draw the attention of the machine to the object of interest. 'Normal' children mainly use innate or early-learned gaze-following abilities, while teachers of blind children physically guide their hands. It therefore seems that mechanisms are required to steer the sensing of the robot, and gaze following has been developed in robotic systems (Kozima and Ito, 1997; Breazeal, 2002). But the difficulty autistic children have with sharing attention lies not in determining what someone is looking at, but in using another's gaze to infer the object of their interest. For example, although they can correctly answer a question such as "What is Peter looking at, the blue ball or the red ball?", they answer at random questions such as "What does Peter want, the blue ball or the red ball?" (Lovaas, 1977). So in addition, the
ability to predict the beliefs and desires of others, or
a ’theory of mind’, seems necessary.
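The geometric half of this ability, determining what someone is looking at, is mechanisable. The sketch below is a hypothetical illustration, not the method of the cited systems: given a head position and gaze direction, the attended object is taken to be the one whose centre lies nearest the gaze ray.

```python
# Hypothetical sketch of gaze-target selection; scene layout,
# coordinates and function names are invented for illustration.
import math

def point_to_ray_distance(origin, direction, point):
    """Distance from `point` to the ray origin + t*direction, t >= 0."""
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    rel = [p - o for p, o in zip(point, origin)]
    t = max(0.0, sum(r * u for r, u in zip(rel, unit)))  # clamp behind head
    closest = [o + t * u for o, u in zip(origin, unit)]
    return math.dist(point, closest)

def attended_object(head, gaze, objects):
    """Return the name of the object whose centre lies nearest the gaze ray."""
    return min(objects,
               key=lambda name: point_to_ray_distance(head, gaze, objects[name]))

scene = {"blue ball": [2.0, 0.1, 0.0], "red ball": [2.0, 1.5, 0.0]}
print(attended_object(head=[0.0, 0.0, 0.0], gaze=[1.0, 0.0, 0.0],
                      objects=scene))  # -> blue ball
```

Note that this only answers "What is Peter looking at?"; it says nothing about what Peter wants, which is exactly the inferential step, the theory of mind, that the text argues must be added on top.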
5. Sharing a Mind
Severely autistic children seem unable to view other
humans as intentional beings like themselves, with
beliefs and states of mind (Baron-Cohen, 1995). But,