Motivations, Values and Emotions: Three Sides of the Same Coin
Stan Franklin and Uma Ramamurthy
Computer Science Department & Institute for Intelligent Systems
The University of Memphis, Memphis, TN 38152, USA.
[email protected] and [email protected]
Abstract
This position paper speaks to the interrelationships among the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily in the point of view from which they are considered. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
1. Introduction
Motivations, values and emotions have been studied by
philosophers, psychologists and neuroscientists for
decades (Busemeyer, Medin and Hastie 1995;
Dalgleish and Power 1999; Reiss 2001; Bower 1974;
Russell 2003; Aharon et al 2001; Silverta et al 2004;
Rolls 1999; Izard 1993; Davidson et al 2004; McGaugh
2003 and countless others). More recently, roboticists
and artificial intelligence researchers have taken up
these subjects (Sloman 1987; Wright, Sloman and
Beaudoin 1996; McCauley and Franklin 1998;
Antunes and Coelho 1999; McCauley, Franklin, and
Bogner 2000; Marsella and Gratch 2002; Langley et al
2003; Franklin and McCauley 2004; Avila-Garcia and
Canamero 2005; Shanahan 2005).
Every autonomous agent (Franklin & Graesser 1997),
be it a human, some other animal, a software agent or
an autonomous robot, must come equipped with built-
in primitive motivations; otherwise, it would have no basis for deciding what to do next. Evolution sees to these
primitive motivations in biological agents; their
designers build them into artificial agents, including
epigenetic robots. Each such agent “lives its life” via a
continual iteration of sense-process-act cycles during
which it samples its environment, decides how best to
respond to the current situation, and acts in response
(Franklin 1995, the action selection paradigm).
Motivations play a primary role in these action
selection processes. Just as goals may have sub-goals in
their service, primitive motivations may have sub-
motivations in theirs. Each motivation in any agent
must be in the service of one or more primitive
motivations.
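To make the cycle concrete, the following minimal Python sketch shows an agent selecting, on each cycle, the afforded action that best serves its built-in primitive motivations. It is our illustration only; the motivations, affordances, and all names and numbers are hypothetical, not drawn from the IDA/LIDA code.

```python
# Sketch of the sense-process-act cycle, with action selection
# serving built-in primitive motivations. All names and numbers
# here are illustrative assumptions.

motivations = {"hunger": 0.8, "safety": 0.3}  # primitive, built in

def sense(environment):
    # Sample the environment: the actions it currently affords.
    return environment["affordances"]

def choose(affordances, motivations):
    # Decide how best to respond: pick the action that best serves
    # the agent's motivations, weighted by their current urgency.
    def score(action):
        return sum(motivations[m] * action["serves"].get(m, 0.0)
                   for m in motivations)
    return max(affordances, key=score)

def act(action, environment):
    environment["log"].append(action["name"])

environment = {
    "log": [],
    "affordances": [
        {"name": "eat",  "serves": {"hunger": 1.0}},
        {"name": "hide", "serves": {"safety": 1.0}},
    ],
}

# One iteration of the continual sense-process-act cycle.
act(choose(sense(environment), motivations), environment)
print(environment["log"])  # -> ['eat']
```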
One primitive motivation for bacteria is to find
nutrients. This motivation is implemented causally
(mechanically) by chemotaxis, the ability to follow a positive nutrient gradient (Alon et al 1999). Increasing concentrations of a particular nutrient molecule at its sensory receptors cause the bacterium to tumble less
and to swim forward more in the direction of the
increasing gradient. This is an example of positive
tropism, the involuntary response of an organism,
orienting it toward an external stimulus. It can be
viewed as a causal implementation of motivation.
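As a toy illustration of such a causal implementation (a run-and-tumble random walker with made-up parameters, not the biochemical model of Alon et al 1999), the walker below tumbles rarely while the sensed concentration is rising and often while it is falling, and so drifts up the gradient.

```python
# Sketch of chemotaxis as a causal implementation of motivation:
# a run-and-tumble walker that tumbles less when the nutrient
# concentration is rising. Field and parameters are illustrative.

import math
import random

def concentration(x, y):
    # Hypothetical nutrient field with a single source at the origin.
    return math.exp(-(x * x + y * y) / 100.0)

def chemotaxis(steps=1000):
    x = y = 8.0                          # start away from the source
    heading = random.uniform(0, 2 * math.pi)
    last_c = concentration(x, y)
    for _ in range(steps):
        c = concentration(x, y)
        # Rising concentration -> tumble rarely (keep swimming);
        # falling concentration -> tumble often (reorient at random).
        p_tumble = 0.1 if c > last_c else 0.8
        if random.random() < p_tumble:
            heading = random.uniform(0, 2 * math.pi)
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
        last_c = c
    return x, y  # ends near the source far more often than chance

print(chemotaxis())
```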
A second way of implementing motivation is by
explicitly including primary motivations in the form of
drives in the action selection mechanism itself (Maes
1989). The computational IDA described below was
designed in this manner (Negatu and Franklin 2002).
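The following sketch, in the spirit of Maes-style behavior networks (the drive names, behaviors, and numbers are illustrative assumptions, not IDA's actual implementation), shows drives injecting activation into the behaviors that serve them, with the most activated executable behavior selected.

```python
# Sketch of drives built explicitly into the action selection
# mechanism, in the spirit of Maes (1989). Names and numbers are
# illustrative, not taken from IDA's implementation.

drives = {"find_food": 0.9, "avoid_danger": 0.4}

behaviors = [
    {"name": "forage", "serves": "find_food",
     "executable": True,  "activation": 0.0},
    {"name": "eat",    "serves": "find_food",
     "executable": False, "activation": 0.0},
    {"name": "flee",   "serves": "avoid_danger",
     "executable": True,  "activation": 0.0},
]

def select_behavior(drives, behaviors):
    for b in behaviors:
        # Each drive spreads activation to behaviors in its service.
        b["activation"] += drives.get(b["serves"], 0.0)
    runnable = [b for b in behaviors if b["executable"]]
    return max(runnable, key=lambda b: b["activation"])

print(select_behavior(drives, behaviors)["name"])  # -> "forage"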
Yet another way of implementing motivations is with
values. A value reflects an agent’s general preference
for a situation or an action, independent of its current
beliefs or goals, that is, independent of the current
environmental situation and of the agent’s current
intentions (Antunes and Coelho 1999). Values are
often combined into a utility function that is used by
the agent to evaluate options (Wahde 2003). Such
evaluation of options, often referred to as reasoning or
rational agency, can be effected by deliberation, a kind
of internal virtual reality (Sloman 1999; Franklin
2000a). Such deliberation must consider the current
environmental situation and the agent’s current
intentions, as well as its values. The object, of course,
is to select an appropriate action.
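A minimal sketch of this idea follows, assuming hypothetical value weights and option features: the agent's values are combined into a utility function that scores each option, and the highest-scoring option is selected. Full deliberation would, in addition, weigh the current situation and the agent's intentions.

```python
# Sketch of values combined into a utility function used to
# evaluate options (cf. Antunes and Coelho 1999; Wahde 2003).
# The features and weights are illustrative assumptions.

# Values: general preferences, independent of the current
# situation, expressed as weights over predicted features.
values = {"safety": 0.7, "energy_gain": 0.5, "novelty": 0.1}

def utility(option, values):
    # Weighted sum of the features an option is predicted to yield.
    return sum(values[f] * option["features"].get(f, 0.0)
               for f in values)

options = [
    {"name": "explore", "features": {"novelty": 1.0, "safety": -0.3}},
    {"name": "graze",   "features": {"energy_gain": 0.8, "safety": 0.2}},
]

best = max(options, key=lambda o: utility(o, values))
print(best["name"])  # -> "graze"
```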
Feelings in humans include hunger, thirst, various sorts of pain, sensations of heat or cold, the urge to urinate, tiredness, depression, etc. Damasio (1999) views feelings as somatic markers; one feels feelings in the body. Implemented biologically as somatic markers, feelings typically attach to response options and so bias the agent's choice of action. We'll see below how this
occurs in the LIDA model.
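As a preview, the following sketch (its structure and numbers are our illustrative assumptions, not the LIDA mechanism itself) shows feelings as somatic markers attached to response options, whose valence biases the final choice.

```python
# Sketch of feelings as somatic markers (Damasio 1999): each
# response option carries an attached feeling whose valence
# shifts its score, biasing the agent's choice of action.

options = [
    {"name": "approach", "base_score": 0.6, "feeling": ("fear",   -0.5)},
    {"name": "wait",     "base_score": 0.4, "feeling": ("relief", +0.2)},
]

def biased_score(option):
    # The somatic marker attached to an option biases its score.
    _, valence = option["feeling"]
    return option["base_score"] + valence

print(max(options, key=biased_score)["name"])  # -> "wait"
```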