4. Related Work
The LIDA architecture differs significantly from other
cognitive architectures such as SOAR (Laird et al.
1987) and ACT-R (Anderson & Lebiere 1998) in that it
is not a unified theory of cognition in the sense of
Newell (1990). Rather, its various modules are
implemented by a variety of different mechanisms
including the Copycat architecture, Sparse Distributed
Memory, Pandemonium Theory and Behavior Nets
(Franklin 2001b). Though the LIDA architecture
contains no production rules and no neural networks, it
does incorporate both symbolic and connectionist
elements. The LIDA architecture allows feelings and
emotions to play a central role in perception, memory,
“consciousness” and action selection. LIDA’s
“consciousness” mechanism, based on Global
Workspace Theory, resembles a blackboard system
(Nii 1986), but the LIDA architecture involves much
more, particularly the interaction of its
various modules. Much of this interaction is described
in the cognitive cycle detailed above.
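To make the blackboard analogy concrete, the following minimal Python sketch illustrates the competition-and-broadcast pattern that Global Workspace Theory shares with blackboard systems. The names (Coalition, GlobalWorkspace) and the winner-take-all rule are our own illustrative assumptions, not the LIDA implementation.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Coalition:
    content: str       # information the coalition carries
    activation: float  # strength with which it competes for the spotlight

class GlobalWorkspace:
    def __init__(self):
        self.modules: List[Callable[[str], None]] = []

    def register(self, module: Callable[[str], None]) -> None:
        self.modules.append(module)

    def cycle(self, coalitions: List[Coalition]) -> None:
        # The most active coalition wins the competition for "consciousness" ...
        winner = max(coalitions, key=lambda c: c.activation)
        # ... and its content is broadcast to every registered module,
        # much as a blackboard makes a hypothesis visible to all knowledge sources.
        for module in self.modules:
            module(winner.content)

workspace = GlobalWorkspace()
workspace.register(lambda content: print("action selection receives:", content))
workspace.register(lambda content: print("episodic memory receives:", content))
workspace.cycle([Coalition("novel object in view", 0.9),
                 Coalition("routine status update", 0.2)])

On each cycle the winning content reaches all modules at once; it is this global broadcast, rather than any single module, that the blackboard comparison captures.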
The LIDA architecture can be viewed as a specification
of the more general CogAff architecture of Sloman
(Wright et al. 1996). It has reactive and deliberative
mechanisms but, as yet, no meta-management. There is
a superficial resemblance between the computational
IDA and the ACRES system (Moffat et al. 1993) in that
both interact with users in natural language. LIDA and
ACRES are also alike in using emotions to implement
motivations. Rather than viewing emotions as
implementing motivations for the selection of actions
on the external environment, Marsella and Gratch
(2002) study their role in internal coping behavior. From our
point of view this is a case of emotions implementing
motivation for internal actions as also occurs in the
LIDA conceptual model. The ICARUS system also
resembles a portion of the LIDA conceptual model in
that it uses affect in the process of reinforcement
learning (Langley et al. 1991).
5. Conclusions
IDA is an exceedingly complex software agent,
generated from on the order of a hundred thousand
lines of code. Thus, from the usefulness of artificial
feelings and emotions in the LIDA architecture, one
should not jump to the conclusion that they would play
useful roles in a more typical software agent or robotic
control structure that is orders of magnitude simpler.
Besides, feelings and emotions are, as yet, only part of
the LIDA conceptual model, and have not been
implemented. Significant difficulties could conceivably
occur during implementation. That artificial feelings
and emotions seem to play significantly useful roles in
the conceptual version of LIDA's cognitive cycles is
not a conclusive argument that they will do so in
simpler, implemented artificial autonomous agents.
Still, the LIDA model suggests that software agents and
robots can be designed to use feelings/emotions to
implement motivations, offering a range of flexible,
adaptive possibilities not available to the usual, more
tightly structured motivational schemes such as causal
implementation, or explicit drives and/or
desires/intentions.
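The contrast can be made concrete with a minimal sketch, again under our own simplifying assumptions; the behavior names and numeric priorities below are purely illustrative and are not drawn from the IDA/LIDA code.

# Explicit drives: a tightly structured, fixed mapping from behavior to priority.
explicit_drive_priority = {"seek_food": 0.8, "explore": 0.3, "rest": 0.1}

# Feeling-based motivation: a learned valence (positive or negative feeling
# attached to each behavior's expected outcome) that can change with experience.
valence = {"seek_food": 0.2, "explore": 0.8, "rest": -0.1}

def select(base, affect=None):
    # With no affect, priorities are the fixed explicit drives; with affect,
    # each behavior's priority is shifted by its current valence.
    scores = base if affect is None else {b: base[b] + affect.get(b, 0.0) for b in base}
    return max(scores, key=scores.get)

print(select(explicit_drive_priority))           # drive-only scheme: "seek_food"
print(select(explicit_drive_priority, valence))  # valence shifts choice: "explore"

With drives alone the agent's choice is fixed in advance; once valences enter the calculation, the same agent's choice can change as its feelings about expected outcomes change, which is the flexibility the paragraph above refers to.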
So, what can we conclude? Note that the computational
IDA performs quite well with explicitly implemented
drives rather than with feelings and emotions. It is
possible that a still more complex artificial autonomous
agent with a task requiring more sophisticated decision
making would require them, but we doubt it. Explicit
drives seem likely to suffice for quite flexible action
selection in artificial agents, but not in modeling
biological agents. It appears that feelings and emotions
come into their own in agent architectures requiring
sophisticated learning. This case study of the LIDA
architecture seems to suggest that artificial feelings and
emotions can be expected to be of most use in software
agents or robots in which online learning of objects,
categories, relations, events, facts and/or skills is of
prime importance. If this requirement were present, it
would make sense to also implement primary
motivations by artificial feelings and emotions.
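As a purely illustrative sketch of what such affect-modulated online learning might look like (the update rule and names below are our assumptions, not part of the LIDA model), the arousal of the feeling accompanying an event can scale the learning rate, so that emotionally significant episodes are learned more strongly:

BASE_RATE = 0.05

def learn(weight, error, arousal):
    # Higher arousal amplifies the effective learning rate for this event,
    # so emotionally charged experiences produce larger updates.
    return weight + BASE_RATE * (1.0 + arousal) * error

w = 0.0
w = learn(w, error=1.0, arousal=0.0)  # neutral event: update of 0.05
w = learn(w, error=1.0, arousal=3.0)  # emotionally charged event: update of 0.20
print(w)                              # 0.25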
Acknowledgements
The authors are deeply indebted to the members of the
‘Conscious’ Software Research Group at the University
of Memphis for their major contributions to this
research, and to Mike Wintner for reading and
commenting on an early draft.
References
Ahn, H., and R. W. Picard. 2006. Affective Cognitive
Learning and Decision Making: The Role of Emotions.
The 18th European Meeting on Cybernetics and
Systems Research (EMCSR 2006), April 18-19, 2006,
Vienna, Austria.
Alon, U., M. G. Surette, N. Barkai, and S. Leibler.
1999. Robustness in bacterial chemotaxis. Nature
397:168-171.
Anderson, J. R., and C. Lebiere. 1998. The atomic
components of thought. Mahwah, NJ: Erlbaum.
Anwar, A., and S. Franklin. 2003. Sparse Distributed
Memory for "Conscious" Software Agents. Cognitive
Systems Research 4:339-354.