reasons for being the most used both in biological systems (they are easily evolvable) and technical systems (they are easier to design and implement).
Complex controllers organise control loops in hierarchical/heterarchical arrangements that span several dimensions: temporal, knowledge, abstraction, function, paradigm, etc. Sanz (1990). These organisational aspects lead to the functional differences offered by the different architectures (along the lines of Dennett's Skinnerian/Popperian/Gregorian creatures, Dennett (1996)).
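As a purely illustrative sketch (in Python; all class names and parameters are assumptions of this sketch, not taken from the text), such a layered arrangement can be pictured as a fast reactive loop whose setpoint is periodically revised by a slower, more abstract deliberative loop:

    # Illustrative only: two loops differing in temporal scale and abstraction.
    class ReactiveLoop:
        """Fast inner loop: proportional control towards a setpoint."""
        def __init__(self, gain=0.5):
            self.gain = gain
            self.setpoint = 0.0

        def step(self, measurement):
            # Control action computed from local information only.
            return self.gain * (self.setpoint - measurement)

    class DeliberativeLoop:
        """Slow outer loop: revises the inner setpoint using a task model."""
        def __init__(self, inner, goal):
            self.inner = inner
            self.goal = goal

        def step(self, state_estimate):
            # More abstract, model-based decision, taken at a lower rate.
            self.inner.setpoint = min(self.goal, state_estimate + 1.0)

    # The inner loop runs every tick; the outer loop only every 10 ticks.
    state = 0.0
    inner = ReactiveLoop()
    outer = DeliberativeLoop(inner, goal=5.0)
    for t in range(50):
        if t % 10 == 0:
            outer.step(state)
        state += inner.step(state)

The point of the sketch is only the organisational one: the two loops differ in rate, in the knowledge they use and in the abstraction of the decisions they take.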
In the performance of any task by an intelligent agent there are three aspects of relevance: the task itself, the agent performing the task and the environment where the task is being performed Sanz et al. (2000). In the case of natural systems the separation between task and agent is not easy to make, but in the case of technical systems the separation is clearer if we analyse them from the perspective of artificiality Simon (1981). Artificial systems are made on purpose and the task always comes from outside them: from the owner.
The knowledge content of the models in highly autonomous cognitive controllers should include these three aspects: system, task and environment. Depending on their position in the control hierarchy, models may refer to particular subsets of these aspects (e.g. models used in intelligent sensors address only a limited part of the system environment: just the environmental factors surrounding the sensor).
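A minimal sketch of this idea (in Python; the field names and the scope attribute are assumptions introduced here for illustration) would bundle the three aspects in a single model record, letting a component hold only the subset relevant to its place in the hierarchy:

    from dataclasses import dataclass, field

    @dataclass
    class Model:
        system: dict = field(default_factory=dict)       # self-knowledge
        task: dict = field(default_factory=dict)          # goal/task knowledge
        environment: dict = field(default_factory=dict)   # world knowledge
        scope: str = "global"                              # which subset is covered

    # A full controller model vs. the restricted model of an intelligent sensor.
    controller_model = Model(
        system={"battery_level": 0.8},
        task={"goal": "reach waypoint"},
        environment={"obstacles": []},
    )
    sensor_model = Model(
        environment={"ambient_temperature": 21.5},  # only local environment factors
        scope="sensor-local",
    )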
System cohesion may be threatened in evolutionary terms and its preservation becomes a critical integration requirement. The problem of model coherence across the different subsystems in a complex control hierarchy is a critical aspect that is gaining relevance due to the new component-based strategies for system construction. In the case of biological systems and artificial systems built under a unified engineering process, the core ontology (whether explicit or assumed) used in the construction of the different elements is the same. But in systems aggregated from components coming from different fabrication processes, ontology mismatches produce undesirable emergent phenomena that lead to faults and even to loss of system viability. This is clear in biological systems (e.g. immunity-related phenomena) but has only recently become apparent in complex technical systems Horn (2001).
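A toy illustration of such a mismatch (a hypothetical Python example, not taken from the text): two independently produced components exchange a distance value under different implicit ontologies, and the assembled system misbehaves although neither component fails on its own:

    def range_sensor_component():
        """Component A: reports obstacle distance, implicitly in metres."""
        return 2.0  # two metres ahead

    def braking_component(distance_in_feet):
        """Component B: decides whether to brake, implicitly expecting feet."""
        return distance_in_feet < 5.0  # brake when closer than about 1.5 m

    # Integration without a shared, explicit ontology: the metre value is
    # silently read as feet, so the system brakes spuriously.
    distance = range_sensor_component()
    print(braking_component(distance))  # True, although 2 m is roughly 6.6 ft and no braking is needed

The fault appears only at integration time, which is why ontology coherence is an integration requirement rather than a requirement on any single component.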
This analysis leads us to formulate an additional principle of complex cognitive systems:
Principle 4: Unified cognitive action generation — Generating action
based on a unified model of task, environment and self is the way to maximise performance.
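As a rough, purely illustrative sketch of the principle (in Python; the dictionary structure and the utility terms are assumptions made here, not part of the original formulation), action generation over a unified model can be read as selecting the action that best serves the task given both the environment and the agent's own state:

    def unified_action(model, candidate_actions):
        """Pick the action that best advances the task given environment and self."""
        def utility(action):
            # Predicted progress toward the task goal...
            progress = model["task"]["goal_value"].get(action, 0.0)
            # ...penalised by environmental risk and by the agent's own cost.
            risk = model["environment"]["risk"].get(action, 0.0)
            cost = model["self"]["energy_cost"].get(action, 0.0)
            return progress - risk - cost
        return max(candidate_actions, key=utility)

    model = {
        "task": {"goal_value": {"move": 1.0, "wait": 0.1}},
        "environment": {"risk": {"move": 0.3, "wait": 0.0}},
        "self": {"energy_cost": {"move": 0.2, "wait": 0.0}},
    }
    print(unified_action(model, ["move", "wait"]))  # -> "move"

Keeping the three aspects in one model is what lets the trade-off be made in a single place rather than emerging from disconnected partial decisions.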
Modeling the task is, in general, the easiest part8. This has been one of the traditional focus points of classic AI (obviously, together with the associated problem-solving methods).
8 But representing the task in the internalised model can be extremely complex when the task specification comes in natural language.