cognitive agent does not need to have an explicit mental (knowledge) representation of goals
and beliefs, because these can emerge14.
We can distinguish two types of social action (Castelfranchi, 1998): weak social action,
which is based on beliefs about the minds of other agents; and strong social action, which is
based on goals about the minds and actions of others. A weak social action takes into account
what another agent is doing and how this might affect the considering agent. A strong social
action involves social goals. A social goal is directed towards another agent: that is, the social
goal of one agent is to influence the mind or actions of another agent.
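To fix ideas, this distinction can be sketched computationally. The following minimal Python sketch is our own illustration, not an implementation from Castelfranchi (1998), and its class and attribute names are hypothetical: a weak social agent holds only beliefs about the actions of others, while a strong social agent additionally holds goals about the minds or actions of others.

```python
# Minimal sketch of weak vs. strong social action (illustrative only;
# names are hypothetical, not from Castelfranchi, 1998).

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}   # beliefs about the world, including other agents
        self.goals = []     # goals the agent tries to achieve

class WeakSocialAgent(Agent):
    def observe(self, other, action):
        # Weak social action: act on *beliefs* about what another agent
        # is doing and how it might affect us.
        self.beliefs[other.name] = action

class StrongSocialAgent(WeakSocialAgent):
    def adopt_social_goal(self, other, desired_action):
        # Strong social action: hold a *goal* about another agent's
        # mind or actions, i.e., try to influence the other agent.
        self.goals.append((other.name, desired_action))
```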
Social structures and organizations emerge from the social actions of agents, while at the
same time the individual actions of agents in a society are influenced by those social structures
and organizations.
2.3. Previous Work on Artificial Societies
Given our notion of sociality in agents, we can see that many multi-agent systems
(Russell and Norvig, 1994) have some kind of sociality (Hogg and Jennings, 1997; Jennings and
Campos, 1997). Here we will review only work in AS related to the study of societies: that is,
AS created with the purpose of studying social processes, AS that are a synthetic representation
of a society, rather than works involving social agents for other purposes.
Perhaps the most classical work in AS is that of Epstein and Axtell (1996). They used
a cellular automaton with “simple” rules, which caused the emergence of complex processes
in the global system, such as population migration, cultural evolution, and warfare. This work
was severely criticized by some and admired by others: some people could not believe that such
complex and obscure phenomena could be explained with such simple rules, while others were
amazed that they could.
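To give a flavour of this approach, the following toy Python sketch shows a grid model in the spirit of Epstein and Axtell's rules. It is a movement-and-harvest rule of our own devising; the parameters and names are assumptions, not their actual model. Agents move to the richest neighbouring cell, harvest its sugar, and pay a metabolic cost.

```python
import random

# Toy "sugarscape"-style step rule (illustrative sketch, not the authors' model).
SIZE = 20
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE), "energy": 10}
          for _ in range(40)]

def step():
    for a in list(agents):
        # Inspect the von Neumann neighbourhood (wrapping at the edges).
        neighbours = [((a["x"] + dx) % SIZE, (a["y"] + dy) % SIZE)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        # Move to the richest neighbouring cell and harvest its sugar.
        a["x"], a["y"] = max(neighbours, key=lambda p: sugar[p[0]][p[1]])
        a["energy"] += sugar[a["x"]][a["y"]]
        sugar[a["x"]][a["y"]] = 0
        a["energy"] -= 2          # metabolic cost
        if a["energy"] <= 0:
            agents.remove(a)      # death

for _ in range(100):
    step()
```

Iterating this purely local rule is enough for global patterns, such as migration towards sugar-rich regions and population crashes, to emerge without being programmed explicitly.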
Mitchel Resnick (1994), in his book “Turtles, Termites and Traffic Jams”, showed clearly
how very complex social phenomena, such as ant foraging, termite nest building, and traffic
jams, are caused by very simple local rules. When thousands of individuals follow simple rules,
the behaviour of the system turns out to be very complex, with social properties emerging.
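A rule of the kind Resnick describes can be stated in a few lines. The sketch below is our own illustration, not Resnick's StarLogo code: termites wander randomly, pick up a chip when one is found, and drop it next to another chip; piles emerge even though no rule mentions a “pile”.

```python
import random

# Sketch of a Resnick-style termite piling rule (illustrative only).
SIZE = 30
chips = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(150)}
termites = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
             "carrying": False} for _ in range(20)]

def wander(pos):
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return ((pos[0] + dx) % SIZE, (pos[1] + dy) % SIZE)

def step():
    for t in termites:
        t["pos"] = wander(t["pos"])
        if not t["carrying"] and t["pos"] in chips:
            chips.discard(t["pos"])   # pick up a chip
            t["carrying"] = True
        elif t["carrying"] and t["pos"] in chips:
            # Found another chip: drop the carried one on a free nearby cell.
            spot = wander(t["pos"])
            if spot not in chips:
                chips.add(spot)
                t["carrying"] = False
```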
The fascinating work of Gode and Sunder (1992) on “zero intelligence” traders shows
that simple agents, without the capability to theorize, recall events, or maximize returns, but
only bidding in ways that would not yield immediate losses, were 75% efficient. When they
replaced the agents with humans, the efficiency was 76%. This shows that the institutional
settings and constraints of a system may determine the role of the individuals in the system,
rather than the individuals determining it by themselves. It means that in such systems a
human will not perform much better than even a random selection generator.
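The bidding constraint is easy to state precisely. The following Python sketch of a “zero intelligence with constraint” double auction follows the spirit of Gode and Sunder's traders, though the market details, parameters, and numbers are our own assumptions: buyers never bid above their private value and sellers never ask below their cost, so no trade yields an immediate loss.

```python
import random

# Sketch of zero-intelligence-constrained (ZI-C) trading (illustrative;
# the setup below is an assumption, not Gode and Sunder's experiment).

def zi_c_auction(values, costs, rounds=2000):
    buyers, sellers = list(values), list(costs)
    surplus = 0.0
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        i = random.randrange(len(buyers))
        j = random.randrange(len(sellers))
        bid = random.uniform(0, buyers[i])      # never bid above own value
        ask = random.uniform(sellers[j], 200)   # never ask below own cost
        if bid >= ask:                          # a trade occurs at the midpoint
            price = (bid + ask) / 2
            surplus += (buyers[i] - price) + (price - sellers[j])
            buyers.pop(i)
            sellers.pop(j)
    return surplus

def max_surplus(values, costs):
    # Maximum attainable surplus: match highest values with lowest costs.
    pairs = zip(sorted(values, reverse=True), sorted(costs))
    return sum(v - c for v, c in pairs if v > c)

random.seed(1)
values = [random.uniform(50, 150) for _ in range(20)]  # buyers' private values
costs = [random.uniform(50, 150) for _ in range(20)]   # sellers' unit costs
print("efficiency:", zi_c_auction(values, costs) / max_surplus(values, costs))
```

Efficiency here is the realized surplus divided by the maximum attainable surplus, which is the measure behind the 75% and 76% figures cited above.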
Artificial societies have also been useful in the social sciences. For example, Jim Doran
has built a simulation in which agents may hold collective misbeliefs (Doran, 1998). He uses
this simulation to study ideologies in human societies. Dwight Read has studied the
14 We will discuss the emergence of cognition from non-cognitive processes in the next chapter.