tractable limits of a much more complicated regime.
Since we know nothing about how the cross-talk connections can occur, we will, at first, assume they are random and construct a random graph in the classic Erdős/Rényi manner. Suppose there are M disjoint cognitive modules, that is, M elements of the equivalence class algebra of languages dual to some cognitive process, which we now take to be the vertices of a possible graph.
For M very large, following Savante et al. (1993), when
edges (defined by establishment of a fixed-strength mutual
information measure between the graph vertices) are added
at random to M initially disconnected vertices, a remarkable
transition occurs when the number of edges becomes approxi-
mately M/2. Erdős and Rényi (1960) studied random graphs with M vertices and (M/2)(1 + μ) edges as M → ∞, and
discovered that such graphs almost surely have the follow-
ing properties (Molloy and Reed, 1995, 1998; Grimmett and
Stacey, 1998; Łuczak, 1990; Aiello et al., 2000; Albert and Barabási, 2002):
[1] If μ < 0, only small trees and unicyclic components are present, where a unicyclic component is a tree with one additional edge; moreover, the size of the largest tree component is (μ − ln(1 + μ))^{-1} log M + O(log log M).
[2] If μ = 0, however, the largest component has size of order M^{2/3}.
[3] If μ > 0, there is a unique giant component (GC) whose size is of order M; in fact, the size of this component is asymptotically αM, where α satisfies μ = −α^{-1} ln(1 − α) − 1, which has an explicit solution for α in terms of the Lambert W-function. Thus, for example, a random graph with approximately M ln(2) edges will have a giant component containing ≈ M/2 vertices, as the brief numerical sketch below illustrates.
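The following sketch is purely illustrative; the module count M, tolerance, seed, and function names are assumptions made for the demonstration, not constructions from the text. It solves the relation for α in the algebraically equivalent fixed-point form α = 1 − exp(−(1 + μ)α) and compares the result with a direct union-find simulation of the random graph, recovering the M ln(2) example just quoted.

    # Illustrative sketch only: graph size, seed, and names are assumptions for the demo.
    import math, random

    def giant_fraction(mu, tol=1e-12):
        """Solve alpha = 1 - exp(-(1 + mu)*alpha), equivalent to
        mu = -ln(1 - alpha)/alpha - 1, by fixed-point iteration (mu > 0)."""
        c = 1.0 + mu                      # mean degree when there are (M/2)(1 + mu) edges
        alpha = 0.5
        for _ in range(10000):
            nxt = 1.0 - math.exp(-c * alpha)
            if abs(nxt - alpha) < tol:
                break
            alpha = nxt
        return alpha

    def largest_component_fraction(M, n_edges, seed=0):
        """Fraction of the M vertices lying in the largest component after
        adding n_edges random edges, found with a simple union-find."""
        rng = random.Random(seed)
        parent = list(range(M))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for _ in range(n_edges):
            a, b = find(rng.randrange(M)), find(rng.randrange(M))
            if a != b:
                parent[a] = b
        counts = {}
        for v in range(M):
            r = find(v)
            counts[r] = counts.get(r, 0) + 1
        return max(counts.values()) / M

    M = 100000
    mu = 2.0 * math.log(2.0) - 1.0        # M*ln(2) edges corresponds to (M/2)(1 + mu)
    print(giant_fraction(mu))                                      # ~0.5
    print(largest_component_fraction(M, int(M * math.log(2.0))))   # also ~0.5

Both numbers come out near 0.5, consistent with the statement that roughly M ln(2) random edges produce a giant component holding about half the vertices.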
Such a phase transition initiates a new, collective, cogni-
tive phenomenon. At the level of the individual mind, un-
conscious cognitive modules link up to become the Global
Workspace of consciousness, emergently defined by a set of
cross-talk mutual information measures between interacting
unconscious cognitive submodules. The source uncertainty,
H, of the language dual to the collective cognitive process,
which characterizes the richness of the cognitive language of
the workspace, will grow as some monotonic function of the
size of the GC, as more and more unconscious processes are
incorporated into it. Wallace (2005b) provides details.
Others have taken similar network phase transition ap-
proaches to assemblies of neurons, e.g. neuropercolation
(Kozma et al., 2004, 2005), but their work has not focused
explicitly on modular networks of cognitive processes, which
may or may not be instantiated by neurons. Restricting anal-
ysis to such modular networks finesses much of the underlying
conceptual difficulty, and permits use of the asymptotic limit
theorems of information theory and the import of techniques
from statistical physics, a matter we will discuss later.
5. External forces breaking the symmetry groupoid
Just as a higher order information source, associated with
the GC of a random or semirandom graph, can be constructed
out of the interlinking of unconscious cognitive modules by
mutual information, so too external information sources, for
example in humans the cognitive immune and other physio-
logical systems, and embedding sociocultural structures, can
be represented as slower-acting information sources whose in-
fluence on the GC can be felt in a collective mutual infor-
mation measure. For machines or institutions these would
be the onion-like ‘structured environment’, to be viewed
as among Baars’ contexts (Baars, 1988, 2005; Baars and
Franklin, 2003). The collective mutual information measure
will, through the Joint Asymptotic Equipartition Theorem
which generalizes the Shannon-McMillan Theorem, be the
splitting criterion for high and low probability joint paths
across the entire system.
The tool for this is network information theory (Cover and
Thomas, 1991, p. 388). Given three interacting information
sources, Y1 , Y2 , Z, the splitting criterion, taking Z as the ‘ex-
ternal context’, is given by
I(Y1, Y2|Z) = H(Z) + H(Y1|Z) + H(Y2|Z) − H(Y1, Y2, Z),   (2)
where H(·|·) and H(·, ·, ·) represent conditional and joint uncertainties (Khinchin, 1957; Ash, 1990; Cover and Thomas, 1991).
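As a concrete illustration of equation (2), the conditional mutual information can be evaluated directly from a joint distribution p(y1, y2, z) and checked against the combination of uncertainties on the right-hand side. The toy joint distribution and helper names below are invented for the example and carry no substantive meaning.

    # Toy numerical check of equation (2); the joint distribution is arbitrary.
    import itertools, math

    def H(p):
        """Shannon uncertainty (bits) of a distribution given as {outcome: probability}."""
        return -sum(q * math.log2(q) for q in p.values() if q > 0)

    def marginal(p, idx):
        """Marginalize a joint table onto the coordinates listed in idx."""
        out = {}
        for k, v in p.items():
            key = tuple(k[i] for i in idx)
            out[key] = out.get(key, 0.0) + v
        return out

    # Arbitrary joint distribution over binary Y1, Y2, Z.
    w = {(y1, y2, z): 1 + y1 + 2 * y2 * z + y1 * z
         for y1, y2, z in itertools.product((0, 1), repeat=3)}
    total = sum(w.values())
    p = {k: v / total for k, v in w.items()}

    pz, py1z, py2z = marginal(p, (2,)), marginal(p, (0, 2)), marginal(p, (1, 2))

    # Right-hand side of (2), using H(Yi|Z) = H(Yi, Z) - H(Z).
    rhs = H(pz) + (H(py1z) - H(pz)) + (H(py2z) - H(pz)) - H(p)

    # Direct evaluation of I(Y1; Y2 | Z) from its definition.
    lhs = sum(v * math.log2(v * pz[(k[2],)] / (py1z[(k[0], k[2])] * py2z[(k[1], k[2])]))
              for k, v in p.items() if v > 0)

    print(abs(lhs - rhs) < 1e-12)   # True: the two computations agree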
This generalizes to
I(Y1, ..., Yn|Z) = H(Z) + ∑_{j=1}^{n} H(Yj|Z) − H(Y1, ..., Yn, Z).   (3)
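The same style of computation extends directly to equation (3). The sketch below, again with an invented toy table and our own function names, evaluates the n-source splitting criterion for a joint table over (Y1, ..., Yn, Z) with Z as the last coordinate.

    # Hedged sketch of equation (3): H(Z) + sum_j H(Yj|Z) - H(Y1, ..., Yn, Z).
    import itertools, math

    def H(p):
        return -sum(q * math.log2(q) for q in p.values() if q > 0)

    def marginal(p, idx):
        out = {}
        for k, v in p.items():
            key = tuple(k[i] for i in idx)
            out[key] = out.get(key, 0.0) + v
        return out

    def splitting_criterion(p, n):
        """Equation (3) evaluated on a joint table p whose keys are (y1, ..., yn, z)."""
        pz = marginal(p, (n,))
        sum_cond = sum(H(marginal(p, (j, n))) - H(pz) for j in range(n))  # sum of H(Yj|Z)
        return H(pz) + sum_cond - H(p)

    # Arbitrary joint distribution over three binary Y's and a binary Z.
    w = {k: 1 + sum(k) + k[0] * k[3] for k in itertools.product((0, 1), repeat=4)}
    total = sum(w.values())
    p = {k: v / total for k, v in w.items()}
    print(splitting_criterion(p, 3))   # nonnegative; zero only when the Y's are
                                       # conditionally independent given Z

The value is nonnegative and vanishes exactly when the Yj are conditionally independent given Z, which is the property that lets it serve as a splitting criterion between high- and low-probability joint paths.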
If we assume the Global Workspace/Giant Component to
involve a very rapidly shifting, and indeed highly tunable,
dual information source X, embedding contextual cognitive
modules like the immune system will have a set of significantly
slower-responding sources Yj, j = 1..m, and external social,
cultural and other environmental processes will be character-
ized by even more slowly-acting sources Zk, k = 1..n. Math-
ematical induction on equation (3) gives a complicated ex-
pression for a mutual information splitting criterion which we
write as
I(X|Y1, ..., Ym|Z1, ..., Zn).   (4)