Glazebrook (2006) has suggested that, lurking in the back-
ground of this basic construction, is what Bak et al. (2006)
call a groupoid atlas, i.e. an extension of topological manifold
theory to groupoid mappings. Also lurking is identification
and exploration of the natural groupoid convolution algebra
which so often marks these structures (e.g. Weinstein, 1996;
Connes, 1994).
Consideration suggests, in fact, that a path may be meaningful
according to the groupoid parametrization of all possible
dual information sources, and that tuning is done across that
parametrization via a rate distortion manifold.
Implicit, however, are the constraints imposed by machine
history and/or problem definition, in a large sense, which may
limit the properties of R0, i.e. hold the system to a develop-
mentally determined topology leading to a solution as detec-
tion of a singularity.
Here we have attempted to reexpress this trade-off in terms
of a syntactical/grammatical version of conventional signal
theory, i.e. as a ‘tuned meaningful path’ form of the clas-
sic balance between sensitivity and selectivity, as particularly
constrained by the directed homotopy imposed by a machine
experience that is itself the outcome of a historical process
involving interaction with an external environment defined by
the problem to be solved.
Overall, this analysis is analogous to, but more complicated
than, Wallace’s information dynamics instantiation of Baars’
Global Workspace theory (Wallace, 2005a, b, 2006, 2007). In-
tuitively, one suspects that the higher the dimension of the
second order attentional Rate Distortion Manifold, that is,
the greater the multitasking, the broader the effective band-
width of attentional focus, and the less likely is inattentional
blindness. For a conventional differentiable manifold, a sec-
ond or higher order tangent space would give a better approx-
imation to the local manifold structure than a simple plane
(Pohl, 1962).
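In local coordinates the point can be sketched with a standard Taylor expansion (a textbook fact; the embedding x and chart U below are generic illustrations, not Pohl's specific construction):

```latex
% Embedding x : U \subset \mathbb{R}^k \to \mathbb{R}^n, expanded about u_0.
% First order: the tangent plane, with quadratic error,
x(u_0+h) \;=\; x(u_0) + Dx(u_0)\,h + O(\lVert h\rVert^2).
% Second order: adding the quadratic term reduces the error to cubic,
x(u_0+h) \;=\; x(u_0) + Dx(u_0)\,h + \tfrac{1}{2}\,D^2x(u_0)(h,h) + O(\lVert h\rVert^3).
```

The higher order tangent space thus retains curvature information that the plane discards, which is the sense in which a higher dimensional attentional Rate Distortion Manifold tracks local structure more faithfully.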
Nonetheless, inattentional blindness, while constrained by
multitasking, is not eliminated by it, suggesting that higher
order institutional or machine cognition, the generalization of
individual consciousness, is subject to canonical and idiosyncratic
patterns of failure analogous to, but perhaps more
subtle than, the kind of disorders described in Wallace (2005b,
2006). Indeed, while machines designed along these princi-
ples - i.e. multitasking Global Workspace devices - could be
spectacularly efficient at many complex tasks, ensuring their
stability might be even more difficult than for institutions
having the benefit of many centuries of cultural evolution.
The trick might be to use a well-defined ‘problem groupoid’
as an external goal context to stabilize the machine. Problem
sets without well defined groupoid structures might well lead
to very irregular system behaviors, in this model.
In addition, the necessity of interaction - synchronous or
asynchronous - between internal giant components suggests
the possibility of failures governed by the Rate Distortion
Theorem. Forcing rapid communication between internal gi-
ant components ensures high error rates. Recent, and very
elegant, ethnographic work by Cohen et al. (2006) and
Laxmisan et al. (2007) regarding systematic medical error
in emergency rooms focuses particularly on ‘handover’ prob-
lems at shift change, where incoming medical staff are rapidly
briefed by outgoing staff. Systematic information overload in
such circumstances seems almost routine, and is widely rec-
ognized as a potential error source within institutions.
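The trade-off behind this failure mode can be made quantitative with the classical rate distortion function of a memoryless Gaussian source, R(D) = (1/2) log2(sigma^2/D), equivalently D(R) = sigma^2 2^(-2R). The following is a minimal sketch (the function name is illustrative, not part of the model above) showing that cutting the available rate - as when a briefing must be compressed into a rapid handover - raises the floor on achievable distortion:

```python
import math


def gaussian_distortion_rate(sigma2: float, rate_bits: float) -> float:
    """Minimum achievable mean-squared distortion for a memoryless
    Gaussian source of variance sigma2 coded at rate_bits bits/symbol,
    from the classical rate distortion function
    R(D) = (1/2) log2(sigma2 / D), inverted to D(R) = sigma2 * 2^(-2R)."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)


# Halving the available rate repeatedly drives up the distortion floor.
sigma2 = 1.0
for rate in (4.0, 2.0, 1.0, 0.5):
    d = gaussian_distortion_rate(sigma2, rate)
    print(f"rate {rate:4.1f} bits/symbol -> minimum distortion {d:.4f}")
```

Under this standard result, no coding scheme, however clever, can beat the D(R) floor; forcing rapid communication between giant components is, in effect, forcing operation at low R and hence high minimum distortion.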
A third failure mode involves the possibility of patholog-
ical resilience states defined by the dynamic groupoid, and
the individual manifold dihomotopy groupoids. Addressing
such ‘lock-in’ seems to require the action of an external executive
to stabilize these highly parallel, self-programming, adaptive
machines. This seems equivalent to ‘apparently doing every-
thing right but still getting the wrong answer’.
The ‘non-pathological’ set of topological resilience states,
of course, represents the solutions to the computing problem.
This paper generalizes the Global Workspace model of in-
dividual consciousness to an analogous second order treat-
ment of distributed machine cognition, and suggests, in par-
ticular, that multiple workspace multitasking significantly re-
duces, but cannot eliminate, the likelihood of inattentional
blindness, of overfocus on one task to the exclusion of other
powerful patterns of threat or affordance. It further appears
that rate distortion failure in communication between individual
global workspaces will potentially be a serious problem
for such systems - synchronous or sequential versions of
the telephone game. Thus the multitasking hierarchical cogni-
tive model appropriate to institutional or distributed machine
cognition is considerably more complicated than the equiva-
lent for individual human consciousness, which seems biolog-
ically limited to a single shifting, tunable giant component
structure. Human institutions, by contrast, appear able to
entertain several, and perhaps many, such global workspaces
simultaneously, although these generally operate at a much
slower rate than is possible for individual consciousness. Dis-
tributed cognition machines, according to this model, would
be able to function as efficiently as large institutions, but at
or near the rate characteristic of individual consciousness.
Shared culture, however, seems to provide far more than
merely a shared language for the establishment of the human
organizations which enable our adaptation to, or alteration of,
our varied environments. It also may provide the stabilizing
mechanisms needed to overcome many of the canonical and
idiosyncratic failure modes inherent to such structures - the
embedding directives of law, tradition, and custom which have
evolved over many centuries. Culture is truly as much a part
of human biology as the enamel on our teeth (Richerson and
Boyd, 2004). No such secondary heritage system is available
for machine stabilization.
In sum, this paper contributes to a mathematical formal-
ization of a particular kind of biologically-inspired distributed
machine cognition based on a necessary conditions communi-
cation theory model quite similar to Dretske’s attempts at un-
derstanding high level mental function for individuals. How-
ever, high order, multiple global workspace cognition seems
not only far more complicated than is the case for individual
animal consciousness, but appears prone to particular collec-
tive errors whose minimization, for institutions which operate
along these principles, may have been the subject of a long