just in the case of biosystems— the core infrastructures for model construction fall
into three categories:
Built-ins: In the sense described by Conant and Ashby (1970), our feeding,
homeostatic and kinesthetic mechanisms contain models of the surrounding
reality (e.g. genes encoding chemical receptors for the nose).
Learned: The very subject matter of learning from experience.
Cultural: The well-known topic of memetics Dawkins (1976); Blackmore (1999) or
—more visually shocking— of Trinity “learning” helicopter piloting expertise
in the Wachowskis’ The Matrix.3
The learning and cultural mechanisms have the extremely interesting property
of being open-ended. In particular, cultural model transmission is a form of extended
learning, where the cognitive system downloads models learned by others, hence
reaching levels of model complexity and perfection that are impossible for an
isolated agent.4
In biological systems, the substrate for learning is mostly neural tissue. Neural
networks are universal approximators that can be tuned to model any concrete
object or objects+relations set. This property of universal approximation, combined
with the potential for unsupervised learning, makes the neural soup a perfect candidate
for model bootstrapping and continuous tuning. The neural net is a universal
approximator; the neural tissue organised as brain is a universal modeller.
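By way of illustration, the following sketch (a hypothetical example, not taken from the text; the target function, layer width, learning rate and step count are arbitrary choices) tunes a one-hidden-layer tanh network to approximate sin(x) by plain gradient descent, the simplest instance of the tuning process just described:

```python
# Minimal sketch: a one-hidden-layer tanh network fitted by full-batch
# gradient descent to sin(x). All hyperparameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the "external reality" to be modelled.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of tanh units: by the universal approximation
# theorem, enough units can fit any continuous function on a
# compact interval to arbitrary precision.
H = 30
W1 = rng.normal(0.0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1))
b2 = np.zeros(1)

lr = 0.02
for step in range(20000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass for the mean squared error.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```

The same loop, with enough hidden units, will shrink the error on any continuous target over a compact range; that generality is exactly the universal-approximation property invoked above.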
These are also the properties that are sought in the field of artificial neural
networks. It is not necessary to recall here the ample capacities that neural networks
—both artificial and natural— have shown concerning model learning. We may
wonder to what extent such model learning of an external reality can be equated to
the advances in modelling external realities demonstrated in the so-called hard
sciences (deep, first-principles models).
What is philosophically interesting about this process of scientific model construction
is the fact that reality seems to have a mathematical-relational structure that
enables the distillation of progressively more precise models in closed analytical forms
Wigner (1960).
We may think that culturally learnt first-principles models5 are better than
approximate neural-network modelling6; there are cases where the two modelling
approaches converge exactly, but there are also cases where the mathematical shape
of the principles limits their applicability to certain classes of systems.
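Both regimes can be made concrete with a toy comparison (a hypothetical sketch, not from the text; polynomial fitting stands in for a model family of fixed mathematical shape, and the constants and ranges are arbitrary):

```python
# Sketch of the two regimes: exact convergence when the model family
# contains the true law, and failure when the system lies outside the
# class the fixed mathematical shape can express.
import numpy as np

g = 9.81
t = np.linspace(0.0, 1.0, 50)       # observed range
d = 0.5 * g * t ** 2                # "reality": noiseless free fall

# Exact convergence: a quadratic family contains the free-fall law,
# so the fitted model recovers the first-principles coefficient g/2.
coeffs = np.polyfit(t, d, 2)
print("fitted leading coefficient:", coeffs[0], "vs g/2 =", 0.5 * g)

# Shape mismatch: the same quadratic family applied to exp(t) matches
# inside [0, 1] but breaks down when extrapolated to t = 3.
y = np.exp(t)
p = np.poly1d(np.polyfit(t, y, 2))
print("quadratic at t=3:", p(3.0), "vs exp(3):", np.exp(3.0))
```

When the family contains the true law, the fitted model coincides with the first-principles one; when the system falls outside that class, the fixed shape fails under extrapolation, which is the limitation noted above.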
3 Supervised learning may be considered a hybrid of cultural and learned processes.
4 Indeed this is, plainly, the phenomenon of science.
5 Only geniuses incorporate first-principles models by autonomous learning.
6 A similar problem to that of having symbolic representations in neural tissue.