paper presents a novel neural network approach for the case of origin-constrained or
destination-constrained spatial interaction flows. The approach is based on a modular
connectionist architecture that may be viewed as a linked collection of functionally
independent neural modules with identical topologies (two inputs, H hidden product
units and a single summation unit), operating under supervised learning algorithms. The
prediction is achieved by combining the outcomes of the individual modules using a
variant of the Bradley-Terry-Luce model as the non-linear output transfer function,
multiplied by a bias term that implements the accounting constraint.
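To make the architecture description concrete, the following minimal sketch (in Python) illustrates one possible reading of a single module and of the BTL-style combination step. The choice of module inputs, the positivity of the module outputs, and the use of the observed origin outflow total as the bias term are illustrative assumptions, not the exact model specification.

    import numpy as np

    def product_unit_module(x, w, beta):
        """One functionally independent module: two inputs, H hidden
        product units and a single summation unit.

        x    : shape (2,)   -- the two (strictly positive) module inputs
        w    : shape (H, 2) -- exponents connecting the inputs to the product units
        beta : shape (H,)   -- weights of the single summation unit
        """
        hidden = np.prod(x[None, :] ** w, axis=1)   # product unit h: x1**w_h1 * x2**w_h2
        return float(beta @ hidden)                 # summation unit output

    def origin_constrained_flows(X, modules, origin_total):
        """Combine the J module outputs with a Bradley-Terry-Luce style share
        function and scale by the bias term (assumed here to be the observed
        origin outflow total), so the predicted flows respect the origin
        accounting constraint.

        X       : shape (J, 2) -- inputs of the J destination modules
        modules : list of (w, beta) parameter pairs, one per module
        """
        z = np.array([product_unit_module(x, w, b) for x, (w, b) in zip(X, modules)])
        shares = z / z.sum()            # BTL-type normalisation (assumes z > 0)
        return origin_total * shares    # predicted flows sum to the origin total

Because the modules share an identical topology, only the parameter pairs (w, beta) differ across destinations.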
The efficacy of the approach is demonstrated for the origin-constrained case using
interregional telecommunication traffic data for Austria, noisy real-world data of
limited record length. The Alopex procedure, a global search technique, provides an
appropriate optimisation scheme for producing least squares (LS) estimates of the model
parameters.
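Since Alopex may be less familiar than gradient-based training, the following simplified sketch shows a correlation-guided Alopex-type search for LS learning. The step size, the logistic acceptance rule and the crude temperature annealing are generic choices made for illustration; they are not the settings actually used in the paper.

    import numpy as np

    def alopex_ls(predict, params0, X, t, delta=0.01, T0=1.0, n_iter=5000,
                  anneal_every=50, rng=np.random.default_rng(0)):
        """Correlation-guided random search minimising the sum of squared errors.

        predict(params, X) -> predicted flows; t are the observed flows.
        Each parameter moves by +/- delta; a move is biased away from the
        direction whose recent change was positively correlated with the
        recent change in error.
        """
        def E(p):
            r = t - predict(p, X)
            return float(np.sum(r * r))

        w = np.asarray(params0, dtype=float).copy()
        w_prev = w + delta * rng.choice([-1.0, 1.0], size=w.shape)  # random initial move
        e, e_prev = E(w), E(w_prev)
        T, corr_window = T0, []

        for n in range(n_iter):
            # correlation of the last weight change with the last error change
            C = (w - w_prev) * (e - e_prev)
            # probability of a negative step grows when the correlation is positive
            p_neg = 1.0 / (1.0 + np.exp(-C / T))
            step = np.where(rng.random(w.shape) < p_neg, -delta, delta)

            w_prev, e_prev = w, e
            w = w + step
            e = E(w)

            # crude annealing: reset T to the mean |C| over a window of iterations
            corr_window.append(float(np.abs(C).mean()))
            if (n + 1) % anneal_every == 0:
                T, corr_window = max(np.mean(corr_window), 1e-8), []

        return w, e

In the setting of the paper, predict would be the forward pass of the modular network and t the observed interregional flows; both names are placeholders here.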
The prediction quality is measured in terms of two performance statistics, the average
relative variance (ARV) and the standardised root mean square error (SRMSE). A benchmark
comparison shows that the proposed model outperforms origin-constrained gravity
model predictions and predictions obtained by applying the two-stage neural network
approach suggested by Openshaw (1998).
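For reference, the two performance statistics can be computed as in the sketch below, assuming their usual definitions: the error sum of squares relative to the variance of the observed flows for the ARV, and the root mean square error scaled by the mean observed flow for the SRMSE. The paper's exact normalisation may differ in detail.

    import numpy as np

    def average_relative_variance(observed, predicted):
        """ARV: squared prediction error normalised by the variance of the
        observed flows (ARV = 1 corresponds to simply predicting the mean)."""
        t, y = np.asarray(observed, float), np.asarray(predicted, float)
        return float(np.sum((t - y) ** 2) / np.sum((t - t.mean()) ** 2))

    def standardised_rmse(observed, predicted):
        """SRMSE: root mean square error divided by the mean observed flow."""
        t, y = np.asarray(observed, float), np.asarray(predicted, float)
        return float(np.sqrt(np.mean((t - y) ** 2)) / t.mean())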
The remainder of this paper is structured as follows. The next section first provides
some background information relevant to spatial interaction modelling, then describes
the basic features of unconstrained neural spatial interaction models, and finally
discusses briefly how a priori information on accounting constraints can be treated from
a neural network perspective. Section 3 presents the network architecture and the
mathematics of the modular product unit neural network model. Moreover, it points to
some crucial issues that have to be addressed when applying the model in a real-world
context. Section 4 is devoted to the issue of training the network model. The discussion
starts by viewing the parameter estimation problem of the model as least squares (LS)
learning and continues with a description of the Alopex procedure, a global search
technique that provides an appropriate optimisation scheme for LS learning. It is
emphasised that the main goal of network training is to minimise the learning error
while ensuring good generalisation of the network model. The most common approach in
practice is to check the network performance periodically during training to ensure that
further training improves generalisation as well as reducing the learning error. Section 5