continued. At some stage the training algorithm may escape from a local attractor
and accomplish further error minimisation, but we require that this occur within a
certain number of iterations. Obviously, when training is stopped, the final set of
network weights need not correspond to the best result found. It is thus necessary to
store the parameter values in a separate array every time a successful training step is
made. At the end of the training process the best set of parameter values is then
recalled.
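This bookkeeping can be sketched as follows. The Python fragment below is a minimal illustration, not the implementation used in this study; the names train_step, error, max_iter and patience are hypothetical, and the stopping rule simply encodes the requirement that a successful step occur within a certain number of iterations.

```python
import numpy as np

def train_with_checkpoint(weights, train_step, error, max_iter, patience):
    """Keep a copy of the best weights seen so far; stop if no
    successful (error-reducing) step occurs within `patience` iterations."""
    best_weights = weights.copy()
    best_error = error(weights)
    since_improvement = 0
    for _ in range(max_iter):
        weights = train_step(weights)          # one update of the training algorithm
        current = error(weights)
        if current < best_error:               # a successful training step
            best_error = current
            best_weights = weights.copy()      # store parameters in a separate array
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= patience:  # no recovery within the allowed number
                break                          # of iterations: stop training
    return best_weights, best_error            # recall the best set of parameters
```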
5 Benchmark Comparisons
The attraction of the approach suggested for modelling the singly constrained case of
spatial interaction depends not only on an awareness of what it can offer, but also on an
empirical illustration of what can be gained in comparison with alternative modelling
approaches. The standard origin-constrained gravity model and the two-stage neural
network approach, suggested by Openshaw (1998) and implemented by Mozolin, Thill
and Usery (2000), are used as benchmark models. All three models were estimated by
means of the Alopex procedure in order to eliminate the effect of different estimation
procedures on the results. To do justice to each model, the δ-parameter was determined
by a systematic search for each of them.
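For reference, the basic Alopex update moves each parameter by ±δ, with the direction biased by the correlation between the most recent parameter change and the most recent change in the error. The Python sketch below illustrates this basic form only; the one-step annealing of the temperature T, the default values of delta and T0, and all function names are assumptions here, and the exact Alopex variant and annealing schedule used in this study may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def alopex_step(w, w_prev, E, E_prev, delta, T):
    """One basic Alopex update: a +/- delta step per weight, biased by the
    correlation between the last weight change and the last error change."""
    C = (w - w_prev) * (E - E_prev)            # per-weight correlation
    p = 1.0 / (1.0 + np.exp(-C / T))           # probability of a -delta step
    step = np.where(rng.random(w.shape) < p, -delta, delta)
    return w + step

def alopex(error, w0, delta=0.01, T0=0.1, n_iter=5000):
    """Minimise error(w) by correlation-guided +/- delta steps (a sketch)."""
    w_prev = np.asarray(w0, dtype=float)
    w = w_prev + delta * rng.choice([-1.0, 1.0], size=w_prev.shape)
    E_prev, E = error(w_prev), error(w)
    T = T0
    for _ in range(n_iter):
        w_next = alopex_step(w, w_prev, E, E_prev, delta, T)
        w_prev, w = w, w_next
        E_prev, E = E, error(w)
        # crude annealing (an assumption): tie T to the mean correlation magnitude
        T = max(float(np.mean(np.abs((w - w_prev) * (E - E_prev)))), 1e-8)
    return w, E
```

The step size δ controls the granularity of the search, which is why a systematic search over δ is needed to do justice to each benchmark model.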
5.1 Performance Measures
The ultimate test of any function approximator is its ability to generate accurate
out-of-sample predictions. One way to assess the generalisation ability directly is to
measure how well the approximator predicts the flows for new input data that were
not used to fit the model. For this purpose some performance measure is required.
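As an illustration of such an out-of-sample evaluation, the sketch below assumes the standardised root mean square error, a common choice in the spatial interaction literature; it is used here purely as an example and is not necessarily the measure adopted in this study, which is defined in the text that follows.

```python
import numpy as np

def srmse(y_true, y_pred):
    """Standardised root mean square error: RMSE divided by the mean
    observed flow (an illustrative choice, not the paper's definition)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.mean(y_true)

# Hypothetical usage on the held-out testing data set:
# y_pred = model.predict(x_test)   # `model`, `x_test`, `y_test` are assumptions
# print(srmse(y_test, y_pred))
```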
One needs to be careful when selecting a measure to compare different models. A
comparison may become meaningless if the performance measure has been used to
estimate the parameters of one model, but not of the others. Some of the literature
ignores this issue. In this study, model performance is measured on the testing
[prediction, out-of-sample] data set, say $M_3 = \{(x_{u_3}, y_{u_3}) \text{ with } u_3 = 1, \ldots, U_3\}$, by means