of those studied that allows evolution to reliably produce
good racing controllers uses neural networks informed by
egocentric information from range-finder sensors. Best performance was achieved by making the ranges and angles of the rangefinders evolvable, and providing the network with a further sensor indicating the angle to the next waypoint. The
controllers were only tried on one track, but some noise
was introduced into the environment and the track was
surrounded by impenetrable walls. The best of the evolved
controllers outperformed all of a small sample of human competitors.
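To make this sensor setup concrete, the following Python sketch illustrates how the rangefinder readings and the waypoint-angle input that feed such a network might be computed. It is not taken from [4]; the function names, the step-wise ray casting and the caller-supplied is_wall predicate are illustrative assumptions.

import math

def rangefinder_reading(x, y, heading, sensor_angle, max_range, is_wall, step=1.0):
    # Cast a ray from (x, y) at heading + sensor_angle and return the
    # distance to the first wall, normalized to [0, 1]; 1.0 means no
    # wall was hit within max_range.  is_wall(px, py) is supplied by
    # the simulation.
    angle = heading + sensor_angle
    dist = step
    while dist <= max_range:
        px = x + dist * math.cos(angle)
        py = y + dist * math.sin(angle)
        if is_wall(px, py):
            return dist / max_range
        dist += step
    return 1.0

def waypoint_angle(x, y, heading, wx, wy):
    # Signed angle from the car's heading to the next waypoint,
    # wrapped to [-pi, pi].
    diff = math.atan2(wy - y, wx - x) - heading
    return math.atan2(math.sin(diff), math.cos(diff))

def sensor_inputs(x, y, heading, waypoint, is_wall, sensor_angles, max_range=200.0):
    # Input vector for a neural controller: one normalized rangefinder
    # reading per sensor angle, plus the waypoint-angle sensor.
    readings = [rangefinder_reading(x, y, heading, a, max_range, is_wall)
                for a in sensor_angles]
    readings.append(waypoint_angle(x, y, heading, waypoint[0], waypoint[1]))
    return readings

In the setup described above, the sensor angles and ranges are themselves evolvable, so sensor_angles and max_range would form part of the evolved genome rather than being fixed constants.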
Stanley et al. [5] used a similar setup, neural networks informed by range-finders, in an experiment aimed at evolving
both controllers and crash-warning systems for subsequently
impaired controllers. The experiment was conducted on a
single track in simulation (using the RARS simulator [6]),
and the track was not surrounded by walls, so the car was
allowed (at a fitness penalty) to venture outside the track.
In another interesting experiment, Floreano et al. evolved
neural networks for simulated car racing using first-person
visual input from the driving simulator Carworld [7], [8]. However, only 5 × 5 pixels of the visual field were used as inputs
for the network; the position of these pixels was dynamically
selected by the network, in a process known as active vision.
A different approach to evolutionary car racing was taken
by Tanev et al., who evolved parameters for a hand-coded
racing car controller, using anticipatory modeling of the
car’s position [9]. While the amount of human input into
the controller design process is arguably higher in this case,
this approach allowed evolution of controllers for real radio-
controlled cars without an intermediate simulation step.
Also related is the work of Wloch and Bentley, who used
a human-designed controller built into a high-quality racing
simulator, but used artificial evolution to optimize all physical
and mechanical parameters of the car [10]. Evolution here
managed to come up with car configurations that performed
better than any of the stock cars in the simulator.
2) Supervised learning and real-world applications: Machine learning techniques have also been used in real-world car driving and racing applications, though these techniques have been forms of supervised learning rather than evolutionary learning. Perhaps the most well-known of these is
Pomerleau’s use of backpropagation to train neural networks
to associate pre-processed video input with a human driver’s
actions, leading to a controller able to keep a real car on
the road and take curves appropriately [11]. More recently,
the winning team in the DARPA Grand Challenge made
extensive use of machine learning in developing their car
controller.
Going from physical reality to virtual reality, Microsoft’s Xbox video game Forza Motorsport is worthy
of mention, as all the opponent car controllers have been
trained by supervised learning of human player data, instead
of the usual racing game technique of blindly following
precalculated racing lines [12]. Players can even train their own “drivatars” to race tracks in their place, after the drivatars have acquired the players’ individual driving styles.
Supervised learning, however, ultimately suffers from requiring good training data. Sometimes such training data
is simply not available, at other times it is prohibitively
expensive to obtain, and at yet other times imitating human
drivers is simply not what we want.
B. Motivations for this paper
While the research referred to above has shown the usefulness of evolutionary robotics techniques for car racing, the
controllers have in all those cases only been tested on a single
track, and sometimes with severe simplifying assumptions,
such as being able to drive through walls. Thus, the first
objective of the research reported in this paper is to evolve
neural network controllers each capable of competitively and
reliably navigating a variety of different tracks, including
tracks they have not been trained on. Based on the range-
finding and aimpoint sensors proposed in [4], we investigate
which sensor setup and evolutionary sequence allows us to
create such controllers.
A second objective is to investigate whether evolution of
a specialized controller, i.e., one performing very well on a
particular track, can be sped up by starting from an already
evolved “general” controller. Such a process could be useful, for example, in a racing game where users are allowed to
design tracks and a controller providing good performance
on such tracks needs to be created on the fly.
The concrete questions we pose and try to answer are
the following: How robust is the evolutionary algorithm,
that is, how certain can we be that a given evolutionary
run will produce a proficient controller for a given track?
Does the layout of the racing track directly influence the fitness landscape, so that controllers for some tracks are much harder to evolve than for others, even though those tracks are not impossible to drive? How well does the driving skill gained by evolving for one track transfer to performance on other tracks? Can we
evolve controllers that can proficiently race all tracks in our
training set? How? Can such generally proficient controllers
be used to reliably create specialized controllers that perform
well, but only on particular tracks? Finally, can this be done
even for tracks for which it is not possible to evolve a good
controller from scratch?
While this investigation primarily addresses the scalability of the problem domain (and to some extent of the sensor/network combination), it may also benefit practical applications such as racing games by identifying the most reliable ways to evolve proficient controllers.
C. Overview of the paper
The paper is laid out as follows: first, we describe the
characteristics of the car racing simulation we will be using,
including sensor models, tracks, and how this model differs
from the problem of racing real radio-controlled cars. The
next section details the neural networks and evolutionary algorithm we employ. We then proceed to describe experiments
on evolving controllers optimized for the individual tracks
from scratch, followed by a section where we investigate