Can genetic algorithms explain experimental anomalies? An application to common property resources



reinforced. Strategies that perform well over time gradually replace poorly performing ones.
The most common reinforcement rules in the GA literature are pairwise tournament and
biased roulette wheel. We have adopted a pairwise tournament for two reasons. First, it is
ordinal, in the sense that the probabilities are based only on “greater than” comparisons
among strategy payoffs and the absolute magnitude of payoffs is not important for the
reinforcement probability. Being ordinal it does not rely on a “biological” interpretation of
the score as a perfect measure of the relative advantage of one strategy over another. As a
consequence, the simulation results are robust to any strictly increasing payoff
transformation. Second, while in a biased roulette wheel payoffs must be positive, that is not the case for a pairwise tournament. The reinforcement operates by (1) randomly drawing with replacement two strategies, a_ikt and a_iqt, from a population A_it, and (2) keeping for the following interaction only the strategy with the higher payoff in the pair: a*_it = argmax{π(a_ikt), π(a_iqt)}. After each period, these two steps are repeated K times, where K is the population size.
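The two-step reinforcement rule above can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name and the payoff-function argument are assumptions made for the example. Note that only payoff comparisons are used, so the rule is ordinal as described in the text.

```python
import random

def tournament_reinforce(memory, payoff, rng=random):
    """One reinforcement round: K pairwise tournaments with replacement.

    memory : list of K strategies (real-valued effort levels)
    payoff : function mapping a strategy to its payoff; only "greater
             than" comparisons are used, so any strictly increasing
             payoff transformation yields the same reinforcement.
    """
    K = len(memory)
    new_memory = []
    for _ in range(K):
        # (1) draw two strategies with replacement from the population
        a_k = rng.choice(memory)
        a_q = rng.choice(memory)
        # (2) keep only the strategy with the higher payoff in the pair
        new_memory.append(a_k if payoff(a_k) >= payoff(a_q) else a_q)
    return new_memory
```

Because each survivor is the winner of its own pair, any strictly increasing transformation of the payoff function leaves the outcome distribution unchanged, which is the robustness property claimed above.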

Simulations are run with an individual learning GA, which is discussed in the remainder of this Section. When agents do not consider just one strategy in each period but instead have a finite collection of strategies from which one is chosen in every period (a memory set), the process is called a multi-population GA (Riechmann, 1999; Vriend, 2000; Arifovic and Ledyard, 2000). A strategy is a real number a_ikt ∈ [0, 50] that represents the appropriating effort level of agent i in period t. Each agent is endowed with an individual memory set A_it = {a_i1t, ..., a_iKt} composed of a number of strategies K that is constant over time and exogenously given. If a strategy a_ikt is in the memory set, i.e. it is available, agent i can choose it for play at time t. The individual learning GA was adopted here because it reproduces the informational conditions of the experiment while the social learning GA does not. Moreover, it is better suited to study individual behavior, as in a social learning GA
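The memory-set structure above can be sketched as a minimal agent class. This is an illustrative sketch only: the text does not specify how the played strategy is selected from the memory set or how the set is initialized, so the uniform random initialization and the uniform draw in `choose` are assumptions made for the example.

```python
import random

class Agent:
    """Individual-learning agent with a fixed-size memory set.

    Strategies are real effort levels in [0, 50]; the memory size K
    is exogenously given and constant over time. Initialization and
    the choice rule are illustrative assumptions, not from the text.
    """
    def __init__(self, K, rng=random):
        self.rng = rng
        # memory set A_it: K strategies, here drawn uniformly from [0, 50]
        self.memory = [rng.uniform(0.0, 50.0) for _ in range(K)]

    def choose(self):
        # a strategy is available only if it is in the memory set;
        # here one available strategy is drawn at random to be played
        return self.rng.choice(self.memory)

agent = Agent(K=6, rng=random.Random(1))
effort = agent.choose()
```

Under this structure, reinforcement operates on each agent's own memory set in isolation, which is what distinguishes the individual learning GA from the social learning variant.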


