Can genetic algorithms explain experimental anomalies? An application to common property resources



reinforced. Strategies that perform well over time gradually replace poorly performing ones.
The most common reinforcement rules in the GA literature are pairwise tournament and
biased roulette wheel. We have adopted a pairwise tournament for two reasons. First, it is
ordinal, in the sense that the probabilities are based only on “greater than” comparisons
among strategy payoffs and the absolute magnitude of payoffs is not important for the
reinforcement probability. Being ordinal, it does not rely on a “biological” interpretation of
the score as a perfect measure of the relative advantage of one strategy over another. As a
consequence, the simulation results are robust to any strictly increasing payoff
transformation. Second, while a biased roulette wheel requires payoffs to be positive, a pairwise tournament does not. The reinforcement operates by (1) randomly drawing, with replacement, two strategies a_ikt and a_iqt from a population A_it, and (2) keeping for the following interaction only the strategy with the higher payoff in the pair: a*_it = argmax{π(a_ikt), π(a_iqt)}. After each period, these two steps are repeated K times, where K is the population size.
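To make the reinforcement step concrete, the following is a minimal Python sketch of a pairwise tournament over one agent's memory set. The function name, the tie-breaking rule (the first drawn strategy wins a tie), and the assumption that the payoff π can be evaluated for any strategy in the set are ours, not taken from the paper.

```python
import random

def pairwise_tournament(memory_set, payoff):
    """One reinforcement step over an agent's memory set A_it.

    Repeats K times (K = memory set size): (1) draw two strategies
    with replacement, (2) keep only the one with the higher payoff.
    Because only payoff comparisons are used, the outcome is invariant
    to any strictly increasing payoff transformation, and payoffs may
    be negative.
    """
    K = len(memory_set)
    next_memory = []
    for _ in range(K):
        a_k = random.choice(memory_set)   # draw with replacement
        a_q = random.choice(memory_set)
        winner = a_k if payoff(a_k) >= payoff(a_q) else a_q
        next_memory.append(winner)
    return next_memory
```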

Simulations are run with an individual learning GA, which is discussed in the remainder of this Section. When agents do not consider just one strategy in each period, but instead hold a finite collection of strategies from which one is chosen every period (a memory set), the process is called a multi-population GA (Riechman, 1999; Vriend, 2000; Arifovic and Ledyard, 2000). A strategy is a real number a_ikt ∈ [0, 50] that represents the appropriation effort level of agent i in period t. Each agent is endowed with an individual memory set A_it = {a_i1t, ..., a_iKt} composed of a number of strategies K that is constant over time and exogenously given. If a strategy a_ikt is in the memory set, i.e. it is available, agent i can choose it for play at time t. The individual learning GA was adopted here because it reproduces the informational conditions of the experiment, while the social learning GA does
not. Moreover, it is better suited to study individual behavior, as in a social learning GA there is a single population of strategies shared by all agents.
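As a rough illustration of this structure, the sketch below defines an agent holding an individual memory set and reuses the pairwise_tournament function sketched above. The class name, the uniform initialization of strategies over [0, 50], and the uniform random choice of which available strategy to play are illustrative assumptions; the GA's crossover and mutation operators are omitted here.

```python
import random

class Agent:
    """Agent i holding an individual memory set A_it of K strategies,
    each a real-valued appropriation effort level in [0, 50]."""

    def __init__(self, K, low=0.0, high=50.0):
        # Initial memory set: K strategies drawn uniformly from [0, 50]
        # (an assumption about how strategies are initialized).
        self.memory = [random.uniform(low, high) for _ in range(K)]

    def choose_strategy(self):
        # Only strategies currently in the memory set can be played;
        # which one is played is a uniform random draw (assumption).
        return random.choice(self.memory)

    def reinforce(self, payoff):
        # Individual learning: reinforcement acts only on this agent's
        # own memory set, via the pairwise tournament sketched earlier.
        self.memory = pairwise_tournament(self.memory, payoff)
```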


