reinforced. Strategies that perform well over time gradually replace poorly performing ones. The most common reinforcement rules in the GA literature are the pairwise tournament and the biased roulette wheel. We have adopted a pairwise tournament for two reasons. First, it is ordinal, in the sense that the selection probabilities are based only on “greater than” comparisons among strategy payoffs; the absolute magnitude of payoffs is irrelevant to the reinforcement probability. Being ordinal, it does not rely on a “biological” interpretation of the score as a perfect measure of the relative advantage of one strategy over another. As a consequence, the simulation results are robust to any strictly increasing payoff transformation. Second, while a biased roulette wheel requires payoffs to be positive, a pairwise tournament does not. The reinforcement operates by (1) randomly drawing, with replacement, two strategies $a_{ikt}$ and $a_{iqt}$ from a population $A_{it}$, and (2) keeping for the following interaction only the strategy with the higher payoff in the pair: $a^{*}_{it} = \arg\max\{\pi(a_{ikt}), \pi(a_{iqt})\}$. After each period, these two steps are repeated $K$ times, where $K$ is the population size.
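To make the reinforcement step concrete, the following is a minimal Python sketch of the pairwise tournament as just described; the names `pairwise_tournament` and `payoff` are illustrative rather than from the paper, and ties are resolved here in favor of the first draw.

```python
import random

def pairwise_tournament(memory_set, payoff, rng=random):
    """One reinforcement step: rebuild the strategy population by K
    pairwise tournaments, where K is the population size."""
    K = len(memory_set)
    survivors = []
    for _ in range(K):
        # (1) Draw two strategies at random, with replacement.
        a_k = rng.choice(memory_set)
        a_q = rng.choice(memory_set)
        # (2) Keep only the strategy with the higher payoff; the
        # comparison is ordinal, so any strictly increasing payoff
        # transformation yields the same survivors.
        survivors.append(a_k if payoff(a_k) >= payoff(a_q) else a_q)
    return survivors
```

Repeated application of this step concentrates the population on higher-payoff strategies while using only ordinal payoff information.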

Simulations are run with an individual learning GA, which is discussed in the remainder of this Section. When agents do not consider just one strategy in each period, but instead have a finite collection of strategies (a memory set) from which one is chosen every period, the process is called a multi-population GA (Riechmann, 1999; Vriend, 2000; Arifovic and Ledyard, 2000). A strategy is a real number $a_{ikt} \in [0, 50]$ that represents the appropriating effort level of agent $i$ in period $t$. Each agent is endowed with an individual memory set $A_{it} = \{a_{i1t}, \ldots, a_{iKt}\}$ composed of a number of strategies $K$ that is constant over time and exogenously given. If a strategy $a_{ikt}$ is in the memory set, i.e. it is available, agent $i$ can choose it for play at time $t$. The individual learning GA was adopted here because it reproduces the informational conditions of the experiment, while the social learning GA does not. Moreover, it is better suited to study individual behavior, as in a social learning GA
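A minimal Python sketch of the memory-set structure just described follows. The `Agent` class name is hypothetical, and both the uniform initialization of strategies over $[0, 50]$ and the uniform random choice among available strategies are assumptions, since the text does not specify either.

```python
import random

class Agent:
    """Individual-learning GA agent with a fixed-size memory set."""

    def __init__(self, K, rng=None):
        self.rng = rng or random.Random()
        # Memory set A_it: K strategies (effort levels in [0, 50]);
        # K is constant over time and exogenously given. Uniform
        # initialization is an assumption, not from the paper.
        self.memory_set = [self.rng.uniform(0.0, 50.0) for _ in range(K)]

    def choose_strategy(self):
        # Only strategies currently in the memory set are available
        # for play; a uniform random choice among them is assumed.
        return self.rng.choice(self.memory_set)
```

Each period an agent would play `choose_strategy()` and then update its memory set with the pairwise tournament step sketched above.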


