Can genetic algorithms explain experimental anomalies? An application to common property resources



reinforced. Strategies that perform well over time gradually replace poorly performing ones.
The most common reinforcement rules in the GA literature are pairwise tournament and
biased roulette wheel. We have adopted a pairwise tournament for two reasons. First, it is
ordinal, in the sense that the probabilities are based only on “greater than” comparisons
among strategy payoffs and the absolute magnitude of payoffs is not important for the
reinforcement probability. Being ordinal it does not rely on a “biological” interpretation of
the score as a perfect measure of the relative advantage of one strategy over another. As a
consequence, the simulation results are robust to any strictly increasing payoff
transformation. Second, while in a biased roulette wheel the payoffs must be positive, that is not the case for a pairwise tournament. The reinforcement operates by (1) randomly drawing with replacement two strategies, a_ikt and a_iqt, from the population A_it, and (2) keeping for the following interaction only the strategy with the higher payoff in the pair: a*_it = argmax{π(a_ikt), π(a_iqt)}. After each period, these two steps are repeated K times, where K is the population size.
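The pairwise tournament described above can be sketched in a few lines of Python. The payoff function used here is a purely illustrative quadratic in effort, not the one from the paper, and the population values are arbitrary:

```python
import random

def pairwise_tournament(memory, payoff, rng=None):
    """One reinforcement step: repeat K times (K = population size):
    draw two strategies with replacement, keep the one with the
    higher payoff. Only 'greater than' comparisons are used, so the
    rule is ordinal and payoffs may be negative."""
    rng = rng or random.Random(0)
    new_memory = []
    for _ in range(len(memory)):
        a, b = rng.choice(memory), rng.choice(memory)
        new_memory.append(a if payoff(a) >= payoff(b) else b)
    return new_memory

# Illustrative payoff: quadratic in appropriation effort (hypothetical)
payoff = lambda e: e * (50 - e)
memory = [5.0, 10.0, 25.0, 40.0, 49.0]
updated = pairwise_tournament(memory, payoff)
```

Because selection is based only on pairwise comparisons, applying any strictly increasing transformation to `payoff` leaves the result unchanged, which is the robustness property noted above.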

Simulations are run with an individual learning GA, which is discussed in the remainder of this section. When agents do not consider just one strategy in each period, but have a finite collection of strategies from which one is chosen every period (a memory set), the process is called a multi-population GA (Riechmann, 1999; Vriend, 2000; Arifovic and Ledyard, 2000). A strategy is a real number a_ikt ∈ [0,50] that represents the appropriating effort level of agent i in period t. Each agent is endowed with an individual memory set A_it = {a_i1t, ..., a_iKt} composed of a number of strategies K that is constant over time and exogenously given. If a strategy a_ikt is in the memory set, i.e. it is available, agent i can choose it for play at time t. The individual learning GA was adopted here because it reproduces the informational conditions of the experiment while the social learning GA does not. Moreover, it is better suited to studying individual behavior, as in a social learning GA
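The memory-set representation described above can be sketched as follows. The class name, the choice of K, and the uniform initialization are illustrative assumptions; the paper only specifies that each agent holds K real-valued strategies in [0, 50], fixed in number over time:

```python
import random

class Agent:
    """Hypothetical sketch of an individual-learning agent with a
    memory set A_it of K appropriation-effort strategies in [0, 50]."""

    def __init__(self, K, rng=None):
        self.rng = rng or random.Random()
        # Memory set: K strategies, size constant and exogenously given
        self.memory = [self.rng.uniform(0, 50) for _ in range(K)]

    def choose(self):
        # Only strategies currently in the memory set are available for play
        return self.rng.choice(self.memory)

agent = Agent(K=20, rng=random.Random(1))
effort = agent.choose()
```

Because each agent draws only from its own memory set, learning here is individual: strategies never migrate between agents' populations.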


