1) = 0. Hence, this deviation reduces B's payoff by $b'(j)$ compared to $b(j) = 0$. A deviation $b'(j) > 0$ at $j > j_0$ does not change the state in $t + 1$ compared to $b(j) = 0$ in the candidate equilibrium, due to the tiebreaking rule employed. The deviation reduces B's payoff by $b'(j)$ compared to $b(j) = 0$. An equivalent logic applies for $a(j)$ at states $j \notin \{j_0 - 1, j_0\}$.
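To spell out the payoff comparison behind these two cases, a sketch using the continuation values of the candidate equilibrium (B's continuation value is zero at every state below $j_0$, and at every state above $j_0$ B wins the component contest with a bid of zero by the tiebreaking rule):
$$
j < j_0 - 1:\quad \delta\,\underbrace{u_B(j+1)}_{=\,0} - b'(j) \;<\; 0 \;=\; u_B(j),
\qquad\qquad
j > j_0:\quad \delta\,u_B(j+1) - b'(j) \;<\; \delta\,u_B(j+1).
$$
In the first case the deviation wins the component contest but only reaches a state that is worth nothing to B; in the second, it reaches the same state as $b(j) = 0$ at a strictly positive cost.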
Turn now to the state $j_0$. In the candidate equilibrium, in state $j_0$ contestant A randomizes on the support $[0, \frac{\delta}{1-\delta^2}[\delta^{j_0-1}Z_A - \delta^{m-(j_0-1)}Z_B]]$. All actions in the equilibrium support for A at $j_0$ yield the same expected payoff equal to $G_{j_0}(x)\,\delta u_A(j_0-1) + (1 - G_{j_0}(x))\cdot 0 - x = 0$. A possible one-stage deviation for A at $j_0$ is an $a'(j_0) > \frac{\delta}{1-\delta^2}[\delta^{j_0-1}Z_A - \delta^{m-(j_0-1)}Z_B]$. Compared to the action $a(j_0) = \frac{\delta}{1-\delta^2}[\delta^{j_0-1}Z_A - \delta^{m-(j_0-1)}Z_B]$ that is inside A's equilibrium support, this also leads to state $j_0 - 1$, but costs the additional amount $a'(j_0) - a(j_0) > 0$. The deviation is therefore not profitable for A. The same type of argument applies for $b(j_0)$.
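The zero-payoff condition above also pins down $G_{j_0}$, interpreted as in that condition as the distribution of B's local bid at $j_0$, wherever the condition holds; as a sketch:
$$
G_{j_0}(x)\,\delta u_A(j_0-1) - x = 0
\quad\Longrightarrow\quad
G_{j_0}(x) = \frac{x}{\delta\,u_A(j_0-1)} = \frac{(1-\delta^2)\,x}{\delta\left[\delta^{j_0-1}Z_A - \delta^{m-(j_0-1)}Z_B\right]}.
$$
Since $G_{j_0}$ reaches one exactly at the upper bound of A's support, a bid above that bound wins the component contest with probability one but yields $\delta u_A(j_0-1) - a'(j_0) < 0$, which restates why the deviation is unprofitable.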
A similar argument applies to the state $j_0 - 1$. In the candidate equilibrium, in state $j_0 - 1$ contestant A randomizes on the support $[0, \frac{\delta}{1-\delta^2}[\delta^{m-j_0}Z_B - \delta^{j_0}Z_A]]$. All actions in the equilibrium support for A at $j_0 - 1$ yield the same expected payoff equal to $G_{j_0-1}(x)\,\delta u_A(j_0-2) + (1 - G_{j_0-1}(x))\cdot 0 - x = \frac{1}{1-\delta^2}[\delta^{j_0-1}Z_A - \delta^{m-(j_0-1)}Z_B] = u_A(j_0-1)$.¹¹ A possible one-stage deviation for A at $j_0 - 1$ is an $a'(j_0-1) > \frac{\delta}{1-\delta^2}[\delta^{m-j_0}Z_B - \delta^{j_0}Z_A]$. Compared to the action $a(j_0-1) = \frac{\delta}{1-\delta^2}[\delta^{m-j_0}Z_B - \delta^{j_0}Z_A]$ that is the upper bound of A's equilibrium support, this also leads to state $j_0 - 2$, but costs the additional amount $a'(j_0-1) - a(j_0-1) > 0$. The deviation is not profitable for A. The same type of argument applies for $b(j_0-1)$. ■
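For completeness, the continuation values used in these two steps can be recovered jointly; a sketch, assuming the standard complete-information all-pay auction payoffs at $j_0$ and $j_0 - 1$ (the player with the larger effective prize earns the difference of the two prizes, the other earns zero), with effective prizes $\delta u_A(j_0-1)$ and $\delta^{m-j_0}Z_B$ at $j_0$, and $\delta^{j_0-1}Z_A$ and $\delta u_B(j_0)$ at $j_0 - 1$:
$$
u_B(j_0) = \delta^{m-j_0}Z_B - \delta\,u_A(j_0-1), \qquad
u_A(j_0-1) = \delta^{j_0-1}Z_A - \delta\,u_B(j_0).
$$
Solving this pair gives
$$
u_A(j_0-1) = \frac{1}{1-\delta^2}\left[\delta^{j_0-1}Z_A - \delta^{m-(j_0-1)}Z_B\right], \qquad
u_B(j_0) = \frac{1}{1-\delta^2}\left[\delta^{m-j_0}Z_B - \delta^{j_0}Z_A\right],
$$
which are exactly the expressions appearing in the bid supports and payoffs above.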
Intuitively, outside of the states $j_0 - 1$ and $j_0$, one of the players is indifferent between winning and losing the component contest. For instance, in the state $j_0 - 2$, the best that player B could achieve by winning the next component contest is to enter the state $j_0 - 1$, at which B's continuation value is still zero and smaller than player A's continuation value. As B does not gain anything from reaching $j_0 - 1$, B should not spend any effort trying to reach this state. But if B does not spend effort to win, it is easy for A to win.
The states $j_0 - 1$ and $j_0$ are different. Battle victory or defeat at one of
¹¹ More formally, all actions in the support of A's equilibrium local strategy that are not mass points of B's local strategy yield the same expected payoff. Since B has a mass point at zero, this does not hold at $a = 0$, but for every $a$ in a neighborhood above zero.