ii) If a > 1, then M1 lies below the horizontal line associated with κ = 1, east of M0. For any
η ∈ (0, 1], the eigenvalues z2, z3 are either complex conjugates with modulus greater than 1, or real with
identical sign in the intervals (−∞, −1) or (1, ∞). Since z1 = 1 − η < 1, (k, w, r) is a saddle (see
figure A8). ■
Proposition 11 If 1 < G1 < 1 + a and a < 1, then the set of constant gain parameters η associated
with local stability is larger for h = 0 than for h = 1.
Proof. If 1 < G1 < 1 + a and a < 1, local stability under constant gain learning requires η ∈ (η̂H, 1] for
h = 0, and η ∈ (ηH, 1] for h = 1, where η̂H = −(1 − G1)/G1 and ηH = (1 − G1)/(a − G1). Under the above
conditions on the parameters G1, a, it is straightforward to show that η̂H < ηH (see figure A7). ■
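The ordering of the two thresholds can be illustrated numerically; a minimal sketch using the formulas stated in the proof, with a parameter grid chosen purely for illustration:

```python
# Illustrative check of the threshold ordering in Proposition 11.
# Formulas taken from the proof: eta_hat_H = -(1 - G1)/G1 for h = 0,
# eta_H = (1 - G1)/(a - G1) for h = 1, under 1 < G1 < 1 + a and a < 1.
# The (G1, a) grid below is my choice, for illustration only.

def eta_hat_H(G1):
    return -(1.0 - G1) / G1          # stability threshold for h = 0

def eta_H(G1, a):
    return (1.0 - G1) / (a - G1)     # stability threshold for h = 1

for G1 in (1.1, 1.3, 1.5):
    for a in (0.6, 0.8, 0.99):
        if 1 < G1 < 1 + a and a < 1:  # conditions of Proposition 11
            # the h = 0 stability interval (eta_hat_H, 1] is the larger one
            assert eta_hat_H(G1) < eta_H(G1, a)
print("eta_hat_H < eta_H on all admissible grid points")
```

Since the lower endpoint for h = 0 is smaller, the interval (η̂H, 1] strictly contains (ηH, 1] on these grid points, matching the proposition.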
Proposition 12 If 1 < G1 < 1 + a and a < G1 + χ, local instability for any η ∈ (0, 1] under constant
gain learning when h = 1 is compatible with local stability for some values of the constant gain parameter η when
h = 0. The reverse, however, is not true.
Proof. If 1 < G1 < 1 + a, then local instability for any η ∈ (0, 1] under constant gain learning when
h = 1 requires a > 1, according to proposition 10 ii). If a < G1 + χ, where χ > 1, then the steady state
under constant gain learning when h = 0 is locally stable for η ∈ (ηH, ηF), according to proposition 5.
The reverse is not true, since local instability for any η ∈ (0, 1] under constant gain learning when
h = 0 requires G1 + χ < a, according to proposition 5 iii), which violates the condition a < 1 of proposition
10 ii) for local stability under constant gain learning when h = 1 (see figures A6, A8, A9). ■
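The asymmetry in Proposition 12 can be checked at the level of the parameter inequalities alone; a minimal sketch (the specific values of G1, a, χ are my choice, for illustration only, and the full eigenvalue analysis is not reproduced):

```python
# Illustrative check of the parameter logic in Proposition 12,
# using only the inequality conditions cited in the proof.

def h1_unstable_all_eta(G1, a):
    # Prop. 10 ii): with 1 < G1 < 1 + a, instability of h = 1 for
    # every eta in (0, 1] requires a > 1.
    return 1 < G1 < 1 + a and a > 1

def h0_unstable_all_eta(G1, a, chi):
    # Prop. 5 iii): instability of h = 0 for every eta in (0, 1]
    # requires G1 + chi < a.
    return G1 + chi < a

G1, a, chi = 1.5, 2.0, 1.2   # illustrative values with chi > 1
# Forward direction: h = 1 is unstable for every eta, yet a < G1 + chi,
# so the h = 0 instability condition fails and h = 0 may still be stable.
assert h1_unstable_all_eta(G1, a) and not h0_unstable_all_eta(G1, a, chi)
# Reverse direction: h = 0 instability needs a > G1 + chi, and with
# G1 > 1, chi > 1 this forces a > 2, violating a < 1 for h = 1 stability.
assert h0_unstable_all_eta(1.1, 2.3, 1.1) and 2.3 > 1
print("asymmetry holds for the illustrative parameter values")
```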
Proposition 13 If 1 < G1 < 1 + a and G1 + χ < a, then local instability for any η ∈ (0, 1] under constant
gain learning when h = 0 implies local instability for any η ∈ (0, 1] when h = 1.
Proof. If 1 < G1 < 1 + a, then local instability for any η ∈ (0, 1] under constant gain learning when
h = 0 requires G1 + χ < a, according to proposition 5 iii). Since G1 > 1 and χ > 1, this gives a > 2 > 1,
which is the condition of proposition 10 ii) for local instability under constant gain learning when h = 1
(see figure A10). ■
Proposition 14 Under constant gain learning with h = 1 and G1 ≥ 1 + a, the non-trivial steady
state is locally unstable for any η ∈ (0, 1].
Proof. If G1 ≥ 1 + a, then M0 lies either below the straight line associated with the equation
1 + ε + κ = 0, or at the point (−2 − a, 1 + a) on that line. The
point M1 lies anywhere between the horizontal axis and the straight line associated with the equation
1 + ε + κ = 0. For G1 > 1 + a, the line segment M0M1 lies below the straight line corresponding to the
equation 1 + ε + κ = 0. Therefore the eigenvalues z2, z3 are real with identical sign: one in the interval (0, 1)
and the other in the interval (1, ∞). Since z1 = 1 − η < 1, (k, w, r) is a saddle. For G1 = 1 + a, the
line segment M0M1 coincides with the straight line corresponding to the equation 1 + ε + κ = 0. Therefore
the eigenvalues z2, z3 are real, with one equal to 1 and the other in the interval (1, ∞). Since z1 = 1 − η < 1,
(k, w, r) is a saddle (see figure A11). ■
Remark 3: If G1 ≥ 1 + a, then the non-trivial steady state is locally unstable under constant gain
learning for any η ∈ (0, 1] and h ∈ {0, 1}.
Under the different restrictions imposed on the values of G1 and a when h = 1, figure A13 summarizes
in the (G1, a) plane the local qualitative properties of the model around the non-trivial steady state.
5 Calibration and Simulations
In this section, I calibrate the model to the U.S. economy and illustrate some of the local stability
results presented earlier by simulating the competitive equilibrium trajectories for different parameter
specifications of the utility function and expectation functions.
I consider a Constant Relative Risk Aversion (CRRA) instantaneous utility function: u(ct) = (ct^(1−γ) −
1)/(1 − γ), where γ denotes a coefficient of relative risk aversion for γ ∈ R+ − {1}, with u(ct) = ln ct for
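The CRRA specification is straightforward to implement for the simulations; a minimal sketch (the function name and the numerical tolerance for treating γ as 1 are my choices):

```python
import math

def crra_utility(c, gamma):
    """CRRA instantaneous utility u(c) = (c**(1 - gamma) - 1) / (1 - gamma),
    with the log case u(c) = ln(c) at gamma = 1."""
    if c <= 0:
        raise ValueError("consumption must be positive")
    if abs(gamma - 1.0) < 1e-12:     # log case; tolerance chosen for illustration
        return math.log(c)
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

# The gamma -> 1 limit of the CRRA form recovers log utility:
assert abs(crra_utility(2.0, 1.0 + 1e-8) - math.log(2.0)) < 1e-6
```

Handling γ = 1 explicitly avoids the 0/0 indeterminacy of the general formula while preserving continuity of u in γ.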