
where A₁₁ is the leading k × k submatrix of A, B₁ consists of the first k rows of B,
and C₁ consists of the first k columns of C.

The reduced state vector ξ ∈ R^k has no apparent physiological meaning (though
see (Freund, 2008) for details about the interpretation of such low-dimensional state
vectors) until it is fed to the output equation for y, but when k ≪ n the computational
savings from this model reduction are substantial. Moreover, in contrast to the brute-force
approach of compartment coarsening, we have not sacrificed the rich input structure.
In §2.5.1 we demonstrate that the system in (2.36) allows for numerically exact (near
machine precision) reproduction of the soma potential computed from the quasi-active
system in a fraction of the time.
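As an illustrative sketch (not the thesis code), truncating a balanced realization to the form in (2.36) amounts to slicing the leading blocks of the system matrices. The toy diagonal system, the function names (truncate, tf_at), and the choice of k below are our own assumptions; states beyond the k-th are given tiny input/output couplings to mimic rapid decay of the Hankel singular values.

```python
import numpy as np

def truncate(A, B, C, k):
    """Keep the leading k x k block of A, the first k rows of B, and
    the first k columns of C (the realization is assumed balanced,
    with states ordered by decreasing importance)."""
    return A[:k, :k], B[:k, :], C[:, :k]

def tf_at(A, B, C, s):
    """Evaluate the transfer function H(s) = C (sI - A)^{-1} B."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B)

# Toy stable SISO system in which states beyond k = 3 are only weakly
# coupled to the input and output.
n, k = 10, 3
A = np.diag(-np.arange(1.0, n + 1))
B = np.ones((n, 1)); B[k:] = 1e-8
C = np.ones((1, n)); C[:, k:] = 1e-8

Ak, Bk, Ck = truncate(A, B, C, k)
# DC-gain mismatch between the full and reduced models:
err = abs(tf_at(A, B, C, 0.0) - tf_at(Ak, Bk, Ck, 0.0))[0, 0]
```

Because the discarded states contribute only O(1e-16) to the transfer function, the 3-state model reproduces the full 10-state DC gain essentially exactly, at a fraction of the simulation cost.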

2.4.2 Iterative Rational Krylov Algorithm

For small systems, Balanced Truncation is a clean, precise, and feasible method,
but as the system dimension grows, the cost of computing the gramians becomes
prohibitive. Since the reduction is performed only after the gramians have been
computed, the method requires storing dense N × N matrices before the reduction
step, so memory is also a concern.
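The cost structure can be seen in a minimal square-root Balanced Truncation sketch. This is our own illustration under stated assumptions, not the thesis implementation: the 6-compartment cable-like test matrix, the helper names (psd_sqrt, balanced_truncation), and the readout placement are all hypothetical; the gramians are obtained with SciPy's dense Lyapunov solver, which is exactly the O(N³)-time, O(N²)-memory step discussed above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def psd_sqrt(M):
    """Factor a symmetric PSD matrix as F F^T via eigendecomposition."""
    lam, V = np.linalg.eigh(M)
    return V * np.sqrt(np.clip(lam, 0.0, None))

def balanced_truncation(A, B, C, k):
    """Square-root Balanced Truncation to order k. The two dense
    Lyapunov solves are the bottleneck: O(N^3) time, O(N^2) storage."""
    # Gramians:  A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Fc, Fo = psd_sqrt(P), psd_sqrt(Q)
    U, sig, Vt = np.linalg.svd(Fo.T @ Fc)  # sig = Hankel singular values
    S = np.diag(sig[:k] ** -0.5)
    W = Fo @ U[:, :k] @ S                  # left projection
    T = Fc @ Vt[:k, :].T @ S               # right projection, W^T T = I
    return W.T @ A @ T, W.T @ B, C @ T, sig

# Toy cable-like system: diffusion on a 6-compartment chain, input at
# one end, "soma" readout at the other.
n, k = 6, 3
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.eye(n, 1)
C = np.zeros((1, n)); C[0, -1] = 1.0

Ak, Bk, Ck, sig = balanced_truncation(A, B, C, k)
```

Even in this tiny example both gramians P and Q are dense N × N matrices; at realistic compartment counts they dominate memory, which motivates the gramian-free iterative approach that follows.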

Some alternative methods for large-scale model reduction of linear systems do exist,
such as sparse Lyapunov solvers, studied extensively in (Sabino, 2006). However,
in the context of neuronal modeling here, these methods do not perform well
because of the large number of possible inputs. More specifically, the eigenvalue decay

