that determine λ. When there is no noise in the measurements, i.e., Axr = b, the
regularization coefficient λ should be set to infinity. As the measurement noise
increases, we should relax the ℓ2-norm constraint or, equivalently, decrease λ. In
addition, sparse vectors call for small λ. When the vector x is known to be strongly
sparse, one should relax the ℓ2-norm constraint (decrease λ) to obtain a very sparse
solution to the problem.
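This trade-off can be illustrated numerically. The sketch below is an assumption of mine, not the author's method: it minimizes J(x) = ||x||_1 + λ||Ax − b||_2^2 with a simple proximal-gradient (ISTA) loop on a synthetic sparse-recovery problem, and reports how many nonzero entries survive for several values of λ. Smaller λ emphasizes the ℓ1 term and yields sparser solutions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (elementwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize ||x||_1 + lam * ||A x - b||_2^2 by proximal gradient (ISTA)."""
    L = 2.0 * lam * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth term
    t = 1.0 / L                                  # step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * lam * A.T @ (A @ x - b)     # gradient of lam * ||Ax - b||_2^2
        x = soft_threshold(x - t * grad, t)      # prox step; threshold equals step size
    return x

# Toy problem (hypothetical sizes): sparse true vector, a few noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_r = np.zeros(100)
x_r[[3, 17, 60]] = [1.5, -2.0, 1.0]
b = A @ x_r + 0.01 * rng.standard_normal(40)

for lam in (0.1, 10.0, 1000.0):
    x_hat = ista(A, b, lam)
    print(lam, np.count_nonzero(np.abs(x_hat) > 1e-3))
```

Running the loop shows the sparsity of the solution shrinking as λ decreases, matching the discussion above.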
Figure 4.5 shows the estimation error for different regularization coefficients λ.
As explained, the estimation error is high for both very small and very large λ.
There is an optimal regularization coefficient λopt at which the variation estimation
error is minimum. Optimizing Equation 4.13 with λ = λopt leads to the minimum
variation estimation error. However, λopt is a function of the measurement matrix,
the measurement noise, and the true variations xr; thus, it is not possible to find
λopt exactly.
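Since λopt has no closed form, a common practical workaround is to sweep λ over a grid and keep the value with the lowest error (on synthetic data where xr is known, or via cross-validation otherwise). A minimal sketch of such a sweep, with a solver, grid, and problem sizes that are my own assumptions:

```python
import numpy as np

def soft(v, tau):
    # Elementwise soft thresholding: prox of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def solve(A, b, lam, n_iter=1000):
    # Proximal-gradient solver for ||x||_1 + lam * ||A x - b||_2^2.
    t = 1.0 / (2.0 * lam * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - t * 2.0 * lam * A.T @ (A @ x - b), t)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 120))
x_r = np.zeros(120)                      # known true variations (synthetic setting)
x_r[[5, 40, 77, 101]] = [2.0, -1.0, 1.5, -2.5]
b = A @ x_r + 0.05 * rng.standard_normal(50)

lams = [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
errs = [np.linalg.norm(solve(A, b, lam) - x_r) for lam in lams]
best = lams[int(np.argmin(errs))]        # empirical stand-in for lambda_opt
print(best, [round(e, 3) for e in errs])
```

The printed error curve dips in the middle of the grid, mirroring the U-shaped behavior described for Figure 4.5.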
Applying the first-order necessary condition to the regularization problem in
Equation 4.13 determines a minimum value for λ. Let

J(x) = ||x||_1 + λ ||Ax − b||_2^2.

The first-order necessary condition for an optimal solution implies ∂J/∂x_i = 0,
i = 1, ..., n. Thus,

∂||x||_1 / ∂x_i = − ∂(λ ||Ax − b||_2^2) / ∂x_i.
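Writing out the two derivatives makes the condition concrete (a sketch; the ℓ1 term is differentiable only where x_i ≠ 0, in which case its derivative is the sign of x_i):

```latex
\frac{\partial \|x\|_1}{\partial x_i} = \operatorname{sign}(x_i),
\qquad
\frac{\partial}{\partial x_i}\,\lambda \|Ax-b\|_2^2 = 2\lambda\, a_i^{T}(Ax-b),
```

where a_i denotes the i-th column of A. Combining the two gives sign(x_i) = −2λ a_i^T(Ax − b) for every nonzero x_i, i.e., |a_i^T(Ax − b)| = 1/(2λ) on the support of x, which is the relation that bounds the admissible values of λ.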