that determine λ. When there is no noise in the measurements, i.e., Ax_r = b, the
regularization coefficient λ should be set to infinity. As the measurement noise
increases, we should relax the ℓ2-norm constraint, or equivalently decrease λ. In
addition, sparse vectors imply a small λ: when the vector x is known to be strongly
sparse, one should relax the ℓ2-norm constraint (decrease λ) to obtain a very
sparse solution to the problem.
Figure 4.5 shows the estimation error for different regularization coefficients λ.
As explained, the estimation error is high for both very small and very large λ.
There is an optimal regularization coefficient λ_opt at which the variation
estimation error is minimum. Optimizing Equation 4.13 with λ = λ_opt leads to the
minimum variation estimation error. However, λ_opt is a function of the measurement
matrix, the measurement noise, and the true variations x_r; thus, it is not possible
to find λ_opt exactly.
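The behavior described above can be illustrated numerically. The sketch below is not the author's experiment behind Figure 4.5; it is a minimal stand-in that assumes Equation 4.13 has the form min_x ||x||_1 + λ||Ax − b||_2^2 and solves it by proximal gradient descent (ISTA) for several λ, so the dependence of the estimation error ||x − x_r||_2 on λ can be observed. All dimensions, the noise level, and the λ grid are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    # Minimize ||x||_1 + lam * ||A x - b||_2^2 by proximal gradient (ISTA).
    # The smooth part g(x) = lam*||Ax - b||^2 has gradient 2*lam*A^T(Ax - b)
    # and Lipschitz constant L = 2*lam*||A^T A||_2 (spectral norm).
    L = 2.0 * lam * np.linalg.norm(A.T @ A, 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * lam * A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, 1.0 / L)
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                         # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # columns roughly unit-norm
x_r = np.zeros(n)                             # true sparse variation vector
x_r[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_r + 0.1 * rng.standard_normal(m)    # noisy measurements

# Estimation error ||x_hat - x_r||_2 for a few regularization coefficients.
errors = {lam: np.linalg.norm(ista(A, b, lam) - x_r)
          for lam in [0.1, 10.0, 1000.0]}
```

With a very small λ the ℓ1 term dominates and the solver returns the all-zero vector, so the error equals ||x_r||_2; an intermediate λ trades off sparsity against data fit and reduces the error, consistent with the U-shaped curve the text attributes to Figure 4.5.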
Applying the first-order necessary condition to the regularization problem in
Equation 4.13 determines a minimum value for λ. Let
J(x) = ||x||_1 + λ||Ax - b||_2^2.
The first-order necessary condition for an optimal solution implies
∂J(x)/∂x_i = 0, i = 1, ..., n. Thus,
∂||x||_1/∂x_i = -∂(λ||Ax - b||_2^2)/∂x_i.
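This condition can be checked concretely in the scalar case. The sketch below is an illustrative reduction (not from the source): it assumes Equation 4.13 has the form J(x) = |x| + λ(ax − b)^2 in one dimension, where the minimizer has the closed soft-thresholding form x* = ST(b/a, 1/(2λa^2)). At a nonzero minimizer the subgradient of |x| (which is sign(x), bounded by 1 in magnitude) must cancel the gradient of the data-fit term, and when λ is too small the threshold exceeds |b/a| and the minimizer collapses to zero, which is why the condition yields a minimum useful value for λ.

```python
import numpy as np

# Scalar instance of the objective: J(x) = |x| + lam*(a*x - b)^2.
# a, b, lam are illustrative values, not from the source.
a, b, lam = 1.0, 2.0, 1.5

# Closed-form minimizer via soft thresholding: x* = ST(b/a, 1/(2*lam*a^2)).
thresh = 1.0 / (2.0 * lam * a**2)
x_star = np.sign(b / a) * max(abs(b / a) - thresh, 0.0)

# First-order condition at a nonzero solution:
# d|x|/dx = sign(x) must equal -d(lam*(a*x - b)^2)/dx = -2*lam*a*(a*x - b).
lhs = np.sign(x_star)
rhs = -2.0 * lam * a * (a * x_star - b)

# For a small lam the threshold exceeds |b/a| and the minimizer is exactly 0:
# since |d|x|/dx| <= 1, the condition cannot hold at a nonzero point.
lam_small = 0.2
x_zero = np.sign(b / a) * max(abs(b / a) - 1.0 / (2.0 * lam_small * a**2), 0.0)
```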