
min ||X||_0 + α||e||_2
such that
Y = UX + e,

(2.3)

where α is a positive constant.

Note that the zero norm, ||·||_0, in Equation 2.3 counts the number of non-zero
elements of the vector. This objective function is not convex, which makes the
optimization problem in Equation 2.3 difficult to solve. Instead, Donoho et al.
showed [10, 15, 24] that one can use the following optimization problem to
approximate the sparse vector X:

min ||X||_1 + α||e||_2
such that
Y = UX + e.

(2.4)
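The program in Equation 2.4 can be solved with standard convex methods. As an illustration only (not the solver used in the cited works), the sketch below applies iterative soft-thresholding (ISTA) to the closely related Lagrangian form min_x λ||x||_1 + (1/2)||y − Ux||_2², with hypothetical dimensions and a small λ:

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(U, y, lam, n_iter=5000):
    """ISTA for min_x lam*||x||_1 + 0.5*||y - U x||_2^2."""
    L = np.linalg.norm(U, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(U.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + U.T @ (y - U @ x) / L, lam / L)
    return x

# Toy demo: recover a 2-sparse vector from 15 Gaussian measurements.
rng = np.random.default_rng(0)
N, M = 20, 15
U = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[[3, 11]] = [1.0, -2.0]
y = U @ x_true
x_hat = ista(U, y, lam=1e-3)
```

With noiseless measurements and a small λ, the minimizer is close to the ℓ1-sparsest solution, so the support and values of x_true are recovered to high accuracy.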


They proved that, for a Gaussian measurement matrix, an s-sparse vector can
be retrieved via ℓ1-norm optimization if

s < C K / log(N/K),

where K is the number of measurements and C is a constant. Moreover, for a
general measurement matrix U, the Restricted Isometry Property (RIP) should be
satisfied [15].
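The RIP asks that U act as a near-isometry on all s-sparse vectors; verifying it exactly is intractable, but a quick Monte Carlo sketch (illustrative only, with assumed dimensions, not part of the cited results) shows how ||Ux||/||x|| concentrates near 1 for random s-sparse vectors when U is Gaussian with variance 1/M:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, s = 200, 80, 5
U = rng.standard_normal((M, N)) / np.sqrt(M)   # scaling gives E||Ux||^2 = ||x||^2

ratios = []
for _ in range(1000):
    support = rng.choice(N, size=s, replace=False)  # random s-element support
    x = np.zeros(N)
    x[support] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(U @ x) / np.linalg.norm(x))

ratios = np.array(ratios)
# A (lower-bound) estimate of the restricted isometry constant delta_s:
delta = max(1.0 - ratios.min() ** 2, ratios.max() ** 2 - 1.0)
```

Sampling random supports only lower-bounds the true restricted isometry constant, since the RIP is a worst-case statement over all supports, but the tight concentration of the ratios around 1 is exactly why Gaussian matrices satisfy the RIP with high probability.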

Most real-world vectors have an approximately sparse representation. A vector
X_{N×1} is called approximately s-sparse if it has s large elements and
N − s very small elements. It has also been shown that the optimization problem
in Equation 2.4 can be used to recover approximately sparse vectors that lie in
a weak ℓp ball of radius r [15], i.e.,

|x|_(i) ≤ r i^(-1/p),  1 ≤ i ≤ N,

(2.5)

where |x|_(i) denotes the i-th largest element of X in magnitude.
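To see why the weak ℓp decay condition in Equation 2.5 makes a vector compressible, the sketch below (illustrative, with assumed values of r, p, and N) builds a vector whose sorted magnitudes obey Equation 2.5 with equality and measures the relative error of keeping only its s largest entries:

```python
import numpy as np

r, p, N = 1.0, 0.5, 1000
i = np.arange(1, N + 1)
mags = r * i ** (-1.0 / p)                 # sorted magnitudes: |x|_(i) = r * i^(-1/p)
rng = np.random.default_rng(2)
x = rng.permutation(mags) * rng.choice([-1.0, 1.0], size=N)  # scatter and sign them

def best_s_term_error(x, s):
    """Relative l2 error after keeping only the s largest-magnitude entries."""
    xs = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    xs[idx] = x[idx]
    return np.linalg.norm(x - xs) / np.linalg.norm(x)

errors = [best_s_term_error(x, s) for s in (5, 20, 80)]
```

Because the magnitudes decay polynomially, the best s-term approximation error falls off rapidly with s, which is the sense in which such vectors behave like sparse ones for the recovery program in Equation 2.4.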



