Performances Fig. 1 Average SNR of the algorithms vs. sparsity, expressed as a percentage of the number $n$ of equations. The compared algorithms SL0, LARS, LASSO, MP, OMP and STOMP are among the most representative in the area.

Errors Fig. 2 Percentage of nonzero atoms correctly recovered. An atom is considered correctly reconstructed if the estimated value deviates from the true value by less than 5%.
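Assuming the deviation is measured relatively (our reading of the criterion), an atom $\alpha_i \neq 0$ counts as correctly recovered when $|\hat{\alpha}_i - \alpha_i| / |\alpha_i| < 0.05$, where $\hat{\alpha}_i$ denotes the estimated value.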

Times Fig. 3 Average computational time of the algorithms vs. sparsity, expressed as a percentage of the number $n$ of equations.

Performances Fig. 4 Each point on the grid shows the cumulative mean square error (MSE) between the original and reconstructed signals. The grid of $\delta\times\rho$ values is built with $\delta$ ranging over the interval [.01, .5] and $\rho$ ranging over [.01, 1], where $\rho=k/n$, $\delta=n/m$ and $n\times m$ is the dictionary size.
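For instance (with hypothetical sizes), a dictionary with $n=100$ equations and $m=400$ atoms gives $\delta = n/m = 0.25$, and a sparsity level $k=25$ gives $\rho = k/n = 0.25$, so such an experiment occupies a single point of the grid.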

Description

We provide a fast iterative method for finding sparse solutions to underdetermined linear systems. It is based on a fixed-point iteration scheme which combines nonconvex Lipschitzian-type mappings with canonical orthogonal projectors. The former are aimed at uniformly enhancing the sparseness level through shrinkage effects; the latter are used to project back onto the space of feasible solutions. The iterative process is driven by an increasing sequence of values of a scalar parameter that mainly drives the iterates toward the sparsest solutions.
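Concretely, denoting by $\Phi^{+}$ the Moore-Penrose pseudoinverse of $\Phi$, the orthogonal projector onto $\ker\Phi$ admits the closed form \[ \text{Proj}_{\ker \Phi} = I - \Phi^{+}\Phi, \] and since $\Phi\Phi^{+}\Phi = \Phi$, every update of the form $\alpha_{t+1} = \alpha_t - \text{Proj}_{\ker\Phi}[\beta]$ preserves feasibility: $\Phi\alpha_{t+1} = \Phi\alpha_t - (\Phi - \Phi\Phi^{+}\Phi)\beta = \Phi\alpha_t = s$.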

Details

We developed two new heuristics (called LiMapS and $k$-LiMapS respectively) for the classical sparse recovery problem \[ \min_{\alpha} || \alpha ||_{0} \quad \mbox{such that}\quad s=\Phi\alpha.\] They rely on a parametric class $\left\{G_\lambda : \lambda \geq 0 \right\}$ of nonlinear mappings which turn out to be useful for solving the sparsity minimization problem above via the Picard iterative scheme: \begin{equation} \alpha_{t+1} = G_{\lambda_t}(\alpha_{t}):=\alpha_t - \text{Proj}_{\ker \Phi} [\alpha_t\odot e^{-\lambda_t|\alpha_t|}]. \end{equation} We show that, given a sequence $\{\lambda_t\}_{t\geq 0}$, the iterative scheme converges when $\sum_{t} \frac{1}{\lambda_t}<+\infty$. To study this minimization problem, we first introduce a smooth relaxed version, approximating the original problem as $\lambda\to+\infty$: \begin{equation} \min_{\alpha} || \alpha ||_{\langle \lambda \rangle} \quad \mbox{such that}\quad s=\Phi\alpha, \end{equation} where the family $\{ || \cdot ||_{\langle\lambda \rangle}:\lambda >0\}$ is a collection of suitable pseudo-norms. We find that, under reasonable assumptions, the local minima of the relaxed problem are asymptotically stable fixed points of the Picard scheme $G_\lambda$. We also give some evidence that, for sufficiently large $\lambda$, the sparsest solution is ``close'' to a fixed point of $G_\lambda$.
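For instance, a geometric sequence (the specific choice of $\lambda_0$ and $\gamma$ is illustrative) meets the summability condition: \[ \lambda_t = \lambda_0\,\gamma^{t} \ \mbox{with}\ \gamma > 1 \quad\Longrightarrow\quad \sum_{t\geq 0}\frac{1}{\lambda_t} = \frac{1}{\lambda_0}\sum_{t\geq 0}\gamma^{-t} = \frac{1}{\lambda_0}\cdot\frac{\gamma}{\gamma-1} < +\infty. \]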

LiMapS Algorithm

This algorithm provides a new regularization method for sparse representation based on the Picard fixed-point iteration scheme above, which combines two Lipschitzian-type mappings: a nonlinear one aimed at uniformly enhancing the sparseness level of a candidate solution, and a linear one which projects back onto the feasible space of solutions. It is shown that this strategy locally minimizes a problem whose objective function falls into the class of $\ell_p$-norms and represents an effective approximation of the $\ell_0$-norm in the intractable sparsity minimization problem. Numerical experiments on randomly generated signals using classical stochastic models show that the proposed technique outperforms a wide collection of well-known algorithms for sparse representation, as shown in Figures 1-3.
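As a minimal MATLAB sketch of the Picard iteration above (assuming a geometric $\lambda_t$ sequence and a fixed iteration budget; function and parameter names are illustrative, and this is not the reference implementation available in the Software section):

function alpha = limaps_sketch(Phi, s, lambda0, gamma, T)
% Sketch of the LiMapS fixed-point iteration (illustrative, not the reference code).
% Phi: n-by-m dictionary, s: n-by-1 signal, lambda0 > 0, gamma > 1, T: iterations.
PhiPinv = pinv(Phi);                 % Moore-Penrose pseudoinverse of Phi
alpha = PhiPinv * s;                 % feasible starting point: Phi*alpha = s
lambda = lambda0;
for t = 1:T
    beta = alpha .* exp(-lambda * abs(alpha));        % shrinkage term alpha .* e^(-lambda_t*|alpha|)
    alpha = alpha - (beta - PhiPinv * (Phi * beta));  % subtract Proj_{ker Phi}(beta) = (I - Phi^+ Phi)*beta
    lambda = gamma * lambda;                          % increasing lambda_t, so sum 1/lambda_t < inf
end
end

Each iterate stays feasible by construction, and the growing $\lambda$ progressively strengthens the shrinkage, steering the scheme toward sparser solutions.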

The MATLAB implementation of the LiMapS algorithm can be downloaded from the Software section.

$k$-LiMapS Algorithm

Analogously to the well-known greedy strategy called Orthogonal Matching Pursuit (OMP), $k$-LiMapS is a new algorithm for solving the sparse approximation problem over redundant dictionaries, where the input signal is restricted to be a linear combination of at most $k$ atoms of a fixed dictionary. The basic strategy of the method relies on a family of nonlinear mappings which turn out to be contractive in an interval close to zero. By iterating contractions and projections, the method is able to extract the most significant components even from a noisy signal that subsumes an ideal underlying signal with a sufficiently sparse representation. For reasonable error levels, the fixed-point solution of such an iterative scheme provides a sparse approximation containing only the nonzero terms characterizing the unique sparsest representation of the ideal noiseless sparse signal. The heuristic method so derived has been applied to both synthetic and real data, generated by combining exact signals drawn from the usual Bernoulli-Gaussian model with Gaussian noise. Figure 4 depicts the mean square errors of both $k$-LiMapS and OMP over these data.
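The following MATLAB sketch conveys the flavor of such a $k$-sparse scheme under simplifying assumptions: the contractive mapping $\alpha \odot (1 - e^{-\lambda|\alpha|})$ is followed by a hard selection of the $k$ largest components and a least-squares refit on the selected support. The selection and refit steps are our illustrative reading of the method, not the published $k$-LiMapS pseudocode (see reference 1).

function alpha = klimaps_sketch(Phi, s, k, lambda0, gamma, T)
% Illustrative k-sparse variant (assumed structure, not the published k-LiMapS code).
% Phi: n-by-m dictionary, s: n-by-1 signal, k: target sparsity level.
PhiPinv = pinv(Phi);
alpha = PhiPinv * s;                                  % least-squares starting point
lambda = lambda0;
for t = 1:T
    beta = alpha .* (1 - exp(-lambda * abs(alpha)));  % contractive shrinkage near zero
    [~, idx] = sort(abs(beta), 'descend');
    supp = idx(1:k);                                  % keep the k most significant atoms
    alpha = zeros(size(alpha));
    alpha(supp) = Phi(:, supp) \ s;                   % least-squares fit on the support
    lambda = gamma * lambda;                          % annealing-like increase of lambda
end
end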

The annealing-like behavior exhibited by $k$-LiMapS, which drives the unneeded coefficients toward zero already within the first iterations (15 and 30 shown) for a 10-sparse signal, is illustrated in the following plot:

ECG Performances

The MATLAB implementation of the $k$-LiMapS algorithm can be downloaded from the Software section.

References

1) A. Adamo and G. Grossi. A fixed-point iterative schema for error minimization in k-sparse decomposition. In Proc. IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pages 167-172. IEEE, 2011.
2) A. Adamo and G. Grossi. Sparsity recovery by iterative orthogonal projections of nonlinear mappings. In Proc. IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pages 173-178. IEEE, 2011.