On sparse signal representations
2001, Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205)
https://doi.org/10.1109/ICIP.2001.958936
4 pages
Abstract
An elementary proof of a basic uncertainty principle concerning pairs of representations of R^N vectors in different orthonormal bases is provided. The result, slightly stronger than stated before, has a direct impact on the uniqueness property of the sparse representation of such vectors using pairs of orthonormal bases as overcomplete dictionaries. The main contribution in this paper is the improvement of an important result due to Donoho and Huo concerning the replacement of the ℓ0 optimization problem by a linear programming minimization when searching for the unique sparse representation.
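The replacement the abstract describes — trading the combinatorial ℓ0 problem for a linear program — can be sketched numerically. The snippet below is an illustrative reconstruction, not code from the paper; the identity/Hadamard pair is an assumed example of two mutually incoherent orthonormal bases.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.optimize import linprog

def basis_pursuit(A, x):
    """min ||g||_1 subject to A g = x, posed as a linear program.

    Split g = u - v with u, v >= 0, so that ||g||_1 = sum(u) + sum(v).
    """
    m = A.shape[1]
    c = np.ones(2 * m)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])             # equality constraint: A (u - v) = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=(0, None))
    return res.x[:m] - res.x[m:]

# Overcomplete dictionary: identity basis next to a normalized Hadamard basis
n = 8
A = np.hstack([np.eye(n), hadamard(n) / np.sqrt(n)])

# A vector that is 2-sparse over the pair of bases (one atom from each)
g_true = np.zeros(2 * n)
g_true[1], g_true[n + 3] = 1.0, -0.5
x = A @ g_true

g_hat = basis_pursuit(A, x)   # the LP recovers g_true when it is sparse enough
```

Here the mutual coherence is M = 1/sqrt(8), so the example's two nonzeros exceed the older (1 + 1/M)/2 ≈ 1.91 threshold but fall within the improved (sqrt(2) - 1/2)/M ≈ 2.59 range this line of work establishes, and the LP still recovers them.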
Related papers
Proceedings of the National Academy of Sciences, 2003
Given a dictionary D = {d_k} of vectors d_k, we seek to represent a signal S as a linear combination S = Σ_k γ(k) d_k, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases and has shown that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex optimization problem: specifically, minimizing the ℓ1 norm of the coefficients γ. In this article, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We sketch three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of overcomplete independent component models.
2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004
We consider the problem of enforcing a sparsity prior in underdetermined linear problems, which is also known as sparse signal representation in overcomplete bases. The problem is combinatorial in nature, and a direct approach is computationally intractable even for moderate data sizes. A number of approximations have been considered in the literature, including stepwise regression, matching pursuit and its variants, and recently, basis pursuit (ℓ1) and also ℓp-norm relaxations with p < 1. Although the exact notion of sparsity (expressed by an ℓ0-norm) is replaced by ℓ1 and ℓp norms in the latter two, it can be shown that under some conditions these relaxations solve the original problem exactly. The seminal paper of Donoho and Huo establishes this fact for ℓ1 (basis pursuit) for a special case where the linear operator is composed of an orthogonal pair. In this paper, we extend their results to a general underdetermined linear operator. Furthermore, we derive conditions for the equivalence of ℓ0 and ℓp problems, and extend the results to the problem of enforcing sparsity with respect to a transformation (which includes total variation priors as a special case). Finally, we describe an interesting result relating the sign patterns of solutions to the question of ℓ1-ℓ0 equivalence.
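For intuition on the ℓp relaxation with p < 1, one classical attack is iteratively reweighted least squares (a FOCUSS-style iteration, named here for illustration; the abstract above does not prescribe a specific solver). The dictionary and signal below are assumed toy data, not material from the paper.

```python
import numpy as np

def irls_lp(A, x, p=0.5, n_iter=50, eps=1e-8):
    """FOCUSS-style IRLS sketch for min ||s||_p^p subject to A s = x, p < 1.

    Each iterate solves a weighted minimum-norm problem, so A s = x holds
    throughout; the weights progressively concentrate energy on few entries.
    """
    s = np.linalg.pinv(A) @ x                      # least-norm initialization
    for _ in range(n_iter):
        W = np.diag(np.abs(s) ** (2 - p) + eps)    # reweighting (eps avoids singularity)
        s = W @ A.T @ np.linalg.solve(A @W @ A.T, x)
    return s

# Assumed toy problem: random 10x30 dictionary, 3-sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))
s_true = np.zeros(30)
s_true[[4, 17, 25]] = [1.0, -2.0, 0.5]
x = A @ s_true

s_hat = irls_lp(A, x)   # stays feasible and drives the l_p objective down
```

The iteration keeps every iterate on the constraint set A s = x and typically reduces the ℓp objective far below that of the minimum-ℓ2-norm starting point.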
In this paper, the application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) to signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, like coding and denoising. On the other hand, discriminative methods, such as linear discriminant analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals because they lack the properties crucial for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of discriminative methods with the reconstruction property and the sparsity of the sparse representation, which enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals.
IEEE Transactions on Signal Processing, 2003
Certain sparse signal reconstruction problems have been shown to have unique solutions when the signal is known to have an exact sparse representation. This result is extended to provide bounds on the reconstruction error when the signal has been corrupted by noise, or is not exactly sparse for some other reason. Uniqueness is found to be extremely unstable for a number of common dictionaries.
IEEE Transactions on Signal Processing, 2006
This report extends to the case of sparse approximations our previous study on the effects of introducing a priori knowledge to solve the recovery of sparse representations when overcomplete dictionaries are used. Greedy algorithms and Basis Pursuit Denoising are considered in this work. Theoretical results show how the use of "reliable" a priori information (which in this work appears in the form of weights) can improve the performance of these methods. In particular, we generalize the sufficient conditions established by Tropp [2], [3] and Gribonval and Vandergheynst [4], which guarantee retrieval of the sparsest solution, to the case where a priori information is used. We prove how the use of prior models at the signal decomposition stage influences these sufficient conditions. The results found in this work reduce to the classical case of [4] and [3] when no a priori information about the signal is available.
IEEE Transactions on Signal Processing, 2005
When the dictionary is an overcomplete full-rank matrix, an infinite number of solutions is available for the representation problem, so constraints on the solution must be set. The solution with the fewest nonzero coefficients is certainly an appealing representation, and finding this sparsest representation is in general a combinatorial optimization problem.
In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
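The alternating structure described in the abstract — sparse coding with a pursuit method, then SVD-based atom updates — can be sketched compactly. The following is a simplified illustration with OMP as the pursuit stage and assumed synthetic training data, not the authors' reference implementation.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick k atoms, least-squares refit."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    g = np.zeros(D.shape[1])
    g[support] = coef
    return g

def ksvd(X, n_atoms, k, n_iter=10, seed=0):
    """K-SVD sketch: alternate sparse coding (OMP) and rank-1 atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        G = np.column_stack([omp(D, x, k) for x in X.T])   # sparse codes
        for j in range(n_atoms):
            users = np.flatnonzero(G[j])                   # signals using atom j
            if users.size == 0:
                continue
            # residual with atom j's contribution removed
            E = X[:, users] - D @ G[:, users] + np.outer(D[:, j], G[j, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j], G[j, users] = U[:, 0], S[0] * Vt[0]   # best rank-1 fit
    return D, G

# Assumed synthetic test: 2-sparse signals drawn from a random dictionary
rng = np.random.default_rng(1)
D0 = rng.standard_normal((16, 24))
D0 /= np.linalg.norm(D0, axis=0)
X = np.column_stack([D0[:, rng.choice(24, 2, replace=False)] @ rng.standard_normal(2)
                     for _ in range(200)])
D, G = ksvd(X, n_atoms=24, k=2, n_iter=15)
err = np.linalg.norm(X - D @ G) / np.linalg.norm(X)   # relative training error
```

Note that the atom update rewrites the coefficients only on the atom's current support, which is what lets the dictionary update also refine the sparse codes without breaking the sparsity constraint.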
IEEE Transactions on Signal Processing, 2000
Let x be a signal to be sparsely decomposed over a redundant dictionary A, i.e., a sparse coefficient vector s has to be found such that x = As. It is known that this problem is inherently unstable against noise, and to overcome this instability, the authors of [1] have proposed to use an "approximate" decomposition, that is, a decomposition satisfying ||x - As|| ≤ δ rather than the exact equality x = As. They have then shown that if there is a decomposition with ||s||_0 < (1 + M^-1)/2, where M denotes the coherence of the dictionary, this decomposition would be stable against noise. On the other hand, it is known that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its stability against noise had been proved only for the much more restrictive decompositions satisfying ||s||_0 < (1 + M^-1)/2, because usually (1 + M^-1)/2 ≪ spark(A)/2. This limitation may not have been very important before, because ||s||_0 < (1 + M^-1)/2 is also the bound which guarantees that the sparse decomposition can be found via minimizing the ℓ1 norm, a classic approach to sparse decomposition. However, with the availability of new algorithms for sparse decomposition, namely SL0 and Robust-SL0, it is important to know whether or not unique sparse decompositions with (1 + M^-1)/2 ≤ ||s||_0 < spark(A)/2 are stable. In this paper, we show that such decompositions are indeed stable. In other words, we extend the stability bound from ||s||_0 < (1 + M^-1)/2 to the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that all unique sparse decompositions are stably recoverable. Moreover, we see that sparser decompositions are 'more stable'.
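The gap the abstract refers to — between the coherence bound (1 + M^-1)/2 and the uniqueness bound spark(A)/2 — can be checked numerically on a small dictionary. This is an illustration with an assumed identity/Hadamard pair, not material from the paper.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import hadamard

# Two-orthobasis dictionary: identity and normalized Hadamard, n = 8
n = 8
A = np.hstack([np.eye(n), hadamard(n) / np.sqrt(n)])

# Mutual coherence M: largest |inner product| between distinct unit-norm atoms
gram = np.abs(A.T @ A)
np.fill_diagonal(gram, 0.0)
M = gram.max()                               # equals 1/sqrt(8) for this pair

def spark(A, max_k):
    """Size of the smallest linearly dependent column subset (brute force)."""
    for k in range(1, max_k + 1):
        for idx in combinations(range(A.shape[1]), k):
            if np.linalg.matrix_rank(A[:, list(idx)]) < k:
                return k
    return max_k + 1   # no dependent subset found up to max_k

coherence_bound = (1 + 1 / M) / 2            # about 1.91 here
uniqueness_bound = spark(A, max_k=8) / 2     # spark(A) = 6 here, so 3.0
```

So for this dictionary, uniqueness holds for up to 2 nonzeros, while the coherence-based stability/equivalence bound covers only 1 nonzero — exactly the kind of gap the result above closes.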
IEEE Transactions on Information Theory, 2000
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. We prove the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system.

References (6)
- D. L. Donoho and P. B. Stark (June 1989), "Uncertainty Principles and Signal Recovery," SIAM Journal on Applied Mathematics, Vol. 49, No. 3, pp. 906-931.
- D. L. Donoho and X. Huo (June 1999), "Uncertainty Principles and Ideal Atomic Decomposition," web manuscript.
- S. Mallat (1998), A Wavelet Tour of Signal Processing, Academic Press, Second Edition.
- D. Bertsekas (1995), Nonlinear Programming, Athena Scientific.
- M. Elad and A. M. Bruckstein (June 2001), "A Generalized Uncertainty Principle and Sparse Representation in Pairs of R^N Bases," CIS Report 2001-06, Technion, Israel.