Stable Principal Component Pursuit

2010

https://doi.org/10.1109/ISIT.2010.5513535

Abstract

In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entry-wise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even when a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results supporting this analysis and demonstrating that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result showing that classical Principal Component Analysis (PCA), which is optimal for small i.i.d. noise, can be made robust to gross sparse errors; equivalently, it is the first to show that the newly proposed PCP can be made stable to small entry-wise perturbations.
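For concreteness, the relaxed PCP program referred to in the abstract can be sketched as follows (based on the standard formulation, with notation assumed here rather than taken from the text above): the data matrix decomposes as M = L0 + S0 + Z0, where L0 is low-rank, S0 is sparse, and Z0 is small entry-wise noise with Frobenius norm at most δ. The estimate is obtained by solving the convex program

\min_{L,\,S}\; \|L\|_{*} + \lambda \|S\|_{1} \quad \text{subject to} \quad \|M - L - S\|_{F} \le \delta,

where \|\cdot\|_{*} is the nuclear norm (sum of singular values), \|\cdot\|_{1} is the entry-wise ℓ1 norm, and \|\cdot\|_{F} is the Frobenius norm. In the PCP literature the weight λ is commonly taken as 1/\sqrt{n} for an n × n matrix, and the recovery error bound discussed in the abstract is proportional to the noise level δ.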
