Learning real-time MRF inference for image denoising
2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009
Abstract
Many computer vision problems can be formulated in a Bayesian framework with Markov Random Field (MRF) or Conditional Random Field (CRF) priors. Usually, the model assumes that a full Maximum A Posteriori (MAP) estimation will be performed for inference, which can be very slow in practice. In this paper, we argue that an MRF/CRF model can be trained to perform very well with a fast, suboptimal inference algorithm. The model is trained together with the fast inference algorithm by optimizing a loss function on a training set containing pairs of input images and desired outputs. A validation set can be used in this approach to estimate the generalization performance of the trained system. We apply the proposed method to an image denoising application, training a Fields of Experts MRF together with a 1-4 iteration gradient descent inference algorithm. Experimental validation on unseen data shows that the proposed training approach achieves improved benchmark performance as well as a 1000-3000 times speedup compared to the Fields of Experts MRF trained with contrastive divergence. Using the new approach, image denoising can be performed in real-time, at 8 fps on a single CPU for a 256 × 256 image sequence, with close to state-of-the-art accuracy.
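To make the inference step described above concrete, the sketch below shows what a few gradient-descent iterations on a Fields-of-Experts MAP energy with Student-t experts could look like. This is a minimal, hypothetical Python/NumPy illustration, not the trained model or the exact update rule from the paper: the filter bank `filters`, expert weights `weights`, step size, data-term weight, and iteration count are placeholder assumptions.

```python
# Illustrative sketch only: not the trained FoE filters or the paper's exact update.
import numpy as np
from scipy.signal import convolve2d


def foe_gradient_descent_denoise(noisy, filters, weights,
                                 step=0.2, data_weight=0.05, n_iter=4):
    """A few gradient-descent steps on a Fields-of-Experts MAP energy.

    Assumed energy (Student-t experts, following the standard FoE form):
        E(x) = data_weight/2 * ||x - noisy||^2
               + sum_k w_k * sum_i log(1 + 0.5 * (J_k * x)_i^2)
    """
    x = noisy.astype(np.float64).copy()
    for _ in range(n_iter):
        grad = data_weight * (x - noisy)              # data-fidelity term
        for J, w in zip(filters, weights):
            # filter responses J_k * x
            r = convolve2d(x, J, mode='same', boundary='symm')
            # derivative of the log-expert with respect to the response
            psi = w * r / (1.0 + 0.5 * r ** 2)
            # adjoint of the filtering step: convolve with the mirrored kernel
            grad += convolve2d(psi, J[::-1, ::-1], mode='same', boundary='symm')
        x -= step * grad                              # one gradient-descent update
    return x
```

With a small bank of learned filters, each iteration reduces to a handful of 2-D convolutions, which is why a 1-4 iteration scheme of this kind can run at interactive rates on a single CPU.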
References
- Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. ICML, 2003.
- J. Besag. On the statistical analysis of dirty pictures. J. Royal Stat. Soc. B, 48(3):259-302, 1986.
- Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE TPAMI, 23(11):1222-1239, 2001.
- A. Buades, B. Coll, and J.M. Morel. A Non-Local Algorithm for Image Denoising. CVPR, 2005.
- T.F. Cootes, G.J. Edwards, and C.J. Taylor. Active appearance models. TPAMI, 23(6):681-685, 2001.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Img. Proc., 16(8):2080-2095, 2007.
- M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Img. Proc., 15(12), 2006.
- D.E. Goldberg et al. Genetic algorithms in search, optimization, and machine learning. 1989.
- G.E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation, 14(8):1771-1800, 2002.
- S. Kirkpatrick, C.D. Gelatt Jr., and M.P. Vecchi. Optimization by Simulated Annealing. Biology and Computation: A Physicist's Choice, 1994.
- S. Kumar and M. Hebert. Discriminative random fields: a discriminative framework for contextual interaction in classification. ICCV, 2003.
- J.D. Lafferty, A. McCallum, and F.C.N. Pereira. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. ICML, 2001.
- Y. LeCun and F.J. Huang. Loss functions for discriminative training of energy-based models. AIStats, 3, 2005.
- D. Martin, C. Fowlkes, D. Tal, and J. Malik. A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms. ICCV, 2:416-425, 2001.
- A. Pizurica and W. Philips. Estimating the Probability of the Presence of a Signal of Interest in Multiresolution Single- and Multiband Image Denoising. IEEE Trans. Img. Proc., 15(3):654-665, 2006.
- J. Portilla, V. Strela, M.J. Wainwright, and E.P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Img. Proc., 12(11):1338-1351, 2003.
- R.B. Potts. Some generalized order-disorder transitions. Proc. Camb. Phil. Soc., 48:106-109, 1952.
- S. Roth and M.J. Black. Fields of Experts: a framework for learning image priors. CVPR, 2005.
- J. Sun, N.N. Zheng, and H.Y. Shum. Stereo matching using belief propagation. TPAMI, 25:787-800, 2003.
- M.F. Tappen and W.T. Freeman. Comparison of graph cuts with belief propagation for stereo, using identical MRF parameters. ICCV, pages 900-906, 2003.
- M.F. Tappen. Utilizing Variational Optimization to Learn Markov Random Fields. CVPR, pages 1-8, 2007.
- B. Taskar, S. Lacoste-Julien, and M. Jordan. Structured Prediction via the Extragradient Method. NIPS, 18:1345, 2006.
- P.D. Turney. Cost-sensitive classification: Empirical evaluation of a hybrid genetic decision tree induction algorithm. J. of AI Research, 2:369-409, 1995.
- M. J. Wainwright. Estimating the "wrong" graphical model: Benefits in the computation-limited setting. J. Mach. Learn. Res., 7:1829-1859, 2006.
- J. Weickert. A Review of Nonlinear Diffusion Filtering. Int Conf Scale-Space Th. in Comp. Vis., 1997.
- L. Xu, F. Hutter, H.H. Hoos, and K. Leyton-Brown. SATzilla: Portfolio-based Algorithm Selection for SAT. J. of AI Research, 32:565-606, 2008.
- J.S. Yedidia, W.T. Freeman, and Y. Weiss. Generalized belief propagation. NIPS, 13, 2001.
- Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comaniciu. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features. IEEE Trans. Medical Imaging, 27(11):1668-1681, 2008.
- S.K. Zhou and D. Comaniciu. Shape Regression Machine. IPMI, pages 13-25, 2007.