Learning Model Complexity in an Online Environment

2009 Canadian Conference on Computer and Robot Vision

https://doi.org/10.1109/CRV.2009.52

Abstract

In this paper we introduce the concept of, and a method for, adaptively tuning model complexity in an online manner as more examples become available. Challenging classification problems in the visual domain (such as recognizing handwriting, faces, and human-body images) often require a large number of training examples, which may become available only over a long training period. This motivates the development of scalable and adaptive systems that are able to continue learning at any stage and that can efficiently learn from large amounts of data in an online manner. Previous approaches to online learning in visual classification have used a fixed parametric model and focused on continuously improving the model parameters as more data become available. Here we propose a new framework that enables online learning algorithms to adjust the complexity of the learned model to the amount of training data as more examples become available. Since in online learning the training set expands over time, it is natural to allow the learned model to become more complex during the course of learning, instead of confining the model to a fixed family of bounded complexity. Formally, we use a set of parametric classifiers y…
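To make the idea concrete, below is a minimal sketch (not the paper's algorithm) of an online learner that grows its model complexity as examples arrive. All names and choices here are illustrative assumptions: "complexity" is taken to be the degree of a polynomial feature map, the base learner is a logistic regressor trained by SGD, and a shadow model one degree higher is promoted whenever its prequential (online) error over a recent window is reliably lower.

```python
# Illustrative sketch of online learning with growing model complexity.
# Assumptions (not from the paper): complexity = polynomial feature degree,
# base learner = logistic regression trained by SGD, and a "shadow" model
# one degree higher is promoted when it wins on a recent error window.
import numpy as np

def poly_features(x, degree):
    """Map a scalar input to [1, x, x^2, ..., x^degree]."""
    return np.array([x ** d for d in range(degree + 1)])

class GrowingOnlineClassifier:
    def __init__(self, max_degree=5, lr=0.1, window=200, margin=0.02):
        self.degree = 1                          # current model complexity
        self.max_degree = max_degree
        self.lr = lr
        self.window = window                     # examples between switch checks
        self.margin = margin                     # required error-rate improvement
        self.w = np.zeros(self.degree + 1)       # current model weights
        self.w_next = np.zeros(self.degree + 2)  # shadow model, one degree higher
        self.err = self.err_next = self.seen = 0

    def _sgd_step(self, w, phi, y):
        # One logistic-loss SGD step; also returns the 0/1 prequential error,
        # i.e. the error made on this example *before* updating the weights.
        p = 1.0 / (1.0 + np.exp(-np.dot(w, phi)))
        err = int((p >= 0.5) != (y == 1))
        return w + self.lr * (y - p) * phi, err

    def update(self, x, y):
        """One online step on example (x, y) with y in {0, 1}."""
        self.w, e = self._sgd_step(self.w, poly_features(x, self.degree), y)
        self.w_next, e2 = self._sgd_step(
            self.w_next, poly_features(x, self.degree + 1), y)
        self.err += e
        self.err_next += e2
        self.seen += 1
        if self.seen % self.window == 0 and self.degree < self.max_degree:
            # Promote the shadow model only if it beat the current one by a
            # margin on the last window; otherwise keep the simpler model.
            if self.err_next + self.margin * self.window < self.err:
                self.degree += 1
                self.w = self.w_next
                self.w_next = np.zeros(self.degree + 2)
            self.err = self.err_next = 0

    def predict(self, x):
        return int(np.dot(self.w, poly_features(x, self.degree)) >= 0)

if __name__ == "__main__":
    # A task a degree-1 model cannot solve but a degree-2 model can:
    # the label depends on |x|, so the learner should grow past degree 1.
    rng = np.random.default_rng(0)
    clf = GrowingOnlineClassifier()
    for _ in range(5000):
        x = rng.uniform(-1, 1)
        clf.update(x, int(abs(x) > 0.5))
    print("final degree:", clf.degree)
```

The window-and-margin test stands in for whatever model-selection criterion an actual system would use (e.g., held-out validation or an MDL-style penalty); the point of the sketch is only the control flow, starting simple and expanding the hypothesis class as the training set grows.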
