PAC-learning is Undecidable
2019
Abstract
The problem of learning the mapping between data and labels is the crux of any machine learning task. It is therefore of interest to the machine learning community, on both practical and theoretical grounds, to ask whether there exists a test or criterion for deciding the feasibility of learning. We investigate the existence of such a criterion in the setting of PAC-learning, basing feasibility solely on whether the mapping to be learnt lends itself to approximation by a given class of hypothesis functions. We show that no such criterion exists, exposing a fundamental limitation in the decidability of learning. In other words, we prove that testing for PAC-learnability is undecidable in the Turing sense. We also briefly discuss some likely implications of this result for the current practice of machine learning.
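As a concrete illustration of the PAC setting the abstract refers to (not part of the paper's argument), the following is a minimal sketch of PAC-learning the class of thresholds on [0,1] by empirical risk minimization. The concept, distribution, and function names here are illustrative assumptions, not taken from the paper.

```python
import random

def pac_learn_threshold(sample_size, target=0.3, seed=0):
    """Toy PAC learner for the class of threshold functions on [0,1].

    Concept: c(x) = 1 iff x >= target. Examples are drawn uniformly
    from [0,1]. The learner outputs the smallest positively labelled
    example as its threshold estimate, which is an empirical risk
    minimizer for this hypothesis class.
    """
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(sample_size)]
    positives = [x for x in xs if x >= target]
    # If no positive example was seen, fall back to the hypothesis
    # that labels everything negative (threshold at 1.0).
    return min(positives) if positives else 1.0

def true_error(h, target=0.3):
    # Under the uniform distribution on [0,1], the hypothesis
    # "predict 1 iff x >= h" disagrees with the target concept
    # exactly on the interval [target, h).
    return abs(h - target)

h = pac_learn_threshold(1000)
print(true_error(h))
```

For this simple class, a sample of size O((1/ε) log(1/δ)) suffices for error below ε with probability at least 1 − δ; the paper's point is that no algorithm can decide, for an arbitrary mapping and hypothesis class, whether such a guarantee is attainable at all.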