Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
Abstract
We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective. We demonstrate a universal Frequency Principle (F-Principle) -- DNNs often fit target functions from low to high frequencies -- on high-dimensional benchmark datasets such as MNIST/CIFAR10 and deep networks such as VGG16. This F-Principle of DNNs is opposite to the behavior of most conventional iterative numerical schemes (e.g., the Jacobi method), which converge faster for higher frequencies in various scientific computing problems. With a simple theory, we illustrate that this F-Principle results from the regularity of commonly used activation functions. The F-Principle implies an implicit bias: DNNs tend to fit training data with a low-frequency function. This understanding explains the good generalization of DNNs on most real datasets and their poor generalization on the parity function or randomized datasets.
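To illustrate what the F-Principle looks like in practice, here is a minimal, self-contained sketch (not the authors' code) that trains a small tanh network on a 1-D target containing one low-frequency and one high-frequency component, and tracks the relative error of each Fourier coefficient of the prediction during training. The network width, learning rate, number of steps, and target frequencies are illustrative assumptions, not settings from the paper. If the F-Principle holds, the low-frequency error should drop well before the high-frequency error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target on [-1, 1): a low-frequency (1 cycle) plus a high-frequency (5 cycles) sine.
n = 256
x = np.linspace(-1.0, 1.0, n, endpoint=False).reshape(-1, 1)
y = np.sin(np.pi * x) + 0.5 * np.sin(5.0 * np.pi * x)

# Two-layer tanh network: pred = tanh(x W1 + b1) W2 + b2 (sizes are illustrative).
h = 200
W1 = rng.normal(0.0, 1.0, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, (h, 1)); b2 = np.zeros(1)

def forward(x):
    a = np.tanh(x @ W1 + b1)
    return a, a @ W2 + b2

def freq_errors(pred):
    # Relative error of the DFT coefficients at the two target frequencies.
    f_true = np.fft.rfft(y[:, 0])
    f_pred = np.fft.rfft(pred[:, 0])
    return {k: abs(f_pred[k] - f_true[k]) / abs(f_true[k]) for k in (1, 5)}

lr = 0.05  # plain full-batch gradient descent on mean squared error
for step in range(30001):
    a, pred = forward(x)
    err = (pred - y) / n                  # d(MSE/2)/d(pred)
    gW2 = a.T @ err;  gb2 = err.sum(axis=0)
    da = err @ W2.T * (1.0 - a ** 2)      # backprop through tanh
    gW1 = x.T @ da;   gb1 = da.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    if step % 5000 == 0:
        e = freq_errors(pred)
        print(f"step {step:5d}  low-freq error {e[1]:.3f}  high-freq error {e[5]:.3f}")
```

In this toy setting, the printed low-frequency error typically decays in the early iterations while the high-frequency error lags behind, which is the qualitative behavior the abstract describes for MNIST/CIFAR10 and VGG16 at a much larger scale.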
- Publication: Communications in Computational Physics
- Pub Date: June 2020
- DOI:
- arXiv: arXiv:1901.06523
- Bibcode: 2020CCoPh..28.1746X
- Keywords: Computer Science - Machine Learning; Statistics - Machine Learning; 68Q32; 68T01; I.2.6
- E-Print: Paper is published in Communications in Computational Physics