Abstract
Why heavily parameterized neural networks (NNs) do not overfit the data is an important long-standing open question. We propose a phenomenological model of NN training to explain this non-overfitting puzzle. Our linear frequency principle (LFP) model accounts for a key dynamical feature of NNs: they learn low frequencies first, irrespective of microscopic details. Theory based on our LFP model shows that low-frequency dominance of target functions is the key condition for the non-overfitting of NNs, a prediction verified by experiments. Furthermore, through an ideal two-layer NN, we unravel how the detailed microscopic NN training dynamics statistically gives rise to an LFP model with quantitative prediction power.
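As a purely illustrative aside, the sketch below is not the authors' model or experimental setup; the two-layer tanh architecture, network width, learning rate, the frequencies k_low and k_high, and the helper fourier_error are all hypothetical choices. It trains a small two-layer network by plain gradient descent on a 1D target mixing a low- and a high-frequency sine and monitors the residual's Fourier magnitude at the two frequencies, to make the "learn low frequencies first" claim concrete.

```python
# Minimal sketch (illustrative assumptions throughout, not the authors' setup):
# a two-layer tanh network fitted to a 1D target tends to capture the target's
# low-frequency component before its high-frequency component.
import numpy as np

rng = np.random.default_rng(0)

# Target: sum of a low-frequency and a high-frequency sine on a uniform grid.
n = 256
x = np.linspace(-1.0, 1.0, n).reshape(-1, 1)
k_low, k_high = 1.0, 8.0
y = np.sin(np.pi * k_low * x) + np.sin(np.pi * k_high * x)

# Two-layer tanh network f(x) = a^T tanh(w x + b), trained by full-batch gradient descent.
m = 200  # hidden width (illustrative)
w = rng.normal(0.0, 1.0, (1, m))
b = rng.normal(0.0, 1.0, (1, m))
a = rng.normal(0.0, 1.0 / np.sqrt(m), (m, 1))
lr = 0.01

def fourier_error(residual, k):
    """Magnitude of the residual's Fourier coefficient at frequency k (discrete estimate)."""
    basis = np.exp(-1j * np.pi * k * x[:, 0])
    return np.abs((residual[:, 0] * basis).mean())

for step in range(10001):
    h = np.tanh(x @ w + b)            # hidden activations, shape (n, m)
    f = h @ a                         # network output, shape (n, 1)
    r = f - y                         # residual
    # Gradients of the mean-squared error 0.5 * mean(r^2).
    grad_a = h.T @ r / n
    grad_h = (r @ a.T) * (1.0 - h ** 2)
    grad_w = x.T @ grad_h / n
    grad_b = grad_h.mean(axis=0, keepdims=True)
    a -= lr * grad_a
    w -= lr * grad_w
    b -= lr * grad_b
    if step % 1000 == 0:
        print(f"step {step:5d}  low-freq residual {fourier_error(r, k_low):.4f}  "
              f"high-freq residual {fourier_error(r, k_high):.4f}")
```

Under these assumed settings, the printed low-frequency residual typically decays well before the high-frequency one, which is the qualitative behavior the abstract describes.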
Authors
Yaoyu Zhang
Tao Luo
Zheng Ma
Zhi-Qin John Xu
School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, and Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200031, China
Funding
Supported by the National Key R&D Program of China (Grant No. 2019YFA0709503)
the Shanghai Sailing Program
the Natural Science Foundation of Shanghai (Grant No. 20ZR1429000)
the National Natural Science Foundation of China (Grant No. 62002221)
the Shanghai Municipal Science and Technology Project (Grant No. 20JC1419500)
and the HPC of the School of Mathematical Sciences at Shanghai Jiao Tong University.