More capable learning models are in demand for data processing as the amounts of data available in applications continue to grow. Data that we encounter often have certain embedded sparsity structures; that is, when represented in an appropriate basis, their energy concentrates on a small number of basis functions. This paper is devoted to a numerical study of adaptive approximation, by deep neural networks (DNNs) with a sparse regularization having multiple parameters, of solutions of nonlinear partial differential equations that may have singularities. Noting that DNNs have an intrinsic multi-scale structure favorable for adaptive representation of functions, we employ a penalty with multiple parameters to develop DNNs with a multi-scale sparse regularization (SDNN) for effectively representing functions with certain singularities. We then apply the proposed SDNN to the numerical solution of the Burgers equation and the Schrödinger equation. Numerical examples confirm that solutions generated by the proposed SDNN are both sparse and accurate.
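The abstract does not state the training objective explicitly. As a minimal numpy sketch, one way such a multi-scale sparse penalty with multiple parameters could enter a network's loss is to give each layer (treated as a scale) its own weighted l1 term; all names, widths, and parameter values below are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; in the multi-scale view each layer l acts as a
# scale and receives its own sparsity parameter lambda_l (hypothetical values).
W = [rng.standard_normal((8, 1)), rng.standard_normal((1, 8))]
lambdas = [1e-3, 1e-2]  # per-scale regularization parameters

def multiscale_l1_penalty(weights, lams):
    """Sum over scales of lambda_l times the l1 norm of that layer's weights."""
    return sum(lam * np.abs(w).sum() for w, lam in zip(weights, lams))

def forward(x, weights):
    h = np.tanh(x @ weights[0].T)  # hidden layer of width 8
    return h @ weights[1].T        # linear output layer

# In the PDE setting the data-fit term would be a residual of the Burgers or
# Schroedinger equation; here a plain least-squares fit stands in for it.
x = rng.standard_normal((32, 1))
y = np.sin(np.pi * x)
loss = np.mean((forward(x, W) - y) ** 2) + multiscale_l1_penalty(W, lambdas)
```

Minimizing such a loss drives many weights toward zero, which is what makes the learned representation sparse and adaptive to singularities.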
We consider solving linear ill-posed operator equations. Based on a multi-scale decomposition of the solution space, we propose a multi-parameter regularization for solving the equations. We establish weak and strong convergence theorems for the multi-parameter regularization solution. In particular, based on the eigenfunction decomposition, we develop an a posteriori choice strategy for the multiple parameters which gives a regularization solution with the optimal error bound. Several practical choices of the multiple parameters are proposed. We also present numerical experiments demonstrating that the multi-parameter regularization outperforms single-parameter regularization.
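The abstract does not reproduce the regularization functional. A common form of multi-parameter Tikhonov regularization, sketched here under assumptions (a two-scale split of the eigenbasis with projectors P_coarse, P_fine and hypothetical parameter values; the paper's actual decomposition and parameter-choice rule may differ), minimizes ||Ax - b||^2 + lam0 ||P_coarse x||^2 + lam1 ||P_fine x||^2, which leads to the normal equations solved below.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mildly ill-posed symmetric operator A = U diag(s) U^T with decaying spectrum.
n = 16
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.5 ** np.arange(n)                 # rapidly decaying singular values
A = (U * s) @ U.T
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-3 * rng.standard_normal(n)  # noisy data

# Hypothetical two-scale decomposition of the solution space: orthogonal
# projectors onto the coarse and fine halves of the eigenbasis.
P_coarse = U[:, : n // 2] @ U[:, : n // 2].T
P_fine = U[:, n // 2 :] @ U[:, n // 2 :].T
lam = [1e-6, 1e-2]  # smaller parameter on the well-resolved coarse scale

# Normal equations of the multi-parameter Tikhonov functional; since the
# projectors are symmetric and idempotent, P^T P = P.
lhs = A.T @ A + lam[0] * P_coarse + lam[1] * P_fine
x_reg = np.linalg.solve(lhs, A.T @ b)
```

Using a separate parameter per scale is what distinguishes this from single-parameter Tikhonov regularization: well-resolved scales can be left nearly untouched while noise-dominated scales are damped strongly.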
Funding: Y. Xu is supported in part by the US National Science Foundation under grant DMS-1912958. T. Zeng is supported in part by the National Natural Science Foundation of China under grants 12071160 and U1811464, by the Natural Science Foundation of Guangdong Province under grant 2018A0303130067, by the Opening Project of Guangdong Province Key Laboratory of Computational Science at Sun Yat-sen University under grant 2021022, and by the Opening Project of Guangdong Key Laboratory of Big Data Analysis and Processing under grant 202101.
Funding: Supported in part by the Natural Science Foundation of China under grant 10371137, by the Foundation of Doctoral Program of National Higher Education of China under grant 20030558008, by the Guangdong Provincial Natural Science Foundation of China under grant 05003308, and by the Foundation of Zhongshan University Advanced Research Center; supported in part by the US National Science Foundation under grant CCR-0407476, by the National Aeronautics and Space Administration under Cooperative Agreement NNX07AC37A, by the Natural Science Foundation of China under grants 10371122 and 10631080, and by the Education Ministry of the People's Republic of China under the Changjiang Scholar Chair Professorship Program through Zhongshan University.