Funding: This work was partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11801062.
Abstract: Higher-order accuracy is one of the well-known beneficial properties of the discontinuous Galerkin (DG) method. Furthermore, many studies have demonstrated the superconvergence property of the semi-discrete DG method. One can take advantage of this superconvergence property through post-processing techniques that enhance the accuracy of the DG solution. The smoothness-increasing accuracy-conserving (SIAC) filter is a popular post-processing technique introduced by Cockburn et al. (Math. Comput. 72(242):577–606, 2003). It can raise the convergence rate of the DG solution (with a polynomial of degree k) from order k+1 to order 2k+1 in the L2 norm. This paper first investigates general basis functions used to construct the SIAC filter for superconvergence extraction. The generic basis function framework relaxes the structure of the SIAC filter and provides flexibility for more intricate features, such as extra smoothness. Second, we study the distribution of the basis functions and propose a new SIAC filter, called the compact SIAC filter, that significantly reduces the support size of the original SIAC filter while preserving (or even improving) its ability to enhance the accuracy of the DG solution. We prove superconvergence error estimates for the new SIAC filters. Numerical results confirm the theoretical results and demonstrate the performance of the new SIAC filters.
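For orientation, the classical symmetric SIAC kernel referenced here (Cockburn et al., 2003) is a linear combination of 2k+1 central B-splines of order k+1, K(x) = sum over g = -k..k of c_g psi(x - g), with the coefficients c_g chosen so that the kernel reproduces polynomials of degree up to 2k (equivalently: the kernel has unit mass and vanishing moments of orders 1 through 2k); the DG solution is then convolved with the mesh-scaled kernel K(x/h)/h. The sketch below is our illustration of that construction, not code from the paper; the function names and the quadrature resolution are our own choices.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import trapezoid

def central_bspline(order):
    """Central B-spline psi of the given order (degree order-1),
    supported on [-order/2, order/2]."""
    knots = np.arange(order + 1) - order / 2.0
    return BSpline.basis_element(knots, extrapolate=False)

def symmetric_siac_coeffs(k, n_quad=4001):
    """Coefficients c_g, g = -k..k, of the classical symmetric SIAC kernel
    K(x) = sum_g c_g * psi(x - g), psi the central B-spline of order k+1.
    The 2k+1 conditions are: integral of K = 1, and integral of
    x^m K(x) dx = 0 for m = 1..2k, which make the kernel reproduce
    polynomials of degree up to 2k."""
    order = k + 1
    psi = central_bspline(order)
    shifts = np.arange(-k, k + 1)
    A = np.zeros((2 * k + 1, 2 * k + 1))
    for j, g in enumerate(shifts):
        # Integrate over the support of psi(x - g): [g - order/2, g + order/2].
        xs = np.linspace(g - order / 2.0, g + order / 2.0, n_quad)
        vals = np.nan_to_num(psi(xs - g))  # NaN outside the knot span -> 0
        for m in range(2 * k + 1):
            A[m, j] = trapezoid(xs**m * vals, xs)
    rhs = np.zeros(2 * k + 1)
    rhs[0] = 1.0  # unit mass; all higher moments up to 2k vanish
    return shifts, np.linalg.solve(A, rhs)

# Sanity check: for k = 1 the known coefficients are (-1/12, 7/6, -1/12).
shifts, c = symmetric_siac_coeffs(1)
print(dict(zip(shifts.tolist(), np.round(c, 6))))
```

The compact filters proposed in the paper modify this construction by redistributing the basis functions to shrink the kernel support; the sketch above only reproduces the original symmetric kernel that they start from.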
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61702350.
Abstract: Deep convolutional networks have achieved remarkable results on various visual tasks owing to their strong ability to learn a variety of features. A well-trained deep convolutional network can be compressed to 20%–40% of its original size by removing filters that make little contribution, since redundant filters generate many overlapping features. Model compression can reduce the number of unnecessary filters, but it does not take advantage of redundant filters because the training phase is unaffected. Modern networks with residual connections, dense connections, and inception blocks are considered able to mitigate the overlap among convolutional filters, but they do not necessarily overcome the issue. To address it, we propose a new training strategy, weight asynchronous update, which significantly increases the diversity of filters and enhances the representation ability of the network. The proposed method can be applied to a wide range of convolutional networks without changing the network topology. Our experiments show that updating a stochastic subset of filters in different iterations significantly reduces filter overlap in convolutional networks. Extensive experiments show that our method yields noteworthy improvements in neural network performance.
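The abstract describes the strategy only at a high level. Below is a minimal, hypothetical sketch of one way "updating a stochastic subset of filters per iteration" could be realized in PyTorch, by masking per-filter gradients between the backward pass and the optimizer step. The function name mask_filter_grads and the keep_prob knob are our illustration; the paper's actual selection rule and schedule may differ.

```python
import torch
import torch.nn as nn

def mask_filter_grads(model: nn.Module, keep_prob: float = 0.5) -> None:
    """Zero the gradients of a random subset of convolutional filters so
    that only the kept filters are updated in this iteration. Call after
    loss.backward() and before optimizer.step()."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d) and module.weight.grad is not None:
            out_channels = module.weight.shape[0]
            keep = (torch.rand(out_channels, device=module.weight.device)
                    < keep_prob).to(module.weight.dtype)
            # Conv2d weight has shape (out_channels, in_channels, kH, kW);
            # broadcast the per-filter mask over the remaining dimensions.
            module.weight.grad.mul_(keep.view(-1, 1, 1, 1))
            if module.bias is not None and module.bias.grad is not None:
                module.bias.grad.mul_(keep)

# Usage inside a standard training loop:
#   optimizer.zero_grad()
#   loss = criterion(model(x), y)
#   loss.backward()
#   mask_filter_grads(model, keep_prob=0.5)  # asynchronous filter update
#   optimizer.step()
```

One caveat with this naive masking: momentum-based optimizers may still move masked filters through accumulated momentum, so a faithful implementation would likely also need to handle optimizer state for the filters skipped in a given iteration.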