Abstract: Bayesian empirical likelihood is a semiparametric method that combines parametric priors with nonparametric likelihoods: the parametric likelihood function in Bayes' theorem is replaced by a nonparametric empirical likelihood function, so it can be used without assuming the distribution of the data. This effectively avoids the problems caused by model misspecification. In variable selection based on Bayesian empirical likelihood, the penalty term is introduced into the model as a prior on the parameters. In this paper, we propose a novel variable selection method: L<sub>1/2</sub> regularization based on Bayesian empirical likelihood. The L<sub>1/2</sub> penalty is introduced into the model through a scale mixture of uniforms representation of the generalized Gaussian prior, and the posterior distribution is then sampled with an MCMC method. Simulations demonstrate that the proposed method can perform variable selection and achieves better predictive ability when the error violates the zero-mean normality assumption of the standard parametric model.
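The abstract does not spell out the mixture construction; as an illustrative sketch (the parameterization and names are our own, not the paper's), the generalized Gaussian prior p(b) ∝ exp(−(λ|b|)^q) with q = 1/2 admits a scale-mixture-of-uniforms representation that is straightforward to sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gg_prior(q=0.5, lam=1.0, size=200_000):
    """Draw from the generalized Gaussian prior p(b) ∝ exp(-(lam*|b|)**q)
    via its scale-mixture-of-uniforms representation:
        u ~ Gamma(1 + 1/q, 1),   b | u ~ Uniform(-u**(1/q)/lam, u**(1/q)/lam).
    Integrating the uniform density against the Gamma density over
    u >= (lam*|b|)**q recovers the target marginal exactly."""
    u = rng.gamma(shape=1.0 + 1.0 / q, scale=1.0, size=size)
    half_width = u ** (1.0 / q) / lam
    return rng.uniform(-half_width, half_width)

b = sample_gg_prior()
# For q = 1/2, lam = 1 the marginal density is exp(-sqrt(|b|))/4,
# which is symmetric about 0 with E|b| = Gamma(4) = 6.
```

In a Gibbs sampler this representation is convenient because u given b is a shifted exponential (density ∝ e^(−u) on u ≥ (λ|b|)^q) while b given u is uniform; the sketch above only illustrates the prior, not the full posterior sampler.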
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11171212 and 60975036); supported by the National Natural Science Foundation of China (Grant No. 6175054)
Abstract: We derive a sharp nonasymptotic bound on the parameter estimation error of L1/2 regularization. The bound shows that solutions of the L1/2 regularization can achieve a loss within a logarithmic factor of the ideal mean squared error, which underlies the feasibility and effectiveness of L1/2 regularization. Interestingly, when applied to compressive sensing, the L1/2 regularization scheme has exhibited a very promising capability of complete recovery from far less sampling information. Compared with the Lp (0 < p < 1) penalties, it appears that the L1/2 penalty always yields the sparsest solution among the Lp penalties with 1/2 < p < 1, while for 0 < p < 1/2 the Lp penalty exhibits properties similar to those of the L1/2 penalty. This suggests that the L1/2 regularization scheme can be accepted as the representative of all the Lp (0 < p < 1) regularization schemes.
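The sparsity behaviour behind these claims can be seen at the level of the scalar proximal map of the L1/2 penalty. A closed-form "half thresholding" expression exists in the literature; the sketch below (our own illustration, not from the paper) evaluates the map by dense grid search simply to exhibit the thresholding:

```python
import numpy as np

def prox_lhalf_numeric(y, lam, grid=None):
    """Numerically evaluate the scalar proximal map of the L1/2 penalty,
        argmin_x 0.5*(x - y)**2 + lam*|x|**0.5,
    by grid search. (A closed-form half-thresholding formula exists; grid
    search is used here only to illustrate the behaviour.)"""
    if grid is None:
        grid = np.linspace(-abs(y) - 1.0, abs(y) + 1.0, 200_001)
    obj = 0.5 * (grid - y) ** 2 + lam * np.sqrt(np.abs(grid))
    return grid[np.argmin(obj)]
```

Inputs below a λ-dependent threshold map exactly to zero (e.g. y = 0.5 with λ = 1), while large inputs are only mildly shrunk (y = 3 maps to about 2.7); soft thresholding for the L1 penalty, by contrast, shrinks every input by the full λ.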
Funding: This work is supported by the National Natural Science Foundation of China (No. 61702226), the 111 Project (B12018), the Natural Science Foundation of Jiangsu Province (No. BK20170200), and the Fundamental Research Funds for the Central Universities (No. JUSRP11854).
Abstract: The generative adversarial network (GAN) was first proposed in 2014; this kind of network model is a machine learning system that learns to mimic a given distribution of data, and one of its most important applications is style transfer. Style transfer is a class of vision and graphics problems whose goal is to learn the mapping between an input image and an output image. CycleGAN is a classic GAN model with a wide range of style-transfer scenarios. Owing to its unsupervised learning characteristics, the mapping between an input image and an output image is easy to learn. However, it is difficult for CycleGAN to converge and to generate high-quality images. To solve this problem, spectral normalization is introduced into each convolutional kernel of the discriminator. With spectral normalization, every convolutional kernel satisfies a Lipschitz stability constraint, its spectral norm being normalized to 1, which promotes the training of the proposed model. Besides, we use a pretrained model (VGG16) to control the loss of image content at the position of the l1 regularization. To avoid overfitting, both l1 and l2 regularization terms are used in the objective loss function. In terms of Fréchet Inception Distance (FID) score evaluation, our proposed model achieves outstanding performance and preserves more discriminative features. Experimental results show that the proposed model converges faster and achieves better FID scores than the state of the art.
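Spectral normalization divides a weight matrix by its largest singular value, estimated cheaply with power iteration; a convolutional kernel is first reshaped to a 2-D matrix. A minimal NumPy sketch (our own illustration, not the paper's code):

```python
import numpy as np

def spectral_normalize(W, n_iter=500):
    """Scale W by its spectral norm (largest singular value), estimated by
    power iteration, so that the resulting linear map is 1-Lipschitz.
    For a conv kernel of shape (out, in, kh, kw), reshape to (out, -1) first."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    v = None
    for _ in range(n_iter):
        v = W.T @ u              # back-project onto the input space
        v /= np.linalg.norm(v)
        u = W @ v                # forward-project onto the output space
        u /= np.linalg.norm(u)
    sigma = u @ W @ v            # estimate of the top singular value
    return W / sigma

W = np.random.default_rng(1).standard_normal((8, 5))
W_sn = spectral_normalize(W)     # spectral norm of W_sn is ~1
```

In practice (as in the original spectral normalization work) a single power iteration per training step suffices, since the weights change slowly and the singular-vector estimates are carried over between steps.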
Funding: Supported by the National Natural Science Foundation of China under Grant No. 10971109, the K.C. Wong Magna Fund in Ningbo University, and the Natural Science Foundation of Ningbo under Grant No. 2011A610179
Abstract: The integrability of the (2+1)-dimensional Broer-Kaup equation with variable coefficients (VCBK) is verified by finding a transformation mapping it to the usual (2+1)-dimensional Broer-Kaup equation (BK). Thus the solutions of the (2+1)-dimensional VCBK are obtained by making full use of the known solutions of the usual (2+1)-dimensional BK. Two new integrable models are given by this transformation, and their dromion-like solutions and rogue wave solutions are also obtained. Further, the velocity of the dromion-like solutions can be designed and the center of the rogue wave solutions can be controlled artificially, owing to the four arbitrary functions appearing in the transformation.
Funding: Project supported by the National Natural Science Foundation of China (No. 11171367) and the Fundamental Research Funds for the Central Universities, China
Abstract: Compared with traditional learning methods such as the back propagation (BP) method, the extreme learning machine provides much faster learning and needs less human intervention, and has thus been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune it. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.
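As a minimal illustration of the setting (not the authors' algorithm, which trains with an L1/2 penalty and a variable learning coefficient), the sketch below builds a basic ELM, fits the output weights by regularized least squares, and marks hidden nodes with near-zero output weights for pruning — the effect an L1/2 penalty achieves by driving such weights exactly to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    """Sigmoid hidden-layer outputs for random input weights W and biases b."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def elm_fit(X, y, n_hidden=50, lam=1e-3):
    """Extreme learning machine: input weights are random and fixed; only the
    output weights beta are trained, by ridge-regularized least squares."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = hidden(X, W, b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

# Toy data: a linear target, easily fit by 50 sigmoid features.
X = rng.standard_normal((200, 2))
y = X[:, 0] - 0.5 * X[:, 1]
W, b, beta = elm_fit(X, y)
mse = np.mean((hidden(X, W, b) @ beta - y) ** 2)
keep = np.abs(beta) > 1e-2   # candidate nodes to retain after pruning
```

Under an L1/2 penalty many entries of beta become exactly zero, so the corresponding hidden nodes (columns of H) can be removed without changing the network's output.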
Funding: partially supported by the National Natural Science Foundation of China/Hong Kong RGC Joint Research Scheme (NSFC/RGC 11961160718) and the fund of the Guangdong Provincial Key Laboratory of Computational Science And Material Design (No. 2019B030301001); supported in part by the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science under UIC 2022B1212010006; supported by the National Science Foundation of China (NSFC) Grant No. 12271240; supported by NSFC Grant 12271241, the Guangdong Basic and Applied Basic Research Foundation (No. 2023B1515020030), and the Shenzhen Science and Technology Program (Grant No. RCYX20210609104358076).
Abstract: In this article, we study the energy dissipation property of the time-fractional Allen-Cahn equation. On the continuous level, we propose an upper bound of the energy that decreases with respect to time and coincides with the original energy at t = 0 and as t tends to infinity. This upper bound can also be viewed as a nonlocal-in-time modified energy: the sum of the original energy and an accumulation term due to the memory effect of the time-fractional derivative. In particular, the decrease of the modified energy indicates that the original energy indeed decays with respect to time in a small neighborhood of t = 0. We illustrate the theory mainly with the time-fractional Allen-Cahn equation, but it can also be applied to other time-fractional phase-field models such as the Cahn-Hilliard equation. On the discrete level, the decreasing upper bound of the energy is useful for proving energy dissipation of numerical schemes. The first-order L1 and second-order L2 schemes for the time-fractional Allen-Cahn equation have similar decreasing modified energies, so that stability can be established. Some numerical results are provided to illustrate the behavior of this modified energy and to verify our theoretical results.
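For concreteness, the first-order L1 scheme mentioned above approximates the Caputo derivative by piecewise-linear interpolation of u on a uniform time grid; it is exact for linear u, which gives a simple check. The code below is an illustrative sketch, not the paper's implementation:

```python
import math
import numpy as np

def l1_caputo(u, tau, alpha):
    """Approximate the Caputo derivative D_t^alpha u at t_n = n*tau (n >= 1)
    with the classical first-order L1 scheme:
        D^alpha u(t_n) ≈ tau**(-alpha)/Gamma(2-alpha)
                         * sum_{k=1}^{n} a_{n-k} * (u_k - u_{k-1}),
        a_j = (j+1)**(1-alpha) - j**(1-alpha)."""
    n = len(u) - 1
    a = np.array([(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n)])
    du = np.diff(u)
    c = tau ** (-alpha) / math.gamma(2 - alpha)
    return np.array([c * np.dot(a[:m][::-1], du[:m]) for m in range(1, n + 1)])

# Check on u(t) = t, whose Caputo derivative is t**(1-alpha)/Gamma(2-alpha):
alpha, tau = 0.5, 0.1
t = tau * np.arange(11)
approx = l1_caputo(t, tau, alpha)
exact = t[1:] ** (1 - alpha) / math.gamma(2 - alpha)
```

In a time-fractional Allen-Cahn solver, this discrete operator replaces the time derivative at each step, and the weighted history sum `du` is what produces the memory (accumulation) term in the modified energy.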