In view of the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, the elbow rule and other methods were comprehensively used to build logistic regression, cluster analysis and hyper-parameter test models, and SPSS, Python and other tools were used to obtain the classification rules of glass products under different fluxes, the sub-classification under different chemical compositions, and the test and rationality analysis of the hyper-parameter K value. This research can provide theoretical support for the protection and restoration of ancient glass relics.
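As an illustration of the kind of pipeline described above, the following minimal Python sketch combines an L1-penalized logistic regression with K-Means and the elbow rule; the data, feature count and labels are synthetic stand-ins, not the study's actual composition data.

```python
# Hypothetical sketch: L1-regularized logistic regression for flux-type classification
# plus K-Means with the elbow rule for sub-classification. Data and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((60, 8))                         # 60 samples, 8 composition features (synthetic)
y = (X[:, 0] + X[:, 1] > 1).astype(int)         # stand-in label, e.g. high-K vs lead-barium glass

X_std = StandardScaler().fit_transform(X)

# The L1 penalty drives uninformative composition coefficients to exactly zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X_std, y)
print("selected features:", np.flatnonzero(clf.coef_[0]))

# Elbow rule: compute the within-cluster sum of squares (inertia) for each K and pick the knee.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_std).inertia_
            for k in range(1, 9)}
print(inertias)
```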
In this paper, we build upon the estimating primaries by sparse inversion (EPSI) method. We use the 3D curvelet transform and recast EPSI as a sparse inversion with biconvex optimization and L1-norm regularization, using alternating optimization to directly estimate the primary reflection coefficients and the source wavelet. The 3D curvelet transform is used as a sparseness constraint when inverting the primary reflection coefficients, which avoids the prediction-and-subtraction process of the surface-related multiple elimination (SRME) method. The proposed method not only reduces the damage to the effective waves but also improves the elimination of multiples. As a wave-equation-based method for eliminating surface-related multiples, it effectively removes surface multiples under complex submarine conditions.
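The full 3D-curvelet EPSI workflow cannot be reproduced in a short snippet, but its core building block, an L1-regularized sparse inversion solved by iterative soft thresholding, can be sketched on a toy 1-D problem; the forward operator A below is a random-matrix stand-in, not a seismic modelling or curvelet operator.

```python
# Toy sketch of L1-regularized sparse inversion via ISTA (iterative soft thresholding).
# In the EPSI setting the forward operator would involve convolution with the source
# wavelet and a curvelet synthesis operator; here A is a random-matrix stand-in.
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in forward operator
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0   # sparse "reflection coefficients"
b = A @ x_true                                   # observed data (noise-free toy case)

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)                        # gradient of the data-misfit term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft thresholding

print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```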
The generative adversarial network (GAN) was first proposed in 2014; this kind of network model is a machine learning system that can learn to mimic a given distribution of data, and one of its most important applications is style transfer. Style transfer is a class of vision and graphics problems whose goal is to learn the mapping between an input image and an output image. CYCLE-GAN is a classic GAN model with a wide range of application scenarios in style transfer. Owing to its unsupervised learning characteristics, the mapping between an input image and an output image is easy to learn. However, it is difficult for CYCLE-GAN to converge and generate high-quality images. To solve this problem, spectral normalization is introduced into each convolutional kernel of the discriminator. With spectral normalization, every convolutional kernel satisfies the Lipschitz stability constraint and its value is limited to [0,1], which facilitates the training of the proposed model. Besides, we use a pretrained model (VGG16) to control the loss of image content in the position of the l1 regularization term. To avoid overfitting, both an l1 regularization term and an l2 regularization term are used in the objective function. In terms of the Frechet Inception Distance (FID) score, our proposed model achieves outstanding performance and preserves more discriminative features. Experimental results show that the proposed model converges faster and achieves better FID scores than the state of the art.
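A hedged PyTorch sketch of two of the ingredients mentioned above, spectral normalization wrapped around each discriminator convolution and an objective that adds l1 and l2 regularization terms; the layer sizes and weighting factors are illustrative, not the authors' configuration.

```python
# Illustrative sketch: spectral normalization on discriminator convolutions and an
# objective that mixes l1 and l2 regularization terms. Hyper-parameters are made up.
# A pretrained VGG16 feature extractor (torchvision.models.vgg16) could supply the
# content loss described in the abstract; it is omitted here for brevity.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # spectral_norm constrains each kernel's largest singular value (Lipschitz constraint)
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(128, 1, 4, padding=1)),
        )

    def forward(self, x):
        return self.net(x)

def regularized_loss(adv_loss, generator, l1_weight=1e-5, l2_weight=1e-5):
    """Add l1 and l2 penalties on the generator parameters to the adversarial loss."""
    l1 = sum(p.abs().sum() for p in generator.parameters())
    l2 = sum((p ** 2).sum() for p in generator.parameters())
    return adv_loss + l1_weight * l1 + l2_weight * l2

d = Discriminator()
print(d(torch.randn(1, 3, 64, 64)).shape)       # patch-style output map
```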
Fixed-point continuation (FPC) is an approach, based on operator splitting and continuation, for solving minimization problems with l1-regularization: min ||x||_1 + μ f(x). We investigate the application of this algorithm to compressed sensing signal recovery, in which f(x) = (1/2)||Ax - b||²_M, A ∈ R^{m×n} and m ≤ n. In particular, we extend the original algorithm to obtain better practical results, derive appropriate choices for M and μ under a given measurement model, and present numerical results for a variety of compressed sensing problems. The numerical results show that the performance of our algorithm compares favorably with that of several recently proposed algorithms.
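A minimal sketch of the fixed-point (shrinkage) iteration with continuation on μ for the special case M = I; the step sizes, continuation schedule and test problem are illustrative choices, not those derived in the paper.

```python
# Hypothetical sketch of fixed-point continuation (FPC) for min ||x||_1 + mu*f(x),
# f(x) = 0.5*||Ax - b||^2 (i.e. M = I). The shrinkage step is the proximal map of ||.||_1;
# continuation sweeps mu from a small value up to the target, warm-starting each stage.
import numpy as np

def shrink(v, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fpc(A, b, mu_target, n_stages=5, inner_iters=200):
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of A^T(Ax - b)
    x = np.zeros(A.shape[1])
    for mu in np.geomspace(mu_target / 100, mu_target, n_stages):  # continuation path
        tau = 1.0 / (mu * L)                     # step size for the smooth term mu*f
        for _ in range(inner_iters):
            x = shrink(x - tau * mu * (A.T @ (A @ x - b)), tau)    # fixed-point step
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 128)) / 8.0
x0 = np.zeros(128); x0[:5] = 3.0
b = A @ x0
print(np.flatnonzero(np.abs(fpc(A, b, mu_target=50.0)) > 0.1))
```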
We derive a sharp nonasymptotic bound on the parameter estimation of the L1/2 regularization. The bound shows that the solutions of the L1/2 regularization can achieve a loss within a logarithmic factor of an ideal mean squared error, and therefore it underlies the feasibility and effectiveness of the L1/2 regularization. Interestingly, when applied to compressive sensing, the L1/2 regularization scheme has exhibited a very promising capability of complete recovery from much less sampling information. Compared with the Lp (0 < p < 1) penalties, it appears that the L1/2 penalty always yields a sparser solution than any Lp penalty with 1/2 < p < 1, while for 0 < p < 1/2 the Lp penalty exhibits properties similar to those of the L1/2 penalty. This suggests that the L1/2 regularization scheme can be regarded as the best, and therefore the representative, of all the Lp (0 < p < 1) regularization schemes.
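The paper itself is theoretical, but Lp (0 < p < 1) penalties are often handled numerically by iteratively reweighted least squares; the following small sketch (with a smoothing constant eps to keep the weights finite) is purely illustrative and is not the authors' algorithm.

```python
# Illustrative IRLS-style sketch for Lp-regularized least squares (0 < p < 1),
# min ||Ax - b||^2 + lam * sum(|x_i|^p). Not the paper's method; eps smooths the weights.
import numpy as np

def lp_irls(A, b, lam=0.05, p=0.5, iters=50, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares initialization
    for _ in range(iters):
        w = (x ** 2 + eps) ** ((p - 2) / 2)       # local quadratic majorizer of |x_i|^p
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 6, replace=False)] = 2.0
b = A @ x_true
x_hat = lp_irls(A, b)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```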
Compared with traditional learning methods such as the back propagation (BP) method, the extreme learning machine provides a much faster learning speed and needs less human intervention, and thus has been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune the extreme learning machine. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.
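A toy sketch of the pruning idea: train an extreme learning machine with a random hidden layer, fit the output weights under a sparsity-inducing penalty, and drop hidden nodes whose weights vanish. For brevity an L1 penalty (scikit-learn's Lasso) stands in for the L1/2 penalty used in the paper.

```python
# Toy sketch of pruning an extreme learning machine with a sparsity-inducing penalty.
# The paper uses an L1/2 penalty; here scikit-learn's L1 (Lasso) stands in for brevity.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)   # toy regression target

n_hidden = 100
W = rng.standard_normal((1, n_hidden))          # random input weights (never trained)
b = rng.standard_normal(n_hidden)               # random hidden biases
H = np.tanh(X @ W + b)                          # hidden-layer output matrix

model = Lasso(alpha=1e-3).fit(H, y)             # sparse output weights
beta = model.coef_
keep = np.flatnonzero(np.abs(beta) > 1e-6)      # surviving hidden nodes
print(f"hidden nodes kept after pruning: {keep.size} of {n_hidden}")

y_hat = H[:, keep] @ beta[keep] + model.intercept_   # pruned network prediction
print("training MSE:", float(np.mean((y - y_hat) ** 2)))
```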
Truncated L1 regularization, proposed by Fan in [5], is an approximation to the L0 regularization in high-dimensional sparse models. In this work, we prove the non-asymptotic error bound for the global optimal solution to the truncated L1 regularized linear regression problem and study the support recovery property. Moreover, a primal dual active set algorithm (PDAS) for variable estimation and selection is proposed. Coupling it with continuation via a warm-start strategy leads to a primal dual active set with continuation algorithm (PDASC). Data-driven parameter selection rules such as cross validation, BIC or a voting method can be applied to select a proper regularization parameter. The application of the proposed method is demonstrated on simulated data and a breast cancer gene expression data set (bcTCGA).
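The PDAS iteration itself is not reproduced here, but the continuation-with-warm-starts strategy over a decreasing regularization path, combined with a BIC-type selection rule, can be sketched as follows; an ordinary Lasso solver stands in for the truncated-L1 subproblem.

```python
# Illustrative sketch of continuation with warm starts over a lambda path plus a
# BIC-type selection rule. A Lasso solver stands in for the truncated-L1 subproblem.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p_dim = 120, 400
X = rng.standard_normal((n, p_dim))
beta_true = np.zeros(p_dim)
beta_true[:5] = [3, -2, 1.5, 2.5, -1]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lambdas = np.geomspace(1.0, 0.01, 30)           # decreasing path (continuation)
model = Lasso(alpha=lambdas[0], warm_start=True, max_iter=10000)
best = (np.inf, None)
for lam in lambdas:
    model.set_params(alpha=lam)
    model.fit(X, y)                             # warm-started from the previous lambda
    rss = np.sum((y - model.predict(X)) ** 2)
    k = np.count_nonzero(model.coef_)           # number of selected variables
    bic = n * np.log(rss / n) + k * np.log(n)   # BIC-type criterion
    if bic < best[0]:
        best = (bic, lam)
print("lambda selected by BIC:", best[1])
```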
Neural networks are widely used in stock price forecasting, but they lack interpretability because of their "black box" characteristics. In this paper, an L1-orthogonal regularization method is used in the GRU model. A decision tree, GRU-DT, is constructed to represent the prediction process of the neural network, and rule screening algorithms are proposed to find the significant rules in the prediction. In the empirical study, data from 10 different industries in China's CSI 300 were selected for stock price trend prediction, and the extracted rules were compared and analyzed. Technical indicators were discretized to make the rules easy to use for decision-making. Empirical results show that the AUC of the model is stable between 0.72 and 0.74, and the F1 score and accuracy are stable between 0.68 and 0.70, indicating that discretized technical indicators can effectively predict the short-term trend of stock prices. The fidelity of GRU-DT to the GRU model reaches 0.99. The prediction rules of different industries show both commonalities and individual characteristics.
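A schematic sketch of the surrogate-tree idea, not the paper's GRU-DT implementation: a small GRU classifier is trained on sequences of discretized indicators, a decision tree is then fitted to the GRU's own predictions, and fidelity is measured as the agreement between the two; all data and sizes are synthetic.

```python
# Schematic sketch of the surrogate-tree idea: fit a decision tree to a trained GRU's
# predictions and measure fidelity (agreement). Data and sizes are synthetic stand-ins.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
n, seq_len, n_feat = 500, 10, 4
X = rng.integers(0, 3, (n, seq_len, n_feat)).astype(np.float32)   # discretized indicators
y = (X[:, -1, 0] > X[:, 0, 0]).astype(np.int64)                   # toy up/down label

class GRUClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(n_feat, 16, batch_first=True)
        self.fc = nn.Linear(16, 2)
    def forward(self, x):
        _, h = self.gru(x)
        return self.fc(h[-1])

model = GRUClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
xb, yb = torch.from_numpy(X), torch.from_numpy(y)
for _ in range(200):                            # short full-batch training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(xb), yb)
    loss.backward()
    opt.step()

with torch.no_grad():
    gru_pred = model(xb).argmax(dim=1).numpy()  # the GRU's own predicted labels

# The surrogate tree imitates the GRU, not the ground truth; fidelity measures agreement.
tree = DecisionTreeClassifier(max_depth=5).fit(X.reshape(n, -1), gru_pred)
fidelity = (tree.predict(X.reshape(n, -1)) == gru_pred).mean()
print("surrogate-tree fidelity to the GRU:", round(float(fidelity), 3))
```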