Journal Articles
11 articles found
1. Design of Polynomial Fuzzy Neural Network Classifiers Based on Density Fuzzy C-Means and L2-Norm Regularization
Authors: Shaocong Xue, Wei Huang, Chuanyin Yang, Jinsong Wang. 《国际计算机前沿大会会议论文集》, 2019, No. 1, pp. 594-596 (3 pages)
In this paper, polynomial fuzzy neural network classifiers (PFNNCs) are proposed by means of density fuzzy c-means and L2-norm regularization. The overall design of PFNNCs is realized through fuzzy rules that consist of three parts: a premise part, a consequence part, and an aggregation part. The premise part is developed by density fuzzy c-means, which helps determine the apex parameters of the membership functions, while the consequence part is realized by two types of polynomials, linear and quadratic. L2-norm regularization, which can alleviate the overfitting problem, is exploited to estimate the parameters of the polynomials, which constitute the aggregation part. Experimental results on several data sets demonstrate that the proposed classifiers show higher classification accuracy than some other classifiers reported in the literature.
Keywords: polynomial fuzzy neural network classifiers; density fuzzy clustering; L2-norm regularization; fuzzy rules
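The L2-norm (ridge) estimate used for the polynomial parameters has a closed form. A minimal sketch follows; the toy data, true weights, and the regularization strength `lam` are assumptions for illustration, not details from the paper:

```python
import numpy as np

# Toy regression data standing in for a linear consequent part;
# the true weights and noise level are assumed for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

lam = 0.1  # L2 regularization strength (assumed)
# Ridge solution: w = (X^T X + lam * I)^(-1) X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w)  # close to the true weights [1.0, -2.0, 0.5]
```

The penalty `lam * I` keeps the normal equations well conditioned and shrinks the estimates slightly toward zero, which is what alleviates overfitting.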
2. L1/2 Regularization Based on Bayesian Empirical Likelihood
Authors: Yuan Wang, Wanzhou Ye. Advances in Pure Mathematics, 2022, No. 5, pp. 392-404 (13 pages)
Bayesian empirical likelihood is a semiparametric method that combines parametric priors with nonparametric likelihoods; that is, it replaces the parametric likelihood function in Bayes' theorem with a nonparametric empirical likelihood function, which can be used without assuming the distribution of the data. It can effectively avoid the problems caused by model misspecification. In variable selection based on Bayesian empirical likelihood, the penalty term is introduced into the model in the form of a parameter prior. In this paper, we propose a novel variable selection method: L1/2 regularization based on Bayesian empirical likelihood. The L1/2 penalty is introduced into the model through a scale mixture of uniforms representation of the generalized Gaussian prior, and the posterior distribution is then sampled using an MCMC method. Simulations demonstrate that the proposed method can have better predictive ability when the error violates the zero-mean normality assumption of the standard parametric model, and can perform variable selection.
Keywords: Bayesian empirical likelihood; generalized Gaussian prior; L1/2 regularization; MCMC method
3. Seismic Sparse Deconvolution Based on L1/2 Regularization Theory (Cited: 8)
Authors: 康治梁, 张雪冰. 《石油物探》 (EI, CSCD, PKU Core), 2019, No. 6, pp. 855-863 (9 pages)
Seismic deconvolution is an important seismic data processing method for compressing the seismic wavelet and improving the vertical resolution of thin layers. Under the assumption of layered strata, the reflection coefficients can be regarded as a sparse spike sequence, so seismic deconvolution can be formulated as a sparse inversion problem. L1 regularization has been widely used to solve such problems, but recent literature has shown that its sparse representation ability is not optimal. To address this, building on the rapidly developing L1/2 regularization theory, this paper proposes using L1/2 regularization as the sparsity constraint on the reflection coefficients in seismic deconvolution, solved with its dedicated iterative thresholding algorithm. Tests on a single-trace model confirm that the method adapts well to the choice of regularization parameter and to noise. Results on a simple 2D model and the Marmousi2 model show that the inversion fits the reflection-coefficient amplitudes well, is more robust to noise, and better preserves weak reflection coefficients. Application to field data shows that the method effectively removes the influence of the wavelet and resolves thin-layer and lens structures, providing a powerful tool for high-resolution seismic data processing.
Keywords: seismic inversion; sparsity; L1 regularization; L1/2 regularization theory; nonconvex regularization; high resolution; thin-layer identification
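The iterative thresholding algorithm for L1/2 regularization rests on the analytic half-thresholding operator (in the form given by Xu et al.): entries below a threshold are set exactly to zero, larger ones are shrunk. A minimal sketch, with an assumed regularization parameter `lam`:

```python
import numpy as np

def half_threshold(t, lam):
    """Analytic half-thresholding operator for the L1/2 penalty.

    Entries with |t| below the threshold are set exactly to zero;
    larger entries are shrunk, promoting sparse spike sequences.
    """
    t = np.asarray(t, dtype=float)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    out = np.zeros_like(t)
    big = np.abs(t) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(t[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * t[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

x = np.array([0.2, -0.5, 5.0])
print(half_threshold(x, lam=1.0))  # small entries -> 0, large entry shrunk
```

In the full algorithm this operator is applied at each iteration to the gradient-updated reflection-coefficient estimate, just as soft thresholding is applied in L1-based iterative shrinkage.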
4. A Sharp Nonasymptotic Bound and Phase Diagram of L1/2 Regularization (Cited: 1)
Authors: Hai ZHANG, Zong Ben XU, Yao WANG, Xiang Yu CHANG, Yong LIANG. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2014, No. 7, pp. 1242-1258 (17 pages)
We derive a sharp nonasymptotic bound on the parameter estimation error of L1/2 regularization. The bound shows that solutions of L1/2 regularization can achieve a loss within a logarithmic factor of the ideal mean squared error, which underlies the feasibility and effectiveness of L1/2 regularization. Interestingly, when applied to compressive sensing, the L1/2 regularization scheme has exhibited a very promising capability of complete recovery from far fewer samples. Compared with the Lp (0 < p < 1) penalties, the L1/2 penalty always yields a sparser solution than any Lp penalty with 1/2 < p < 1, while for 0 < p < 1/2 the Lp penalty exhibits properties similar to those of the L1/2 penalty. This suggests that the L1/2 regularization scheme can be accepted as the representative of all the Lp (0 < p < 1) regularization schemes.
Keywords: L1/2 regularization; phase diagram; compressive sensing
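The sparsity ordering of the Lp penalties can be illustrated numerically. The two vectors below are hypothetical: they carry the same L1 penalty, but the L1/2 penalty strictly prefers the sparse one.

```python
import numpy as np

def lp_penalty(x, p):
    # The Lp (0 < p <= 1) penalty: sum_i |x_i|^p
    return float(np.sum(np.abs(x) ** p))

sparse = np.array([2.0, 0.0, 0.0, 0.0])  # one large entry
dense = np.array([0.5, 0.5, 0.5, 0.5])   # same total mass, spread out

# Under p = 1 both vectors cost the same; under p = 1/2 the sparse
# vector is strictly cheaper, so minimizers are pushed toward sparsity.
print(lp_penalty(sparse, 1.0), lp_penalty(dense, 1.0))  # 2.0 2.0
print(lp_penalty(sparse, 0.5), lp_penalty(dense, 0.5))
```

The smaller the exponent p, the more strongly spreading mass across coordinates is penalized, which is the intuition behind the comparison in the abstract.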
5. Robust Latent Factor Analysis for Precise Representation of High-Dimensional and Sparse Data (Cited: 5)
Authors: Di Wu, Xin Luo. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 4, pp. 796-805 (10 pages)
High-dimensional and sparse (HiDS) matrices commonly arise in industrial applications such as recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, representing them accurately is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt an L2-norm-oriented loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predictions under the L2-norm. Yet the L2-norm is sensitive to outliers, and outliers usually exist in such matrices; for example, an HiDS matrix from an RS commonly contains many outlier ratings due to heedless or malicious users. To address this issue, this work proposes a smooth L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt a smooth L1-norm rather than the L2-norm to form its loss, giving it both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model is not only robust to outlier data but also achieves significantly higher prediction accuracy than state-of-the-art models when predicting the missing data of HiDS matrices.
Keywords: high-dimensional and sparse matrix; L1-norm; L2-norm; latent factor model; recommender system; smooth L1-norm
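A common smooth L1-norm is the Huber-style loss: quadratic near zero, linear in the tails. The exact smoothing used in the SL-LF paper may differ, so treat this as an assumed form that illustrates the robustness argument:

```python
import numpy as np

def smooth_l1(r, delta=1.0):
    # Quadratic near zero (smooth, accurate for small residuals),
    # linear in the tails (robust: outliers grow only linearly).
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))

residuals = np.array([0.1, 0.5, 10.0])  # last entry is an outlier
print(smooth_l1(residuals))     # outlier contributes 9.5
print(0.5 * residuals ** 2)     # under the L2 loss it contributes 50.0
```

Because the outlier's contribution grows linearly rather than quadratically, a single corrupted rating cannot dominate the fitted latent factors the way it can under an L2 loss.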
6. Properties of I(L)-Type Induced Spaces (Cited: 1)
Author: 胡兰芳. 《江苏师范大学学报(自然科学版)》 (CAS), 1989, No. 2, pp. 9-16 (8 pages)
This paper discusses the closure and interior operations of the I(L)-type induced spaces of fuzzy topological spaces, and examines their separability, the C_I and C_II countability axioms, and their separation properties.
Keywords: I(L)-type induced space; separable space; C_I space; C_II space; regular space; T_i space (i = 0, 1, 2, 3, 4)
7. Generating Cartoon Images from Face Photos with Cycle-Consistent Adversarial Networks (Cited: 1)
Authors: Tao Zhang, Zhanjie Zhang, Wenjing Jia, Xiangjian He, Jie Yang. Computers, Materials & Continua (SCIE, EI), 2021, No. 11, pp. 2733-2747 (15 pages)
The generative adversarial network (GAN), first proposed in 2014, is a machine learning system that learns to model a given data distribution; one of its most important applications is style transfer. Style transfer is a class of vision and graphics problems whose goal is to learn the mapping between an input image and an output image. CycleGAN is a classic GAN model with a wide range of style-transfer scenarios. Thanks to its unsupervised learning characteristics, the mapping between an input image and an output image is easy to learn. However, it is difficult for CycleGAN to converge and generate high-quality images. To solve this problem, spectral normalization is introduced into each convolutional kernel of the discriminator. With spectral normalization, every convolutional kernel satisfies a Lipschitz stability constraint and its value is limited to [0, 1], which promotes the training of the proposed model. Besides, we use a pretrained model (VGG16) to control the loss of image content at the position of the L1 regularization term. To avoid overfitting, both an L1 regularization term and an L2 regularization term are used in the objective loss function. In terms of Frechet Inception Distance (FID) score, our proposed model achieves outstanding performance and preserves more discriminative features. Experimental results show that the proposed model converges faster and achieves better FID scores than the state of the art.
Keywords: generative adversarial network; spectral normalization; Lipschitz stability constraint; VGG16; L1 regularization term; L2 regularization term; Frechet inception distance
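Spectral normalization divides a weight matrix by an estimate of its largest singular value, obtained by power iteration, so the layer's Lipschitz constant is bounded by 1. A minimal dense-matrix sketch (the matrix, seed, and iteration count are assumptions; real implementations work on reshaped convolutional kernels and reuse `u` across training steps):

```python
import numpy as np

def spectral_normalize(W, n_iter=100):
    # Power iteration to estimate the largest singular value sigma(W),
    # then rescale W so its spectral norm is (approximately) 1.
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    v = None
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # Rayleigh estimate of the top singular value
    return W / sigma

W = np.random.default_rng(1).normal(size=(4, 4))
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # ~ 1.0
```

Keeping the discriminator's layers 1-Lipschitz is what stabilizes GAN training: the discriminator cannot produce arbitrarily steep gradients for the generator to chase.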
8. A Pruning Algorithm with L1/2 Regularizer for Extreme Learning Machine (Cited: 1)
Authors: Ye-tian FAN, Wei WU, Wen-yu YANG, Qin-wei FAN, Jian WANG. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2014, No. 2, pp. 119-125 (7 pages)
Compared with traditional learning methods such as the back-propagation (BP) method, the extreme learning machine provides much faster learning and needs less human intervention, and thus has been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune it. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and a network pruned by L2 regularization.
Keywords: extreme learning machine (ELM); L1/2 regularizer; network pruning
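The pruning step itself can be sketched in a few lines: after training with the L1/2 regularizer, hidden nodes whose output weights have been driven to (numerically) zero are removed. The weight vector and the tolerance below are hypothetical, not values from the paper:

```python
import numpy as np

# Hypothetical ELM output weights after L1/2-regularized training:
# the penalty drives several of them exactly or numerically to zero.
beta = np.array([0.8, 0.0, -1.2, 1e-7, 0.3, 0.0])
tol = 1e-4  # assumed pruning tolerance

keep = np.abs(beta) > tol  # hidden nodes worth keeping
pruned_beta = beta[keep]
print(int(keep.sum()), "of", beta.size, "hidden nodes kept")  # 3 of 6
```

The same `keep` mask would also select the corresponding columns of the hidden-layer output matrix, shrinking the network without retraining from scratch.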
9. A Super-Resolution Reconstruction Algorithm Based on an L1/2 Regularization Constraint (Cited: 7)
Authors: 徐志刚, 李文文, 朱红蕾, 朱旭锋. 《华中科技大学学报(自然科学版)》 (EI, CAS, CSCD, PKU Core), 2017, No. 6, pp. 38-42 (5 pages)
To improve reconstructed image quality and reduce processing time, a single-frame image super-resolution reconstruction algorithm based on an L1/2 regularization constraint is proposed. In the dictionary-pair training stage for sparse reconstruction, features such as edges and textures are extracted from the low-resolution images using single-branch reconstruction of wavelet coefficients. In the reconstruction stage, since the solutions of L1-regularized models are often not sparse enough and reconstruction quality can be improved further, the L1/2 quasi-norm replaces the L1 norm in the super-resolution model, and a fast L1/2 regularization algorithm is used to compute the sparse solution. Experimental results show that, compared with existing algorithms, the proposed algorithm is superior in subjective and objective evaluation of the reconstructed images and in running speed.
Keywords: reconstructed image; super-resolution; sparse representation; L1/2 regularization model; single-branch wavelet-coefficient reconstruction
10. A Study of Regularization Methods in Deep Learning (Cited: 3)
Authors: 武国宁, 胡汇丰, 于萌萌. 《计算机科学与应用》, 2020, No. 6, pp. 1224-1233 (10 pages)
A neural network with millions of parameters easily overfits when trained on a large training set. Various regularization methods have been proposed to constrain the solution for the parameters. This paper summarizes the L1, L2, and Dropout regularization methods used in deep learning, and concludes with comparative numerical experiments on MNIST handwritten-digit recognition based on these methods.
Keywords: deep neural network; overfitting; L1 regularization; L2 regularization; Dropout; MNIST
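The three methods surveyed above can be sketched in a few lines. This is a minimal illustration; the penalty strength `lam` and the dropout rate `p` are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
lam = 0.01  # penalty strength (assumed)

# L2 adds lam * w to the gradient (weight decay: smooth shrinkage);
# L1 adds lam * sign(w), which pushes small weights exactly to zero.
grad_l2_term = lam * w
grad_l1_term = lam * np.sign(w)

def dropout(a, p=0.5):
    # Inverted dropout: zero each activation with probability p and
    # rescale the survivors so the expected activation is unchanged.
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

a = np.ones(10000)
print(dropout(a).mean())  # close to 1.0 in expectation
```

At test time, inverted dropout requires no correction: the rescaling by `1 / (1 - p)` during training already matches the expected activations.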
11. A Sparse Regularization Method Based on Wavelet Frames and Its Application in Image Restoration (Cited: 1)
Authors: 袁存林, 宋义壮. 《山东师范大学学报(自然科学版)》, 2021, No. 2, pp. 155-161 (7 pages)
This paper aims to restore the original image from images degraded by blur and noise. Under the prior assumption that the image coefficients are sparse in the wavelet transform domain, denoising and deblurring are achieved by minimizing an energy functional consisting of a data-fidelity term and a wavelet-frame-based L1/2 regularization term. Since this energy functional is nonlinear, nonconvex, and nondifferentiable, an ADMM-type algorithm is used to minimize it. Simulation experiments on four typical images in the digital imaging field (Shepp-Logan, Cameraman, Lenna, and Fingerprint) verify the effectiveness of the proposed algorithm. The results are expected to apply to real-world settings that demand high imaging resolution, such as early cancer screening in medical imaging.
Keywords: image restoration; wavelet frame; L1/2 regularization; sparse representation