Journal Articles
5 articles found
1. New regularization method and iteratively reweighted algorithm for sparse vector recovery (cited by 1)
Authors: Wei ZHU, Hui ZHANG, Lizhi CHENG. Applied Mathematics and Mechanics (English Edition), SCIE EI CSCD indexed, 2020, Issue 1, pp. 157-172 (16 pages)
Motivated by the study of regularization for sparse problems, we propose a new regularization method for sparse vector recovery. We derive sufficient conditions on the well-posedness of the new regularization, and design an iterative algorithm, namely the iteratively reweighted algorithm (IR-algorithm), for efficiently computing the sparse solutions to the proposed regularization model. The convergence of the IR-algorithm and the setting of the regularization parameters are analyzed at length. Finally, we present numerical examples to illustrate the features of the new regularization and algorithm.
Keywords: regularization method; iteratively reweighted algorithm (IR-algorithm); sparse vector recovery
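The reweighting loop this abstract describes can be illustrated with a generic iteratively reweighted l1 sketch. This is a template, not the paper's specific regularizer or parameter rule; the function name, the 1/(|x_i| + eps) weight update, and all parameter values are illustrative assumptions.

```python
import numpy as np

def irl1_sparse_recovery(A, b, lam=0.05, eps=1e-3, outer=10, inner=300):
    """Generic iteratively reweighted l1 sketch: each outer pass solves a
    weighted-l1-regularized least-squares problem by proximal gradient
    descent (ISTA), then updates the weights as w_i = 1/(|x_i| + eps),
    which pushes small components toward zero."""
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe ISTA step size
    for _ in range(outer):
        for _ in range(inner):
            z = x - step * (A.T @ (A @ x - b))      # gradient step on the data term
            # weighted soft-thresholding: prox of the weighted l1 penalty
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
        w = 1.0 / (np.abs(x) + eps)                 # reweighting step
    return x
```

With small `eps` the weights approximate 1/|x_i|, the standard heuristic behind reweighted l1; the paper analyzes a specific regularization rather than this default choice.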
2. Iterative-Reweighting-Based Robust Iterative-Closest-Point Method
Authors: ZHANG Jianlin, ZHOU Xuejun, YANG Ming. Journal of Shanghai Jiaotong University (Science), EI indexed, 2021, Issue 5, pp. 739-746 (8 pages)
In point cloud registration applications, noise and poor initial conditions lead to many false matches. False matches significantly degrade registration accuracy and speed. A penalty function is adopted in many robust point-to-point registration methods to suppress the influence of false matches. However, once a penalty function is applied, the problem can no longer be solved in analytical form because of the introduced nonlinearity. Therefore, most existing methods adopt a descent method. In this paper, a novel iterative-reweighting-based method is proposed to overcome the limitations of existing methods. The proposed method iteratively solves for the eigenvectors of a four-dimensional matrix, whereas the calculation of the descent method relies on solving an eight-dimensional matrix. Therefore, the proposed method achieves higher computational efficiency. The proposed method was validated on simulated noise-corrupted data, and the results reveal that it obtains higher efficiency and precision than existing methods, particularly under very noisy conditions. Experimental results on the KITTI dataset demonstrate that the proposed method can be used in real-time localization with high accuracy and good efficiency.
Keywords: point cloud registration; iterative reweighting; iterative closest point (ICP); robust localization
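The four-dimensional eigenvector step the abstract refers to matches the classic quaternion closed form for the best rotation (Horn's method): the top eigenvector of a 4x4 matrix built from the weighted cross-covariance is a unit quaternion. Below is a hedged sketch of that building block plus a simple residual-based reweighting wrapper; the function names, the 1/(r + eps) weight rule, and the rotation-only setting are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def weighted_rotation_quat(P, Q, w):
    """Best rotation aligning weighted point pairs (P[i] -> Q[i]) via the
    top eigenvector of a 4x4 matrix (Horn's quaternion method); that
    eigenvector is a unit quaternion encoding the rotation."""
    Pc = P - np.average(P, axis=0, weights=w)       # weighted centering
    Qc = Q - np.average(Q, axis=0, weights=w)
    S = (w[:, None] * Pc).T @ Qc                    # weighted cross-covariance
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    q = np.linalg.eigh(N)[1][:, -1]                 # eigenvector of the largest eigenvalue
    a, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - a*z),     2*(x*z + a*y)],
        [2*(x*y + a*z),     1 - 2*(x*x + z*z), 2*(y*z - a*x)],
        [2*(x*z - a*y),     2*(y*z + a*x),     1 - 2*(x*x + y*y)]])

def reweighted_rotation(P, Q, iters=5, eps=1e-3):
    """Iterative reweighting wrapper (a sketch, not the paper's exact rule):
    pairs with large residuals, which suggest false matches, get small
    weights. Assumes rotation-only alignment with known correspondences."""
    w = np.ones(len(P))
    for _ in range(iters):
        R = weighted_rotation_quat(P, Q, w)
        res = np.linalg.norm(Q - P @ R.T, axis=1)   # per-pair residuals
        w = 1.0 / (res + eps)
    return R
```

Because the quaternion step only requires the extreme eigenvector of a symmetric 4x4 matrix, each reweighted iteration stays cheap, which is consistent with the efficiency argument in the abstract.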
3. Dropout training for SVMs with data augmentation (cited by 1)
Authors: Ning CHEN, Jun ZHU, Jianfei CHEN, Ting CHEN. Frontiers of Computer Science, SCIE EI CSCD indexed, 2018, Issue 4, pp. 694-713 (20 pages)
Dropout and other feature noising schemes have shown promise in controlling over-fitting by artificially corrupting the training data. Though extensive studies have been performed for generalized linear models, little has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. This paper presents dropout training for both linear SVMs and a nonlinear extension with latent representation learning. For linear SVMs, to deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least squares (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a re-weighted least squares problem, where the re-weights are updated analytically. For nonlinear latent SVMs, we consider learning one layer of latent representations in SVMs and extend the data augmentation technique in conjunction with a first-order Taylor expansion to deal with the intractable expected hinge loss and the nonlinearity of latent representations. Finally, we apply similar data augmentation ideas to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions, and we further develop a nonlinear extension of logistic regression by incorporating one layer of latent representations. Our algorithms offer insights on the connection and difference between the hinge loss and the logistic loss in dropout training. Empirical results on several real datasets demonstrate the effectiveness of dropout training in significantly boosting the classification accuracy of both linear and nonlinear SVMs.
Keywords: dropout; SVMs; logistic regression; data augmentation; iteratively reweighted least squares
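The IRLS idea in this abstract, repeatedly solving a weighted least-squares problem whose weights are updated analytically, can be illustrated on a simpler classical case, l1 (robust) regression. This is the generic IRLS template only, not the paper's data-augmentation algorithm for the expected hinge loss; the function name, the 1/(|r| + eps) weight rule, and eps are assumptions.

```python
import numpy as np

def irls_l1_regression(X, y, iters=50, eps=1e-6):
    """Generic IRLS template: minimize sum_i |y_i - x_i . beta| by
    alternating (1) a weighted least-squares solve and (2) an analytic
    weight update from the current residuals, w_i = 1/(|r_i| + eps)."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        W = w[:, None] * X
        # weighted normal equations: (X^T diag(w) X) beta = X^T diag(w) y
        beta = np.linalg.solve(X.T @ W, W.T @ y)
        w = 1.0 / (np.abs(y - X @ beta) + eps)      # reweight from residuals
    return beta
```

For an intercept-only design this converges to the median of `y`, which is the l1 minimizer; each weighted solve is closed-form, mirroring the "re-weights are updated analytically" structure described above.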
4. Adaptive sparse and dense hybrid representation with nonconvex optimization
Authors: Xuejun WANG, Feilong CAO, Wenjian WANG. Frontiers of Computer Science, SCIE EI CSCD indexed, 2020, Issue 4, pp. 65-78 (14 pages)
Sparse representation has been widely used in signal processing, pattern recognition, computer vision, etc. Excellent achievements have been made in both theoretical research and practical applications. However, there are two limitations on its application to classification. One is that sufficient training samples are required for each class, and the other is that the samples should be uncorrupted. To alleviate the above problems, a sparse and dense hybrid representation (SDR) framework has been proposed, in which the training dictionary is decomposed into a class-specific dictionary and a non-class-specific dictionary. SDR puts an ℓ1 constraint on the coefficients of the class-specific dictionary. Nevertheless, it over-emphasizes sparsity and overlooks the correlation information in the class-specific dictionary, which may lead to poor classification results. To overcome this disadvantage, an adaptive sparse and dense hybrid representation with nonconvex optimization (ASDR-NO) is proposed in this paper. The trace norm is adopted for the class-specific dictionary, which differs from general approaches. By doing so, the dictionary structure becomes adaptive and the representation ability of the dictionary is improved. Meanwhile, a nonconvex surrogate is used to approximate the rank function in the dictionary decomposition in order to avoid a suboptimal solution of the original rank minimization; the resulting problem can be solved by an iteratively reweighted nuclear norm (IRNN) algorithm. Extensive experiments conducted on benchmark data sets have verified the effectiveness and superiority of the proposed algorithm compared with state-of-the-art sparse representation methods.
Keywords: sparse representation; trace norm; nonconvex optimization; low-rank matrix recovery; iteratively reweighted nuclear norm
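The IRNN step mentioned above can be sketched as weighted singular-value thresholding: a nonconvex surrogate of the rank yields per-singular-value weights that shrink large singular values less. The MCP-like weight rule, the parameters `lam` and `gamma`, and the denoising setting below are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def irnn_lowrank_denoise(M, lam=1.0, gamma=5.0, iters=20):
    """Iteratively reweighted nuclear norm (IRNN) sketch for
    min_X 0.5*||X - M||_F^2 + sum_i g(sigma_i(X)) with a nonconvex,
    MCP-like surrogate g. Each iteration forms weights from the current
    singular values, w_i = max(0, lam - sigma_i / gamma), so large
    singular values are shrunk less, then applies weighted
    singular-value thresholding to M (the prox for this data term)."""
    U, s0, Vt = np.linalg.svd(M, full_matrices=False)
    s = s0.copy()
    for _ in range(iters):
        w = np.maximum(0.0, lam - s / gamma)   # supergradient of the surrogate
        s = np.maximum(s0 - w, 0.0)            # weighted shrinkage of the spectrum
    return (U * s) @ Vt
```

Because `s` is sorted in decreasing order, the weights come out nondecreasing, which is the condition under which weighted singular-value thresholding is a valid proximal step.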
5. Nonconvex Sorted l1 Minimization for Sparse Approximation
Authors: Xiao-Lin Huang, Lei Shi, Ming Yan. Journal of the Operations Research Society of China, EI CSCD indexed, 2015, Issue 2, pp. 207-229 (23 pages)
The l1 norm is the tight convex relaxation of the l0 norm and has been successfully applied to recovering sparse signals. However, for problems with fewer samples than required for accurate l1 recovery, one needs to apply nonconvex penalties such as the lp norm. As one method for solving lp minimization problems, iteratively reweighted l1 minimization updates the weight for each component based on the value of that component at the previous iteration. It assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty with a fixed set of weights, where the weights are assigned based on the sort of all the components in magnitude: the smallest weight is assigned to the largest component in magnitude. This new penalty is called nonconvex sorted l1. We then propose two methods for solving nonconvex sorted l1 minimization problems, iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both methods converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods are generalizations of iterative support detection and iterative hard thresholding, respectively. The numerical experiments demonstrate the better performance of assigning weights by sort compared with assigning them by value.
Keywords: iteratively reweighted l1 minimization; iterative sorted thresholding; local minimizer; nonconvex optimization; sparse approximation
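The sorted weighted-l1 idea, a fixed ascending weight list assigned by magnitude rank so the largest component gets the smallest weight, can be sketched as a rank-matched thresholding step. This is a simplified reading of iterative sorted thresholding: it is the exact proximal step only when thresholding preserves the magnitude order (the general case needs an extra isotonic-regression pass), and the function name below is illustrative.

```python
import numpy as np

def sorted_l1_prox(z, weights):
    """Rank-matched thresholding for a nonconvex sorted l1 penalty:
    `weights` is a fixed ascending list assigned by magnitude rank, so
    the largest |z_i| receives the smallest weight, exactly the
    assignment rule described in the abstract."""
    order = np.argsort(-np.abs(z))        # indices from largest to smallest |z_i|
    w = np.empty_like(z)
    w[order] = np.sort(weights)           # smallest weight -> largest component
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)
```

Plugging this step into a proximal-gradient loop, x <- sorted_l1_prox(x - t * A.T @ (A @ x - b), t * weights), gives a hedged sketch of how such a fixed-weight scheme could be iterated; unlike reweighted l1, the weight list itself never changes, only its assignment to components.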