Journal Articles
14 articles found.
1. A note on a family of proximal gradient methods for quasi-static incremental problems in elastoplastic analysis
Authors: Yoshihiro Kanno. Theoretical & Applied Mechanics Letters (CAS, CSCD), 2020, No. 5, pp. 315-320 (6 pages).
Abstract: Accelerated proximal gradient methods have recently been developed for solving quasi-static incremental problems of elastoplastic analysis with several different yield criteria. It has been demonstrated through numerical experiments that these methods can outperform conventional optimization-based approaches in computational plasticity. However, in the literature these algorithms are described individually for specific yield criteria, and hence there exists no guide for applying the algorithms to other yield criteria. This short paper presents a general form of algorithm design, independent of specific forms of yield criteria, that unifies the existing proximal gradient methods. A clear interpretation is also given to each step of the presented general algorithm so that each update rule is linked to the underlying physical laws in terms of mechanical quantities.
Keywords: Elastoplastic analysis; Incremental problem; Nonsmooth convex optimization; First-order optimization method; Proximal gradient method
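As a point of reference for the unified algorithm family surveyed above, a minimal generic sketch of the basic proximal gradient iteration is given below; the l1-regularized least-squares test problem and all function names are illustrative only and are not taken from the paper.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=200):
    """Generic proximal gradient loop: x <- prox_{step*g}(x - step*grad_f(x))."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative test problem: minimize 0.5*||A x - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_l1 = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad_f
x_hat = proximal_gradient(grad_f, prox_l1, np.zeros(10), step)
```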
2. An Accelerated Proximal Gradient Algorithm for Hankel Tensor Completion
Authors: Chuan-Long Wang, Xiong-Wei Guo, Xi-Hong Yan. Journal of the Operations Research Society of China (EI, CSCD), 2024, No. 2, pp. 461-477 (17 pages).
Abstract: In this paper, an accelerated proximal gradient algorithm is proposed for Hankel tensor completion problems. In our method, the iterative completion tensors generated by the new algorithm keep the Hankel structure, based on projection onto the Hankel tensor set. Moreover, due to the special properties of the Hankel structure, using the fast singular value thresholding operator of the mode-s unfolding of a Hankel tensor can decrease the computational cost. Meanwhile, the convergence of the new algorithm is discussed under some reasonable conditions. Finally, numerical experiments show the effectiveness of the proposed algorithm.
Keywords: Hankel tensor; Tensor completion; Accelerated proximal gradient algorithm
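The fast singular value thresholding step mentioned in this abstract is, in the generic case, the proximal operator of the nuclear norm. A minimal sketch follows, applied to a plain mode-1 unfolding; the paper's Hankel projection step and its mode-s unfolding bookkeeping are not reproduced.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau*||.||_* applied to a matrix M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # scale columns of U by shrunken singular values

# Applied to a simple mode-1 unfolding of a small 3-way array.
T = np.random.default_rng(1).standard_normal((4, 5, 6))
M1_shrunk = svt(T.reshape(4, -1), tau=0.5)
```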
3. L1-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection
Authors: Chuandong Qin, Yu Cao, Liqun Meng. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 1975-1994 (20 pages).
Abstract: Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM. It avoids the issue of nondifferentiability at the zero point encountered by the traditional hinge loss function during gradient descent optimization. Secondly, L1 regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum, and distributed adaptive PGD with momentum (DPGD), are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis, with its value manifested in extending algorithms to distributed clusters, thus enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of the computer's multi-core resources. Due to the sparsity induced by L1 regularization on the parameters, it exhibits significantly accelerated convergence speed. From the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants have achieved cutting-edge accuracy and efficiency in brain tumor detection. From pre-trained models, both PGD and DPGD outperform other models, boasting an accuracy of 95.21%.
Keywords: Support vector machine; Proximal stochastic gradient descent; Brain tumor detection; Distributed computing
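A hedged sketch of a proximal stochastic gradient step with momentum and an l1 prox is given below. The quadratically smoothed hinge used here is one common smoothing and may differ from the paper's choice; the synthetic data, batch sizes, and hyperparameters are illustrative only.

```python
import numpy as np

def smooth_hinge_grad(w, X, y):
    """Gradient of one common quadratic smoothing of the hinge loss, averaged over the batch."""
    z = y * (X @ w)
    dz = np.where(z <= 0, -1.0, np.where(z < 1, z - 1.0, 0.0))  # derivative of the smoothed hinge
    return X.T @ (dz * y) / len(y)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Proximal SGD with heavy-ball momentum and an l1 prox step on synthetic data.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((200, 10)), rng.choice([-1.0, 1.0], size=200)
w, m = np.zeros(10), np.zeros(10)
step, beta, lam = 0.1, 0.9, 0.01
for epoch in range(20):
    for idx in np.array_split(rng.permutation(200), 10):
        m = beta * m + smooth_hinge_grad(w, X[idx], y[idx])   # momentum on the smooth part
        w = soft_threshold(w - step * m, step * lam)          # prox of lam*||.||_1
```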
4. On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems (cited: 4)
Authors: Haibin Zhang, Jiaojiao Jiang, Zhi-Quan Luo. Journal of the Operations Research Society of China (EI), 2013, No. 2, pp. 163-186 (24 pages).
Abstract: We consider a class of nonsmooth convex optimization problems where the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of both the l1-norm and the l2-norm of the optimization variables. This class of problems arises naturally from applications in sparse group Lasso, which is a popular technique for variable selection. An effective approach for solving such problems is the proximal gradient method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
Keywords: Proximal gradient method; Error bound; Linear convergence; Sparse group Lasso
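For the l1-plus-l2 regularizer considered in this paper, the proximal operator is commonly computed as componentwise soft-thresholding followed by a global shrinkage of the vector; a minimal sketch under that assumption (parameter names are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l1_plus_l2(v, step, lam1, lam2):
    """Prox of step*(lam1*||.||_1 + lam2*||.||_2): soft-threshold, then shrink the whole vector."""
    u = soft_threshold(v, step * lam1)
    nu = np.linalg.norm(u)
    return np.maximum(1.0 - step * lam2 / nu, 0.0) * u if nu > 0.0 else u

print(prox_l1_plus_l2(np.array([0.3, -1.5, 0.05, 2.0]), step=1.0, lam1=0.1, lam2=0.5))
```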
5. An inexact alternating proximal gradient algorithm for nonnegative CP tensor decomposition (cited: 2)
Authors: WANG DeQing, CONG FengYu. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2021, No. 9, pp. 1893-1906 (14 pages).
Abstract: Nonnegative tensor decomposition has become increasingly important for multiway data analysis in recent years. The alternating proximal gradient (APG) method is a popular optimization method for nonnegative tensor decomposition in the block coordinate descent framework. In this study, we propose an inexact version of the APG algorithm for nonnegative CANDECOMP/PARAFAC decomposition, wherein each factor matrix is updated by only a finite number of inner iterations. We also propose a parameter warm-start method that can avoid the frequent parameter resetting of conventional APG methods and improve convergence performance. By experimental tests, we find that when the number of inner iterations is limited to around 10 to 20, the convergence speed is accelerated significantly without sacrificing the low relative error. We evaluate our method on both synthetic and real-world tensors. The results demonstrate that the proposed inexact APG algorithm exhibits outstanding performance in both convergence speed and computational precision compared with existing popular algorithms.
Keywords: Tensor decomposition; Nonnegative CANDECOMP/PARAFAC; Block coordinate descent; Alternating proximal gradient; Inexact scheme
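The inexact scheme described above caps each factor-matrix update at a small number of inner iterations. A minimal sketch of one such inner loop, with nonnegativity enforced by clipping (the prox of the nonnegative-orthant indicator), is shown on a toy least-squares subproblem; the paper's tensor-specific gradients and warm-start rule are not reproduced.

```python
import numpy as np

def inner_nonneg_prox_grad(grad, X0, step, n_inner=15):
    """A finite number of proximal gradient steps with the nonnegativity prox (clipping)."""
    X = X0.copy()
    for _ in range(n_inner):
        X = np.maximum(X - step * grad(X), 0.0)
    return X

# Toy factor-matrix subproblem: min_{X >= 0} 0.5*||X @ B - C||_F^2 with B, C fixed.
rng = np.random.default_rng(2)
B, C = rng.random((3, 8)), rng.random((5, 8))
grad = lambda X: (X @ B - C) @ B.T
step = 1.0 / np.linalg.norm(B @ B.T, 2)   # 1/L for this quadratic subproblem
X_hat = inner_nonneg_prox_grad(grad, np.zeros((5, 3)), step)
```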
6. A Modified Proximal Gradient Method for a Family of Nonsmooth Convex Optimization Problems
Authors: Ying-Yi Li, Hai-Bin Zhang, Fei Li. Journal of the Operations Research Society of China (EI, CSCD), 2017, No. 3, pp. 391-403 (13 pages).
Abstract: In this paper, we propose a modified proximal gradient method for solving a class of nonsmooth convex optimization problems, which arise in many contemporary statistical and signal processing applications. The proposed method adopts a new scheme to construct the descent direction based on the proximal gradient method. It is proven that the modified proximal gradient method is Q-linearly convergent without assuming strong convexity of the objective function. Finally, some numerical experiments are conducted to evaluate the proposed method.
Keywords: Nonsmooth convex optimization; Modified proximal gradient method; Q-linear convergence
7. PAPR Reduction in Massive MU-MIMO-OFDM Systems Using the Proximal Gradient Method
Authors: Davinder Singh, R.K. Sarin. Journal of Communications and Information Networks (CSCD), 2019, No. 1, pp. 88-94 (7 pages).
Abstract: In this paper, we address the issue of peak-to-average power ratio (PAPR) reduction in large-scale multiuser multiple-input multiple-output (MU-MIMO) orthogonal frequency-division multiplexing (OFDM) systems. PAPR reduction and the multiuser interference (MUI) cancellation problem are jointly formulated as an l∞-norm based composite convex optimization problem, which can be solved efficiently using the iterative proximal gradient method. The proximal operator associated with the l∞-norm is evaluated using a low-cost sorting algorithm. The proposed method adaptively chooses the step size to accelerate convergence. Simulation results reveal that the proximal gradient method converges swiftly while providing considerable PAPR reduction and lower out-of-band radiation.
Keywords: OFDM; MU-MIMO; PAPR reduction; Proximal operator; Proximal gradient method
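The l∞-norm proximal operator evaluated by sorting, as mentioned in the abstract, can be written via the Moreau identity as the residual of a projection onto an l1 ball. A real-valued sketch follows (the OFDM signals in the paper are complex-valued, and the sorting-based projection here is the standard one, not necessarily the paper's exact routine):

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto the l1 ball of the given radius (sorting-based)."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Prox of lam*||.||_inf via the Moreau identity: v - P_{lam*B1}(v)."""
    return v - project_l1_ball(v, lam)

print(prox_linf(np.array([3.0, -1.0, 0.5]), lam=1.5))   # clips the largest entry to 1.5
```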
8. A Mini-Batch Proximal Stochastic Recursive Gradient Algorithm with Diagonal Barzilai–Borwein Stepsize (cited: 1)
Authors: Teng-Teng Yu, Xin-Wei Liu, Yu-Hong Dai, Jie Sun. Journal of the Operations Research Society of China (EI, CSCD), 2023, No. 2, pp. 277-307 (31 pages).
Abstract: Many machine learning problems can be formulated as minimizing the sum of a function and a non-smooth regularization term. Proximal stochastic gradient methods are popular for solving such composite optimization problems. We propose a mini-batch proximal stochastic recursive gradient algorithm, SRG-DBB, which incorporates the diagonal Barzilai–Borwein (DBB) stepsize strategy to capture the local geometry of the problem. The linear convergence and complexity of SRG-DBB are analyzed for strongly convex functions. We further establish the linear convergence of SRG-DBB under the non-strong convexity condition. Moreover, it is proved that SRG-DBB converges sublinearly in the convex case. Numerical experiments on standard data sets indicate that the performance of SRG-DBB is better than or comparable to the proximal stochastic recursive gradient algorithm with best-tuned scalar stepsizes or BB stepsizes. Furthermore, SRG-DBB is superior to some advanced mini-batch proximal stochastic gradient methods.
Keywords: Stochastic recursive gradient; Proximal gradient algorithm; Barzilai–Borwein method; Composite optimization
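For orientation, the scalar Barzilai–Borwein stepsize that the paper's diagonal (DBB) rule generalizes can be sketched as follows; the sample vectors and the safeguard threshold are illustrative.

```python
import numpy as np

def bb_stepsize(w, w_prev, g, g_prev, fallback=1e-2):
    """Scalar Barzilai-Borwein stepsize <s,s>/<s,y> with s = w - w_prev, y = g - g_prev."""
    s, y = w - w_prev, g - g_prev
    sy = float(s @ y)
    return float(s @ s) / sy if sy > 1e-12 else fallback   # fallback guards against tiny/negative curvature

# Illustrative call (values chosen so that s and y have positive inner product).
w_prev, w = np.array([0.0, 0.0]), np.array([0.4, -0.2])
g_prev, g = np.array([1.0, -0.5]), np.array([1.4, -0.7])
print(bb_stepsize(w, w_prev, g, g_prev))
```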
9. Hybrid Regularized Cone-Beam Reconstruction for Axially Symmetric Object Tomography
Authors: Xinge LI, Suhua WEI, Haibo XU, Chong CHEN. Acta Mathematica Scientia (SCIE, CSCD), 2022, No. 1, pp. 403-419 (17 pages).
Abstract: In this paper, we consider 3D tomographic reconstruction for axially symmetric objects from a single radiograph formed by cone-beam X-rays. All contemporary density reconstruction methods in high-energy X-ray radiography are based on the assumption that the cone beam can be treated as fan beams located at parallel planes perpendicular to the symmetric axis, so that the density of the whole object can be recovered layer by layer. Considering the relationship between different layers, we undertake cone-beam global reconstruction to resolve the ambiguity effect at the material interfaces of the reconstruction results. In view of the anisotropy of classical discrete total variations, a new discretization of total variation which yields sharp edges and has better isotropy is introduced in our reconstruction model. Furthermore, considering that the object density consists of continually changing parts and jumps, a high-order regularization term is introduced. The final hybrid regularization model is solved using the alternating proximal gradient method, which was recently applied in image processing. Density reconstruction results are presented for simulated radiographs, which show that the proposed method improves the preservation of edge locations.
Keywords: High-energy X-ray radiography; Cone-beam global reconstruction; Inverse problem; Total variation; Alternating proximal gradient method
10. DME Interference mitigation for L-DACS1 based on system identification and sparse representation (cited: 6)
Authors: Li Douzhe, Wu Zhijun. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2016, No. 6, pp. 1762-1773 (12 pages).
Abstract: The L-band digital aeronautical communication system 1 (L-DACS1) is a promising candidate data link for future air-ground communication, but it is severely interfered with by the pulse pairs (PPs) generated by distance measuring equipment (DME). A novel PP mitigation approach is proposed in this paper. Firstly, a deformed PP detection (DPPD) method that combines a filter bank, correlation detection, and rescanning is proposed to detect the deformed PPs (DPPs) that are caused by multiple filters in the receiver. Secondly, a finite impulse response (FIR) model is used to approximate the overall characteristic of the filters, and then the waveform of a DPP can be acquired from the original waveform of the PP and the FIR model. Finally, sparse representation is used to estimate the position and amplitude of each DPP and then reconstruct it. The reconstructed DPPs are subtracted from the contaminated signal to mitigate interference. Numerical experiments show that the bit error rate performance of our approach is about 5 dB better than that of recent works and is closer to the interference-free environment.
Keywords: DME interference; L-DACS1; Least square approximations; Proximal gradient algorithm; Sparse representation
11. First-Order Algorithms for Convex Optimization with Nonseparable Objective and Coupled Constraints (cited: 7)
Authors: Xiang Gao, Shu-Zhong Zhang. Journal of the Operations Research Society of China (EI, CSCD), 2017, No. 2, pp. 131-159 (29 pages).
Abstract: In this paper, we consider a block-structured convex optimization model, where in the objective the block variables are nonseparable and they are further linearly coupled in the constraint. For the 2-block case, we propose a number of first-order algorithms to solve this model. First, the alternating direction method of multipliers (ADMM) is extended, assuming that it is easy to optimize the augmented Lagrangian function with one block of variables at a time while fixing the other block. We prove that an O(1/t) iteration complexity bound holds under suitable conditions, where t is the number of iterations. If the subroutines of the ADMM cannot be implemented, then we propose new alternative algorithms, called the alternating proximal gradient method of multipliers, the alternating gradient projection method of multipliers, and hybrids thereof. Under suitable conditions, the O(1/t) iteration complexity bound is shown to hold for all the newly proposed algorithms. Finally, we extend the analysis for the ADMM to the general multi-block case.
Keywords: First-order algorithms; ADMM; Proximal gradient method; Convex optimization; Iteration complexity
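The 2-block structure discussed above can be illustrated on the textbook ADMM splitting of the lasso problem; this is a standard instance with a separable objective, not the paper's nonseparable, coupled-constraint model, and all names below are illustrative.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Textbook 2-block ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse every iteration
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))   # x-update (ridge solve)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)     # z-update (prox of l1)
        u = u + x - z                                                       # scaled dual update
    return z

rng = np.random.default_rng(3)
A, b = rng.standard_normal((40, 15)), rng.standard_normal(40)
z_hat = admm_lasso(A, b, lam=0.1)
```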
12. Approximately orthogonal nonnegative Tucker decomposition for flexible multiway clustering (cited: 2)
Authors: QIU YiChun, SUN WeiJun, ZHANG Yu, GU XiaoBo, ZHOU GuoXu. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2021, No. 9, pp. 1872-1880 (9 pages).
Abstract: High-order tensor data are prevalent in real-world applications, and multiway clustering is one of the most important techniques for exploratory data mining and compression of multiway data. However, existing multiway clustering is based on the K-means procedure and is incapable of addressing the issue of crossed membership degrees. To overcome this limitation, we propose a flexible multiway clustering model called approximately orthogonal nonnegative Tucker decomposition (AONTD). The new model provides extra flexibility to handle crossed memberships while fully exploiting the multilinear property of tensor data. The accelerated proximal gradient method and low-rank compression tricks are adopted to optimize the cost function. The experimental results on both synthetic data and real-world cases illustrate that the proposed AONTD model outperforms the benchmark clustering methods by significantly improving the interpretability and robustness.
Keywords: Multiway data analysis; Nonnegative Tucker decomposition; Flexible clustering; Accelerated proximal gradient
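A generic sketch of the accelerated proximal gradient (FISTA-style) update used as the optimizer here, specialized to a toy nonnegative least-squares problem; the AONTD cost function, orthogonality terms, and low-rank compression tricks are not reproduced, and all names below are illustrative.

```python
import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, x0, step, n_iter=300):
    """FISTA-style accelerated proximal gradient with the standard t_k momentum sequence."""
    x = x_prev = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)        # extrapolated point
        x_prev, x = x, prox_g(y - step * grad_f(y), step)  # proximal gradient step at y
        t = t_next
    return x

# Toy nonnegative least-squares instance; clipping is the prox of the nonnegativity indicator.
rng = np.random.default_rng(4)
A, b = rng.random((30, 8)), rng.random(30)
grad_f = lambda x: A.T @ (A @ x - b)
prox_nonneg = lambda v, s: np.maximum(v, 0.0)
x_hat = accelerated_proximal_gradient(grad_f, prox_nonneg, np.zeros(8), 1.0 / np.linalg.norm(A, 2) ** 2)
```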
13. Extrapolated Smoothing Descent Algorithm for Constrained Nonconvex and Nonsmooth Composite Problems
Authors: Yunmei CHEN, Hongcheng LIU, Weina WANG. Chinese Annals of Mathematics, Series B (SCIE, CSCD), 2022, No. 6, pp. 1049-1070 (22 pages).
Abstract: In this paper, the authors propose a novel smoothing descent-type algorithm with extrapolation for solving a class of constrained nonsmooth and nonconvex problems, where the nonconvex term is possibly nonsmooth. Their algorithm adopts the proximal gradient algorithm with extrapolation and a safeguarding policy to minimize the smoothed objective function for better practical and theoretical performance. Moreover, the algorithm uses an easily checked rule to update the smoothing parameter to ensure that any accumulation point of the generated sequence is an (affine-scaled) Clarke stationary point of the original nonsmooth and nonconvex problem. Their experimental results indicate the effectiveness of the proposed algorithm.
Keywords: Constrained nonconvex and nonsmooth optimization; Smooth approximation; Proximal gradient algorithm with extrapolation; Gradient descent algorithm; Image reconstruction
14. On Globally Q-Linear Convergence of a Splitting Method for Group Lasso
Authors: Yun-Da Dong, Hai-Bin Zhang, Huan Gao. Journal of the Operations Research Society of China (EI, CSCD), 2018, No. 3, pp. 445-454 (10 pages).
Abstract: In this paper, we discuss a splitting method for group Lasso. By assuming that the sequence of step lengths has a positive lower bound and a positive upper bound (unrelated to the given problem data), we prove a Q-linear rate of convergence of the distance from the iterates to the solution set. Moreover, we make comparisons with the convergence of the proximal gradient method analyzed very recently.
Keywords: Group Lasso; Splitting method; Proximal gradient method; Q-linear rate of convergence
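For reference, the blockwise shrinkage that serves as the proximal operator of the group Lasso penalty can be sketched as follows; the index groups and numerical values are illustrative, and the paper's specific splitting scheme is not reproduced.

```python
import numpy as np

def prox_group_lasso(v, groups, step, lam):
    """Blockwise shrinkage: prox of step*lam*sum_g ||v_g||_2 over disjoint index groups."""
    out = v.copy()
    for g in groups:
        ng = np.linalg.norm(v[g])
        out[g] = np.maximum(1.0 - step * lam / ng, 0.0) * v[g] if ng > 0 else 0.0
    return out

# Illustrative call with two disjoint groups.
v = np.array([0.5, -2.0, 0.1, 3.0, -0.2])
groups = [np.array([0, 1]), np.array([2, 3, 4])]
print(prox_group_lasso(v, groups, step=0.5, lam=1.0))
```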