Abstract: Accelerated proximal gradient methods have recently been developed for solving quasi-static incremental problems of elastoplastic analysis with several different yield criteria. It has been demonstrated through numerical experiments that these methods can outperform conventional optimization-based approaches in computational plasticity. However, in the literature these algorithms are described individually for specific yield criteria, and hence there exists no guide for applying them to other yield criteria. This short paper presents a general form of algorithm design, independent of the specific form of the yield criterion, that unifies the existing proximal gradient methods. A clear interpretation is also given to each step of the presented general algorithm, so that each update rule is linked to the underlying physical laws in terms of mechanical quantities.
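For orientation, the sketch below shows the generic accelerated (FISTA-style) proximal gradient iteration that such methods instantiate, written for a composite objective f(x) + g(x) with smooth f. The oracles grad_f and prox_g are placeholders for illustration, not the paper's elastoplasticity-specific operators.

```python
import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, x0, step, n_iter=500):
    """FISTA-style iteration for min_x f(x) + g(x), f smooth, g 'prox-friendly'.

    grad_f(x)    : gradient of the smooth part f
    prox_g(v, t) : argmin_x g(x) + ||x - v||^2 / (2 t)
    step         : step size, e.g. 1/L with L a Lipschitz constant of grad_f
    """
    x = x0.copy()
    y = x0.copy()   # extrapolated point
    t = 1.0         # momentum parameter
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)       # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```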
Abstract: In this paper, an accelerated proximal gradient algorithm is proposed for Hankel tensor completion problems. In our method, the iterative completion tensors generated by the new algorithm retain the Hankel structure, based on projection onto the Hankel tensor set. Moreover, owing to the special properties of the Hankel structure, using the fast singular value thresholding operator of the mode-s unfolding of a Hankel tensor can decrease the computational cost. Meanwhile, the convergence of the new algorithm is discussed under some reasonable conditions. Finally, numerical experiments show the effectiveness of the proposed algorithm.
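As a reference point, the following is the standard singular value thresholding (SVT) operator one would apply to a matrix unfolding; this is a generic sketch, and the paper's fast Hankel-specific variant is not reproduced here.

```python
import numpy as np

def singular_value_thresholding(M, tau):
    """Soft-threshold the singular values of M: returns
    argmin_X tau * ||X||_* + (1/2) * ||X - M||_F^2."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink singular values toward zero
    return (U * s_shrunk) @ Vt            # scale columns of U, then recombine
```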
Funding: The Natural Science Foundation of Ningxia Province (No. 2021AAC03230).
Abstract: Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the nondifferentiability at the zero point encountered by the traditional hinge loss during gradient descent optimization. Secondly, L1 regularization is employed to sparsify the features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum, and distributed adaptive PGD with momentum (DPGD), are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of a computer's multi-core resources. Owing to the sparsity induced by L1 regularization of the parameters, it exhibits significantly accelerated convergence; from the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve cutting-edge accuracy and efficiency in brain tumor detection. With pre-trained models, both PGD and DPGD outperform other models, with an accuracy of 95.21%.
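A minimal single-sample sketch of a proximal stochastic gradient step for an L1-regularized SVM with a quadratically smoothed hinge loss is given below. The particular smoothing, the helper names, and the plain (non-adaptive, non-distributed) update are illustrative assumptions; the paper's adaptive momentum and Spark-based DPGD variants are not reproduced.

```python
import numpy as np

def smooth_hinge(z):
    """One quadratic smoothing of the hinge loss; z = y * (w @ x).
    Equals 0 for z >= 1, (1 - z)^2 / 2 on (0, 1), and 1/2 - z for z <= 0."""
    return np.where(z >= 1.0, 0.0,
                    np.where(z <= 0.0, 0.5 - z, 0.5 * (1.0 - z) ** 2))

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1; induces the sparsity noted above."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_sgd_step(w, x, y, lr, lam):
    """One proximal stochastic gradient step on a single sample (x, y)."""
    z = y * (x @ w)
    dz = 0.0 if z >= 1.0 else (z - 1.0 if z > 0.0 else -1.0)  # smooth-hinge slope
    return soft_threshold(w - lr * (dz * y) * x, lr * lam)
```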
Funding: This work was partially supported by the National Natural Science Foundation of China (Nos. 61179033, DMS-1015346).
Abstract: We consider a class of nonsmooth convex optimization problems where the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of both the l1-norm and the l2-norm of the optimization variables. This class of problems arises naturally from applications in sparse group Lasso, which is a popular technique for variable selection. An effective approach to solving such problems is the Proximal Gradient Method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
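For this l1-plus-l2 regularizer, the proximal operator of a single group has a well-known closed form (soft-thresholding followed by group shrinkage); a sketch follows, with the function name and interface chosen for illustration.

```python
import numpy as np

def prox_sparse_group(v, t, lam1, lam2):
    """Prox of t * (lam1 * ||x||_1 + lam2 * ||x||_2) for one group v.

    The exact prox of this sum is l1 soft-thresholding followed by
    group-wise l2 shrinkage of the result.
    """
    s = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)  # l1 step
    norm = np.linalg.norm(s)
    if norm == 0.0:
        return s
    return max(0.0, 1.0 - t * lam2 / norm) * s              # l2 step
```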
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 91748105), the National Foundation in China (Grant Nos. JCKY2019110B009 and 2020-JCJQ-JJ-252), the Fundamental Research Funds for the Central Universities (Grant Nos. DUT20LAB303 and DUT20LAB308) at Dalian University of Technology in China, and a scholarship from the China Scholarship Council (Grant No. 201600090043).
Abstract: Nonnegative tensor decomposition has become increasingly important for multiway data analysis in recent years. The alternating proximal gradient (APG) is a popular optimization method for nonnegative tensor decomposition in the block coordinate descent framework. In this study, we propose an inexact version of the APG algorithm for nonnegative CANDECOMP/PARAFAC decomposition, wherein each factor matrix is updated by only finitely many inner iterations. We also propose a parameter warm-start method that can avoid the frequent parameter resetting of conventional APG methods and improve convergence performance. Through experimental tests, we find that when the number of inner iterations is limited to around 10 to 20, the convergence speed is accelerated significantly without losing low relative error. We evaluate our method on both synthetic and real-world tensors. The results demonstrate that the proposed inexact APG algorithm exhibits outstanding performance in both convergence speed and computational precision compared with existing popular algorithms.
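The sketch below illustrates what an inner-iteration-capped APG update of one nonnegative factor could look like on a simplified matrix least-squares subproblem; the subproblem form, the cap of 15 steps, and the absence of warm-starting are assumptions, not the paper's CP-decomposition procedure.

```python
import numpy as np

def inexact_factor_update(A, B, X0, n_inner=15):
    """A few APG steps on min_{X >= 0} (1/2) * ||A - X @ B||_F^2,
    a simplified stand-in for one block update; the inner loop is
    deliberately capped rather than run to convergence."""
    L = np.linalg.norm(B @ B.T, 2)           # Lipschitz constant of the gradient
    X, Y, t = X0.copy(), X0.copy(), 1.0
    for _ in range(n_inner):
        G = (Y @ B - A) @ B.T                # gradient of the smooth term at Y
        X_new = np.maximum(Y - G / L, 0.0)   # prox = projection onto X >= 0
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = X_new + ((t - 1.0) / t_new) * (X_new - X)
        X, t = X_new, t_new
    return X
```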
Funding: The National Natural Science Foundation of China (No. 61179033).
Abstract: In this paper, we propose a modified proximal gradient method for solving a class of nonsmooth convex optimization problems, which arise in many contemporary statistical and signal processing applications. The proposed method adopts a new scheme to construct the descent direction based on the proximal gradient method. It is proven that the modified proximal gradient method is Q-linearly convergent without the assumption of strong convexity of the objective function. Finally, some numerical experiments are conducted to evaluate the proposed method.
Abstract: In this paper, we address the issue of peak-to-average power ratio (PAPR) reduction in large-scale multiuser multiple-input multiple-output (MU-MIMO) orthogonal frequency-division multiplexing (OFDM) systems. PAPR reduction and the multiuser interference (MUI) cancellation problem are jointly formulated as an l∞-norm based composite convex optimization problem, which can be solved efficiently using the iterative proximal gradient method. The proximal operator associated with the l∞-norm is evaluated using a low-cost sorting algorithm. The proposed method adaptively chooses the step size to accelerate convergence. Simulation results reveal that the proximal gradient method converges swiftly while providing considerable PAPR reduction and lower out-of-band radiation.
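One standard way to evaluate the l∞ prox by sorting is via the Moreau decomposition against Euclidean projection onto an l1-ball; a sketch is below, though the paper's specific sorting routine may differ.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius}, via one sort."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = np.max(np.where(u * ks > css - radius)[0]) + 1
    theta = (css[rho - 1] - radius) / rho    # shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Prox of lam * ||.||_inf via Moreau decomposition:
    prox(v) = v - projection of v onto the l1-ball of radius lam."""
    return v - project_l1_ball(v, lam)
```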
Funding: The National Natural Science Foundation of China (Nos. 11671116, 11701137, 12071108, 11991020, 11991021 and 12021001), the Major Research Plan of the NSFC (No. 91630202), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA27000000), and the Natural Science Foundation of Hebei Province (No. A2021202010).
Abstract: Many machine learning problems can be formulated as minimizing the sum of a function and a non-smooth regularization term. Proximal stochastic gradient methods are popular for solving such composite optimization problems. We propose a mini-batch proximal stochastic recursive gradient algorithm, SRG-DBB, which incorporates the diagonal Barzilai–Borwein (DBB) stepsize strategy to capture the local geometry of the problem. The linear convergence and complexity of SRG-DBB are analyzed for strongly convex functions. We further establish the linear convergence of SRG-DBB under a non-strong convexity condition. Moreover, it is proved that SRG-DBB converges sublinearly in the convex case. Numerical experiments on standard data sets indicate that the performance of SRG-DBB is better than or comparable to that of the proximal stochastic recursive gradient algorithm with best-tuned scalar stepsizes or BB stepsizes. Furthermore, SRG-DBB is superior to some advanced mini-batch proximal stochastic gradient methods.
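For reference, the scalar Barzilai–Borwein stepsize underlying such schemes is sketched below; the paper's diagonal (DBB) rule generalizes this secant idea componentwise and is not reproduced here.

```python
import numpy as np

def bb_stepsize(x, x_prev, g, g_prev, eps=1e-10):
    """Scalar Barzilai-Borwein (BB1) stepsize from a secant condition:
    t = (s^T s) / (s^T y), with s, y the iterate and gradient differences."""
    s = x - x_prev
    y = g - g_prev
    return float(s @ s) / max(float(s @ y), eps)  # crude curvature safeguard
```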
Funding: Supported by the National Postdoctoral Program for Innovative Talents (BX201700038), by NSFC (11571003 and 11675021), and by the Beijing Natural Science Foundation (Z180002).
Abstract: In this paper, we consider 3D tomographic reconstruction for axially symmetric objects from a single radiograph formed by cone-beam X-rays. All contemporary density reconstruction methods in high-energy X-ray radiography are based on the assumption that the cone beam can be treated as fan beams located at parallel planes perpendicular to the symmetry axis, so that the density of the whole object can be recovered layer by layer. Considering the relationship between different layers, we undertake cone-beam global reconstruction to resolve the ambiguity effect at the material interfaces of the reconstruction results. In view of the anisotropy of classical discrete total variations, a new discretization of total variation which yields sharp edges and has better isotropy is introduced in our reconstruction model. Furthermore, considering that the object density consists of continually changing parts and jumps, a high-order regularization term is introduced. The final hybrid regularization model is solved using the alternating proximal gradient method, which was recently applied in image processing. Density reconstruction results are presented for simulated radiographs, showing that the proposed method improves the preservation of edge locations.
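For contrast with the improved discretization mentioned in the abstract, the classical isotropic discrete total variation on a 2D grid (forward differences) is sketched below; the paper's new discretization is not reproduced.

```python
import numpy as np

def isotropic_tv(u):
    """Classical isotropic discrete TV of a 2D image, forward differences
    with replicated boundary (so the last row/column differences are zero)."""
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))
```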
Funding: Supported in part by the National Natural Science Foundation (Nos. U1533107 and U1433105), the Civil Aviation Science and Technology Innovation Foundation (No. MHRD20130217), and the Fundamental Research Funds for the Central Universities of CAUC (No. 3122016D003).
Abstract: L-band digital aeronautical communication system 1 (L-DACS1) is a promising candidate data-link for future air-ground communication, but it is severely interfered with by the pulse pairs (PPs) generated by distance measuring equipment. A novel PP mitigation approach is proposed in this paper. Firstly, a deformed PP detection (DPPD) method that combines a filter bank, correlation detection, and rescanning is proposed to detect the deformed PPs (DPPs) caused by multiple filters in the receiver. Secondly, a finite impulse response (FIR) model is used to approximate the overall characteristic of the filters, so that the waveform of a DPP can be acquired from the original waveform of the PP and the FIR model. Finally, sparse representation is used to estimate the position and amplitude of each DPP and then reconstruct it. The reconstructed DPPs are subtracted from the contaminated signal to mitigate interference. Numerical experiments show that the bit error rate performance of our approach is about 5 dB better than that of recent works and is closer to the interference-free environment.
Abstract: In this paper, we consider a block-structured convex optimization model, where in the objective the block variables are nonseparable and they are further linearly coupled in the constraint. For the 2-block case, we propose a number of first-order algorithms to solve this model. First, the alternating direction method of multipliers (ADMM) is extended, assuming that it is easy to optimize the augmented Lagrangian function with one block of variables at a time while fixing the other block. We prove that an O(1/t) iteration complexity bound holds under suitable conditions, where t is the number of iterations. If the subroutines of the ADMM cannot be implemented, then we propose new alternative algorithms, called the alternating proximal gradient method of multipliers, the alternating gradient projection method of multipliers, and hybrids thereof. Under suitable conditions, the O(1/t) iteration complexity bound is shown to hold for all the newly proposed algorithms. Finally, we extend the analysis of the ADMM to the general multi-block case.
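A schematic of the generic 2-block ADMM iteration that the abstract extends is given below; the two subproblem solvers are passed in as the "easy" subroutines the abstract assumes, and the interface is illustrative rather than the paper's.

```python
import numpy as np

def admm_2block(argmin_x, argmin_z, A, B, c, z0, lam0, rho=1.0, n_iter=200):
    """Generic 2-block ADMM for min f(x) + g(z) s.t. A x + B z = c.

    argmin_x(z, lam) and argmin_z(x, lam) minimize the augmented
    Lagrangian over one block with the other block fixed.
    """
    z, lam = z0.copy(), lam0.copy()
    for _ in range(n_iter):
        x = argmin_x(z, lam)                    # block-1 update
        z = argmin_z(x, lam)                    # block-2 update
        lam = lam + rho * (A @ x + B @ z - c)   # dual (multiplier) ascent
    return x, z, lam
```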
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 62073087, 62071132, 61973090 and U1911401) and the Key-Area Research and Development Program of Guangdong Province (Grant Nos. 2019B010154002 and 2019010118001).
Abstract: High-order tensor data are prevalent in real-world applications, and multiway clustering is one of the most important techniques for exploratory data mining and compression of multiway data. However, existing multiway clustering is based on the K-means procedure and is incapable of addressing the issue of crossed membership degrees. To overcome this limitation, we propose a flexible multiway clustering model called approximately orthogonal nonnegative Tucker decomposition (AONTD). The new model provides extra flexibility to handle crossed memberships while fully exploiting the multilinear property of tensor data. The accelerated proximal gradient method and low-rank compression tricks are adopted to optimize the cost function. The experimental results on both synthetic data and real-world cases illustrate that the proposed AONTD model outperforms benchmark clustering methods, significantly improving interpretability and robustness.
Funding: Supported by the National Natural Science Foundation of China (No. 12001144), the Zhejiang Provincial Natural Science Foundation of China (No. LQ20A010007), and NSF/DMS-2152961.
Abstract: In this paper, the authors propose a novel smoothing descent-type algorithm with extrapolation for solving a class of constrained nonsmooth and nonconvex problems, where the nonconvex term is possibly nonsmooth. Their algorithm adopts the proximal gradient algorithm with extrapolation and a safe-guarding policy to minimize the smoothed objective function, for better practical and theoretical performance. Moreover, the algorithm uses an easily checked rule to update the smoothing parameter, ensuring that any accumulation point of the generated sequence is an (affine-scaled) Clarke stationary point of the original nonsmooth and nonconvex problem. Their experimental results indicate the effectiveness of the proposed algorithm.
Funding: This research was supported by the National Natural Science Foundation of China (No. 61179033) and the Collaborative Innovation Center on Beijing Society-Building and Social Governance.
Abstract: In this paper, we discuss a splitting method for group Lasso. Assuming that the sequence of step lengths has a positive lower bound and a positive upper bound (unrelated to the given problem data), we prove the Q-linear rate of convergence of the distance from the iterates to the solution set. Moreover, we make comparisons with the convergence of the proximal gradient method analyzed very recently.
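Splitting methods for group Lasso hinge on the block soft-thresholding operator, i.e., the proximal map of the group penalty; a minimal sketch (with an assumed group-index interface) follows.

```python
import numpy as np

def prox_group_lasso(v, groups, t, lam):
    """Block soft-thresholding: prox of t * lam * sum_g ||v_g||_2,
    where `groups` is a list of index arrays partitioning v."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > 0.0:
            out[g] = max(0.0, 1.0 - t * lam / norm) * v[g]  # shrink or zero out
    return out
```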