Abstract: Semi-supervised learning can build a power grid transient stability assessment model from labeled and partially unlabeled samples, making effective use of the input data and improving assessment accuracy. To this end, a transient stability assessment method based on the semi-supervised manifold proximal support vector machine (MPSVM) is proposed. First, a discriminant variable is introduced into the regularization term of the MPSVM to capture as much of the geometric information inside the sample data as possible, and the difference between the stable and unstable classes of the power system is characterized by the maximum-distance principle, which reduces training to an eigenvalue problem. Then, a Bayesian nonlinear hierarchical model is used to determine the optimal parameters, further improving assessment accuracy. Finally, simulation studies on the IEEE 39-bus benchmark system and the Anshan power grid verify the effectiveness and accuracy of the proposed assessment model.
Funding: The Natural Science Foundation of Ningxia Province (No. 2021AAC03230).
Abstract: Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for training machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-smooth support vector machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the issue of nondifferentiability at the zero point that the traditional hinge loss encounters during gradient descent optimization. Secondly, L1 regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent with momentum (PGD) and distributed adaptive PGD with momentum (DPGD) are proposed and applied to the L1-smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, making full use of a machine's multi-core resources. Owing to the sparsity induced by L1 regularization of the parameters, it exhibits significantly accelerated convergence: in terms of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve state-of-the-art accuracy and efficiency in brain tumor detection; compared with pre-trained models, both PGD and DPGD outperform the other models, achieving an accuracy of 95.21%.
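To make the pipeline concrete, the following is a minimal single-machine sketch of proximal stochastic gradient descent with momentum for an L1-regularized smooth-hinge SVM. It assumes one common definition of the smooth hinge (quadratic smoothing of the kink) and plain heavy-ball momentum; the paper's adaptive step sizes and the Spark-based distribution of DPGD are omitted, and all function names are illustrative.

```python
import numpy as np

def smooth_hinge_grad(z):
    """Derivative of a smoothed hinge loss: -1 below 0, linear ramp on
    (0, 1), zero above 1; continuous everywhere, unlike the plain hinge."""
    return np.where(z <= 0.0, -1.0, np.where(z < 1.0, z - 1.0, 0.0))

def soft_threshold(w, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_sgd_momentum(X, y, lam=1e-3, lr=0.1, beta=0.9,
                      epochs=20, batch=32, seed=0):
    """Proximal SGD with heavy-ball momentum for
    min_w  mean_i smooth_hinge(y_i * x_i' w) + lam * ||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for idx in np.array_split(perm, max(1, n // batch)):
            margins = y[idx] * (X[idx] @ w)
            grad = X[idx].T @ (smooth_hinge_grad(margins) * y[idx]) / len(idx)
            v = beta * v + grad                       # momentum buffer
            w = soft_threshold(w - lr * v, lr * lam)  # gradient step, then prox
    return w
```

Only the soft-thresholding step touches the regularizer, which is what keeps the iterates sparse as the abstract describes.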
Funding: This research was supported by the National Natural Science Foundation of China (No. 11371242).
Abstract: The support vector machine (SVM) is a widely used method for classification. The proximal support vector machine (PSVM) is an extension of the SVM and a promising method that leads to a fast and simple algorithm for generating a classifier. Motivated by the low computational cost of the PSVM and the sparsity of solutions yielded by the l1-norm, in this paper we first propose a PSVM with a cardinality constraint, which is then relaxed by the l1-norm and leads to a trade-off l1-l2 regularized sparse PSVM. Next, we convert this l1-l2 regularized sparse PSVM into an equivalent l1-regularized least squares (LS) problem and solve it by the specialized interior-point method proposed by Kim et al. (IEEE J Sel Top Signal Process 1(4):606-617, 2007). Finally, the l1-l2 regularized sparse PSVM is illustrated on a real-world dataset taken from the University of California, Irvine Machine Learning Repository (UCI Repository). Moreover, we compare the numerical results with existing models such as the generalized eigenvalue proximal SVM (GEPSVM), PSVM, and SVM-Light. The numerical results show that the l1-l2 regularized sparse PSVM achieves not only better classification accuracy than GEPSVM, PSVM, and SVM-Light, but also a sparser classifier than the l1-PSVM.
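One standard identity illustrates how such a conversion can work (illustrative; the paper's exact construction may differ): an added squared l2 term folds into the least-squares block, leaving a pure l1-regularized LS problem of the kind the Kim et al. interior-point solver handles.

```latex
\|Ax - b\|_2^2 + \mu\,\|x\|_2^2 + \lambda\,\|x\|_1
\;=\;
\left\| \begin{pmatrix} A \\ \sqrt{\mu}\, I \end{pmatrix} x
      - \begin{pmatrix} b \\ 0 \end{pmatrix} \right\|_2^2
+ \lambda\,\|x\|_1 .
```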
Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 11371242) and the "085 Project" of Shanghai University.
Abstract: Classification is the central problem in machine learning. Support vector machines (SVMs) are supervised learning models with associated learning algorithms used for classification. In this paper, we establish two consensus proximal support vector machine (PSVM) models for binary classification. The first separates the objective function into individual convex functions, one per sample point of the training set; the constraints contain two types of equations, with global variables and local variables corresponding to the consensus points and sample points, respectively. To obtain sparser solutions, the second model is an l1-l2 consensus PSVM whose objective function contains an l2-norm term, responsible for good classification performance, and an l1-norm term, which plays an important role in finding sparse solutions. Both consensus PSVMs are solved by the alternating direction method of multipliers (ADMM). Furthermore, they are tested on real-world data taken from the University of California, Irvine Machine Learning Repository (UCI Repository) and compared with existing models such as the l1-PSVM, lp-PSVM, GEPSVM, PSVM, and SVM-Light. Numerical results show that our models outperform the others in both classification accuracy and sparsity of the solutions.
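The splitting described above matches the standard global-consensus ADMM template (written here in the notation of Boyd et al., not the paper's): each local variable x_i carries one piece f_i of the objective, a global variable z carries the regularizer, and the constraints x_i = z tie them together.

```latex
x_i^{k+1} = \arg\min_{x_i}\Big\{ f_i(x_i)
            + \tfrac{\rho}{2}\,\|x_i - z^k + u_i^k\|_2^2 \Big\},
            \quad i = 1,\dots,N, \\
z^{k+1}   = \arg\min_{z}\Big\{ \lambda\|z\|_1
            + \tfrac{N\rho}{2}\,\|z - \bar{x}^{k+1} - \bar{u}^k\|_2^2 \Big\}
          = S_{\lambda/(N\rho)}\big(\bar{x}^{k+1} + \bar{u}^k\big), \\
u_i^{k+1} = u_i^k + x_i^{k+1} - z^{k+1}.
```

Here S_tau denotes elementwise soft-thresholding and bars denote averages over i; for the first (unregularized) model the z-update is simply the average itself.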
Funding: The National High Technology Research and Development Program of China (No. 863-511-930-009).
Abstract: Support Vector Clustering (SVC) is a kernel-based unsupervised clustering method. Its main drawback is the high computational complexity of building the adjacency matrix that describes the connectivity of each pair of points. Based on the proximity graph model [3], the Euclidean distance in the kernel-induced Hilbert space is calculated with a Gaussian kernel and used as the edge-weight criterion for generating a minimum spanning tree (MST) with Kruskal's algorithm. The cost of connectivity estimation is then lowered by checking only the linkages between the edges that form the main stem of the MST, where a non-compatibility degree is newly defined to guide edge selection during linkage estimation. The new approach is analyzed experimentally. The results show that the revised algorithm outperforms the proximity graph model, with faster speed, better clustering quality, and strong noise suppression, making SVC scalable to large data sets.
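The kernel-space distance used here has a closed form: a Gaussian kernel satisfies K(x, x) = 1, so the squared Hilbert-space distance between feature-space images is 2 - 2K(x, y). A minimal sketch of the distance computation and MST construction follows (assuming SciPy; the function name is illustrative, and SciPy's MST routine yields the same tree Kruskal's algorithm would, up to ties).

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def kernel_space_mst(X, q=1.0):
    """MST over Hilbert-space distances induced by a Gaussian kernel.
    For K(x,y) = exp(-q*||x-y||^2), K(x,x) = 1, hence
    ||phi(x) - phi(y)||^2 = K(x,x) + K(y,y) - 2K(x,y) = 2 - 2K(x,y)."""
    sq = squareform(pdist(X, "sqeuclidean"))   # pairwise ||x - y||^2
    K = np.exp(-q * sq)                        # Gaussian kernel matrix
    D = np.sqrt(np.maximum(2.0 - 2.0 * K, 0.0))
    return minimum_spanning_tree(D)            # sparse matrix of MST edges
```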
Abstract: Support vector machines (SVMs) are an important class of machine learning methods arising from the interplay of statistical theory and optimization, and have been extensively applied to text categorization, disease diagnosis, face detection, and so on. The loss function is the core research object of the SVM, and its variational properties play an important role in the analysis of optimality conditions, the design of optimization algorithms, the representation of support vectors, and the study of dual problems. This paper summarizes and analyzes the 0-1 loss function and its eighteen popular surrogate loss functions in SVMs, and gives three variational properties of these loss functions: the subdifferential, the proximal operator, and the Fenchel conjugate, of which nine proximal operators and fifteen Fenchel conjugates are newly derived in this paper.
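As a concrete instance of the three variational objects, the classical hinge loss l(t) = max(0, 1 - t) admits the following closed forms (these are textbook results, not reproduced from the paper's tables):

```latex
\partial \ell(t) =
\begin{cases}
\{-1\}, & t < 1,\\
[-1,\,0], & t = 1,\\
\{0\}, & t > 1,
\end{cases}
\qquad
\operatorname{prox}_{\lambda \ell}(v) =
\begin{cases}
v + \lambda, & v < 1 - \lambda,\\
1, & 1 - \lambda \le v \le 1,\\
v, & v > 1,
\end{cases}
\qquad
\ell^{*}(s) =
\begin{cases}
s, & -1 \le s \le 0,\\
+\infty, & \text{otherwise.}
\end{cases}
```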
Abstract: Owing to their good classification performance compared with traditional algorithms, support vector machines are widely used in fault diagnosis research. However, the standard SVM suffers from long training times and high memory consumption. The proximal support vector machine (PSVM) algorithm trains quickly and uses little memory, making it particularly suitable for fault diagnosis on large amounts of data; however, its diagnostic accuracy for points near the classification hyperplane is somewhat insufficient. To address this problem, this paper applies the less time-consuming Vague-Sigmoid kernel function to the PSVM to improve its classification accuracy for samples near the separating surface; simulations show that good results are obtained.
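For reference, the fast training and low memory usage of the PSVM come from its one-shot closed-form solution. Below is a minimal sketch of the standard linear PSVM in the style of Fung and Mangasarian (names are illustrative; the Vague-Sigmoid kernel variant discussed above replaces the data matrix with a kernel matrix and is not shown).

```python
import numpy as np

def psvm_train(A, d, nu=1.0):
    """Linear proximal SVM closed form:
    minimize  nu/2 * ||e - D(Aw - e*gamma)||^2 + 1/2 * ||(w, gamma)||^2,
    where A is (n, m) data, d in {-1, +1}^n, D = diag(d), e = ones.
    The minimizer is (w, gamma) = (I/nu + E'E)^{-1} E' D e with E = [A, -e]."""
    n, m = A.shape
    E = np.hstack([A, -np.ones((n, 1))])     # E = [A, -e]
    rhs = E.T @ d.astype(float)              # E' D e  (= E' d since e = ones)
    M = np.eye(m + 1) / nu + E.T @ E         # small (m+1) x (m+1) system
    z = np.linalg.solve(M, rhs)
    return z[:-1], z[-1]                     # w, gamma

def psvm_predict(X, w, gamma):
    return np.sign(X @ w - gamma)
```

The training cost is dominated by one solve of an (m+1) x (m+1) system, independent of the number of samples n once E'E is formed.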
Abstract: The numerical solution of differential equations has important theoretical significance and practical value in computer engineering. To address the high computational complexity of traditional numerical methods and the discrete form of their solutions, this paper adopts the regression-equation view of differential equations and applies statistical regression to solve second-order ordinary differential equations (ODEs), giving approximate solution methods based on the proximal support vector machine (P-SVM) for initial-value and boundary-value problems. By adding a bias term to the objective function, a P-SVM regression model is constructed that avoids solving large-scale systems of linear equations and yields a compact expression for the optimal solution. By minimizing the sum of squared errors over the training sample points, the model effectively improves the computation speed of the approximate solution while maintaining accuracy. Moreover, the concise, fixed form of the analytical solution expression facilitates qualitative analysis and study of its properties in practical applications. Numerical experiments verify that the P-SVM method is an efficient and feasible approach for solving ordinary differential equations.