Funding: This research was supported by the National Natural Science Foundation of China (No. 11771275).
Abstract: In this paper, a new quadratic kernel-free least squares twin support vector machine (QLSTSVM) is proposed for binary classification problems. The advantage of QLSTSVM is that there is no need to select a kernel function and related parameters for nonlinear classification problems. After applying a consensus technique, we adopt the alternating direction method of multipliers (ADMM) to solve the reformulated consensus QLSTSVM directly. To reduce CPU time, the Karush-Kuhn-Tucker (KKT) conditions are also used to solve the QLSTSVM. The performance of QLSTSVM is tested on two artificial datasets and several University of California, Irvine (UCI) benchmark datasets. Numerical results indicate that QLSTSVM may outperform several existing methods for solving twin support vector machines with a Gaussian kernel in terms of classification accuracy and computation time.
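The kernel-free idea can be illustrated with a small sketch: lift the inputs to quadratic monomial features, so that a hyperplane in the lifted space is a quadratic surface in the original space, and fit the two proximal planes of a least-squares twin SVM in closed form. This is an illustrative toy, not the paper's exact QLSTSVM formulation or its ADMM/KKT solvers; the data and parameter values are made up.

```python
import numpy as np

def quad_features(X):
    # lift 2-D inputs to quadratic monomials: a plane in this space is a
    # quadratic surface in the original space (no kernel needed)
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1**2, x1 * x2, x2**2, x1, x2, np.ones(len(X))])

def lstsvm_plane(A, B, c=1.0, reg=1e-6):
    # closed-form least-squares twin plane: min 0.5||A u||^2 + (c/2)||B u + e||^2,
    # i.e., close to the rows of A and (in the least-squares sense) at unit
    # signed distance from the rows of B
    e = np.ones(B.shape[0])
    M = A.T @ A + c * (B.T @ B) + reg * np.eye(A.shape[1])
    return -c * np.linalg.solve(M, B.T @ e)

# toy data: one class clustered near the origin, the other on a ring of
# radius 2 -- not linearly separable, but quadratically separable
rng = np.random.default_rng(0)
Xp = rng.normal(scale=0.5, size=(60, 2))
Xn = rng.normal(size=(60, 2))
Xn *= 2.0 / np.linalg.norm(Xn, axis=1, keepdims=True)

Zp, Zn = quad_features(Xp), quad_features(Xn)
u_pos = lstsvm_plane(Zp, Zn)    # surface close to class +1, far from class -1
u_neg = lstsvm_plane(Zn, Zp)    # and vice versa

def predict(X):
    # assign each point to the class whose surface it is closer to
    Z = quad_features(X)
    d_pos = np.abs(Z @ u_pos) / np.linalg.norm(u_pos[:-1])
    d_neg = np.abs(Z @ u_neg) / np.linalg.norm(u_neg[:-1])
    return np.where(d_pos <= d_neg, 1, -1)

acc = np.mean(np.r_[predict(Xp) == 1, predict(Xn) == -1])
```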
Funding: This research was supported by the National Natural Science Foundation of China (No. 11371242).
Abstract: Support vector machine (SVM) is a widely used method for classification. Proximal support vector machine (PSVM) is an extension of SVM and a promising method that leads to a fast and simple algorithm for generating a classifier. Motivated by the low computational cost of PSVM and the sparsity of solutions yielded by the ℓ1-norm, in this paper we first propose a PSVM with a cardinality constraint, which is relaxed by the ℓ1-norm and leads to a trade-off ℓ1-ℓ2 regularized sparse PSVM. Next, we convert this ℓ1-ℓ2 regularized sparse PSVM into an equivalent ℓ1-regularized least squares (LS) form and solve it by the specialized interior-point method proposed by Kim et al. (J. Sel. Top. Signal Process. 12:1932-4553, 2007). Finally, the ℓ1-ℓ2 regularized sparse PSVM is illustrated on a real-world dataset taken from the University of California, Irvine Machine Learning Repository (UCI Repository). Moreover, we compare the numerical results with those of existing models such as the generalized eigenvalue proximal SVM (GEPSVM), PSVM, and SVM-Light. The numerical results show that the ℓ1-ℓ2 regularized sparse PSVM achieves not only better classification accuracy than GEPSVM, PSVM, and SVM-Light, but also a sparser classifier compared with the ℓ1-PSVM.
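The ℓ1-ℓ2 regularized least-squares form at the heart of this model can be illustrated with a minimal sketch. The paper solves it with a specialized interior-point method; here, purely for illustration, the same objective is minimized by proximal gradient (ISTA). The data and regularization parameters are made up.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_l2_least_squares(A, b, lam1=0.5, lam2=0.1, iters=500):
    # min_w 0.5*||A w - b||^2 + lam1*||w||_1 + (lam2/2)*||w||^2
    # solved by ISTA: a gradient step on the smooth part (LS + l2),
    # then soft-thresholding for the l1 term
    L = np.linalg.norm(A, 2) ** 2 + lam2  # Lipschitz constant of the smooth part
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b) + lam2 * w
        w = soft_threshold(w - grad / L, lam1 / L)
    return w

# sparse ground truth: only the first three coefficients are nonzero
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
b = A @ w_true + 0.01 * rng.normal(size=100)

w = l1_l2_least_squares(A, b)
n_zeros = int(np.sum(np.abs(w) < 1e-6))   # sparsity of the recovered solution
```

The ℓ1 term zeroes out the irrelevant coefficients, while the ℓ2 term keeps the problem well conditioned, mirroring the trade-off the abstract describes.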
Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 11371242) and the "085 Project" of Shanghai University.
Abstract: The classification problem is a central problem in machine learning. Support vector machines (SVMs) are supervised learning models with associated learning algorithms, used for classification in machine learning. In this paper, we establish two consensus proximal support vector machine (PSVM) models for binary classification. The first separates the objective function into individual convex functions, one per sample point of the training set. The constraints contain two types of equations, with global variables and local variables corresponding to the consensus points and sample points, respectively. To obtain sparser solutions, the second model is an ℓ1-ℓ2 consensus PSVM, in which the objective function contains an ℓ1-norm term and an ℓ2-norm term; the ℓ2-norm term is responsible for good classification performance, while the ℓ1-norm term plays an important role in finding sparse solutions. Both consensus PSVMs are solved by the alternating direction method of multipliers. Furthermore, they are tested on real-world data taken from the University of California, Irvine Machine Learning Repository (UCI Repository) and compared with existing models such as ℓ1-PSVM, ℓp-PSVM, GEPSVM, PSVM, and SVM-Light. Numerical results show that our models outperform the others in both classification accuracy and sparsity of solutions.
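The per-sample splitting with global and local variables described here is the classic global-consensus form of ADMM. The following sketch applies it to a row-wise-split least-squares problem rather than the paper's PSVM objective, just to show the mechanics: local updates, global averaging, and dual updates tied together by the consensus constraints w_i = z. All names and data are illustrative.

```python
import numpy as np

def consensus_admm_ls(A, b, rho=1.0, iters=500):
    # global consensus ADMM: split min_w sum_i 0.5*(a_i^T w - b_i)^2 into one
    # local objective per sample, with local copies w_i tied to a global z
    # by the consensus constraints w_i = z
    n, d = A.shape
    W = np.zeros((n, d))      # local variables
    U = np.zeros((n, d))      # scaled dual variables
    z = np.zeros(d)           # global (consensus) variable
    for _ in range(iters):
        for i in range(n):
            a = A[i]
            v = rho * (z - U[i]) + b[i] * a
            # w_i-update: solve (a a^T + rho I) w = v via Sherman-Morrison
            W[i] = v / rho - (a @ v) / (rho * (rho + a @ a)) * a
        z = (W + U).mean(axis=0)   # z-update: simple averaging
        U += W - z                 # dual ascent step
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 3))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true                    # consistent system: the minimizer is w_true

z = consensus_admm_ls(A, b)
err = np.linalg.norm(z - w_true)
```

In the paper's setting each local objective would be a per-sample PSVM term instead of a squared residual, but the global/local/dual structure is the same.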
Funding: This work was supported by the National Natural Science Foundation of China (No. 11771275). The second author acknowledges the partial support of the Dutch Research Council (No. 040.11.724).
Abstract: In general, data contain noise arising from faulty instruments, flawed measurements, or faulty communication. Learning from data in the context of classification or regression is inevitably affected by this noise. In order to remove or greatly reduce its impact, we introduce the ideas of fuzzy membership functions and the Laplacian twin support vector machine (Lap-TSVM). A formulation of the linear intuitionistic fuzzy Laplacian twin support vector machine (IFLap-TSVM) is presented. Moreover, we extend the linear IFLap-TSVM to the nonlinear case by means of a kernel function. The proposed IFLap-TSVM mitigates the negative impact of noise and outliers by using fuzzy membership functions, and yields a more accurate and reasonable classifier by using the geometric distribution information of labeled and unlabeled data based on manifold regularization. Experiments with constructed artificial datasets, several UCI benchmark datasets, and the MNIST dataset show that IFLap-TSVM achieves better classification accuracy than the state-of-the-art twin support vector machine (TSVM), intuitionistic fuzzy twin support vector machine (IFTSVM), and Lap-TSVM.
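A common way to build intuitionistic fuzzy sample weights is sketched below: a membership degree from closeness to the own-class centroid, a non-membership degree from the fraction of nearest neighbours carrying the opposite label, and a combined score that damps outliers and mislabeled points. This is a generic construction for illustration, not necessarily the exact membership design of IFLap-TSVM; the toy data are made up.

```python
import numpy as np

def intuitionistic_fuzzy_scores(X, y, k=3, delta=1e-6):
    # membership mu: closeness to the own-class centroid;
    # non-membership nu: fraction of the k nearest neighbours with the
    # opposite label; score mu*(1-nu) down-weights noisy samples
    scores = np.zeros(len(X))
    for c in (1, -1):
        idx = np.where(y == c)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        mu = 1.0 - d / (d.max() + delta)
        for j, i in enumerate(idx):
            dists = np.linalg.norm(X - X[i], axis=1)
            nn = np.argsort(dists)[1:k + 1]        # skip the point itself
            nu = np.mean(y[nn] != c)
            scores[i] = mu[j] * (1.0 - nu)
    return scores

# two clean clusters plus one mislabeled point sitting in the wrong cluster
Xp = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2], [0.1, -0.2], [-0.2, -0.1]])
Xn = np.array([[5.0, 0.0], [5.2, 0.1], [4.9, 0.2], [5.1, -0.2], [4.8, -0.1]])
noisy = np.array([[5.0, 0.1]])                     # labeled +1, lies among -1
X = np.vstack([Xp, noisy, Xn])
y = np.array([1] * 6 + [-1] * 5)

scores = intuitionistic_fuzzy_scores(X, y)
```

The mislabeled point (index 5) receives a near-zero score, so a weighted classifier would effectively ignore it, which is exactly how fuzzy memberships suppress noise in this family of models.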
Funding: This research was supported by the National Natural Science Foundation of China (No. 11671062), the Chongqing Municipal Education Commission (No. KJ1500310), and the Doctoral Start-up Fund of Chongqing Normal University (No. 16XLB010).
Abstract: In this paper, we propose a kind of unified strict efficiency, named E-strict efficiency, defined via improvement sets for vector optimization. This kind of efficiency is shown to be an extension of the classical strict efficiency and ε-strict efficiency, and has many desirable properties. We also discuss some relationships with other proper efficiency notions based on improvement sets, and establish the corresponding scalarization theorems by means of a base functional and a nonlinear functional. Moreover, some examples are given to illustrate the main conclusions.
Funding: This work was supported by the National Natural Science Foundation of China (No. 11371242).
Abstract: Logistic regression has proved to be a promising method in machine learning, focusing on the problem of classification. In this paper, we present an ℓ1-ℓ2-regularized logistic regression model, in which the ℓ1-norm is responsible for yielding a sparse logistic regression classifier and the ℓ2-norm for keeping better classification accuracy. To solve the ℓ1-ℓ2-regularized logistic regression model, we develop an alternating direction method of multipliers with an embedded limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. Furthermore, we apply our model to binary classification problems using real data examples selected from the University of California, Irvine Machine Learning Repository (UCI Repository). We compare our numerical results with those obtained by the well-known LIBSVM and SVM-Light software. The numerical results show that our ℓ1-ℓ2-regularized logistic regression model achieves better classification accuracy and less CPU time.
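The ℓ1-ℓ2-regularized logistic regression objective can be written down directly. The paper solves it with ADMM plus embedded L-BFGS; the sketch below minimizes the same objective with plain proximal gradient instead, just to make the model concrete. The data and regularization parameters are made up.

```python
import numpy as np

def l1_l2_logreg(X, y, lam1=0.05, lam2=0.01, step=0.1, iters=2000):
    # min_w (1/n) * sum_i log(1 + exp(-y_i * x_i^T w))
    #       + lam1 * ||w||_1 + (lam2/2) * ||w||^2
    # solved here by proximal gradient: a gradient step on the smooth part
    # (logistic loss + l2), then soft-thresholding for the l1 term
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        z = y * (X @ w)
        sig = 1.0 / (1.0 + np.exp(z))            # = sigmoid(-z)
        grad = -(X * (y * sig)[:, None]).mean(axis=0) + lam2 * w
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)
    return w

# toy data: two informative features and three irrelevant ones
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
w_true = np.array([3.0, -3.0, 0.0, 0.0, 0.0])
y = np.sign(X @ w_true)

w = l1_l2_logreg(X, y)
train_acc = np.mean(np.sign(X @ w) == y)
```

As the abstract describes, the ℓ1 term drives irrelevant coefficients toward zero while the ℓ2 term stabilizes the fit on the informative ones.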
Funding: This work was supported by the National Natural Science Foundation of China (No. 11371242).
Abstract: In this paper, we consider an optimization problem arising in the grasping manipulation of multi-fingered hand-arm robots. We first formulate an optimization model for the problem, based on the dynamic equations of the object and the friction constraints. Then, we reformulate the model as a convex quadratic program over circular cones. Moreover, we propose a primal-dual interior-point algorithm based on a kernel function to solve this convex quadratic program over circular cones. We derive both the convergence of the algorithm and the iteration bounds for large- and small-update methods, respectively. Finally, we carry out numerical tests of 180° and 90° manipulations of the hand-arm robot to demonstrate the effectiveness of the proposed algorithm.
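In the friction setting, the circular-cone constraint is the Coulomb condition: the tangential force must satisfy ||f_t|| ≤ μ f_n for friction coefficient μ. Interior-point details aside, the sketch below checks this condition and computes the Euclidean projection onto such a cone using the standard closed-form formula; the force values are illustrative.

```python
import numpy as np

def in_friction_cone(f, mu):
    # Coulomb friction: normal force f[0] >= 0 and tangential force
    # within the circular cone ||f_t|| <= mu * f_n
    return f[0] >= 0 and np.linalg.norm(f[1:]) <= mu * f[0]

def project_to_cone(f, mu):
    # Euclidean projection onto the circular cone {(t, x) : ||x|| <= mu*t}
    t, x = f[0], f[1:]
    nx = np.linalg.norm(x)
    if nx <= mu * t:
        return f.copy()                  # already feasible
    if mu * nx <= -t:
        return np.zeros_like(f)          # in the polar cone: projects to 0
    # otherwise project onto the cone boundary
    alpha = (mu * nx + t) / (mu ** 2 + 1)
    return np.concatenate([[alpha], (mu * alpha / nx) * x])

f_bad = np.array([1.0, 3.0, 0.0])        # violates the cone for mu = 0.5
p = project_to_cone(f_bad, 0.5)          # nearest feasible contact force
inside = in_friction_cone(p, 0.5)
```

Projections of this kind are the basic feasibility primitive when optimizing contact forces over circular cones, which is the constraint structure the paper's interior-point method exploits.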
Abstract: The growing demands of the information age have sped up the development of data science. The research and application of data are becoming increasingly popular in many fields, such as information science, mathematics, operations research, statistics, and computer science. It is of theoretical significance to study data-driven optimization models and algorithms, which can benefit new business models, communication and network technology, intelligent transportation systems, economic management, and other related fields.