Journal Articles
8 articles found
Quadratic Kernel-Free Least Square Twin Support Vector Machine for Binary Classification Problems (Cited by: 2)
1
Authors: Qian-Qian Gao, Yan-Qin Bai, Ya-Ru Zhan. Journal of the Operations Research Society of China, EI CSCD, 2019, Issue 4, pp. 539-559 (21 pages)
In this paper, a new quadratic kernel-free least square twin support vector machine (QLSTSVM) is proposed for binary classification problems. The advantage of QLSTSVM is that there is no need to select a kernel function and its related parameters for nonlinear classification problems. After applying a consensus technique, we adopt the alternating direction method of multipliers to solve the reformulated consensus QLSTSVM directly. To reduce CPU time, the Karush-Kuhn-Tucker (KKT) conditions are also used to solve the QLSTSVM. The performance of QLSTSVM is tested on two artificial datasets and several University of California Irvine (UCI) benchmark datasets. Numerical results indicate that QLSTSVM may outperform several existing methods for solving twin support vector machines with a Gaussian kernel in terms of classification accuracy and operation time.
Keywords: Twin support vector machine; Quadratic kernel-free; Least square; Binary classification
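The kernel-free idea above is to learn a quadratic decision surface directly in input space instead of choosing a kernel. A minimal single-surface sketch (not the paper's twin formulation; the lifting, regularizer, and toy data are illustrative assumptions): fit f(x) = x^T A x + b^T x + c by regularized least squares on lifted quadratic features and classify by sign.

```python
import numpy as np

def lift_quadratic(X):
    """Map each row x to its quadratic monomials x_i*x_j (i <= j), linear terms, and a bias."""
    n, d = X.shape
    iu = np.triu_indices(d)
    quad = np.array([np.outer(x, x)[iu] for x in X])  # x_i * x_j for i <= j
    return np.hstack([quad, X, np.ones((n, 1))])

def fit_quadratic_ls(X, y, reg=1e-3):
    """Least-squares fit of a quadratic surface f(x) ~ y in {-1, +1}, with a small ridge term."""
    Z = lift_quadratic(X)
    return np.linalg.solve(Z.T @ Z + reg * np.eye(Z.shape[1]), Z.T @ y)

def predict(w, X):
    return np.sign(lift_quadratic(X) @ w)

# Toy check: a circle-separable problem that no linear classifier can solve.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where((X ** 2).sum(axis=1) < 1.0, 1.0, -1.0)
w = fit_quadratic_ls(X, y)
acc = (predict(w, X) == y).mean()
```

Because the quadratic monomials x1^2 and x2^2 appear in the lifted features, the circular boundary is linearly representable in the lifted space, which is exactly what makes the method kernel-free.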
Sparse Proximal Support Vector Machine with a Specialized Interior-Point Method (Cited by: 2)
2
Authors: Yan-Qin Bai, Zhao-Ying Zhu, Wen-Li Yan. Journal of the Operations Research Society of China, EI CSCD, 2015, Issue 1, pp. 1-15 (15 pages)
Support vector machine (SVM) is a widely used method for classification. Proximal support vector machine (PSVM) is an extension of SVM and a promising method that leads to a fast and simple algorithm for generating a classifier. Motivated by the fast computation of PSVM and the sparse solutions yielded by the l1-norm, in this paper we first propose a PSVM with a cardinality constraint, which is then relaxed by the l1-norm and leads to a trade-off l1-l2 regularized sparse PSVM. Next we convert this l1-l2 regularized sparse PSVM into an equivalent l1-regularized least squares (LS) problem and solve it by a specialized interior-point method proposed by Kim et al. (J. Sel. Top. Signal Process., 2007). Finally, the l1-l2 regularized sparse PSVM is illustrated on a real-world dataset taken from the University of California, Irvine Machine Learning Repository (UCI Repository). Moreover, we compare the numerical results with existing models such as the generalized eigenvalue proximal SVM (GEPSVM), PSVM, and SVM-Light. The numerical results show that the l1-l2 regularized sparse PSVM achieves not only better classification accuracy than GEPSVM, PSVM, and SVM-Light, but also a sparser classifier compared with the l1-PSVM.
Keywords: Proximal support vector machine; Classification accuracy; Interior-point methods; Preconditioned conjugate gradients algorithm
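The pipeline above reduces the sparse PSVM to l1-regularized least squares, which the authors solve with Kim et al.'s specialized interior-point method. As a rough stand-in that shows the same sparsity mechanism, the same problem can be solved by plain proximal gradient descent (ISTA); the data sizes and regularization weight below are arbitrary assumptions, and this is not the authors' solver.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# Sparse recovery toy: only 3 of 50 coefficients are nonzero.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = lasso_ista(A, b, lam=0.1)
sparsity = (np.abs(x_hat) < 1e-3).mean()
```

The soft-thresholding step zeros out small coefficients exactly, which is where the "sparser classifier" claim in the abstract comes from; the interior-point method reaches the same minimizer, just much faster on large problems.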
Consensus Proximal Support Vector Machine for Classification Problems with Sparse Solutions (Cited by: 1)
3
Authors: Yan-Qin Bai, Yan-Jun Shen, Kai-Ji Shen. Journal of the Operations Research Society of China, EI, 2014, Issue 1, pp. 57-74 (18 pages)
The classification problem is a central problem in machine learning. Support vector machines (SVMs) are supervised learning models with associated learning algorithms, used for classification in machine learning. In this paper, we establish two consensus proximal support vector machine (PSVM) models for binary classification. The first separates the objective function into individual convex functions, one per sample point of the training set. The constraints contain two types of equations, with global variables and local variables corresponding to the consensus points and sample points, respectively. To obtain sparser solutions, the second model is an l1-l2 consensus PSVM whose objective function contains an l1-norm term and an l2-norm term; the l2-norm term is responsible for good classification performance, while the l1-norm term plays an important role in finding sparse solutions. Both consensus PSVMs are solved by the alternating direction method of multipliers. Furthermore, they are tested on real-world data taken from the University of California, Irvine Machine Learning Repository (UCI Repository) and compared with existing models such as l1-PSVM, lp-PSVM, GEPSVM, PSVM, and SVM-Light. Numerical results show that our models outperform the others in classification accuracy and sparsity of solutions.
Keywords: Classification problems; Support vector machine; Proximal support vector machine; Consensus; Alternating direction method of multipliers
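The splitting described above, local convex subproblems coupled through a global consensus variable, follows the generic consensus ADMM template. A hedged sketch on a consensus least-squares stand-in (the paper applies the same template to the PSVM objective; the block sizes and penalty rho are arbitrary assumptions):

```python
import numpy as np

def consensus_admm_ls(blocks, rho=1.0, n_iter=200):
    """Consensus ADMM for sum_i 0.5*||A_i x - b_i||^2: each block solves a local
    regularized subproblem, then all blocks agree on a global average z."""
    d = blocks[0][0].shape[1]
    m = len(blocks)
    x = [np.zeros(d) for _ in range(m)]
    u = [np.zeros(d) for _ in range(m)]
    z = np.zeros(d)
    for _ in range(n_iter):
        for i, (A, b) in enumerate(blocks):
            # local update: argmin 0.5*||A v - b||^2 + rho/2*||v - z + u_i||^2
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(d),
                                   A.T @ b + rho * (z - u[i]))
        z = np.mean([x[i] + u[i] for i in range(m)], axis=0)  # global consensus step
        for i in range(m):
            u[i] += x[i] - z                                  # dual (scaled) update
    return z

# Four data blocks generated from one shared ground-truth vector.
rng = np.random.default_rng(2)
x_true = rng.normal(size=5)
blocks = []
for _ in range(4):
    A = rng.normal(size=(20, 5))
    blocks.append((A, A @ x_true))
z = consensus_admm_ls(blocks)
err = np.linalg.norm(z - x_true)
```

The appeal of this form, as the abstract notes, is that the local updates decouple across sample blocks and can run in parallel, while the averaging step enforces a single global classifier.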
Intuitionistic Fuzzy Laplacian Twin Support Vector Machine for Semi-supervised Classification
4
Authors: Jia-Bin Zhou, Yan-Qin Bai, Yan-Ru Guo, Hai-Xiang Lin. Journal of the Operations Research Society of China, EI CSCD, 2022, Issue 1, pp. 89-112 (24 pages)
In general, data contain noise arising from faulty instruments, flawed measurements, or faulty communication. Learning with data, in the context of classification or regression, is inevitably affected by noise in the data. In order to remove or greatly reduce the impact of noise, we introduce the ideas of fuzzy membership functions and the Laplacian twin support vector machine (Lap-TSVM). A formulation of the linear intuitionistic fuzzy Laplacian twin support vector machine (IFLap-TSVM) is presented. Moreover, we extend the linear IFLap-TSVM to the nonlinear case via a kernel function. The proposed IFLap-TSVM resolves the negative impact of noise and outliers by using fuzzy membership functions, and it is a more accurate classifier because it exploits the geometric distribution information of labeled and unlabeled data through manifold regularization. Experiments with constructed artificial datasets, several UCI benchmark datasets, and the MNIST dataset show that IFLap-TSVM has better classification accuracy than other state-of-the-art methods such as the twin support vector machine (TSVM), the intuitionistic fuzzy twin support vector machine (IFTSVM), and Lap-TSVM.
Keywords: Twin support vector machine; Semi-supervised classification; Intuitionistic fuzzy; Manifold regularization; Noisy data
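The fuzzy-membership idea can be illustrated with a simple distance-to-centroid weighting: points near their class center get weight close to 1, distant points (likely noise or outliers) get small weight, so they barely influence the classifier. The paper's intuitionistic fuzzy scores are more elaborate (they assign both membership and non-membership degrees), so treat this as a toy version with an assumed linear decay.

```python
import numpy as np

def fuzzy_membership(X, y, delta=1e-6):
    """Distance-to-centroid fuzzy membership per training point, in (0, 1]."""
    mu = np.empty(len(y), dtype=float)
    for label in np.unique(y):
        idx = y == label
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        r = d.max() + delta          # class "radius"
        mu[idx] = 1.0 - d / r        # linear decay with distance from the center
    return mu

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],   # third point is an outlier
              [2.0, 2.0], [2.1, 2.0], [2.0, 2.1]])
y = np.array([0, 0, 0, 1, 1, 1])
mu = fuzzy_membership(X, y)
```

These weights then multiply each point's error term in the (twin) SVM objective, which is how the formulation "resolves the negative impact of noises and outliers". Note one artifact of this toy scheme: the farthest point of every class always gets weight near zero, even in a clean class.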
A Kind of Unified Strict Efficiency via Improvement Sets in Vector Optimization
5
Authors: Hui Guo, Yan-Qin Bai. Journal of the Operations Research Society of China, EI CSCD, 2018, Issue 4, pp. 557-569 (13 pages)
In this paper, we propose a kind of unified strict efficiency, named E-strict efficiency, via improvement sets for vector optimization. This kind of efficiency is shown to be an extension of the classical strict efficiency and ε-strict efficiency, and it has many desirable properties. We also discuss relationships with other proper efficiency notions based on improvement sets and establish the corresponding scalarization theorems by a base functional and a nonlinear functional. Moreover, some examples are given to illustrate the main conclusions.
Keywords: E-strict efficiency; Improvement sets; Linear scalarization; Nonlinear scalarization; Vector optimization
Alternating Direction Method of Multipliers for l1-l2-Regularized Logistic Regression Model
6
Authors: Yan-Qin Bai, Kai-Ji Shen. Journal of the Operations Research Society of China, EI CSCD, 2016, Issue 2, pp. 243-253 (11 pages)
Logistic regression has proved to be a promising method for machine learning, focusing on the problem of classification. In this paper, we present an l1-l2-regularized logistic regression model, where the l1-norm is responsible for yielding a sparse logistic regression classifier and the l2-norm for keeping better classification accuracy. To solve the l1-l2-regularized logistic regression model, we develop an alternating direction method of multipliers with an embedded limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. Furthermore, we implement our model for binary classification problems using real data examples selected from the University of California, Irvine Machine Learning Repository (UCI Repository). We compare our numerical results with those obtained by the well-known LIBSVM and SVM-Light software. The numerical results show that our l1-l2-regularized logistic regression model achieves better classification and less CPU time.
Keywords: Classification problems; Logistic regression model; Sparsity; Alternating direction method of multipliers
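The splitting described in the abstract, the smooth logistic loss plus l2 ridge in one block and the l1 term in the other, fits the standard ADMM template: the x-subproblem is smooth (the paper embeds L-BFGS there), and the z-subproblem is exact soft-thresholding. A sketch with plain gradient steps standing in for L-BFGS; all parameter values are arbitrary assumptions.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def admm_l1l2_logreg(A, y, lam1=0.05, lam2=0.1, rho=1.0, n_iter=200, inner=25, lr=0.1):
    """ADMM for min logistic_loss(x) + lam2/2*||x||^2 + lam1*||z||_1  s.t. x = z.
    Labels y are in {0, 1}; u is the scaled dual variable."""
    n, d = A.shape
    x = np.zeros(d)
    z = np.zeros(d)
    u = np.zeros(d)
    for _ in range(n_iter):
        for _ in range(inner):  # inexact x-update on the smooth part (L-BFGS stand-in)
            grad = A.T @ (sigmoid(A @ x) - y) / n + lam2 * x + rho * (x - z + u)
            x -= lr * grad
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam1 / rho, 0.0)  # exact z-update
        u += x - z
    return z

# Synthetic problem: only the first 3 of 20 features carry signal.
rng = np.random.default_rng(3)
A = rng.normal(size=(300, 20))
w_true = np.zeros(20)
w_true[:3] = [3.0, -3.0, 2.0]
y = (sigmoid(A @ w_true) > rng.uniform(size=300)).astype(float)
w = admm_l1l2_logreg(A, y)
sparsity = (w == 0.0).mean()
```

Returning z rather than x is deliberate: z carries exact zeros from the soft-thresholding step, which is what makes the final classifier sparse.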
A Primal-Dual Interior-Point Method for Optimal Grasping Manipulation of Multi-fingered Hand-Arm Robots
7
Authors: Yan-Qin Bai, Xue-Rui Gao, Chang-Jun Yu. Journal of the Operations Research Society of China, EI CSCD, 2017, Issue 2, pp. 177-192 (16 pages)
In this paper, we consider an optimization problem in the grasping manipulation of multi-fingered hand-arm robots. We first formulate an optimization model for the problem, based on the dynamic equations of the object and the friction constraints. Then, we reformulate the model as a convex quadratic program over circular cones. Moreover, we propose a primal-dual interior-point algorithm based on a kernel function to solve this convex quadratic program over circular cones. We derive both the convergence of the algorithm and the iteration bounds for large- and small-update methods, respectively. Finally, we carry out numerical tests of 180° and 90° manipulations of the hand-arm robot to demonstrate the effectiveness of the proposed algorithm.
Keywords: Grasping manipulation; Circular cone programming; Primal-dual interior-point algorithm; Numerical tests
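A circular cone with half-aperture θ generalizes the friction cone: a contact force is feasible when its tangential part is bounded by tan(θ) times its normal component. A minimal membership check (the paper optimizes over such cones with an interior-point method; this only illustrates the constraint, and the sample force is made up):

```python
import numpy as np

def in_circular_cone(x, theta):
    """Membership test for the circular cone L_theta = {x : ||x[1:]|| <= x[0]*tan(theta)}.
    For a friction cone, x[0] is the normal force magnitude and tan(theta)
    plays the role of the friction coefficient."""
    return bool(x[0] >= 0 and np.linalg.norm(x[1:]) <= x[0] * np.tan(theta) + 1e-12)

# A contact force with normal component 10 and tangential component (3, 4):
f = np.array([10.0, 3.0, 4.0])
ok_wide = in_circular_cone(f, np.deg2rad(45))   # tan 45 = 1: ||(3,4)|| = 5 <= 10
ok_tight = in_circular_cone(f, np.deg2rad(20))  # tan 20 ~ 0.36: 5 > 3.64, slips
```

With θ = 45° the cone is exactly the standard second-order (ice-cream) cone; smaller θ models a lower friction coefficient, under which the same force would slip.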
Preface:Special Issue on Data-Driven Optimization Models and Algorithms
8
Authors: Yan-Qin Bai, Yu-Hong Dai, Nai-Hua Xiu. Journal of the Operations Research Society of China, EI CSCD, 2015, Issue 4, pp. 389-390 (2 pages)
The growing demands of the information age have sped up the development of data science. The research and application of data have become more and more popular in many fields, such as information science, mathematics, operations research, statistics, and computer science. It is of theoretical significance to study optimization models and algorithms driven by data, which is helpful to new business models, communication and network technology, intelligent transportation systems, economic management, and other related fields.
Keywords: Computer; Operations; Network