Funding: This is a plenary report at the International Symposium on Approximation Theory and Remote Sensing Applications held in Kunming, China, in April 2006. Supported in part by NSF of China under Grants 10571010 and 10171007, and by a Startup Grant for Doctoral Research of Beijing University of Technology.
Abstract: Neyman-Pearson classification has been studied in several articles, but all of them worked in classes of indicator functions with the indicator function as the loss function, which makes the computation difficult. This paper investigates Neyman-Pearson classification with a convex loss function in an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function yields the same classifier as that with the indicator loss function. We analyze NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity penalty pair for the convex-loss risk, stated in terms of Rademacher averages, is studied; it produces a tight PAC bound for NP-ERM with a convex loss function.
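The NP-ERM idea above can be sketched in a few lines. This is an illustrative toy, not the paper's construction: it assumes the hinge loss as the convex surrogate, a finite candidate class of 1-D threshold classifiers, and a constraint level `alpha` on the class-0 (type-I) convex risk.

```python
def hinge(margin):
    """Hinge loss, a convex surrogate for the 0-1 (indicator) loss."""
    return max(0.0, 1.0 - margin)

def np_erm(candidates, X0, X1, alpha):
    """Hypothetical NP-ERM sketch: among candidate real-valued classifiers f,
    minimize the empirical convex risk on class 1 subject to the empirical
    convex risk on class 0 staying at most alpha (the Neyman-Pearson constraint)."""
    best, best_risk1 = None, float("inf")
    for f in candidates:
        risk0 = sum(hinge(-f(x)) for x in X0) / len(X0)  # class 0: want f(x) < 0
        risk1 = sum(hinge(f(x)) for x in X1) / len(X1)   # class 1: want f(x) > 0
        if risk0 <= alpha and risk1 < best_risk1:
            best, best_risk1 = f, risk1
    return best

# Toy 1-D data and a small class of threshold classifiers f(x) = x - t
X0, X1 = [0.0, 0.1, 0.2], [2.0, 2.1, 2.2]
candidates = [lambda x, t=t: x - t for t in (0.5, 1.0, 1.5)]
f = np_erm(candidates, X0, X1, alpha=0.15)
```

With these numbers the threshold t = 0.5 is infeasible (its class-0 convex risk exceeds alpha), so the constrained minimizer is the threshold t = 1.0, which has zero empirical class-1 risk.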
Abstract: Existing support vector machines (SVMs) for large-scale data classification are sensitive to noisy samples. To address this problem, a new soft kernel convex hull support vector machine for large-scale noisy datasets (SCH-SVM) is proposed by defining the soft kernel convex hull and introducing the pinball loss function. SCH-SVM first defines the concept of the soft kernel convex hull, then selects the soft kernel convex hull vectors that represent the geometric contour of the samples in the kernel space, takes their corresponding original-space samples as training samples, and finds the maximum quantile distance between the two classes' soft kernel convex hulls based on the pinball loss function. Theoretical analysis and experimental results confirm the effectiveness of the proposed classifier in terms of training time, noise resistance and number of support vectors.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10871226, 11001247 and 61179041) and the Natural Science Foundation of Zhejiang Province (Grant No. Y6100096).
Abstract: In the present paper, we investigate the learning rate of l2-coefficient regularized classification with a strong loss and data-dependent kernel functional spaces. The results show that the learning rate is influenced by the strong convexity.
Abstract: Throughout this note, the following notation is used. For matrices A and B, A > B means that A − B is positive definite symmetric; A ⊗ B denotes the Kronecker product of A and B; R(A), A′ and A⁻ stand for the column space, the transpose and any g-inverse of A, respectively; P_A = A(A′A)⁻A′; for an s×t matrix B = (b₁ … bₜ), vec(B) denotes the st-dimensional vector (b₁′ b₂′ … bₜ′)′; trA stands for the trace of the square matrix A.
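This notation can be checked numerically. A minimal sketch with small hypothetical matrices follows; numpy's `pinv` serves as one particular g-inverse, and the final identity vec(AXB) = (B′ ⊗ A) vec(X) is the standard link between vec and the Kronecker product.

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # full column rank, so A'A is invertible
X = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# P_A = A (A'A)^- A' : the orthogonal projector onto the column space R(A)
P_A = A @ np.linalg.pinv(A.T @ A) @ A.T
assert np.allclose(P_A @ P_A, P_A)   # idempotent
assert np.allclose(P_A @ A, A)       # fixes every column of A

def vec(M):
    """Stack the columns of M into one long vector (column-major)."""
    return M.T.reshape(-1)

# vec(A X B) = (B' ⊗ A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```

The projector and vec/Kronecker identities above are exactly the manipulations this notation is set up to support.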