Abstract
The minimax probability machine (MPM), the minimax probability extreme learning machine (MPELM), and the twin minimax probability extreme learning machine (TMPELM) provide an explicit upper bound on the generalization error while minimizing the empirical risk, without imposing specific assumptions on the data distribution. At present, the MPM, MPELM, and TMPELM algorithms are implemented mainly by solving second-order cone programming models with interior-point methods. Using the idea of support vector machines and the Wolfe dual form of convex quadratic programming, this paper improves the existing MPM, MPELM, and TMPELM algorithms and proposes three new algorithms. Experimental results show that the proposed algorithms are effective and competitive.
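For background, the MPM model named in the abstract is commonly stated as the following second-order cone program (a standard textbook-style sketch; the class statistics $(\bar{x},\Sigma_x)$, $(\bar{y},\Sigma_y)$ and the other symbols below are introduced here only for illustration and are not taken from this record):

\[
\min_{w \neq 0}\; \bigl\|\Sigma_x^{1/2} w\bigr\|_2 + \bigl\|\Sigma_y^{1/2} w\bigr\|_2
\quad \text{s.t.} \quad w^{\top}(\bar{x}-\bar{y}) = 1,
\]

whose optimal value equals $1/\kappa_*$ and yields the worst-case classification accuracy bound $\alpha_* = \kappa_*^2/(1+\kappa_*^2)$ over all distributions sharing the given class means and covariances. The Wolfe dual mentioned as the tool of the improvement is, for a generic convex quadratic program $\min_x \tfrac{1}{2}x^{\top}Qx + c^{\top}x$ subject to $Ax \le b$ with $Q \succeq 0$,

\[
\max_{x,\,\lambda \ge 0}\; \tfrac{1}{2}x^{\top}Qx + c^{\top}x + \lambda^{\top}(Ax-b)
\quad \text{s.t.} \quad Qx + c + A^{\top}\lambda = 0;
\]

how these standard forms are specialized to MPM, MPELM, and TMPELM is particular to the proposed algorithms.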
Authors
LI Xiaomeng; DAI Yongxiao; FAN Liya (School of Mathematical Sciences, Liaocheng University, Liaocheng 252059, China)
Source
Journal of Liaocheng University: Natural Science Edition
2022, No. 4, pp. 8-17, 25 (11 pages in total)
Funding
Supported by the National Natural Science Foundation of China (11801248) and the Natural Science Foundation of Shandong Province (ZR2018BF010, ZR2020MA026).