
Diversity measuring method of a convolutional neural network ensemble

Cited by: 1
Abstract: Diversity among classifier models is an important performance index of a classifier ensemble. Currently, most diversity measuring methods are defined on the 0/1 outputs (namely Oracle outputs) of the base models. The probability vector outputs of a convolutional neural network (CNN) must therefore first be converted into Oracle outputs before they can be measured, which fails to exploit the rich information contained in the CNN probability vectors. To solve this problem, a new diversity measuring method that operates directly on the probability vector outputs of CNNs is proposed. Several CNN base models with different structures are built and evaluated on the CIFAR-10 and CIFAR-100 datasets. Compared with the double-fault measure, the disagreement measure, and the Q-statistic, the proposed method better reflects the diversity between models and provides better guidance for the selective ensembling of CNN models.
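The three baseline measures named in the abstract are standard pairwise diversity measures from the classifier-ensemble literature (see Kuncheva and Whitaker in the references), all computed from the joint correct/incorrect (Oracle) counts of two classifiers. The sketch below implements them, plus a purely illustrative probability-vector distance to show the kind of information Oracle outputs discard; this illustrative function is an assumption for exposition and is not the measure proposed in the paper.

```python
import numpy as np

def pairwise_counts(o1, o2):
    """Joint Oracle-output counts for two classifiers over the same samples.

    o1, o2 are 0/1 arrays (1 = classifier correct on that sample).
    Returns (n11, n10, n01, n00): both correct, only the first correct,
    only the second correct, both wrong.
    """
    o1, o2 = np.asarray(o1), np.asarray(o2)
    n11 = int(np.sum((o1 == 1) & (o2 == 1)))
    n10 = int(np.sum((o1 == 1) & (o2 == 0)))
    n01 = int(np.sum((o1 == 0) & (o2 == 1)))
    n00 = int(np.sum((o1 == 0) & (o2 == 0)))
    return n11, n10, n01, n00

def q_statistic(o1, o2):
    """Q-statistic: ranges over [-1, 1]; 0 for statistically independent errors."""
    n11, n10, n01, n00 = pairwise_counts(o1, o2)
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

def disagreement(o1, o2):
    """Disagreement measure: fraction of samples where exactly one model is correct."""
    n11, n10, n01, n00 = pairwise_counts(o1, o2)
    return (n01 + n10) / (n11 + n10 + n01 + n00)

def double_fault(o1, o2):
    """Double-fault measure: fraction of samples where both models are wrong."""
    n11, n10, n01, n00 = pairwise_counts(o1, o2)
    return n00 / (n11 + n10 + n01 + n00)

def prob_vector_diversity(p1, p2):
    """Illustrative only (NOT the paper's measure): mean Euclidean distance
    between the two models' per-sample softmax probability vectors.
    Unlike the Oracle-based measures, it distinguishes a confident
    prediction (0.99) from a marginal one (0.51)."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    return float(np.mean(np.linalg.norm(p1 - p2, axis=1)))
```

For example, two models that are each correct on half the samples but never fail together give `q_statistic` of 0 and `double_fault` of 0, while two models with identical probability vectors give `prob_vector_diversity` of 0 even when the Oracle measures cannot tell them apart from merely similar models.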
Authors: TANG Liying; HE Lile; HE Lin; QU Dongdong (School of Mechanical and Electrical Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; School of Science, Xi'an University of Architecture and Technology, Xi'an 710055, China)
Published in: CAAI Transactions on Intelligent Systems (《智能系统学报》), CSCD, Peking University Core Journal, 2021, Issue 6, pp. 1030-1038 (9 pages)
Funding: National Natural Science Foundation of China (61903291).
Keywords: CNN; ensemble learning; diversity measures; machine learning; multiple classifier ensembles; probability vector outputs; Oracle outputs; base model

References: 5 (secondary references: 60)

1. KUNCHEVA L I, SKURICHINA M, DUIN R P W. An experimental study on diversity for bagging and boosting with linear classifiers[J]. Information Fusion, 2002, 3(4): 245-258.
2. BROWN G, KUNCHEVA L I. "Good" and "bad" diversity in majority vote ensembles[C]//Proceedings of International Conference on Multiple Classifier Systems. Berlin, Germany: Springer, 2010: 124-133.
3. NASCIMENTO D, COELHO A, CANUTO A. Integrating complementary techniques for promoting diversity in classifier ensembles: a systematic study[J]. Neurocomputing, 2014, 138: 347-357.
4. KUNCHEVA L I, WHITAKER C J. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy[J]. Machine Learning, 2003, 51: 181-207.
5. WINDEATT T. Diversity measures for multiple classifier system analysis and design[J]. Information Fusion, 2005, 6(1): 21-36.
6. HAGHIGHI M S, VAHEDIAN A, YAZDI H S. Creating and measuring diversity in multiple classifier systems using support vector data description[J]. Applied Soft Computing, 2011, 11(8): 4931-4942.
7. KRAWCZYK B, WOZNIAK M. Diversity measures for one-class classifier ensembles[J]. Neurocomputing, 2014, 126: 36-44.
8. YIN X C, HUANG K Z, HAO H W, et al. A novel classifier ensemble method with sparsity and diversity[J]. Neurocomputing, 2014, 134: 214-221.
9. BI Y X. The impact of diversity on the accuracy of evidential classifier ensembles[J]. International Journal of Approximate Reasoning, 2012, 53(4): 584-607.
10. AKSELA M, LAAKSONEN J. Using diversity of errors for selecting members of a committee classifier[J]. Pattern Recognition, 2006, 39(4): 608-623.

