

Hyperspectral image classification based on improved M-training algorithm
Abstract: To address the problem of hyperspectral image classification with a limited number of labeled samples, an improved M-training algorithm is applied to the task. The classifier ensemble combines two support vector machines (SVMs), one K-nearest-neighbor (KNN) classifier, and one random forest (RF), improving on the traditional M-training algorithm by increasing the diversity and disparity among the classifiers. To fully account for the influence of the large number of unlabeled samples, a weighted combination of the error rates on labeled and unlabeled samples is used as the condition for updating the labeled sample set, which effectively enlarges that set. Experimental results show that, compared with the traditional M-training algorithm, the improved algorithm raises overall classification accuracy by 1.85% to 12.10% and the Kappa coefficient by 0.0215 to 0.1413, verifying its effectiveness.
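The scheme described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the unanimous-vote pseudo-labeling rule, the weight `alpha`, the acceptance threshold, and all function names are assumptions for the sake of a runnable example, not the paper's exact formulation.

```python
# Hedged sketch of the improved M-training idea: an ensemble of two SVMs,
# one KNN, and one RF pseudo-labels unlabeled samples, and the labeled set
# is enlarged only when a weighted labeled/unlabeled error criterion does
# not worsen. Details are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def weighted_error(clfs, X_l, y_l, X_u, alpha):
    """alpha-weighted mix of labeled-set error and an ensemble-disagreement
    proxy for the unlabeled-set error (an assumption of this sketch)."""
    err_l = np.mean([np.mean(c.predict(X_l) != y_l) for c in clfs])
    if len(X_u) == 0:
        return alpha * err_l
    votes = np.array([c.predict(X_u) for c in clfs])
    disagree = np.mean(~np.all(votes == votes[0], axis=0))
    return alpha * err_l + (1.0 - alpha) * disagree

def m_training(X_l, y_l, X_u, alpha=0.5, rounds=3):
    # the four base classifiers named in the abstract
    clfs = [SVC(kernel="linear"), SVC(kernel="rbf"),
            KNeighborsClassifier(n_neighbors=3),
            RandomForestClassifier(n_estimators=50, random_state=0)]
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        for c in clfs:
            c.fit(X_l, y_l)
        prev_err = weighted_error(clfs, X_l, y_l, X_u, alpha)
        votes = np.array([c.predict(X_u) for c in clfs])
        agree = np.all(votes == votes[0], axis=0)  # unanimous vote rule
        if not agree.any():
            break
        # tentatively move unanimously labeled samples into the labeled set
        X_new = np.vstack([X_l, X_u[agree]])
        y_new = np.concatenate([y_l, votes[0][agree]])
        X_rest = X_u[~agree]
        trial = [clone(c).fit(X_new, y_new) for c in clfs]
        # accept the update only if the weighted error does not increase
        if weighted_error(trial, X_new, y_new, X_rest, alpha) <= prev_err:
            X_l, y_l, X_u, clfs = X_new, y_new, X_rest, trial
        else:
            break
    return clfs, X_l, y_l

# tiny demo on synthetic blobs (a stand-in for hyperspectral pixel vectors)
X, y = make_blobs(n_samples=200, centers=3, random_state=0)
lab_idx = np.concatenate([np.where(y == k)[0][:10] for k in range(3)])
mask = np.zeros(len(y), dtype=bool)
mask[lab_idx] = True
clfs, X_lab, y_lab = m_training(X[mask], y[mask], X[~mask])
```

The point of the weighted criterion is that self-training schemes can poison the labeled set with mislabeled pseudo-labels; gating each update on a combined labeled/unlabeled error estimate is one way to make the enlargement conservative.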
Authors: CUI Ying, WANG Xueting, LU Zhongjun, WANG Liguo (College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China; Remote Sensing Technology Center, Heilongjiang Academy of Agricultural Science, Harbin 150086, China)
Source: Journal of Harbin Engineering University (indexed in EI, CAS, CSCD, Peking University Core), 2018, No. 10, pp. 1688-1694 (7 pages)
Funding: National Natural Science Foundation of China (61675051); Doctoral Fund of the Ministry of Education (20132304110007); Fundamental Research Funds for the Central Universities (HEUCFG201831)
Keywords: hyperspectral image; semi-supervised classification; M-training algorithm; error-rate weighting; image processing; SVM classifier; RF classifier; KNN classifier
