
Multiple-Instance Learning with Instance Selection via Constructive Covering Algorithm (Cited by: 2)

Abstract: Multiple-Instance Learning (MIL) predicts the labels of unlabeled bags by learning from labeled positive and negative training bags. Each bag consists of several unlabeled instances; a bag is labeled positive if at least one of its instances is positive, and negative otherwise. Existing MIL methods with instance selection ignore the representative degree of the selected instances: an instance surrounded by many similar instances with the same label should be more representative than others. Based on this idea, this paper proposes multiple-instance learning with instance selection via a constructive covering algorithm (MilCa). MilCa first uses the maximal Hausdorff distance to select initial positive instances from the positive bags, and then applies the Constructive Covering Algorithm (CCA) to reorganize the original instances of the negative bags. An inverse testing process is then employed to exclude false positive instances from the positive bags and to select highly representative instances from the training bags, ordered by the number of instances each cover contains. Finally, a similarity measure function converts each training bag into a single sample, and CCA is applied again to classify the converted samples. Experimental results on synthetic data and standard benchmark datasets demonstrate that MilCa reduces the number of selected instances while remaining competitive with state-of-the-art MIL algorithms.
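To make the initialisation step concrete, the sketch below shows one plausible reading of the maximal-Hausdorff selection described in the abstract: each positive bag contributes the instance that lies farthest from all negative instances. This is an illustrative assumption, not the authors' code; the function name select_initial_positives, the Euclidean metric, and the toy data are invented for this example.

# Hedged sketch (not the paper's implementation): pick, from each positive bag,
# the instance whose minimum distance to the pooled negative instances is largest.
import numpy as np

def select_initial_positives(positive_bags, negative_bags):
    """Each bag is an (n_i, d) array; returns one selected instance per positive bag."""
    neg = np.vstack(negative_bags)                 # pool all negative instances, shape (m, d)
    selected = []
    for bag in positive_bags:
        # pairwise distances: bag instance i -> negative instance j, shape (n_i, m)
        dists = np.linalg.norm(bag[:, None, :] - neg[None, :, :], axis=2)
        min_to_neg = dists.min(axis=1)             # nearest negative for each bag instance
        selected.append(bag[min_to_neg.argmax()])  # keep the farthest-from-negatives instance
    return np.array(selected)

# Toy usage: two positive bags and one negative bag of 2-D instances.
pos_bags = [np.array([[0.0, 0.0], [5.0, 5.0]]),
            np.array([[1.0, 1.0], [6.0, 4.0]])]
neg_bags = [np.array([[0.2, 0.1], [1.1, 0.9]])]
print(select_initial_positives(pos_bags, neg_bags))   # -> [[5. 5.] [6. 4.]]

In MilCa, such selected instances would then seed the covering and inverse-testing stages; those later steps are not sketched here.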
Source: Tsinghua Science and Technology (清华大学学报(自然科学版)(英文版)), indexed in SCIE, EI, CAS, 2014, Issue 3, pp. 285-292 (8 pages).
Funding: Supported by the National Natural Science Foundation of China (No. 61175046), the Provincial Natural Science Research Program of Higher Education Institutions of Anhui Province (No. KJ2013A016), the Outstanding Young Talents in Higher Education Institutions of Anhui Province (No. 2011SQRL146), and the Recruitment Project of Anhui University for Academic and Technology Leader.
Keywords: multiple-instance learning; instance selection; constructive covering algorithm; maximal Hausdorff
