
Intelligent manufacturing fault diagnosis based on cost-sensitive method (基于代价敏感方法的智能制造故障诊断) — Cited by: 3
Abstract: In equipment fault diagnosis, imbalanced data sets, in which the numbers of positive and negative samples differ greatly, reduce diagnostic accuracy. To reduce the misjudgments caused by this imbalance and improve diagnostic accuracy, a cost-sensitive method was proposed. Building on Boosting, the method generates multiple models through repeated probability sampling and determines a weight for each model. The sampling probability depends on a cost adjustment value, which the method updates in each iteration according to the result of the previous iteration. Experimental comparisons with other approaches show that the proposed method outperforms both a fixed cost-sensitive value and non-cost-sensitive methods.
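The abstract describes a Boosting loop in which each round draws a training sample with probabilities proportional to a per-sample cost adjustment value, fits a base model, weights it, and raises the cost of misclassified samples before the next round. The paper's exact update rule is not given here, so the following is only a minimal sketch under assumed AdaBoost-style weighting; the function names (`cost_sensitive_boost`, `predict_ensemble`), the initial cost `cost_init`, and the decision-tree base learner are all illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_boost(X, y, n_rounds=10, cost_init=2.0, seed=0):
    """Sketch: cost-sensitive Boosting via probability sampling.

    Minority-class samples (y == 1) start with a higher cost adjustment
    value, so they are drawn more often; after each round the cost of
    misclassified samples is increased (hypothetical update rule).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    cost = np.where(y == 1, cost_init, 1.0)    # per-sample cost adjustment
    models, alphas = [], []
    for _ in range(n_rounds):
        p = cost / cost.sum()                  # sampling probability ∝ cost
        idx = rng.choice(n, size=n, replace=True, p=p)
        clf = DecisionTreeClassifier(max_depth=3, random_state=0)
        clf.fit(X[idx], y[idx])
        pred = clf.predict(X)
        err = float(np.average(pred != y, weights=p))
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0) / division by 0
        alpha = 0.5 * np.log((1 - err) / err)  # AdaBoost-style model weight
        cost *= np.exp(alpha * (pred != y))    # raise cost of misclassified
        models.append(clf)
        alphas.append(alpha)
    return models, np.array(alphas)

def predict_ensemble(models, alphas, X):
    # Weighted majority vote; labels in {0, 1} are mapped to {-1, +1}.
    votes = sum(a * (2 * m.predict(X) - 1) for m, a in zip(models, alphas))
    return (votes > 0).astype(int)
```

Because the sampling probability tracks the evolving cost vector rather than a fixed class weight, hard and minority-class samples both get resampled more aggressively as rounds proceed, which is the behavior the abstract contrasts with a fixed cost-sensitive value.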
Authors: ZHAO Hongyu, SHEN Jiang, AN Bang (College of Management and Economics, Tianjin University, Tianjin 300072, China)
Source: Computer Integrated Manufacturing Systems (《计算机集成制造系统》), indexed in EI, CSCD, PKU Core, 2019, No. 9, pp. 2180-2187 (8 pages)
Funding: National Natural Science Foundation of China (71571105, 71601026)
Keywords: fault diagnosis; cost-sensitive method; imbalanced data set; intelligent manufacturing
