
Applicability analysis of machine learning algorithms for yarn quality prediction under small-sample conditions
Abstract: To address the low and unstable prediction accuracy of current neural-network-based yarn quality prediction models under small-sample conditions, three prediction models were established: a random forest (RF) algorithm model, a multi-layer perceptron (MLP) neural network algorithm model, and a linear regression (LR) algorithm model. Comparative experiments evaluated each model's prediction performance under small-sample conditions with respect to its sensitivity to datasets with different data characteristics, to different data dimensionality, and to different training sample sizes. The coefficient of determination and the root mean square error were used to evaluate the prediction performance of each model. The experimental results show that, compared with the MLP and LR algorithms, the RF algorithm in most cases achieves higher prediction accuracy, more stable prediction precision, and better adaptability to small training sample sizes, giving it the best overall prediction performance.
Authors: LIU Zhiyu; LI Xuexing; LI Liqing; CHEN Nanliang; WANG Jun (Donghua University, Shanghai 201620, China; Key Laboratory of Ministry of Education for Textile Science & Technology, Shanghai 201620, China)
Source: Cotton Textile Technology (CAS-indexed), 2024, No. 8, pp. 27-34 (8 pages)
Keywords: random forest algorithm; multi-layer perceptron neural network; linear regression algorithm; quality prediction; small sample; prediction model; coefficient of determination
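The comparison described in the abstract can be sketched in code. The following is a minimal illustrative example, not the authors' implementation: it uses synthetic regression data (the paper's yarn dataset is not public, and all sample counts, feature counts, and hyperparameters here are assumptions) to score RF, MLP, and LR models with the same two metrics the paper uses, the coefficient of determination (R²) and the root mean square error (RMSE).

```python
# Sketch of a small-sample comparison of RF, MLP, and LR regressors,
# evaluated with R^2 and RMSE. Data are synthetic; sizes are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Small-sample setting: e.g. 60 samples with 8 features (assumed sizes).
X, y = make_regression(n_samples=60, n_features=8, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                        random_state=0),
    "LR": LinearRegression(),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Coefficient of determination and root mean square error on the test set
    scores[name] = (r2_score(y_test, pred),
                    float(np.sqrt(mean_squared_error(y_test, pred))))

for name, (r2, rmse) in scores.items():
    print(f"{name}: R2 = {r2:.3f}, RMSE = {rmse:.2f}")
```

In practice, the paper's sensitivity analysis would repeat this loop while varying the dataset characteristics, the number of features, and the training set size, tracking how R² and RMSE change for each algorithm.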

