
A low-energy-consumption grinding method based on a dynamic-inertia-weight particle swarm algorithm (Cited: 3)

Reducing grinding energy consumption by a modified particle swarm optimization based on dynamic inertia weight
Abstract: A three-layer back-propagation (BP) neural network was used to establish a grinding energy-consumption prediction model. A 125-run full factorial experiment was designed with grinding wheel linear velocity, feed rate, and grinding depth of cut as the influencing factors, and 75 of the experimental data sets were taken as the training and test samples for the prediction model. The particle swarm optimization algorithm was improved with a dynamic inertia weight (adaptive particle swarm optimization, APSO); the BP neural network's prediction served as the fitness function, and the optimal process parameters were obtained by iterative optimization targeting minimum energy consumption. The results show that the prediction model is accurate and that the optimized process parameters can effectively reduce grinding energy consumption.
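The optimization loop the abstract describes can be sketched generically: a PSO whose inertia weight decreases linearly over the iterations, minimizing a fitness function that stands in for the trained BP network's energy prediction. The paper does not give its APSO update rule or parameter ranges, so the weight schedule, the `surrogate_energy` function, and the variable bounds below are all illustrative assumptions, not the authors' actual model.

```python
import random

def pso_minimize(f, bounds, n_particles=30, n_iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimize f over the box `bounds` using PSO with an inertia weight
    that decays linearly from w_max to w_min (a common dynamic scheme)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best so far

    for t in range(n_iters):
        # dynamic inertia weight: large early (exploration), small late (exploitation)
        w = w_max - (w_max - w_min) * t / (n_iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to bounds
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical smooth surrogate standing in for the trained BP network's
# energy prediction E(vs, vf, ap) -- NOT the paper's model or data.
def surrogate_energy(x):
    vs, vf, ap = x  # wheel linear velocity, feed rate, depth of cut
    return (vs - 25.0) ** 2 + 1e4 * (vf - 0.05) ** 2 + 1e6 * (ap - 0.01) ** 2

# Illustrative parameter ranges (assumed, not from the paper)
bounds = [(15.0, 35.0), (0.01, 0.10), (0.005, 0.030)]
best, best_energy = pso_minimize(surrogate_energy, bounds)
```

In the paper's setting, `surrogate_energy` would be replaced by a forward pass of the trained BP network, so the optimizer searches the learned energy surface rather than an analytic function.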
Authors: 张昆 田业冰 丛建臣 刘俨后 闫宁 鲁涛; ZHANG Kun, TIAN Yebing, CONG Jianchen, LIU Yanhou, YAN Ning, LU Tao (School of Mechanical Engineering, Shandong University of Technology, Zibo 255049, Shandong, China; Tianrun Industry Technology Co., Ltd., Weihai 264400, Shandong, China; Zhengzhou Research Institute for Abrasives & Grinding Co., Ltd., Zhengzhou 450001, China)
Source: Diamond & Abrasives Engineering (《金刚石与磨料磨具工程》, CAS-indexed, Peking University Core), 2021, No. 1, pp. 71-75 (5 pages)
Funding: National Natural Science Foundation of China (51875329); Taishan Scholar Program of Shandong Province (tsqn201812064); Natural Science Foundation of Shandong Province (ZR2017MEE050); Key R&D Program of Shandong Province (2018GGX103008, 2019GGX104073); Youth Innovation Science and Technology Program of Shandong Higher Education Institutions (J17KA037); Key R&D Program of Zibo City (2019ZBXC070)
Keywords: modified particle swarm optimization (APSO); BP neural network; grinding energy consumption; parameter optimization
