
基于残差U-Net和自注意力Transformer编码器的磁场预测方法

Magnetic Field Prediction Method Based on Residual U-Net and Self-Attention Transformer Encoder
Abstract: Finite element magnetic field analysis of electrical machines and transformers with complex geometries suffers from long simulation times, and the resulting models cannot be reused. This paper therefore proposes a magnetic field prediction method based on a residual U-Net and a self-attention Transformer encoder. First, finite element models of a permanent magnet synchronous motor (PMSM) and an amorphous metal transformer (AMT) are built to generate the datasets needed for deep learning. Then, a Transformer module is combined with the U-Net model, and short residual connections are introduced to build the ResUnet-Transformer model, which predicts the magnetic field pixel by pixel as an image. Finally, the model is optimized with the Targeted Dropout algorithm and a dynamic learning rate schedule to mitigate overfitting and improve prediction accuracy. Computational examples show that the ResUnet-Transformer model achieves a mean absolute percentage error (MAPE) below 1% on the test sets of both the PMSM and AMT datasets, using only 500 samples. The proposed method reduces the time and resource consumption of fine-grained simulation and topology optimization under practical and multiple operating conditions, and is also one of the key enabling techniques for virtual sensors and, ultimately, digital twins.

Extended abstract: Accurate simulation of electromagnetic characteristics in electrical equipment relies on the finite element method. However, the increasing complexity of large electrical machines and transformers poses challenges, leading to prolonged simulation time and significant computational resource consumption. At the same time, the finite element method cannot establish an a priori model: when design parameters, structures, or operating conditions change, the model must be rebuilt. Considering the powerful feature extraction ability of deep learning, this paper proposes a magnetic field prediction method based on a residual U-Net and a self-attention Transformer encoder. The finite element method is used to obtain the dataset for deep learning training. The deep learning model can be trained once and used for multiple predictions, addressing the limitations of the finite element method and reducing computational time and resource consumption.

Firstly, this paper leverages the inherent advantages of the convolutional neural network (CNN) in image processing, particularly the U-shaped CNN known as U-Net, which is built on an encoder-decoder structure. This architecture captures fine details and learns from limited samples better than the traditional CNN. To mitigate network degradation and address the limitations of convolutional operations, short residual connections and Transformer modules are introduced into the U-Net architecture, creating the ResUnet-Transformer model. The short residual connections accelerate network training, while the self-attention mechanism from the Transformer network enables global features to interact effectively.

Secondly, this paper introduces the Targeted Dropout algorithm and an adaptive learning rate to suppress overfitting and improve the accuracy of magnetic field predictions. The Targeted Dropout algorithm incorporates post-pruning strategies into the training process of the neural network, effectively mitigating overfitting and improving the model's generalization. Additionally, an adaptive learning rate is implemented with the cosine annealing algorithm on top of the Adam optimizer, gradually reducing the learning rate as the objective function converges to the optimal value and avoiding oscillation or non-convergence.

Finally, the ResUnet-Transformer model is validated through engineering cases involving permanent magnet synchronous motors (PMSM) and amorphous metal transformers (AMT). On the PMSM dataset, the model is trained with 250 samples and tested with 100 samples, using the mean square error (MSE) and mean absolute percentage error (MAPE) as performance metrics. Compared to the CNN, U-Net, and LinkNet models, the ResUnet-Transformer model achieves the highest prediction accuracy, with an MSE of 0.07×10⁻³ and a MAPE of 1.4%. On the 100 test samples, its prediction efficiency surpasses the finite element method by 66.1%. With the structural and parameter settings held constant, introducing the Targeted Dropout algorithm and the cosine annealing algorithm improves the prediction accuracy by 36.4% and 26.3%, respectively. To evaluate the model's generalization capability, the number of training samples for the PMSM and AMT datasets is varied, and the model is tested on 100 samples. Inadequate training samples result in poor magnetic field prediction performance; when the training dataset grows to 300 samples, the prediction error does not decrease but rises slightly. With further increases in training set size, however, the error drops significantly, and the MAPE for the PMSM and AMT datasets reaches 0.7% and 0.5%, respectively, with just 500 training samples.
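For a concrete picture of the architecture described in the abstract above, the following is a minimal PyTorch sketch of the ResUnet-Transformer idea: a U-Net-style encoder-decoder whose convolution blocks carry short residual connections and whose bottleneck runs a Transformer encoder over the flattened feature map so that global features can interact. The channel widths, depth, and token layout are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of the ResUnet-Transformer idea (illustrative, not the
# paper's exact configuration): residual conv blocks in a U-Net, with a
# Transformer encoder applied to the flattened bottleneck feature map.
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """Two 3x3 convolutions with a short residual (identity) connection."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the output channel count.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))


class ResUnetTransformer(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32, heads=4, layers=2):
        super().__init__()
        self.enc1 = ResBlock(in_ch, base)
        self.enc2 = ResBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        # Transformer encoder over the bottleneck: each spatial position of
        # the feature map becomes one token of dimension base*2.
        layer = nn.TransformerEncoderLayer(d_model=base * 2, nhead=heads, batch_first=True)
        self.bottleneck = nn.TransformerEncoder(layer, num_layers=layers)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ResBlock(base * 2, base)    # base (skip) + base (upsampled)
        self.head = nn.Conv2d(base, out_ch, 1)  # per-pixel field prediction

    def forward(self, x):
        s1 = self.enc1(x)                  # (B, base, H, W)
        b = self.enc2(self.pool(s1))       # (B, 2*base, H/2, W/2)
        B, C, H, W = b.shape
        tokens = b.flatten(2).transpose(1, 2)     # (B, H*W, C)
        tokens = self.bottleneck(tokens)          # global self-attention
        b = tokens.transpose(1, 2).reshape(B, C, H, W)
        d = self.up(b)                            # (B, base, H, W)
        d = self.dec1(torch.cat([d, s1], dim=1))  # long U-Net skip connection
        return self.head(d)


if __name__ == "__main__":
    model = ResUnetTransformer()
    # e.g. a geometry/excitation map in, a magnetic field map out
    y = model(torch.randn(2, 1, 64, 64))
    print(y.shape)  # torch.Size([2, 1, 64, 64])
```

The long skip connection is the usual U-Net concatenation; the short residual connections live inside each ResBlock, which is what the abstract credits with accelerating training.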
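The Targeted Dropout step folds a magnitude-based pruning criterion into training: the lowest-magnitude weights are "targeted", and each targeted weight is dropped with some probability, so the network learns to tolerate the pruning applied afterwards. Below is a minimal sketch for a single linear layer; the targeting and drop rates, and the choice to target individual weights rather than whole units, are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of Targeted Dropout on one linear layer (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetedDropoutLinear(nn.Module):
    """Linear layer whose lowest-magnitude weights are targeted for dropout."""

    def __init__(self, in_features, out_features, target_rate=0.5, drop_rate=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.target_rate = target_rate  # fraction of weights eligible for dropping
        self.drop_rate = drop_rate      # probability a targeted weight is dropped

    def forward(self, x):
        w = self.linear.weight
        if self.training:
            k = int(self.target_rate * w.numel())
            if k > 0:
                # Magnitude threshold below which weights are "targeted".
                thresh = w.abs().flatten().kthvalue(k).values
                targeted = w.abs() <= thresh
                # Drop each targeted weight independently with drop_rate,
                # so training anticipates post-hoc magnitude pruning.
                drop = targeted & (torch.rand_like(w) < self.drop_rate)
                w = w.masked_fill(drop, 0.0)
        return F.linear(x, w, self.linear.bias)


if __name__ == "__main__":
    layer = TargetedDropoutLinear(16, 8)
    out = layer(torch.randn(4, 16))
    print(out.shape)  # torch.Size([4, 8])
```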
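The optimization recipe the abstract describes, Adam with a cosine-annealed learning rate evaluated by MSE and MAPE, can be sketched as follows. The stand-in model, synthetic data, initial learning rate, epoch count, and eta_min are illustrative assumptions; in the paper the model is the ResUnet-Transformer and the data come from FEM simulations.

```python
# Minimal sketch of the training loop: Adam + cosine annealing, MSE loss,
# with MAPE as the reported error metric (hyperparameters are assumptions).
import torch
import torch.nn as nn


def mape(pred, target, eps=1e-8):
    """Mean absolute percentage error, in percent."""
    return (100.0 * (pred - target).abs() / (target.abs() + eps)).mean().item()


# Tiny stand-in model and synthetic data so the loop runs end to end.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1)
)
x = torch.randn(16, 1, 32, 32)
y = torch.randn(16, 1, 32, 32).abs() + 0.1  # keep targets away from zero for MAPE

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Cosine annealing decays the learning rate from 1e-3 toward eta_min over
# T_max steps, damping oscillation as the loss nears its optimum.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50, eta_min=1e-6)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # MSE training objective
    loss.backward()
    optimizer.step()
    scheduler.step()             # one annealing step per epoch

with torch.no_grad():
    print(f"MSE={loss.item():.4e}  MAPE={mape(model(x), y):.2f}%")
```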
Authors: Jin Liang (金亮); Yin Zhenhao (尹振豪); Liu Lu (刘璐); Song Juheng (宋居恒); Liu Yuankai (刘元凯) (State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300401, China; Key Laboratory of Electromagnetic Field and Electrical Apparatus Reliability of Hebei Province, Hebei University of Technology, Tianjin 300401, China)
Source: Transactions of China Electrotechnical Society (《电工技术学报》; indexed in EI, CSCD, Peking University Core), 2024, No. 10, pp. 2937-2952 (16 pages)
Funding: Supported by the General Program of the National Natural Science Foundation of China (51977148), the Major Research Plan of the National Natural Science Foundation of China (92066206), and the Central Government Guided Local Science and Technology Development Fund, Free Exploration Project (226Z4503G).
Keywords: finite element method; electromagnetic field; deep learning; U-Net; Transformer