
Short-term power load forecasting method combining two-level decomposition with Informer

Cited by: 3
Abstract  To address the low forecasting accuracy caused by the volatility and non-stationarity of power load data, a deep learning power load forecasting model with a two-level decomposition and reconstruction strategy is proposed. First, the load data are decomposed by seasonal-trend decomposition using loess (STL) followed by improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN), and the resulting components are reconstructed by computing their sample entropy and maximum information coefficient. Then, a non-stationarity mechanism is introduced into the Informer model, which is fused with a convolutional neural network to predict the reconstructed components. Finally, the predictions of all components are linearly summed to obtain the final forecast. Experimental results show that the prediction errors of the proposed method on three evaluation indexes are all lower than those of the compared models, demonstrating that the method effectively reduces the non-stationarity of the data and improves forecasting accuracy.
Authors  Zhu Li; Han Kaiping; Zhu Chunqiang (College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710600, China; State Grid Shaanxi Electric Power Company Training Center, Xi'an 710000, China)
Source  Foreign Electronic Measurement Technology (《国外电子测量技术》, PKU Core Journal), 2023, No. 6, pp. 23-32 (10 pages)
Funding  Supported by the National Key R&D Program of China (2019YFB1405000).
Keywords  short-term power load forecasting; two-level decomposition; sample entropy; maximum information coefficient (MIC); Informer model
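The reconstruction step described in the abstract (computing the sample entropy of each decomposition component and merging components of similar complexity before forecasting) can be sketched as follows. This is a minimal illustration only, not the authors' code: the entropy parameters (m = 2, tolerance r = 0.2·σ) and the threshold-based two-group merge are common defaults assumed here, and the decomposition itself (STL/ICEEMDAN) is taken as given input.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series.

    Counts pairs of templates of length m and m+1 whose Chebyshev
    distance is within r = r_factor * std(x); SampEn = -ln(A / B).
    Higher values indicate a more irregular (noisier) component.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to every later template (no self-matches).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")
    return -np.log(a / b)

def reconstruct_components(components, threshold):
    """Merge decomposition components into two groups by sample entropy.

    Components whose entropy exceeds `threshold` are summed into one
    high-frequency component; the rest into one low-frequency component.
    Each merged component is then forecast separately downstream.
    """
    high = np.zeros_like(components[0], dtype=float)
    low = np.zeros_like(components[0], dtype=float)
    for c in components:
        if sample_entropy(c) > threshold:
            high += c
        else:
            low += c
    return high, low
```

Because the merge is a plain sum, the reconstructed groups still add up exactly to the original signal, so the final linear summation of per-component forecasts described in the abstract remains consistent with the decomposition.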
