Abstract
Traffic flow prediction is at the core of intelligent transportation systems (ITS), and spatiotemporal characteristics are its most important feature. Because of the complex spatial correlations and temporal dependencies between different roads, traffic flow prediction is a challenging task. Current prediction methods based on graph convolutional neural networks still leave room for optimization in perceiving and extracting both local and global network features. To address these issues, an optimized model based on graph neural networks, the diffusion mutual convolutional recurrent neural network (DMCRNN), is proposed. The model takes DCRNN as its baseline and optimizes it with a mutual learning strategy: during training, two DCRNN networks learn from and guide each other, which strengthens the feature learning capability of each network. The effectiveness of the optimization strategy is verified on two real-world datasets, METR-LA and PEMS-BAY. The results show that the optimized model significantly reduces prediction errors; for one-hour-ahead prediction, the MAE on the two datasets is 0.15 and 0.12 lower than that of DCRNN, respectively, indicating that the mutual learning optimization strategy performs well.
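To make the mutual learning strategy described above concrete, the following is a minimal sketch of one training step for two peer regression models. It is a hypothetical setup, not the authors' implementation: simple MLPs stand in for the paper's DCRNN, an MSE mimicry term is assumed as the distillation signal between the two predictions, and the weight alpha is an illustrative hyperparameter; the paper's actual losses, architecture, and update schedule may differ.

```python
# Minimal sketch of deep mutual learning between two regression models.
# Assumptions: TinyPredictor is a stand-in for DCRNN, the mimicry term is MSE,
# and alpha is a hypothetical loss weight not taken from the paper.
import torch
import torch.nn as nn

class TinyPredictor(nn.Module):
    """Stand-in network; the paper uses DCRNN, not an MLP."""
    def __init__(self, in_dim=12, out_dim=12, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, x):
        return self.net(x)

def mutual_learning_step(model_a, model_b, opt_a, opt_b, x, y, alpha=0.5):
    """One step: each model minimizes its own task loss (MAE, the paper's
    evaluation metric) plus a mimicry loss toward the peer's prediction."""
    task_loss = nn.L1Loss()
    mimic_loss = nn.MSELoss()

    # Update model A: supervised loss + match model B's (detached) prediction.
    pred_a, pred_b = model_a(x), model_b(x)
    loss_a = task_loss(pred_a, y) + alpha * mimic_loss(pred_a, pred_b.detach())
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Update model B symmetrically, using A's refreshed (detached) prediction.
    pred_a = model_a(x).detach()
    pred_b = model_b(x)
    loss_b = task_loss(pred_b, y) + alpha * mimic_loss(pred_b, pred_a)
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    a, b = TinyPredictor(), TinyPredictor()
    opt_a = torch.optim.Adam(a.parameters(), lr=1e-3)
    opt_b = torch.optim.Adam(b.parameters(), lr=1e-3)
    x, y = torch.randn(32, 12), torch.randn(32, 12)  # dummy traffic-sequence batch
    for _ in range(5):
        la, lb = mutual_learning_step(a, b, opt_a, opt_b, x, y)
    print(f"loss_a={la:.4f}, loss_b={lb:.4f}")
```

The alternating update shown here (A first, then B against the updated A) follows the common deep mutual learning formulation; whether DMCRNN updates its two DCRNN peers alternately or jointly is not specified in the abstract.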
Authors
刘忠伟
李萍
周盛
闫豆豆
李颖
安毅生
LIU Zhongwei; LI Ping; ZHOU Sheng; YAN Doudou; LI Ying; AN Yisheng (Yunji Smart Engineering Co., Ltd., Shenzhen 518000, China; School of Information Engineering, Chang'an University, Xi'an 710064, China)
Source
《计算机测量与控制》
2024, No. 4, pp. 166-173 (8 pages)
Computer Measurement & Control
Funding
Young Scientists Fund of the National Natural Science Foundation of China (52002031)
General Program of the National Natural Science Foundation of China (52172325)
National Key Research and Development Program of China (2021YFB1600104).
Keywords
traffic flow prediction
spatiotemporal characteristics
graph convolutional neural networks
knowledge distillation
mutual learning