Abstract
To address the high complexity, large parameter count, and heavy computation of existing end-to-end autonomous driving models, as well as the inability of networks built solely from convolutional neural networks to handle temporal features, an end-to-end autonomous driving model fusing spatio-temporal features, M-GRU, was proposed. The model consists of an improved MobileNetV2 network and a gated recurrent unit (GRU) network. The improved MobileNetV2 adds an attention module to the original MobileNetV2, which weights the feature information closely related to driving decisions and thereby sharpens the network's focus on important features. The two network modules of M-GRU extract the spatial and temporal features of input images, respectively, and predict autonomous driving behavior through behavior cloning. The M-GRU model was trained and tested in a simulator and compared with NVIDIA's PilotNet model. The results show that the loss value of M-GRU is lower than that of PilotNet, and that the vehicle controlled by M-GRU completes straight driving, turning, and acceleration/deceleration tasks on the road, as well as driving tasks in both easy and hard modes, demonstrating better performance.
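The pipeline described in the abstract (per-frame CNN features, channel-wise attention weighting, a GRU over the frame sequence, and a regression head for the driving command) can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the weights are random stand-ins for learned parameters, the backbone features are mocked instead of coming from MobileNetV2, and all shapes (`C`, `H`, `W`, `HID`, the 5-frame clip) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention, standing in for the
    attention module the paper adds to MobileNetV2."""
    squeezed = feat.mean(axis=(1, 2))               # global average pool -> (C,)
    weights = sigmoid(w2 @ np.tanh(w1 @ squeezed))  # learned per-channel weights (C,)
    return feat * weights[:, None, None]            # reweight each channel

class GRUCell:
    """Minimal GRU cell for the temporal module."""
    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        self.Wz = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))
        self.Wr = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))
        self.Wh = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                   # update gate
        r = sigmoid(self.Wr @ xh)                   # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

# End-to-end pass: mocked per-frame backbone features -> attention ->
# GRU over the frame sequence -> steering-angle regression head.
C, H, W, HID = 8, 4, 4, 16                          # hypothetical sizes
w1 = rng.standard_normal((C, C))
w2 = rng.standard_normal((C, C))
gru = GRUCell(in_dim=C, hid_dim=HID)
w_out = rng.standard_normal(HID) / np.sqrt(HID)

frames = rng.standard_normal((5, C, H, W))          # a 5-frame clip of CNN features
h = np.zeros(HID)
for feat in frames:
    attended = channel_attention(feat, w1, w2)
    x = attended.mean(axis=(1, 2))                  # pool to a (C,) vector per frame
    h = gru.step(x, h)                              # accumulate temporal context

steering = float(np.tanh(w_out @ h))                # predicted steering in [-1, 1]
print(steering)
```

The design point illustrated here is the division of labor the abstract describes: the convolutional/attention stage only sees one frame at a time (spatial features), while the recurrent stage carries state across frames (temporal features), so the final prediction depends on the whole clip rather than a single image.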
Authors
WU Wufei; LI Wenbo; BAI Di; YU Jun (School of Information Engineering, Nanchang University, Nanchang 330031, China; School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China)
Source
《南昌大学学报(工科版)》
CAS
2024, No. 2, pp. 162-169 (8 pages)
Journal of Nanchang University (Engineering & Technology)
Funding
National Natural Science Foundation of China (62002147)
China Postdoctoral Science Foundation (2020TQ013).