Abstract
Vehicle emissions at intersections are highly complex, and it is difficult to build an explicit mathematical model of them, especially when the initial queue length at the intersection is taken into account. Q-learning is a model-free reinforcement learning algorithm that learns an optimal control strategy through trial-and-error interaction with the environment. This paper proposes a Q-learning-based traffic signal control scheme targeting vehicle emissions. On the microscopic traffic simulation platform USTCMTS2.0, Q-learning is used to search for the signal timing with the lowest emissions under different initial queue lengths of each phase. To address the slow convergence of Q-learning, fuzzy inference is introduced to initialize the Q-function, which accelerates the learning process. Simulation results show that the reinforcement learning algorithm is effective: compared with Hideki's method, average vehicle emissions are reduced by 13.9% under high traffic volume, and the fuzzy initialization of the Q-values significantly accelerates the convergence of the Q-function.
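As a rough illustration of the approach the abstract describes (not the paper's actual implementation — the state and action discretization, the reward, and the fuzzy-style prior below are simplified assumptions), tabular Q-learning with a heuristically initialized Q-table might be sketched as:

```python
# Hypothetical discretization: state = queue-length level, action = index
# of a candidate signal-timing scheme. Illustrative stand-ins only.
STATES = range(5)     # queue-length levels (0 = empty .. 4 = long)
ACTIONS = range(3)    # candidate signal-timing schemes
ALPHA, GAMMA = 0.1, 0.9

def fuzzy_init(state, action):
    """Heuristic (fuzzy-inference-style) prior: a made-up membership-based
    estimate that biases early exploration. Purely illustrative."""
    return -abs(state - 2 * action)

# Initialize Q from the prior instead of zeros -- the idea the abstract
# credits for faster convergence of Q-learning.
Q = {(s, a): fuzzy_init(s, a) for s in STATES for a in ACTIONS}

def update(s, a, reward, s_next):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

# One toy interaction: negative reward stands in for observed emissions.
update(s=3, a=1, reward=-2.0, s_next=1)
```

In a full system, the reward would come from the simulator's emission measurements after each signal cycle, and the fuzzy prior from expert rules over queue lengths.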
Source
Electronic Technology (Shanghai) (《电子技术(上海)》)
2014, No. 8, pp. 5–8 (4 pages)
Keywords
Q-learning
fuzzy inference
traffic signal control
queue length
vehicle emission