Abstract
To improve the traffic efficiency of freeway merge bottlenecks, this paper optimizes the variable speed limit (VSL) strategy for the bottleneck area. Exploiting the model-free, self-learning nature of reinforcement learning, a QL-VSL control strategy that integrates the Q-learning (QL) algorithm into VSL control is proposed for the first time. The strategy aims to maximize the total number of vehicles discharged from the system; it learns adaptively by traversing the set of traffic flow states and trying different sequences of speed limit values. A cell transmission model (CTM) simulation platform calibrated with traffic flow data from a real freeway segment is built, and the QL-VSL strategy is evaluated against the no-control case and a feedback-based VSL strategy. The reduction in travel time and the changes in traffic parameters show that the proposed QL-VSL control strategy is superior in improving traffic efficiency and traffic flow operations at freeway merge bottlenecks.
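The abstract describes the control loop only at a high level. The Python sketch below illustrates what a tabular Q-learning update for speed limit selection could look like under the stated objective (maximizing discharged vehicles); the state discretization by density bins, the candidate speed limits, the learning parameters, and the `ctm_step` simulator stub are all illustrative assumptions, not the authors' calibrated CTM or actual implementation.

```python
import random
from collections import defaultdict

# --- Illustrative assumptions (not taken from the paper) ---
SPEED_LIMITS = [60, 70, 80, 90, 100]      # candidate speed limits, km/h
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount factor, exploration rate
N_DENSITY_BINS = 10                        # coarse discretization of bottleneck density

Q = defaultdict(float)                     # Q[(state, action)] -> action value

def discretize(density, jam_density=120.0):
    """Map mainline density (veh/km/lane) to a coarse state index."""
    bin_width = jam_density / N_DENSITY_BINS
    return min(int(density / bin_width), N_DENSITY_BINS - 1)

def choose_speed_limit(state):
    """Epsilon-greedy selection over the candidate speed limits."""
    if random.random() < EPSILON:
        return random.choice(SPEED_LIMITS)
    return max(SPEED_LIMITS, key=lambda a: Q[(state, a)])

def ctm_step(density, speed_limit):
    """Placeholder for one control interval of a CTM simulator.
    Returns (next_density, outflow). A real implementation would propagate
    cell densities with the cell transmission model under the posted limit."""
    outflow = min(density * speed_limit / 60.0, 40.0)            # toy flow, veh/interval
    next_density = max(density + random.uniform(-5, 5) - 0.1 * outflow, 0.0)
    return next_density, outflow

def train(episodes=200, steps_per_episode=120):
    for _ in range(episodes):
        density = random.uniform(20, 100)                        # random initial condition
        state = discretize(density)
        for _ in range(steps_per_episode):
            action = choose_speed_limit(state)
            density, outflow = ctm_step(density, action)
            next_state = discretize(density)
            # Reward is the outflow of the merge area, matching the stated
            # objective of maximizing the number of discharged vehicles.
            best_next = max(Q[(next_state, a)] for a in SPEED_LIMITS)
            Q[(state, action)] += ALPHA * (outflow + GAMMA * best_next - Q[(state, action)])
            state = next_state

if __name__ == "__main__":
    train()
```

After training, the greedy policy over Q acts as a lookup table that posts a speed limit for each observed traffic state at every control interval, which is how a tabular QL-VSL controller would typically be deployed in simulation.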
Source
Journal of Transportation Systems Engineering and Information Technology (《交通运输系统工程与信息》)
Indexed in EI, CSCD, Peking University Core Journals
2015, No. 1, pp. 55-61 (7 pages)
Funding
National Natural Science Foundation of China (51322810)
Keywords
intelligent transportation
variable speed limit
reinforcement learning
freeway merge bottleneck
Q-learning