Funding: The National Key Technology R&D Program during the 11th Five-Year Plan Period of China (No. 2009BAG17B02); the National High Technology Research and Development Program of China (863 Program) (No. 2011AA110304); the National Natural Science Foundation of China (No. 50908100).
Abstract: In order to reduce average arterial vehicle delay, a novel distributed and coordinated traffic control algorithm is developed using a multi-agent system and reinforcement learning (RL). RL is used to minimize the average delay of arterial vehicles by training agents through their interaction with the external environment. The Robertson platoon dispersion model is embedded in the RL algorithm to accurately predict platoon movements along the arterial, and the reward function is then built from the dispersion model and the delay equations of HCM 2000. The performance of the algorithm is evaluated in a MATLAB environment, and comparisons between the proposed algorithm and the conventional coordination algorithm are conducted under three traffic load scenarios. Results show that the proposed algorithm outperforms the conventional algorithm in all scenarios; moreover, the performance gain grows as the degree of saturation increases. These results verify the feasibility and efficiency of the proposed algorithm.
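For readers unfamiliar with the Robertson model, the sketch below illustrates how a platoon dispersion prediction can feed a simple delay-based reward. It is a minimal Python illustration under assumed parameter values (alpha, beta, saturation flow) and a simplified one-step queueing delay surrogate; it is not the authors' MATLAB implementation or the HCM 2000 delay equations.

```python
# Minimal sketch (not the authors' MATLAB code): Robertson platoon dispersion
# feeding a simplified delay-based reward. alpha, beta, and the one-step
# queueing delay surrogate are illustrative assumptions.

def robertson_dispersion(upstream_flow, travel_time, alpha=0.35, beta=0.8):
    """Predict the downstream arrival profile from an upstream departure profile."""
    t = int(round(beta * travel_time))            # lag of the platoon head (time steps)
    F = 1.0 / (1.0 + alpha * beta * travel_time)  # smoothing (dispersion) factor
    downstream = [0.0] * (len(upstream_flow) + t)
    for i, q in enumerate(upstream_flow):
        prev = downstream[i + t - 1] if i + t - 1 >= 0 else 0.0
        downstream[i + t] = F * q + (1.0 - F) * prev
    return downstream

def reward_from_delay(arrivals, green_signal, saturation_flow=0.5):
    """Negative cumulative queue length: vehicles not served in a step incur one step of delay."""
    queue, delay = 0.0, 0.0
    for arr, green in zip(arrivals, green_signal):
        queue += arr
        queue -= min(queue, saturation_flow) if green else 0.0
        delay += queue                            # vehicle-steps of delay
    return -delay                                 # the RL agent maximizes this
```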
Abstract: The stochastic elements of distributed energy resources (DERs) cause frequent changes in the strategies of virtual power plants (VPPs) within a multi-virtual power plant (MVPP) system. For a given participant, perceiving how sudden strategy changes of other participants affect its own revenue, and quickly adjusting its own strategy in response, is a pressing challenge. This paper proposes a self-optimizing energy management strategy for multiple virtual power plants based on second-order stochastic dynamics, aiming to improve the autonomy of a VPP when responding to strategy changes of other participants. First, considering the heterogeneous operating characteristics of DERs, a VPP aggregated operation model is built around the adjustable capacity space. Second, a random graph is used to characterize the stochastic nature of VPP strategy changes. Third, a second-order stochastic dynamic equation (SDE) is used to explore the spontaneous evolution of the VPP revenue structure and to correct the VPP's comprehensive revenue when other participants change their strategies. Fourth, the corrected revenue is used as the reward of an integrated soft actor-critic (ISAC) reinforcement learning algorithm to construct a multi-agent solution framework. Finally, comparative experiments with multiple algorithms verify the self-optimizing performance of the proposed strategy.
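As an illustration only (the abstract does not give the SDE form), the sketch below shows how a second-order correction term estimated from recent revenue observations might adjust a VPP's raw revenue before it is used as the reward of a soft actor-critic agent. The damping, stiffness, and noise parameters are placeholders, not the paper's model.

```python
# Illustrative placeholder only: the abstract does not specify the second-order
# SDE, so damping, stiffness, and noise_std below are assumed values.
import numpy as np

def sde_corrected_reward(raw_revenue, revenue_history, dt=1.0, damping=0.5,
                         stiffness=0.2, noise_std=0.05, rng=np.random.default_rng(0)):
    """Adjust a VPP's raw revenue with a second-order stochastic trend term.

    revenue_history: the last two revenue observations [r_{t-2}, r_{t-1}],
    used to estimate first- and second-order differences of the revenue path.
    """
    r_prev2, r_prev1 = revenue_history
    velocity = (r_prev1 - r_prev2) / dt                             # first-order trend
    acceleration = (raw_revenue - 2.0 * r_prev1 + r_prev2) / dt**2  # second-order trend
    drift = -damping * velocity - stiffness * acceleration
    diffusion = noise_std * rng.standard_normal()
    return raw_revenue + (drift + diffusion) * dt  # corrected reward fed to the SAC agent
```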
Abstract: This study provides a systematic analysis of the resource-consuming training of deep reinforcement-learning (DRL) agents for simulated low-speed automated driving (AD). In Unity, this study established two case studies: garage parking and navigating an obstacle-dense area. Our analysis involves training a path-planning agent with real-time-only sensor information. This study addresses research questions insufficiently covered in the literature, exploring curriculum learning (CL), agent generalization (knowledge transfer), computation distribution (CPU vs. GPU), and mapless navigation. CL proved necessary for the garage scenario and beneficial for obstacle avoidance. It involved adjustments at different stages, including terminal conditions, environment complexity, and reward-function hyperparameters, guided by their evolution across multiple training attempts. Fine-tuning the simulation tick and decision-period parameters was crucial for effective training. Learning high-level concepts such as obstacle avoidance requires training the agent in environments that are sufficiently complex in terms of the number of obstacles. While blogs and forums discuss training machine-learning models in Unity, scientific articles on DRL agents for AD remain scarce. Since agent development requires considerable training time and demanding procedures, there is a growing need to support such research through scientific means. In addition to our findings, we contribute to the R&D community by releasing our environment as open source.
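The curriculum adjustments described above (terminal conditions, environment complexity, reward-function hyperparameters) can be expressed as a staged schedule. The sketch below is a hypothetical Python configuration with made-up thresholds and obstacle counts; the authors' actual Unity/ML-Agents settings are not reproduced here.

```python
# Hypothetical curriculum schedule (not the authors' Unity/ML-Agents configuration):
# each stage changes terminal conditions, environment complexity, and a reward weight.
from dataclasses import dataclass

@dataclass
class CurriculumStage:
    promote_at_success_rate: float  # promotion threshold over recent episodes
    num_obstacles: int              # environment complexity
    max_episode_steps: int          # terminal condition
    collision_penalty: float        # reward-function hyperparameter

CURRICULUM = [
    CurriculumStage(0.6,  2,  500, -0.5),
    CurriculumStage(0.7,  5,  750, -1.0),
    CurriculumStage(0.8, 10, 1000, -2.0),
]

def next_stage(stage_idx, success_rate):
    """Advance to the next stage once the agent clears the current threshold."""
    stage = CURRICULUM[stage_idx]
    if stage_idx + 1 < len(CURRICULUM) and success_rate >= stage.promote_at_success_rate:
        return stage_idx + 1
    return stage_idx
```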
Abstract: To meet the enormous demand for interaction experience in practical multi-agent systems, and building on distributed architectures from the single-agent domain, this paper proposes a multi-agent soft actor-critic algorithm that runs probabilistic prioritized experience replay in parallel within a distributed architecture (multi-agent soft Actor-Critic with probabilistic prioritized experience replay based on a distributed paradigm, DPER-MASAC). In this algorithm, actors collect experience data by interacting with the environment in parallel. To overcome the limitation that, under the high throughput of a multi-agent setting, only the most recent experience is sampled with high probability, a more general, improved priority-based probabilistic sampling scheme is proposed for drawing and reusing experience data and updating the agents' network parameters. To verify the efficiency of the algorithm, two predator-prey task scenarios of increasing difficulty, in which cooperation and competition coexist, are designed, and DPER-MASAC is compared with two baseline algorithms: multi-agent soft actor-critic (MASAC) and multi-agent soft actor-critic with prioritized experience replay (PER-MASAC). The results show that the decision-making level of predator teams trained with DPER-MASAC improves markedly on both final performance and task success rate.
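The core mechanism, sampling experience with probability proportional to priority rather than recency, can be sketched as follows. This is a generic prioritized-replay illustration in Python with assumed hyperparameters (alpha, eps), not the DPER-MASAC implementation, and it omits importance-sampling weights and the distributed actor plumbing.

```python
# Generic prioritized-replay sketch (not the DPER-MASAC implementation):
# transitions are sampled with probability proportional to priority, not recency.
import numpy as np

class ProbabilisticPrioritizedReplay:
    def __init__(self, capacity, alpha=0.6, eps=1e-6, seed=0):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.buffer, self.priorities = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, transition, td_error=1.0):
        """Parallel actors push transitions; priority grows with the TD error magnitude."""
        if len(self.buffer) >= self.capacity:   # evict the oldest transition when full
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        """Draw a training batch; high-priority transitions are chosen more often."""
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = self.rng.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx
```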