Funding: funded by the National Natural Science Foundation of China (No. 62063006), the Guangxi Science and Technology Major Program (No. 2022AA05002), the Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region (No. 2022GXZDSY003), and the Central Leading Local Science and Technology Development Fund Project of Wuzhou (No. 202201001).
Abstract: By integrating deep neural networks with reinforcement learning, the Double Deep Q Network (DDQN) algorithm overcomes the limitations of Q-learning in handling continuous spaces and is widely applied in the path planning of mobile robots. However, the traditional DDQN algorithm suffers from sparse rewards and inefficient utilization of high-quality data. To address these problems, an improved DDQN algorithm based on average Q-value estimation and reward redistribution is proposed. First, to enhance the precision of the target Q-value, the average of multiple previously learned Q-values from the target Q network is used to replace the single Q-value from the current target Q network. Next, a reward redistribution mechanism is designed to overcome the sparse reward problem by adjusting the final reward of each action using the round reward from trajectory information. Additionally, a reward-prioritized experience selection method is introduced, which ranks experience samples according to reward values to ensure frequent utilization of high-quality data. Finally, simulation experiments are conducted to verify the effectiveness of the proposed algorithm in a fixed-position scenario and in random environments. The experimental results show that, compared to the traditional DDQN algorithm, the proposed algorithm achieves shorter average running time, higher average return, and fewer average steps. The performance of the proposed algorithm is improved by 11.43% in the fixed scenario and 8.33% in random environments. It not only plans economical and safe paths but also significantly improves efficiency and generalization in path planning, making it suitable for widespread application in autonomous navigation and industrial automation.
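A minimal sketch (PyTorch) of the averaged target-Q idea described above: the bootstrap value averages the estimates of the last K stored snapshots of the target Q network instead of using the single current target network, while the online network still selects the greedy action as in standard DDQN. The network size, K = 5, and the toy batch are illustrative assumptions rather than the paper's settings.

```python
from collections import deque
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, n_states=4, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, s):
        return self.net(s)

def averaged_ddqn_target(online, target_snapshots, r, s_next, done, gamma=0.99):
    """r, done: (B,) tensors; s_next: (B, n_states) tensor."""
    with torch.no_grad():
        # DDQN: the online network selects the greedy next action ...
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        # ... and the mean over the stored target snapshots evaluates it,
        # replacing the single estimate of the current target network.
        q_next = torch.stack([t(s_next).gather(1, a_next).squeeze(1)
                              for t in target_snapshots]).mean(dim=0)
        return r + gamma * (1.0 - done) * q_next

online = QNet()
snapshots = deque(maxlen=5)               # the last K learned target networks
for _ in range(5):
    t = QNet()
    t.load_state_dict(online.state_dict())
    snapshots.append(t)

s_next = torch.randn(8, 4)
r, done = torch.zeros(8), torch.zeros(8)
print(averaged_ddqn_target(online, list(snapshots), r, s_next, done).shape)  # torch.Size([8])
```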
Abstract: To address the difficulty of predicting enemy maneuver strategies and the low combat win rate caused by the complex environmental information and strong adversarial nature of unmanned aerial vehicle (UAV) air combat, a guided Minimax-DDQN (Minimax-Double Deep Q-Network) algorithm was designed. First, a guided policy exploration mechanism was proposed on the basis of the Minimax decision method. Then, combining the guided Minimax strategy, a DDQN (Double Deep Q-Network) algorithm was designed with the aim of improving the update efficiency of the Q network. Finally, a progressive three-stage network training method was proposed, in which adversarial training between different decision models yields a more optimized decision model. Experimental results show that, compared with algorithms such as Minimax-DQN and Minimax-DDQN, the proposed algorithm improves the success rate of pursuing a straight-line target by 14% to 60%, and its win rate against the DDQN algorithm is no lower than 60%. Thus, compared with the DDQN and Minimax-DDQN algorithms, the proposed algorithm has stronger decision-making capability and better adaptability in highly adversarial combat environments.
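A minimal NumPy sketch of the Minimax selection rule that Minimax-DDQN builds on: the agent chooses the maneuver that maximizes its value under the opponent's worst-case response. The Q matrix here is random toy data; in the algorithm above, Q(s, a, o) would come from the trained network.

```python
import numpy as np

def minimax_action(q_sa_o: np.ndarray) -> int:
    """q_sa_o[a, o]: value of own action a when the enemy replies with action o."""
    worst_case = q_sa_o.min(axis=1)        # enemy minimizes over its actions o
    return int(worst_case.argmax())        # we maximize the worst-case value

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 5))                # 5 own maneuvers x 5 enemy maneuvers (toy data)
print("minimax maneuver:", minimax_action(q))
```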
Abstract: A power sensor network can be used to collect and acquire, in real time, information such as the operating status and working environment of power network equipment, and it plays an important role in the real-time monitoring of and rapid response to power network facilities. To meet the system's special requirements on data queuing delay and packet loss rate, a reinforcement-learning-based resource allocation scheme for power sensor networks is proposed. Under resource constraints, the queuing delay and packet loss rate of sensor nodes are optimized through a resource allocation algorithm; the optimization problem is modeled as a Markov decision process (MDP), and the objective function is solved with a double deep Q-network (DDQN). Simulation results and numerical analysis show that the proposed scheme outperforms the baseline schemes in terms of convergence, queuing delay, and packet loss rate.
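As one way to picture the optimization target, the sketch below shows a per-step reward that penalizes the two quantities the scheme optimizes, queuing delay and packet loss rate. The weights and the normalization are purely illustrative assumptions; the paper's actual reward design is not reproduced here.

```python
def sensor_net_reward(queue_delay_ms: float, loss_rate: float,
                      w_delay: float = 0.6, w_loss: float = 0.4,
                      delay_norm_ms: float = 100.0) -> float:
    # Smaller queuing delay and smaller packet loss rate -> reward closer to 0.
    return -(w_delay * min(queue_delay_ms / delay_norm_ms, 1.0) + w_loss * loss_rate)

print(sensor_net_reward(queue_delay_ms=20.0, loss_rate=0.05))
```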
Funding: supported by the National Key Research and Development Program of China (No. 2021YFE0116900), the National Natural Science Foundation of China (Nos. 42275157, 62002276, and 41975142), and the Major Program of the National Social Science Fund of China (No. 17ZDA092).
Abstract: Edge computing nodes undertake an increasing number of tasks as business density rises. Therefore, how to efficiently allocate large-scale and dynamic workloads to edge computing resources has become a critical challenge. This study proposes an edge task scheduling approach based on an improved Double Deep Q Network (DDQN), in which the calculation of target Q values and the selection of the action are separated into two networks. A new reward function is designed, and a control unit is added to the experience replay unit of the agent. The management of experience data is also modified to fully utilize its value and improve learning efficiency. Reinforcement learning agents usually learn from an ignorant state, which is inefficient. As such, this study proposes a novel particle swarm optimization algorithm with an improved fitness function, which can generate optimal solutions for task scheduling. These optimized solutions are provided to the agent to pre-train the network parameters and obtain a better cognition level. The proposed algorithm is compared with six other methods in simulation experiments. Results show that the proposed algorithm outperforms the other benchmark methods in terms of makespan.
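A minimal sketch of how particle swarm optimization can produce task-to-node assignments that minimize makespan, in the spirit of the pre-training solutions mentioned above. The fitness used here is plain makespan on synthetic task sizes and node speeds; the paper's improved fitness function and PSO variant are not reproduced, and all parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_nodes = 20, 4
task_size = rng.uniform(1.0, 10.0, n_tasks)      # workload per task (toy data)
node_speed = rng.uniform(1.0, 3.0, n_nodes)      # processing speed per node (toy data)

def makespan(position: np.ndarray) -> float:
    # Round the continuous particle position to a task-to-node assignment.
    assign = np.clip(position.round().astype(int), 0, n_nodes - 1)
    load = np.zeros(n_nodes)
    for t, n in enumerate(assign):
        load[n] += task_size[t] / node_speed[n]
    return load.max()

# Standard global-best PSO over continuous positions.
n_particles, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.uniform(0, n_nodes - 1, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([makespan(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_nodes - 1)
    fit = np.array([makespan(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("best makespan found:", round(pbest_fit.min(), 3))
```

The best assignments found this way could then be replayed as demonstration transitions to pre-train the agent's network parameters before reinforcement learning starts.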
Funding: supported by the Aeronautical Science Foundation (2017ZC53033).
Abstract: Unmanned aerial vehicle (UAV) swarm technology is one of the research hotspots of recent years. With the continuous improvement of the autonomous intelligence of UAVs, swarm technology will become one of the main trends of UAV development in the future. This paper studies the behavior decision-making process of the UAV swarm rendezvous task based on the double deep Q network (DDQN) algorithm. We design a guided reward function to effectively solve the convergence problem caused by sparse rewards in deep reinforcement learning (DRL) for long-period tasks. We also propose the concept of a temporary storage area, which optimizes the experience replay unit of the traditional DDQN algorithm, improves the convergence speed of the algorithm, and speeds up the training process. Unlike traditional task environments, this paper establishes a continuous state-space task environment model to improve the verification of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm in different task scenarios are trained. The experimental results validate that the DDQN algorithm is efficient at training the UAV swarm to complete the given collaborative tasks while meeting the swarm's requirements for centralization and autonomy, and that it improves the intelligence of collaborative task execution. The simulation results show that, after training, the proposed UAV swarm can carry out the rendezvous task well, and the mission success rate reaches 90%.
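A minimal sketch of the temporary-storage idea: transitions of the current episode are staged separately and only merged into the main replay buffer when the episode ends, so episode-level filtering or shaping can be applied before the data is reused. The commit rule (keep episodes whose return exceeds a threshold) is an illustrative assumption, not necessarily the paper's criterion.

```python
from collections import deque
import random

class StagedReplayBuffer:
    def __init__(self, capacity=10000, return_threshold=0.0):
        self.buffer = deque(maxlen=capacity)   # main experience replay unit
        self.stage = []                        # temporary storage area for the episode
        self.return_threshold = return_threshold

    def store(self, transition, reward):
        self.stage.append((transition, reward))

    def end_episode(self):
        episode_return = sum(r for _, r in self.stage)
        if episode_return >= self.return_threshold:   # commit useful episodes only
            self.buffer.extend(t for t, _ in self.stage)
        self.stage.clear()

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = StagedReplayBuffer(return_threshold=1.0)
for r in [0.2, 0.3, 0.8]:                      # one toy episode
    buf.store(("s", "a", r, "s_next"), r)
buf.end_episode()
print(len(buf.buffer), "transitions committed")
```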
Funding: supported in part by the Anhui Province Natural Science Foundation (No. 2108085UD02), the National Natural Science Foundation of China (No. 51577047), and the 111 Project (No. BP0719039).
Abstract: The high penetration of distributed renewable energy sources and electric vehicles (EVs) makes the future active distribution network (ADN) highly variable. These characteristics pose great challenges to traditional voltage control methods. Voltage control based on the deep Q-network (DQN) algorithm offers a potential solution to this problem because it possesses human-level control performance. However, traditional DQN methods may overestimate action reward values, resulting in degradation of the obtained solutions. In this paper, an intelligent voltage control method based on the averaged weighted double deep Q-network (AWDDQN) algorithm is proposed to overcome the overestimation of action reward values in the DQN algorithm and the underestimation of action reward values in the double deep Q-network (DDQN) algorithm. Using the proposed method, the voltage control objective is incorporated into the designed action reward values and normalized to form a Markov decision process (MDP) model, which is solved by the AWDDQN algorithm. The designed AWDDQN-based intelligent voltage control agent is trained offline and used as an online intelligent dynamic voltage regulator for the ADN. The proposed voltage control method is validated on the IEEE 33-bus and 123-bus systems containing renewable energy sources and EVs, and compared with DQN- and DDQN-based methods as well as traditional mixed-integer nonlinear programming based methods. The simulation results show that the proposed method has better convergence and lower voltage volatility than the other methods.
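A minimal PyTorch sketch of a weighted double-Q bootstrap of the kind AWDDQN builds on: the target blends the plain DQN estimate (max over the target network, prone to overestimation) with the double-Q estimate (target network evaluated at the online network's greedy action, prone to underestimation). The weight beta is an illustrative assumption, and the additional averaging over several recent target networks used by AWDDQN (as in the earlier averaged-target sketch) is omitted here for brevity.

```python
import torch
import torch.nn as nn

def weighted_double_target(online, target, r, s_next, done, beta=0.5, gamma=0.99):
    with torch.no_grad():
        q_t = target(s_next)
        a_online = online(s_next).argmax(dim=1, keepdim=True)
        q_ddqn = q_t.gather(1, a_online).squeeze(1)   # double-Q estimate (tends to underestimate)
        q_dqn = q_t.max(dim=1).values                 # plain DQN estimate (tends to overestimate)
        bootstrap = beta * q_dqn + (1.0 - beta) * q_ddqn
        return r + gamma * (1.0 - done) * bootstrap

online = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 5))
target = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 5))
s_next, r, done = torch.randn(4, 6), torch.zeros(4), torch.zeros(4)
print(weighted_double_target(online, target, r, s_next, done).shape)  # torch.Size([4])
```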
Abstract: Objective: To solve the problems of low user quality of service and insufficient edge-node resources in vehicular edge computing. Methods: Combining vehicular edge computing with parked-vehicle edge computing, an "end-multi-edge-cloud" cooperative computation offloading model is proposed, and a DRL-based cooperative computation offloading and resource allocation algorithm (DRL-CCORA) is designed. First, the computing power of roadside parked vehicles is organized into parking edge servers (PES), which work jointly with edge nodes to provide computing services for vehicle tasks and relieve the load on the edge nodes. Second, the computation offloading and resource allocation problem is transformed into a Markov decision process model, a reward function is constructed that integrates delay, energy consumption, and quality of service, and computing tasks are pre-classified according to the computing resources they require, their maximum tolerable delay, and the distance from the vehicle to the PES, which reduces the scale of the problem. Finally, the double deep Q network (DDQN) algorithm is used to obtain the optimal computation offloading and resource allocation policy. Results: Compared with the baseline algorithms, the proposed algorithm improves the total user quality of service by 6.25% and the task completion rate by 10.26%, and reduces the task computation delay and energy consumption by 18.8% and 5.26%, respectively. Conclusion: The proposed algorithm optimizes the load on edge nodes, reduces the delay and energy consumption of task completion, and improves user quality of service.
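A minimal sketch of the pre-classification step described above: tasks are bucketed by required compute, maximum tolerable delay, and distance to the nearest PES before the DDQN agent makes fine-grained offloading decisions, which shrinks the decision space. The thresholds and the three-way split are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class VehicleTask:
    cycles_mega: float      # required compute (mega-cycles)
    max_delay_ms: float     # maximum tolerable delay
    dist_to_pes_m: float    # distance from the vehicle to the PES

def preclassify(task: VehicleTask) -> str:
    if task.max_delay_ms < 20 and task.cycles_mega < 50:
        return "local"                    # tiny, urgent: execute on the vehicle
    if task.dist_to_pes_m < 200 and task.max_delay_ms < 100:
        return "pes_or_edge"              # candidates for nearby PES / edge nodes
    return "cloud"                        # large or delay-tolerant: offload to cloud

print(preclassify(VehicleTask(cycles_mega=300, max_delay_ms=500, dist_to_pes_m=800)))
```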
Abstract: To address the problem of returning leftover goods to storage during the outbound operations of an automated storage and retrieval system (AS/RS), a storage-location optimization model for returned goods is established, with minimization of the total energy consumption of the stacker crane as the objective and the assignment of return storage locations as the decision variable, and a deep-reinforcement-learning-based framework for optimizing return storage locations in the AS/RS is proposed. Within this framework, a multi-dimensional state is constructed from the real-time storage information and outbound-operation information of the warehouse, actions are defined as the selection of return storage locations, and a Markov decision process model for return storage-location optimization is established. The multi-dimensional state features of the warehouse are fed into a dueling network, and the dueling double deep Q-network (D3QN) algorithm is used to train the network model and predict the target values of return actions, so as to determine the optimal policy of the agent. Experimental results show that the D3QN algorithm has good stability in solving large-scale return storage-location optimization problems.
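A minimal PyTorch sketch of the dueling architecture used by D3QN: state features are split into a state-value stream V(s) and an advantage stream A(s, a) and recombined as Q = V + A - mean(A). Layer sizes and the state/action dimensions are illustrative assumptions; the double-Q target would be computed as in the earlier sketches.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, n_state_features=16, n_locations=32):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(n_state_features, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)                # V(s): state-value stream
        self.advantage = nn.Linear(128, n_locations)  # A(s, a): one per candidate storage location

    def forward(self, state):
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)    # Q(s, a)

net = DuelingQNet()
q = net(torch.randn(2, 16))
print(q.shape)          # torch.Size([2, 32]) -> one Q-value per candidate storage location
```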