Funding: financial support from the National Natural Science Foundation of China (Grant No. 61601491), the Natural Science Foundation of Hubei Province, China (Grant No. 2018CFC865), and the Military Research Project of China (Grant No. YJ2020B117).
Abstract: To solve the problem of multi-target hunting by an unmanned surface vehicle (USV) fleet, a hunting algorithm based on multi-agent reinforcement learning is proposed. First, the hunting environment and a kinematic model without boundary constraints are built, and the criteria for successful target capture are given. Then, the cooperative hunting problem of a USV fleet is modeled as a decentralized partially observable Markov decision process (Dec-POMDP), and a distributed partially observable multi-target hunting proximal policy optimization (DPOMH-PPO) algorithm applicable to USVs is proposed. In addition, an observation model, a reward function, and an action space suited to multi-target hunting tasks are designed. To handle the dynamically changing dimension of the observational features produced by the partially observable system, a feature embedding block is proposed: by combining two feature compression methods, column-wise max pooling (CMP) and column-wise average pooling (CAP), an observational feature encoding is established. Finally, a centralized training and decentralized execution framework is adopted to train the hunting strategy; each USV in the fleet shares the same policy and performs actions independently. Simulation experiments verify the effectiveness of the DPOMH-PPO algorithm in test scenarios with different numbers of USVs. Moreover, the advantages of the proposed model are comprehensively analyzed in terms of algorithm performance, transfer across task scenarios, and self-organization capability after part of the fleet is damaged, which supports the potential deployment and application of DPOMH-PPO in real environments.
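The abstract does not spell out the internal structure of the feature embedding block; the sketch below only illustrates, under assumed tensor shapes and names, how column-wise max pooling and column-wise average pooling can compress a variable number of per-target observation rows into a fixed-size encoding.

```python
import torch

def encode_observations(obs: torch.Tensor) -> torch.Tensor:
    """Compress per-target observation rows into a fixed-size encoding.

    obs: (num_observed_targets, feat_dim); the row count may change between
    steps in a partially observable setting. The output size depends only
    on feat_dim, not on how many targets are currently observed.
    """
    cmp_feat = obs.max(dim=0).values   # column-wise max pooling (CMP)
    cap_feat = obs.mean(dim=0)         # column-wise average pooling (CAP)
    return torch.cat([cmp_feat, cap_feat], dim=-1)

# Three targets observed at one step, five at the next: same encoding size.
print(encode_observations(torch.randn(3, 8)).shape)  # torch.Size([16])
print(encode_observations(torch.randn(5, 8)).shape)  # torch.Size([16])
```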
Funding: supported in part by the National Natural Science Foundation of China under grants 61901078, 61771082, 61871062, and U20A20157; in part by the Science and Technology Research Program of Chongqing Municipal Education Commission under grant KJQN201900609; in part by the Natural Science Foundation of Chongqing under grant cstc2020jcyj-zdxmX0024; in part by the University Innovation Research Group of Chongqing under grant CXQT20017; and in part by the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) under grant 2021FNA04008.
Abstract: To guarantee the heterogeneous delay requirements of diverse vehicular services, it is necessary to design a fully cooperative policy for both Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) links. This paper investigates reducing the delay of edge information sharing over V2V links while satisfying the delay requirements of the V2I links. Specifically, a mean delay minimization problem and a maximum individual delay minimization problem are formulated to improve the global network performance and to ensure fairness for individual users, respectively. A multi-agent reinforcement learning framework is designed to solve these two problems, in which a new reward function is proposed to evaluate the utilities of the two optimization objectives in a unified framework. Thereafter, a proximal policy optimization approach is proposed to enable each V2V user to learn its policy from the shared global network reward. The effectiveness of the proposed approach is finally validated by comparing its results with those of baseline approaches in extensive simulation experiments.
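The paper's exact reward is not given in the abstract; the following is only a hand-written sketch of how the two delay objectives could be evaluated in a single reward, with the weighting and the V2I penalty chosen arbitrarily for illustration.

```python
def unified_delay_reward(v2v_delays, v2i_ok, alpha=0.5, penalty=-10.0):
    """Illustrative reward trading off mean V2V delay against the worst one.

    v2v_delays: current V2V link delays in seconds.
    v2i_ok: True if all V2I delay requirements are currently satisfied.
    alpha: assumed weight between the mean-delay and max-delay objectives.
    """
    if not v2i_ok:        # V2I requirements treated as a hard constraint
        return penalty
    mean_term = sum(v2v_delays) / len(v2v_delays)
    max_term = max(v2v_delays)
    # Lower delay yields higher (less negative) reward.
    return -(alpha * mean_term + (1.0 - alpha) * max_term)
```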
Abstract: As the number of space targets grows and aerial targets become increasingly dynamic, the problem of observing and locating such targets is becoming ever more important. Because many highly dynamic targets must be observed simultaneously while the constellation's observation resources are limited, the multi-target cooperative observation scheme must be adjusted dynamically so that the resources are used efficiently and every target achieves good positioning accuracy; this requires solving the task-planning problem of cooperative multi-target observation by a constellation. Models of the constellation's attitude and orbit, the targets' flight, and cooperative target detection and positioning are established; a target observation positioning-error prediction model based on the geometric dilution of precision (GDOP) and a target observation priority model are proposed; and a reinforcement-learning-based cooperative observation task-planning framework is built, in which the policy network uses a multi-head self-attention mechanism and is trained with the proximal policy optimization algorithm. Simulations verify that, compared with traditional heuristic methods, the proposed method improves multi-target observation accuracy and effective tracking time, and that it computes faster than a genetic algorithm.
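For reference, GDOP itself can be computed from the observation geometry alone; the sketch below shows the standard position-only form (the function name and the omission of a clock term are simplifications of this sketch, not the paper's error-prediction model).

```python
import numpy as np

def gdop(observer_positions: np.ndarray, target_position: np.ndarray) -> float:
    """Geometric dilution of precision for a set of observing satellites.

    observer_positions: (n, 3) satellite positions; target_position: (3,).
    Rows of H are unit line-of-sight vectors from the target to each observer.
    """
    los = observer_positions - target_position
    h = los / np.linalg.norm(los, axis=1, keepdims=True)
    q = np.linalg.inv(h.T @ h)          # needs at least 3 well-spread observers
    return float(np.sqrt(np.trace(q)))  # smaller GDOP -> better geometry
```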
Abstract: To address two problems of the proximal policy optimization (PPO) algorithm, namely the difficulty of strictly constraining the difference between the old and new policies and the low efficiency of exploration and exploitation, a PPO algorithm based on clipping optimization and policy guidance (COAPG-PPO) is proposed. First, by analyzing PPO's clipping mechanism, a trust-region clipping scheme based on the Wasserstein distance is designed to strengthen the constraint on the difference between the old and new policies. Second, ideas from simulated annealing and greedy algorithms are incorporated into the policy update process to improve exploration efficiency and learning speed. To verify the effectiveness of the proposed algorithm, COAPG-PPO is compared on the MuJoCo benchmark against CO-PPO (PPO based on Clipping Optimization), PPO-CMA (PPO with Covariance Matrix Adaptation), TR-PPO-RB (Trust Region-based PPO with RollBack), and PPO. The experimental results show that COAPG-PPO achieves stricter policy constraints, higher exploration and exploitation efficiency, and higher reward values in most environments.
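For context, the fixed-range clipped surrogate that COAPG-PPO modifies is sketched below; the Wasserstein-distance trust-region clipping and the annealing/greedy guidance are specific to the paper and are not reproduced here.

```python
import torch

def ppo_clip_loss(log_prob_new, log_prob_old, advantage, clip_eps=0.2):
    """Vanilla PPO clipped surrogate loss (to be minimized).

    COAPG-PPO replaces the fixed clip_eps range with a trust region derived
    from the Wasserstein distance between the old and new policies; this
    sketch keeps the standard fixed range and shows only the baseline.
    """
    ratio = torch.exp(log_prob_new - log_prob_old)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -torch.min(unclipped, clipped).mean()
```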
Abstract: To improve the map-free visual navigation ability of mobile robots and raise the navigation success rate, a visual navigation model combining a long short-term memory (LSTM) network and the proximal policy optimization (PPO) algorithm is proposed. First, the model fuses LSTM and PPO as the network for visual navigation. Second, a reward function for training is designed from factors such as the robot's actions, the distance to the goal, and the elapsed motion time. Finally, taking the RGB-D image from the robot's first-person view and the goal point's polar coordinates as input, and the robot's continuous action values as output, the model performs end-to-end map-free visual navigation and can reach new goals never seen during training by inference. Compared with prior algorithms, the model converges faster in the simulation environment, and its navigation success rate is on average 17.7% higher for trained goals and 23.3% higher for new goals, showing good navigation performance.
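The abstract describes the fused LSTM and PPO network only at a high level; the sketch below shows one plausible actor layout, where the convolutional front end, layer sizes, and two-dimensional action are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class LstmNavActor(nn.Module):
    """Illustrative LSTM actor for PPO: RGB-D features and the goal's polar
    coordinates pass through an LSTM that outputs continuous action means."""

    def __init__(self, img_feat_dim=256, hidden_dim=128, action_dim=2):
        super().__init__()
        self.cnn = nn.Sequential(                      # crude RGB-D encoder
            nn.Conv2d(4, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, img_feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(img_feat_dim + 2, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, action_dim)    # mean of continuous actions

    def forward(self, rgbd, goal_polar, hidden=None):
        # rgbd: (B, 4, H, W); goal_polar: (B, 2) distance and bearing to the goal
        feat = torch.cat([self.cnn(rgbd), goal_polar], dim=-1).unsqueeze(1)
        out, hidden = self.lstm(feat, hidden)
        return torch.tanh(self.mu(out.squeeze(1))), hidden
```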
Funding: supported by the Strategic Priority Program on Space Science, Chinese Academy of Sciences.
Abstract: SATech-01 is an experimental satellite for space science exploration and on-orbit demonstration of advanced technologies. The satellite is equipped with 16 experimental payloads and supports multiple working modes to meet the observation requirements of the various payloads. Because of the limitations of the platform's power supply and data storage systems, devising reasonable mission planning schemes to improve the scientific return of the payloads is a critical issue. In this article, we formulate the integrated task scheduling of SATech-01 as a multi-objective optimization problem and propose a novel Fair Integrated Scheduling with Proximal Policy Optimization (FIS-PPO) algorithm to solve it. We use multiple decision heads to generate decisions for each task and design an action mask to ensure the schedule meets the platform constraints. Experimental results show that FIS-PPO pushes the capability of the platform to its limit and improves the overall observation efficiency by 31.5% compared to the rule-based plans currently used. Moreover, fairness is considered in the reward design, and our method achieves much better performance in terms of equal task opportunities. Because of its low computational complexity, the task scheduling algorithm has the potential to be deployed directly on board for real-time task scheduling in future space projects.
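The action mask mentioned in the abstract is a standard mechanism; the sketch below shows, with hypothetical names, how infeasible actions of one decision head are typically removed before sampling (constructing the mask from the power and storage constraints is not shown).

```python
import torch

def masked_action_distribution(logits: torch.Tensor, feasible: torch.Tensor):
    """Suppress infeasible actions by sending their logits to -inf.

    logits:   (num_actions,) raw output of one decision head.
    feasible: (num_actions,) boolean, True where the action satisfies the
              platform constraints at the current step.
    """
    masked_logits = logits.masked_fill(~feasible, float("-inf"))
    return torch.distributions.Categorical(logits=masked_logits)

# Example: only actions 0 and 2 are feasible at this step.
dist = masked_action_distribution(torch.randn(4),
                                  torch.tensor([True, False, True, False]))
action = dist.sample()  # only ever returns 0 or 2
```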