Journal Articles
4 articles found
1. Development of ground control station and path planning for autonomous MUAV
Authors: 洪晔, 房建成. Journal of Harbin Institute of Technology (New Series), EI CAS, 2010, Issue 6, pp. 789-794 (6 pages)
For an autonomous MUAV, the Ground Control Station (GCS), comprising hardware and modular software such as the control, navigation, display and monitoring modules, is a key piece of equipment to be developed. This paper emphasizes the global planning and local replanning algorithms based on a three-dimensional velocity potential field for moving threats. During tests on the ground and in flight, the GCS displayed the remote sensing information precisely and sent control commands in time. The system can be used to assist the MUAV in performing complex autonomous tasks.
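As a rough illustration of the local replanning idea this abstract describes, the sketch below combines an attractive velocity toward the goal with repulsive terms around predicted positions of moving threats in a 3D velocity potential field. All function names, gains, and the prediction horizon are hypothetical; the paper's actual field formulation is not reproduced here.

```python
# Minimal sketch of 3D velocity-potential-field replanning around moving threats
# (illustrative only; gains and the 1 s threat-prediction horizon are assumptions).
import numpy as np

def replanning_velocity(pos, goal, threats, v_max=10.0,
                        k_att=1.0, k_rep=50.0, r_influence=100.0):
    """Return a commanded 3D velocity for the MUAV at `pos`.

    pos, goal : np.ndarray of shape (3,), metres
    threats   : list of (threat_pos, threat_vel) tuples, each of shape (3,)
    """
    # Attractive component: proportional to the vector toward the goal.
    v = k_att * (goal - pos)

    # Repulsive component from each moving threat within its influence radius.
    for t_pos, t_vel in threats:
        predicted = t_pos + 1.0 * t_vel          # look slightly ahead of the threat
        offset = pos - predicted
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < r_influence:
            v += k_rep * (1.0 / dist - 1.0 / r_influence) * offset / dist**3

    # Saturate to the vehicle's maximum speed.
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed
    return v

# Example call with one threat crossing the flight path (all values illustrative).
v_cmd = replanning_velocity(
    pos=np.array([0.0, 0.0, 100.0]),
    goal=np.array([1000.0, 0.0, 120.0]),
    threats=[(np.array([300.0, 50.0, 110.0]), np.array([0.0, -5.0, 0.0]))],
)
```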
Keywords: mini-unmanned aerial vehicle; ground control station; autonomous planning; replanning; malfunction
2. Cooperative path planning for multi-AUV in time-varying ocean flows (cited by 8)
Authors: Mingyong Liu, Baogui Xu, Xingguang Peng. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2016, Issue 3, pp. 612-618 (7 pages)
For low-speed underwater vehicles, ocean currents have a great influence, and the changes in ocean currents are complex and continuous, so their impact must be taken into consideration in path planning. There is still a lack of an authoritative indicator and method for cooperative path planning. Computing the voyage time in a time-varying ocean is a difficult problem; for existing cooperative path-planning methods, the computation time increases exponentially as the number of autonomous underwater vehicles (AUVs) increases, rendering them unfeasible. A collaborative path planning method is presented for multi-AUV under the influence of time-varying ocean currents, based on the dynamic programming algorithm. Each AUV cooperates with the one that has the longest estimated sailing time, enabling the group of AUVs to reach their common goal in the shortest time with minimum time difference, while avoiding obstacles along the way to the target. Simulation results show that the proposed method has promising applicability.
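A minimal sketch of the coordination rule described above: each AUV estimates its own sailing time, and the fleet then plans against the longest estimate so all vehicles arrive with minimum time difference. The straight-line travel-time model and all names below are illustrative assumptions; the paper's dynamic programming search over time-varying currents is not reproduced.

```python
# Illustrative coordination on the slowest AUV's estimated sailing time.
import math

def estimated_time(start, goal, cruise_speed, current_along_path):
    """Straight-line sailing-time estimate with the mean current projected onto
    the path direction (positive = helping, negative = opposing)."""
    distance = math.dist(start, goal)
    effective_speed = max(cruise_speed + current_along_path, 0.1)
    return distance / effective_speed

def common_arrival_time(auvs, goal):
    """auvs: list of dicts with 'start', 'speed', 'current' (projected current, m/s).
    Returns the fleet's common target arrival time: the longest individual estimate."""
    times = [estimated_time(a["start"], goal, a["speed"], a["current"]) for a in auvs]
    return max(times)  # every AUV then plans its path to arrive at this time

fleet = [
    {"start": (0.0, 0.0),     "speed": 2.0, "current": 0.3},
    {"start": (500.0, 200.0), "speed": 2.0, "current": -0.4},
]
print(common_arrival_time(fleet, goal=(3000.0, 1000.0)))
```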
Keywords: dynamic programming; time-varying; cooperative path planning; autonomous underwater vehicle (AUV)
3. Efficient Route Planning for Real-Time Demand-Responsive Transit
Authors: Hongle Li, SeongKi Kim. Computers, Materials & Continua, SCIE EI, 2024, Issue 4, pp. 473-492 (20 pages)
Demand Responsive Transit (DRT) responds to dynamic user requests without any fixed routes or timetables and determines its stops and starts according to demand. This study explores the optimization of dynamic vehicle scheduling and real-time route planning in urban public transportation systems, with a focus on bus services. It addresses the limitations of current shared-mobility routing algorithms, which are primarily designed for simpler, single origin/destination scenarios and do not meet the complex demands of bus transit systems. The research introduces a route planning algorithm designed to dynamically accommodate passenger travel needs and enable real-time route modifications. Unlike traditional methods, this algorithm leverages a queue-based, multi-objective heuristic A* approach, offering a solution to the inflexibility and limited coverage of suburban bus routes. The study also conducts a comparative analysis of the proposed algorithm against solutions based on the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), focusing on calculation time, route length, passenger waiting time, boarding time, and detour rate. The findings demonstrate that the proposed algorithm significantly enhances route planning speed, achieving an 80-100-fold increase in efficiency over existing models, thereby supporting the real-time demands of Demand-Responsive Transportation (DRT) systems. The study concludes that this algorithm not only optimizes route planning in bus transit but also presents a scalable solution for improving urban mobility.
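To make the multi-objective heuristic A* idea concrete, the sketch below scalarizes two objectives (travel time and accumulated passenger waiting) into one edge cost with fixed weights and runs a standard A* over a toy stop network. The weights, graph, and heuristic values are assumptions for illustration; the paper's queue handling and real-time request-insertion logic are not reproduced.

```python
# Illustrative multi-objective A* with a weighted-sum cost over a toy DRT network.
import heapq

def a_star(graph, heuristic, start, goal, w_time=1.0, w_wait=0.5):
    """graph[u] -> list of (v, travel_time, added_passenger_wait).
    heuristic[u] -> admissible lower bound on remaining travel time to goal."""
    frontier = [(heuristic[start], 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, t, wait in graph.get(node, []):
            g_next = g + w_time * t + w_wait * wait
            if g_next < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_next
                heapq.heappush(frontier,
                               (g_next + w_time * heuristic[nxt], g_next, nxt, path + [nxt]))
    return None, float("inf")

# Toy network: stops A..D; each edge carries (travel_time, waiting_added).
graph = {
    "A": [("B", 4, 0), ("C", 2, 1)],
    "B": [("D", 5, 0)],
    "C": [("B", 1, 0), ("D", 8, 2)],
}
heuristic = {"A": 5, "B": 4, "C": 6, "D": 0}
print(a_star(graph, heuristic, "A", "D"))   # -> (['A', 'C', 'B', 'D'], 8.5)
```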
Keywords: autonomous bus route planning; real-time dynamic route planning; path finding; DRT; bus route optimization; sustainable public transport
4. Relevant experience learning: A deep reinforcement learning method for UAV autonomous motion planning in complex unknown environments (cited by 14)
Authors: Zijian HU, Xiaoguang GAO, Kaifang WAN, Yiwei ZHAI, Qianglong WANG. Chinese Journal of Aeronautics, SCIE EI CAS CSCD, 2021, Issue 12, pp. 187-204 (18 pages)
Unmanned Aerial Vehicles (UAVs) play a vital role in military warfare. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Therefore, finding a suitable method to solve the UAV Autonomous Motion Planning (AMP) problem can improve the success rate of UAV missions to a certain extent. In recent years, many studies have used Deep Reinforcement Learning (DRL) methods to address the AMP problem and have achieved good results. From the perspective of sampling, this paper designs a sampling method with double screening, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. The REL-DDPG algorithm uses a Prioritized Experience Replay (PER) mechanism to break the correlation of consecutive experiences in the experience pool, finds the experiences most similar to the current state to learn from, following theories from human education, and expands the influence of the learning process on action selection at the current state. All experiments are conducted in a complex unknown simulation environment constructed based on the parameters of a real UAV. The training experiments show that REL-DDPG improves the convergence speed and the convergence result compared with the state-of-the-art DDPG algorithm, while the testing experiments show the applicability of the algorithm and investigate its performance under different parameter conditions.
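A minimal sketch of the "relevant experience" selection step the abstract describes: from a replay buffer, pick the stored transitions whose states are most similar to the UAV's current state (here, smallest Euclidean distance), so learning is biased toward experiences relevant to the situation at hand. The double screening, PER weighting, and full DDPG update of the paper are not reproduced; the state dimension and all names below are illustrative.

```python
# Illustrative similarity-based experience selection for a DDPG-style learner.
import numpy as np

def select_relevant(buffer_states, current_state, k=32):
    """buffer_states: (N, state_dim) array of stored states.
    Returns indices of the k stored states closest to current_state."""
    dists = np.linalg.norm(buffer_states - current_state, axis=1)
    return np.argsort(dists)[:k]

# Toy usage: a buffer of 1000 random 12-dimensional UAV states (assumed dimension).
rng = np.random.default_rng(0)
buffer_states = rng.normal(size=(1000, 12))
current_state = rng.normal(size=12)
batch_idx = select_relevant(buffer_states, current_state, k=32)
# These indices would then be used to draw (s, a, r, s') tuples for the
# critic/actor updates in the learner.
```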
Keywords: Autonomous Motion Planning (AMP); Deep Deterministic Policy Gradient (DDPG); Deep Reinforcement Learning (DRL); sampling method; UAV