Fund: Project supported by the National Natural Science Foundation of China (No. 61501399) and the National Key R&D Program of China (No. 2018AAA0102302).
Abstract: In dense-traffic unmanned aerial vehicle (UAV) ad-hoc networks, traffic congestion can cause increased delay and packet loss, which limit network performance; therefore, a traffic balancing strategy is required to control the traffic. In this study, we propose TQNGPSR, a traffic-aware Q-network enhanced geographic routing protocol based on greedy perimeter stateless routing (GPSR), for UAV ad-hoc networks. The protocol enforces a traffic balancing strategy using the congestion information of neighbors, and evaluates the quality of a wireless link with the Q-network algorithm, a reinforcement learning algorithm. Based on the evaluation of each wireless link, the protocol makes routing decisions among multiple available choices to reduce delay and packet loss. We simulate the performance of TQNGPSR and compare it with AODV, OLSR, GPSR, and QNGPSR. Simulation results show that TQNGPSR achieves higher packet delivery ratios and lower end-to-end delays than GPSR and QNGPSR. In high node density scenarios, it also outperforms AODV and OLSR in terms of packet delivery ratio, end-to-end delay, and throughput.
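To make the idea behind the abstract concrete, the following Python sketch illustrates how greedy geographic forwarding (as in GPSR) might be combined with a learned Q-value per (neighbor, destination) pair and neighbor congestion information advertised in beacons. This is not the authors' implementation: the scoring weights, Q-table structure, reward shaping, and parameter values are illustrative assumptions only, and a real protocol would also include GPSR's perimeter-mode fallback.

```python
# Minimal sketch (not the authors' implementation) of traffic-aware,
# Q-value-based next-hop selection in the spirit of TQNGPSR.
# Hypothetical assumptions: each node keeps a Q-value per
# (neighbor, destination) pair, neighbors advertise their queue
# occupancy in periodic beacons, and the forwarding score blends
# geographic progress with the learned Q-value, penalized by congestion.

import math
import random
from dataclasses import dataclass, field


@dataclass
class Neighbor:
    node_id: int
    position: tuple          # (x, y) from the latest beacon
    queue_occupancy: float   # 0.0 (idle) .. 1.0 (buffer full), from the beacon


@dataclass
class QTable:
    """Q(neighbor, destination): learned estimate of link/route quality."""
    alpha: float = 0.3       # learning rate (illustrative value)
    gamma: float = 0.8       # discount factor (illustrative value)
    q: dict = field(default_factory=dict)

    def value(self, neighbor_id, dest_id):
        return self.q.get((neighbor_id, dest_id), 0.5)  # default estimate

    def update(self, neighbor_id, dest_id, reward, next_best_q):
        # One-step Q-learning update driven by per-packet feedback
        # (e.g. ACK received, delay observed); reward shaping is illustrative.
        key = (neighbor_id, dest_id)
        old = self.value(neighbor_id, dest_id)
        self.q[key] = old + self.alpha * (reward + self.gamma * next_best_q - old)


def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def select_next_hop(my_pos, dest_id, dest_pos, neighbors, qtable,
                    w_progress=0.4, w_q=0.4, w_traffic=0.2, epsilon=0.1):
    """Pick a next hop among neighbors that make geographic progress,
    scoring each by progress, learned Q-value, and (inverse) congestion.
    Weights and epsilon-greedy exploration are illustrative choices."""
    d_self = distance(my_pos, dest_pos)
    candidates = [n for n in neighbors if distance(n.position, dest_pos) < d_self]
    if not candidates:
        return None  # a real protocol would fall back to perimeter routing here

    if random.random() < epsilon:
        return random.choice(candidates)  # occasional exploration

    def score(n):
        progress = (d_self - distance(n.position, dest_pos)) / d_self
        return (w_progress * progress
                + w_q * qtable.value(n.node_id, dest_id)
                + w_traffic * (1.0 - n.queue_occupancy))

    return max(candidates, key=score)


if __name__ == "__main__":
    # Usage example with made-up positions and congestion levels.
    qt = QTable()
    nbrs = [Neighbor(1, (40, 10), 0.9),   # more progress but heavily congested
            Neighbor(2, (35, 20), 0.1)]   # slightly less progress, lightly loaded
    hop = select_next_hop((10, 10), dest_id=99, dest_pos=(100, 10),
                          neighbors=nbrs, qtable=qt)
    print("chosen next hop:", hop.node_id if hop else None)
```

In this toy example the congested neighbor is skipped in favor of a lightly loaded one with slightly less geographic progress, which mirrors the traffic-balancing intent described in the abstract; the exact decision rule used by TQNGPSR is defined in the paper itself.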