Beamforming is significant for millimeter wave multi-user massive multiple-input multiple-output systems. Meanwhile, the overhead cost of channel state information acquisition and beam training is considerable, especially in dynamic environments. To reduce this overhead, we propose a multi-user beam tracking algorithm using a distributed deep Q-learning method. By learning users' moving trajectories online, the proposed algorithm learns to scan a beam subspace so as to maximize the average effective sum rate. Considering practical implementation, we model the continuous beam tracking problem as a non-Markov decision process and accordingly develop a simplified training scheme of deep Q-learning to reduce the training complexity. Furthermore, we propose a scalable state-action-reward design for scenarios with different numbers of users and antennas. Simulation results verify the effectiveness of the designed method.
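As a rough illustration of the distributed idea, the sketch below gives each user an independent tabular Q-learner over a small beam codebook, with a toy drifting-rate function standing in for user motion. The codebook size, rate model, and per-user table are illustrative assumptions; the paper's actual deep Q-network, state design, and channel model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BEAMS, N_USERS, STEPS = 16, 4, 2000
alpha, gamma, eps = 0.1, 0.9, 0.1

# One independent (distributed) Q-table per user: state = beam tracked in the
# last slot, action = beam to scan next. A deep Q-network would replace the table.
Q = [np.zeros((N_BEAMS, N_BEAMS)) for _ in range(N_USERS)]

def effective_rate(user, beam, t):
    """Toy stand-in for the achieved rate: each user's best beam drifts
    slowly over time, mimicking user motion in a dynamic environment."""
    best = (user * 4 + t // 100) % N_BEAMS
    gain = np.exp(-0.5 * min(abs(beam - best), N_BEAMS - abs(beam - best)))
    return np.log2(1.0 + 10.0 * gain)

state = [0] * N_USERS
for t in range(STEPS):
    for u in range(N_USERS):
        s = state[u]
        a = rng.integers(N_BEAMS) if rng.random() < eps else int(np.argmax(Q[u][s]))
        r = effective_rate(u, a, t)                      # reward: scanned beam's rate
        Q[u][s, a] += alpha * (r + gamma * Q[u][a].max() - Q[u][s, a])
        state[u] = a                                     # tracked beam becomes next state

print("mean learned rate:",
      np.mean([effective_rate(u, int(np.argmax(Q[u][state[u]])), STEPS)
               for u in range(N_USERS)]))
```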
Path planning and obstacle avoidance are two challenging problems in the study of intelligent robots. In this paper, we develop a new method to alleviate these problems based on deep Q-learning with experience replay and heuristic knowledge. In this method, a neural network is used to resolve the "curse of dimensionality" issue of the Q-table in reinforcement learning. When a robot is walking in an unknown environment, it collects experience data that is used for training the neural network; this process is called experience replay. Heuristic knowledge helps the robot avoid blind exploration and provides more effective data for training the neural network. The simulation results show that, in comparison with existing methods, our method converges to an optimal action strategy in less time and explores a path in an unknown environment with fewer steps and a larger average reward.
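A minimal sketch of this mechanism, assuming a toy grid world, a linear Q-approximator in place of the paper's neural network, and a goal-directed rule standing in for the heuristic knowledge:

```python
import random
from collections import deque

import numpy as np

rng = random.Random(0)
GRID = 8
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # east, west, south, north
GOAL = (GRID - 1, GRID - 1)
W = np.zeros((4, 2))         # linear Q-approximation: Q(s) = W @ feats(s)
buf = deque(maxlen=1000)     # experience-replay buffer

def feats(pos):              # normalized (row, col) features of a grid cell
    return np.array(pos, dtype=float) / (GRID - 1)

def heuristic_action(pos):   # heuristic knowledge: step toward the goal
    dr, dc = GOAL[0] - pos[0], GOAL[1] - pos[1]
    if abs(dr) > abs(dc):
        return 2 if dr > 0 else 3      # south / north
    return 0 if dc > 0 else 1          # east / west

for episode in range(300):
    pos = (0, 0)
    for _ in range(100):
        s = feats(pos)
        if rng.random() < 0.2:         # guided rather than blind exploration
            a = heuristic_action(pos)
        else:
            a = int(np.argmax(W @ s))
        npos = (min(max(pos[0] + ACTIONS[a][0], 0), GRID - 1),
                min(max(pos[1] + ACTIONS[a][1], 0), GRID - 1))
        r = 10.0 if npos == GOAL else -0.1
        buf.append((s, a, r, feats(npos), npos == GOAL))
        pos = npos
        # Experience replay: learn from a random mini-batch of past transitions.
        for s_, a_, r_, s2_, done in rng.sample(list(buf), min(16, len(buf))):
            target = r_ if done else r_ + 0.95 * float(np.max(W @ s2_))
            W[a_] += 0.05 * (target - W[a_] @ s_) * s_   # SGD on squared TD error
        if pos == GOAL:
            break

print("greedy action at start:", int(np.argmax(W @ feats((0, 0)))))
```

The heuristic only biases exploration; the value estimates themselves still come from replayed experience, which is the split of roles the abstract describes.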
To support dramatically increased traffic loads, communication networks become ultra-dense. Traditional cell association (CA) schemes are time-consuming, forcing researchers to seek fast schemes. This paper proposes a deep Q-learning based scheme, whose main idea is to train a deep neural network (DNN) to calculate the Q values of all the state-action pairs, and the user is associated with the cell holding the maximum Q value. In the training stage, the intelligent agent continuously generates samples through trial and error to train the DNN until convergence. In the application stage, the state vectors of all the users are input to the trained DNN to quickly obtain a satisfactory CA result for a scenario with the same BS locations and user distribution. Simulations demonstrate that the proposed scheme provides satisfactory CA results in a computational time several orders of magnitude shorter than traditional schemes. Meanwhile, performance metrics such as capacity and fairness can be guaranteed.
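A hedged sketch of the application stage: random weights stand in for a DNN already trained in the trial-and-error stage, and the user state vector is generic. The point it shows is that association reduces to one forward pass plus an argmax per user, which is where the speedup over iterative CA schemes comes from.

```python
import numpy as np

rng = np.random.default_rng(1)
N_USERS, N_CELLS, STATE_DIM, HIDDEN = 50, 8, 6, 32

# Stand-ins for a DNN trained beforehand; random here purely for illustration.
W1, b1 = rng.normal(size=(STATE_DIM, HIDDEN)), np.zeros(HIDDEN)
W2, b2 = rng.normal(size=(HIDDEN, N_CELLS)), np.zeros(N_CELLS)

def q_values(state):
    """One forward pass: state vector in, one Q value per candidate cell out."""
    h = np.maximum(W1.T @ state + b1, 0.0)        # ReLU hidden layer
    return W2.T @ h + b2

# Application stage: a batch of forward passes replaces the slow search.
states = rng.normal(size=(N_USERS, STATE_DIM))    # e.g. geometry, loads (assumed)
association = np.array([int(np.argmax(q_values(s))) for s in states])
print("users per cell:", np.bincount(association, minlength=N_CELLS))
```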
Deep Reinforcement Learning (DRL) is a class of Machine Learning (ML) that combines Deep Learning with Reinforcement Learning and provides a framework by which a system can learn from its previous actions in an environment to select its future actions efficiently. DRL has been used in many application fields, including games, robots, and networks, for creating autonomous systems that improve themselves with experience. It is well acknowledged that DRL is well suited to solving optimization problems in distributed systems in general and network routing in particular. Therefore, a novel query routing approach called Deep Reinforcement Learning based Route Selection (DRLRS) is proposed for unstructured P2P networks based on a Deep Q-Learning algorithm. The main objective of this approach is to achieve better retrieval effectiveness with reduced search cost, i.e., fewer connected peers, fewer exchanged messages, and less time. The simulation results show significantly improved resource search compared with k-Random Walker and Directed BFS: retrieval effectiveness, search cost in terms of connected peers, and average overhead are 1.28, 106, and 149, respectively.
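The sketch below reduces the route-selection idea to its core Q-learning loop under strong assumptions: a single-hop choice of one neighbor per query (real P2P routing is multi-hop), a tabular Q in place of the deep network, and a toy reward that rewards hits and penalizes each contacted peer.

```python
import numpy as np

rng = np.random.default_rng(2)
N_PEERS, N_RES, EPISODES = 20, 5, 5000
alpha, gamma, eps = 0.2, 0.8, 0.1

# Which peers actually hold each resource (hidden from the agent).
holders = {r: set(rng.choice(N_PEERS, size=3, replace=False))
           for r in range(N_RES)}

# Q[r, p]: learned value of forwarding a query for resource r to neighbor p.
Q = np.zeros((N_RES, N_PEERS))

for _ in range(EPISODES):
    r = rng.integers(N_RES)
    p = rng.integers(N_PEERS) if rng.random() < eps else int(np.argmax(Q[r]))
    # Reward favors hits and charges every contacted peer / exchanged message.
    reward = 1.0 if p in holders[r] else -0.1
    Q[r, p] += alpha * (reward + gamma * Q[r].max() - Q[r, p])

hits = sum(int(np.argmax(Q[r])) in holders[r] for r in range(N_RES))
print(f"learned routes hit the resource for {hits}/{N_RES} resource types")
```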
To reduce the transmission latency and mitigate the backhaul burden of centralized cloud-based network services, mobile edge computing (MEC) has recently been drawing increased attention from both industry and academia. This paper focuses on the computation offloading problem of mobile users in wireless cellular networks with mobile edge computing, with the purpose of optimizing the computation offloading decision-making policy. Since wireless network states and computing requests have stochastic properties and the environment's dynamics are unknown, we use the model-free reinforcement learning (RL) framework to formulate and tackle the computation offloading problem. Each mobile user learns through interactions with the environment, estimates its performance in the form of a value function, and then chooses the overhead-aware optimal computation offloading action (local computing or edge computing) based on its state. The state space in our work is high-dimensional, so the value function is impractical to estimate directly. Consequently, we use a deep reinforcement learning algorithm, which combines the RL method Q-learning with a deep neural network (DNN) to approximate the value functions for complicated control applications; the optimal policy is obtained when the value function converges. Simulation results show the effectiveness of the proposed method in comparison with baseline methods in terms of the total overhead of all mobile users.
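A toy version of the per-user decision loop, assuming a coarsely quantized channel state and an illustrative overhead function (a weighted sum of delay and energy). The paper uses a DNN precisely because the real state space is too large for such a table; the sketch only shows the overhead-aware local-vs-edge decision logic.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, gamma, eps = 0.1, 0.9, 0.1
N_CH_STATES = 4                       # quantized wireless channel quality

def overhead(action, ch):
    """Toy weighted sum of delay and energy. action 0 = local, 1 = edge.
    Offloading is cheap when the channel (ch) is good, costly otherwise."""
    if action == 0:
        return 1.0                                   # fixed local computing cost
    tx_delay = 2.0 / (ch + 1)                        # better channel, less delay
    return 0.6 * tx_delay + 0.4 * 0.3                # delay term + edge energy term

Q = np.zeros((N_CH_STATES, 2))
ch = rng.integers(N_CH_STATES)
for _ in range(10000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[ch]))
    r = -overhead(a, ch)                             # minimize total overhead
    ch2 = rng.integers(N_CH_STATES)                  # stochastic channel evolution
    Q[ch, a] += alpha * (r + gamma * Q[ch2].max() - Q[ch, a])
    ch = ch2

for ch in range(N_CH_STATES):
    print(f"channel state {ch}: decision -> {'edge' if np.argmax(Q[ch]) else 'local'}")
```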
Currently, edge Artificial Intelligence (AI) systems have significantly facilitated the functionalities of intelligent devices such as smartphones and smart cars, and supported diverse applications and services. This fundamental support comes from continuous data analysis and computation on these devices. Considering the resource constraints of terminal devices, multi-layer edge AI systems improve the overall computing power of the system by scheduling computing tasks to edge and cloud servers for execution. Previous efforts tend to ignore the strongly pipelined nature of processing tasks in edge AI systems, such as the encryption, decryption, and consensus algorithms underpinning Blockchain techniques. Therefore, this paper proposes a new pipelined task scheduling algorithm (referred to as PTS-RDQN), which utilizes the system representation ability of deep reinforcement learning and integrates multi-dimensional information to achieve global task scheduling. Specifically, a co-optimization strategy based on Rainbow Deep Q-Learning (RainbowDQN) is proposed to allocate computation tasks across mobile devices, edge servers, and cloud servers; it comprehensively considers the balance of task turnaround time, link quality, and other factors, thus effectively improving system performance and user experience. In addition, a task scheduling strategy based on PTS-RDQN is proposed, which realizes dynamic task allocation according to device load. Results from extensive simulation experiments show that the proposed method can effectively improve resource utilization and provides an effective task scheduling strategy for edge computing systems with a cloud-edge-end architecture.
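RainbowDQN combines several DQN refinements (double and dueling Q, prioritized replay, distributional heads, noisy networks), none of which are reproduced here. The sketch below keeps only the scheduling decision itself: plain tabular Q-learning placing each pipeline stage on a device, edge, or cloud tier under an assumed toy turnaround model that also reacts to device load.

```python
import numpy as np

rng = np.random.default_rng(4)
TIERS = ["device", "edge", "cloud"]          # candidate execution locations
N_STAGES = 3                                 # e.g. encrypt -> consensus -> decrypt
alpha, gamma, eps = 0.1, 0.9, 0.1

def stage_time(stage, tier, load):
    """Toy turnaround model: compute time falls toward the cloud, link delay
    rises toward it, and heavy device load stretches local execution."""
    compute = [1.5, 0.6, 0.3][tier] * (1 + stage * 0.2)
    link = [0.0, 0.4, 0.9][tier]
    return compute * (1 + load if tier == 0 else 1) + link

# Q[stage, load_level, tier]: value of running a pipeline stage on a tier.
Q = np.zeros((N_STAGES, 2, 3))
for _ in range(20000):
    load = rng.integers(2)                    # 0 = light, 1 = heavy device load
    for stage in range(N_STAGES):
        a = rng.integers(3) if rng.random() < eps else int(np.argmax(Q[stage, load]))
        r = -stage_time(stage, a, load)       # reward: negative turnaround time
        nxt = Q[stage + 1, load].max() if stage + 1 < N_STAGES else 0.0
        Q[stage, load, a] += alpha * (r + gamma * nxt - Q[stage, load, a])

for load in range(2):
    plan = [TIERS[int(np.argmax(Q[s, load]))] for s in range(N_STAGES)]
    print(f"{'heavy' if load else 'light'} load plan:", plan)
```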
To address the security problems in wireless sensor networks, a Q-learning based trust management mechanism for clustered wireless sensor networks (QLTMM-CWSN) is proposed. The mechanism considers three aspects: communication trust, data trust, and energy trust. During network operation, node trust values are updated with the Q-Learning algorithm based on each node's communication behavior, data distribution, and energy consumption, and the node with the highest trust value within a cluster is selected as the trusted cluster-head node. When the trust value of the primary cluster head falls below a threshold, the trusted cluster head takes over managing the member nodes of the cluster and maintains normal data transmission. The results show that QLTMM-CWSN effectively resists communication attacks, forged-local-data attacks, energy attacks, and hybrid attacks.
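A minimal sketch of the trust-update idea, under loud assumptions: the three trust dimensions are equally weighted, and a stateless Q-style exponential update moves each node's trust toward the latest observed evidence. The paper's exact state-action formulation and attack models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
N_NODES, ROUNDS = 10, 500
alpha, THRESHOLD = 0.1, 0.5
malicious = {3, 7}                       # assumed misbehaving nodes

# Per-node trust as a Q-style running estimate built from three evidence sources.
trust = np.full(N_NODES, 0.5)

for _ in range(ROUNDS):
    for n in range(N_NODES):
        ok = rng.random() > (0.7 if n in malicious else 0.1)
        comm = 1.0 if ok else 0.0        # communication behavior observed
        data = 1.0 if ok else 0.0        # data consistent with neighbors
        energy = 1.0 if ok else 0.3      # plausible energy consumption
        evidence = (comm + data + energy) / 3.0
        # Q-learning-style update toward the latest evidence (no next-state term).
        trust[n] += alpha * (evidence - trust[n])

head = int(np.argmax(trust))             # trusted cluster head candidate
print("trust:", np.round(trust, 2))
print("cluster head:", head, "| below threshold:",
      [n for n in range(N_NODES) if trust[n] < THRESHOLD])
```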
The flow shop scheduling problem is important for the manufacturing industry. Effective flow shop scheduling can bring great benefits to the industry. However, there is little research on Distributed Hybrid Flow Shop Problems (DHFSP) using learning-assisted meta-heuristics. This work addresses a DHFSP with the objective of minimizing the maximum completion time (makespan). First, a mathematical model is developed for the concerned DHFSP. Second, four Q-learning-assisted meta-heuristics are proposed: genetic algorithm (GA), artificial bee colony algorithm (ABC), particle swarm optimization (PSO), and differential evolution (DE). According to the nature of the DHFSP, six local search operations are designed for finding high-quality solutions in the local space. Instead of random selection, Q-learning assists the meta-heuristics in choosing the appropriate local search operations during iterations. Finally, comprehensive numerical experiments on 60 cases are conducted to assess the effectiveness of the proposed algorithms. The experimental results and discussions prove that using Q-learning to select appropriate local search operations is more effective than the random strategy. To verify the competitiveness of the Q-learning-assisted meta-heuristics, they are compared with the improved iterated greedy algorithm (IIG), which also solves the DHFSP. The Friedman test is executed on the results of the five algorithms. It is concluded that all four Q-learning-assisted meta-heuristics perform better than IIG, and the Q-learning-assisted PSO shows the best competitiveness.
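The operator-selection idea can be sketched compactly: a tabular Q-learner picks among the local search operators and is rewarded by the improvement each application yields. The operators, the permutation encoding, and the makespan surrogate below are illustrative stand-ins, not the paper's DHFSP model or its six operators.

```python
import numpy as np

rng = np.random.default_rng(6)
N_OPS, ITERS = 6, 2000                 # six local search operators
alpha, gamma, eps = 0.1, 0.8, 0.2

def local_search(op, solution):
    """Toy local search: operators perturb a permutation in different ways.
    Real operators would target jobs, factories, or stages of the DHFSP."""
    s = solution.copy()
    i, j = sorted(rng.choice(len(s), size=2, replace=False))
    if op % 3 == 0:
        s[i], s[j] = s[j], s[i]                       # swap two positions
    elif op % 3 == 1:
        s[i:j + 1] = s[i:j + 1][::-1].copy()          # reverse a segment
    else:
        s = np.roll(s, 1)                             # rotate the sequence
    return s

def makespan(s):                       # toy objective to minimize
    return float(np.sum(np.abs(np.diff(s))))

solution = rng.permutation(20)
state, Q = 0, np.zeros((2, N_OPS))     # state: did the last move improve?
for _ in range(ITERS):
    op = rng.integers(N_OPS) if rng.random() < eps else int(np.argmax(Q[state]))
    cand = local_search(op, solution)
    delta = makespan(solution) - makespan(cand)
    reward = max(delta, 0.0)           # reward operators that improve the objective
    nxt = 1 if delta > 0 else 0
    Q[state, op] += alpha * (reward + gamma * Q[nxt].max() - Q[state, op])
    if delta > 0:
        solution = cand
    state = nxt

print("best makespan:", makespan(solution), "| operator values:", np.round(Q, 2))
```

The same Q-table mechanics slot into any of the four meta-heuristics: wherever the algorithm would pick a local search operation at random, it consults and updates the table instead.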
Intelligent traffic control requires accurate estimation of the road states and the incorporation of adaptive or dynamically adjusted intelligent algorithms for decision making. In this article, these issues are handled by proposing a novel framework for traffic control using vehicular communications and Internet of Things data. The framework integrates Kalman filtering and Q-learning. Unlike a smoothing Kalman filter, our data fusion Kalman filter incorporates a process-aware model, which makes it superior in terms of prediction error. Unlike traditional Q-learning, our Q-learning algorithm enables adaptive state quantization by changing the threshold separating low traffic from high traffic on a road according to the maximum number of vehicles on the junction roads. For evaluation, the model has been simulated on a single intersection consisting of four roads: east, west, north, and south. A comparison of the developed adaptive quantized Q-learning (AQQL) framework with state-of-the-art and greedy approaches shows the superiority of AQQL: in terms of the number of released vehicles, AQQL improves on the greedy approach by 5% and on the state-of-the-art approach by 340%. Hence, AQQL provides effective traffic control that can be applied in today's intelligent traffic systems.
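A hedged sketch of the adaptive quantization idea, with the Kalman-filter fusion omitted (raw vehicle counts stand in for the fused estimate) and an assumed toy arrival/service model for the four-road intersection:

```python
import numpy as np

rng = np.random.default_rng(7)
ROADS = 4                                 # east, west, north, south
alpha, gamma, eps = 0.1, 0.9, 0.1

def quantize(counts, max_seen):
    """Adaptive quantization: the low/high threshold scales with the maximum
    vehicle count observed on the junction roads, instead of being fixed."""
    thr = max(1.0, 0.5 * max_seen)
    return tuple(int(c > thr) for c in counts)    # 0 = low, 1 = high traffic

# Q over (quantized 4-road state) x (which road pair gets green: EW or NS).
Q = np.zeros((2,) * ROADS + (2,))
counts, max_seen = np.zeros(ROADS), 1.0

for _ in range(20000):
    counts += rng.poisson([2, 2, 1, 1])           # arrivals per road (toy rates)
    max_seen = max(max_seen, counts.max())
    s = quantize(counts, max_seen)
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    served = [0, 1] if a == 0 else [2, 3]         # green for EW or NS pair
    released = float(np.minimum(counts[served], 3).sum())
    counts[served] = np.maximum(counts[served] - 3, 0)
    s2 = quantize(counts, max_seen)
    # Reward is the number of released vehicles, the paper's evaluation metric.
    Q[s + (a,)] += alpha * (released + gamma * Q[s2].max() - Q[s + (a,)])

print("high-EW/low-NS state -> green:",
      "EW" if np.argmax(Q[1, 1, 0, 0]) == 0 else "NS")
```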