Abstract: Q-learning is a popular temporal-difference reinforcement learning algorithm that often stores state values explicitly in lookup tables. This tabular implementation has been proven to converge to the optimal solution, but it is often beneficial to use a function-approximation system, such as a deep neural network, to estimate state values. It has previously been observed that Q-learning can be unstable when using value function approximation or when operating in a stochastic environment, and this instability can adversely affect the algorithm's ability to maximize its returns. In this paper, we present a new algorithm called Multi Q-learning that attempts to overcome the instability seen in Q-learning. We test our algorithm on a 4 × 4 grid-world with different stochastic reward functions using various deep neural networks and convolutional networks. Our results show that in most cases Multi Q-learning outperforms Q-learning, achieving average returns up to 2.5 times higher than Q-learning and a standard deviation of state values as low as 0.58.
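The Multi Q-learning update rule itself is not reproduced above; as a minimal sketch of the underlying idea of maintaining several value estimators to damp instability, the following assumes a small tabular setting (a 4 x 4 grid-world with four actions), keeps several independent Q-tables, updates one randomly chosen table per step, and acts epsilon-greedily on their average. The hyperparameters and environment interface are illustrative assumptions, not the authors' configuration.

import random
import numpy as np

# Assumed multi-estimator variant of tabular Q-learning: several Q-tables,
# one updated per step, actions chosen from their average.
N_TABLES, N_STATES, N_ACTIONS = 4, 16, 4   # e.g. a 4 x 4 grid-world with 4 moves
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

q_tables = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_TABLES)]

def select_action(state):
    # Epsilon-greedy action on the average of all Q-tables.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    avg_q = np.mean([q[state] for q in q_tables], axis=0)
    return int(np.argmax(avg_q))

def update(state, action, reward, next_state):
    # Update one randomly chosen table, bootstrapping from the averaged estimate.
    q = random.choice(q_tables)
    avg_next = np.mean([t[next_state] for t in q_tables], axis=0)
    target = reward + GAMMA * np.max(avg_next)
    q[state, action] += ALPHA * (target - q[state, action])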
Funding: Supported by the Natural Science Foundation of Anhui Province (Grant Number 2208085MG181), the Science Research Project of Higher Education Institutions in Anhui Province, Philosophy and Social Sciences (Grant Number 2023AH051063), and the Open Fund of Key Laboratory of Anhui Higher Education Institutes (Grant Number CS2021-ZD01).
Abstract: The distributed flexible job shop scheduling problem (DFJSP) has attracted great attention with the growth of the global manufacturing industry. General DFJSP research considers only machine constraints and ignores worker constraints. As one critical factor of production, effective utilization of worker resources can increase productivity. Meanwhile, energy consumption is a growing concern due to increasingly serious environmental issues. Therefore, this paper studies the distributed flexible job shop scheduling problem with dual resource constraints (DFJSP-DRC), minimizing makespan and total energy consumption. To solve the problem, we present a multi-objective mathematical model for DFJSP-DRC and propose a Q-learning-based multi-objective grey wolf optimizer (Q-MOGWO). In Q-MOGWO, high-quality initial solutions are generated by a hybrid initialization strategy, and an improved active decoding strategy is designed to obtain the scheduling schemes. To further enhance the local search capability and expand the solution space, two wolf predation strategies and three critical-factory neighborhood structures based on Q-learning are proposed. These strategies and structures enable Q-MOGWO to explore the solution space more efficiently and thus find better Pareto solutions. The effectiveness of Q-MOGWO in addressing DFJSP-DRC is verified through comparison with four algorithms on 45 instances. The results reveal that Q-MOGWO outperforms the comparison algorithms in terms of solution quality.
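Because makespan and total energy consumption are minimized jointly, candidate schedules in Q-MOGWO are compared by Pareto dominance rather than by a single score. The helper below is a generic sketch of that comparison for two-objective vectors; the function names and example numbers are illustrative, not taken from the paper.

def dominates(a, b):
    # True if objective vector a = (makespan, energy) Pareto-dominates b:
    # no worse in every objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    # Keep only the non-dominated (makespan, energy) vectors of a population.
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Example: three candidate schedules evaluated as (makespan, total energy consumption).
print(pareto_front([(120, 50.0), (110, 55.0), (130, 60.0)]))  # drops the dominated (130, 60.0)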
Abstract: To address the security problems in wireless sensor networks, a Q-learning based trust management mechanism for clustered wireless sensor networks (QLTMM-CWSN) is proposed. The mechanism considers three aspects of trust: communication trust, data trust, and energy trust. During network operation, the Q-learning algorithm updates each node's trust value based on its communication behavior, data distribution, and energy consumption, and the node with the highest trust value within a cluster is selected as the trusted cluster-head node. When the trust value of the primary cluster-head node falls below a threshold, the trusted cluster-head node takes over management of the cluster member nodes and maintains normal data transmission. The results show that the QLTMM-CWSN mechanism can effectively resist communication attacks, forged local data attacks, energy attacks, and hybrid attacks.
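The abstract does not give the concrete reward shaping behind the trust update, so the following is only an assumed sketch of how a Q-learning-style value update could drive a node's trust score from its communication, data, and energy behaviour, with the cluster head chosen as the most trusted node.

ALPHA = 0.2  # learning rate (assumed)

def trust_reward(comm_ok_ratio, data_consistency, energy_plausibility, w=(0.4, 0.4, 0.2)):
    # Combine communication, data, and energy evidence into a reward in [0, 1] (weights assumed).
    return w[0] * comm_ok_ratio + w[1] * data_consistency + w[2] * energy_plausibility

def update_trust(trust, comm_ok_ratio, data_consistency, energy_plausibility):
    # Q-learning-style incremental update: move trust toward the observed reward.
    r = trust_reward(comm_ok_ratio, data_consistency, energy_plausibility)
    return trust + ALPHA * (r - trust)

# The node with the highest trust value in a cluster is selected as the trusted cluster head.
cluster_trust = {"n1": 0.7, "n2": 0.9, "n3": 0.4}
trusted_head = max(cluster_trust, key=cluster_trust.get)  # "n2"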
Funding: Partially supported by the Guangdong Basic and Applied Basic Research Foundation (2023A1515011531), the National Natural Science Foundation of China under Grant 62173356, the Science and Technology Development Fund (FDCT), Macao SAR, under Grant 0019/2021/A, the Zhuhai Industry-University-Research Project with Hong Kong and Macao under Grant ZH22017002210014PWC, and the Key Technologies for Scheduling and Optimization of Complex Distributed Manufacturing Systems (22JR10KA007).
Abstract: The flow shop scheduling problem is important for the manufacturing industry, and effective flow shop scheduling can bring great benefits. However, there is little research on Distributed Hybrid Flow Shop Problems (DHFSP) solved by learning-assisted meta-heuristics. This work addresses a DHFSP with the objective of minimizing the maximum completion time (makespan). First, a mathematical model is developed for the concerned DHFSP. Second, four Q-learning-assisted meta-heuristics are proposed, based on the genetic algorithm (GA), artificial bee colony algorithm (ABC), particle swarm optimization (PSO), and differential evolution (DE). According to the nature of DHFSP, six local search operations are designed for finding high-quality solutions in the local space. Instead of random selection, Q-learning assists the meta-heuristics in choosing appropriate local search operations during iterations. Finally, comprehensive numerical experiments on 60 cases are conducted to assess the effectiveness of the proposed algorithms. The experimental results and discussions show that using Q-learning to select appropriate local search operations is more effective than the random strategy. To verify the competitiveness of the Q-learning-assisted meta-heuristics, they are compared with the improved iterated greedy algorithm (IIG), which also targets DHFSP. The Friedman test is applied to the results of the five algorithms. It is concluded that the four Q-learning-assisted meta-heuristics perform better than IIG, and the Q-learning-assisted PSO shows the best competitiveness.
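The operator-selection mechanism shared by the four meta-heuristics can be pictured as a small Q-table over the six local search operations: after an operation is applied, its Q-value is reinforced according to whether it improved the incumbent solution. The single-state simplification, 0/1 reward, and parameter values below are illustrative assumptions rather than the paper's exact design.

import random

N_OPS = 6                 # six local search operations designed for DHFSP
EPS, ALPHA, GAMMA = 0.1, 0.1, 0.9
q = [0.0] * N_OPS         # single-state Q-table over the operations (assumed simplification)

def choose_operation():
    # Epsilon-greedy choice of a local search operation instead of purely random selection.
    if random.random() < EPS:
        return random.randrange(N_OPS)
    return max(range(N_OPS), key=lambda k: q[k])

def feedback(op, old_makespan, new_makespan):
    # Reinforce an operation when it reduced the makespan (reward shaping is assumed).
    reward = 1.0 if new_makespan < old_makespan else 0.0
    q[op] += ALPHA * (reward + GAMMA * max(q) - q[op])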
Abstract: Intelligent traffic control requires accurate estimation of road states and the incorporation of adaptive or dynamically adjusted intelligent algorithms for decision making. In this article, these issues are handled by proposing a novel framework for traffic control using vehicular communications and Internet of Things data. The framework integrates Kalman filtering and Q-learning. Unlike a smoothing Kalman filter, our data fusion Kalman filter incorporates a process-aware model, which makes it superior in terms of prediction error. Unlike traditional Q-learning, our Q-learning algorithm enables adaptive state quantization by changing the threshold separating low traffic from high traffic on a road according to the maximum number of vehicles on the junction roads. For evaluation, the model has been simulated on a single intersection consisting of four roads: east, west, north, and south. A comparison of the developed adaptive quantized Q-learning (AQQL) framework with state-of-the-art and greedy approaches shows the superiority of AQQL: in terms of the number of released vehicles, AQQL improves on the greedy approach by 5% and on the state-of-the-art approach by 340%. Hence, AQQL provides effective traffic control that can be applied in today's intelligent traffic systems.
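The adaptive state quantization can be illustrated as follows: per-road vehicle counts are mapped to a discrete low/high traffic state, with the low/high threshold rescaled from the current maximum number of vehicles on the junction roads. The 0.5 scaling fraction and the binary encoding are assumptions for illustration; the abstract does not specify AQQL's exact rule.

def adaptive_threshold(max_vehicles_on_junction_roads, fraction=0.5):
    # Threshold separating low from high traffic, rescaled from the current maximum
    # number of vehicles on the junction roads (the 0.5 fraction is an assumption).
    return fraction * max_vehicles_on_junction_roads

def quantize_state(counts):
    # Map per-road vehicle counts (east, west, north, south) to a discrete Q-learning state.
    thr = adaptive_threshold(max(counts))
    return tuple(int(c >= thr) for c in counts)   # 0 = low traffic, 1 = high traffic

print(quantize_state((12, 3, 8, 1)))  # (1, 0, 1, 0) with threshold 6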
Funding: Supported by the Technology Project of China Southern Power Grid Digital Grid Research Institute Corporation, Ltd. (670000KK52220003) and the National Key R&D Program of China (2020YFB0906000).
Abstract: The stability problem of power grids has become increasingly serious in recent years as the size of novel power systems increases. To improve and ensure the stable operation of the novel power system, this study proposes an artificial emotional lazy Q-learning method, which combines artificial emotion, lazy learning, and reinforcement learning for static security and stability analysis of power systems. The study compares the analysis results of the proposed method with those of the small-disturbance method for a stand-alone power system, and verifies that the proposed lazy Q-learning method can effectively screen useful data for learning and improves the static security and stability of the new type of power system more effectively than traditional proportional-integral-derivative (PID) control and Q-learning methods.
Abstract: Beamforming is significant for millimeter-wave multi-user massive multi-input multi-output (MIMO) systems. Meanwhile, the overhead cost of channel state information acquisition and beam training is considerable, especially in dynamic environments. To reduce this overhead, we propose a multi-user beam tracking algorithm using a distributed deep Q-learning method. By learning users' moving trajectories online, the proposed algorithm learns to scan a beam subspace so as to maximize the average effective sum rate. Considering practical implementation, we model the continuous beam tracking problem as a non-Markov decision process and thus develop a simplified training scheme for deep Q-learning to reduce the training complexity. Furthermore, we propose a scalable state-action-reward design for scenarios with different numbers of users and antennas. Simulation results verify the effectiveness of the designed method.
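A compact way to picture per-user deep Q-learning over a beam codebook is a small network that maps a recent beam/measurement history to Q-values over candidate beams and is trained toward the observed effective rate. The network size, state content, reward, and the standard bootstrapped target below are generic placeholders, not the simplified training scheme or the scalable state-action-reward design of the paper.

import torch
import torch.nn as nn

N_BEAMS, STATE_DIM = 16, 8       # codebook size and per-user state length (assumed)

class BeamQNet(nn.Module):
    # Maps a user's recent beam/measurement history to Q-values over candidate beams.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_BEAMS))

    def forward(self, state):
        return self.net(state)

qnet = BeamQNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def train_step(state, beam, rate, next_state, gamma=0.9):
    # One Q-learning step: target = observed effective rate + discounted best next value.
    with torch.no_grad():
        target = rate + gamma * qnet(next_state).max()
    pred = qnet(state)[beam]
    loss = (pred - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()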
Abstract: We present the development of a bias-compensating reinforcement learning (RL) algorithm that optimizes thermal comfort (by minimizing tracking error) and control utilization (by penalizing setpoint deviations) in a multi-zone heating, ventilation, and air-conditioning (HVAC) lab facility subject to unmeasurable disturbances and unknown dynamics. It is shown that the presence of an unmeasurable disturbance results in an inconsistent learning equation in traditional RL controllers, leading to parameter estimation bias (even with integral action support) and, in the extreme case, divergence of the learning algorithm. We demonstrate this issue by applying the popular Q-learning algorithm to linear quadratic regulation (LQR) of a multi-zone HVAC environment and showing that, even with integral support, the algorithm exhibits estimation bias during the learning phase when the HVAC disturbance is unmeasurable due to unknown heat gains, occupancy variations, light sources, and outside weather changes. To address this difficulty, we present a bias-compensating learning equation that learns a lumped bias term arising from disturbances (and possibly other sources) in conjunction with the optimal control parameters. Experimental results show that the proposed scheme not only recovers the bias-free optimal control parameters but also does so without explicitly learning the dynamic model or estimating the disturbances, demonstrating the effectiveness of the algorithm in addressing the above challenges.
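One common way to realize the idea of learning a lumped bias term alongside the controller is to append a constant feature to the state-action vector used by the quadratic Q-function, so that the least-squares fit can absorb the disturbance offset. The sketch below shows only that feature augmentation for a generic discrete-time problem; the dimensions are placeholders and the form is an assumption, not the paper's exact learning equation.

import numpy as np

def phi(x, u):
    # Quadratic features of the Q-function built from the augmented vector z = [x; u; 1];
    # the trailing 1 lets a least-squares fit absorb a lumped bias term caused by
    # unmeasured disturbances (assumed form of the compensation).
    z = np.concatenate([x, u, [1.0]])
    idx = np.triu_indices(len(z))   # upper-triangular terms of z z' parameterize Q(x, u) = z' P z
    return np.outer(z, z)[idx]

def q_value(theta, x, u):
    # Q(x, u) as a linear function of the quadratic features.
    return float(theta @ phi(x, u))

# Placeholder dimensions for a small multi-zone temperature-control problem.
n_x, n_u = 3, 2
theta = np.zeros_like(phi(np.zeros(n_x), np.zeros(n_u)))
print(q_value(theta, np.ones(n_x), np.zeros(n_u)))  # 0.0 before any learning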