Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62173121, 62002095, 61961019, and 61803139) and the Youth Key Project of the Natural Science Foundation of Jiangxi Province of China (Grant No. 20202ACBL212003).
Abstract: We investigate the fixed-time containment control (FCC) problem of multi-agent systems (MASs) under discontinuous communication. A saturation function is used in the controller to achieve containment control in MASs. Unlike a sign function, it avoids differentiating a discontinuous function, which further ensures the continuity of the control input. To cope with the discontinuous communication, a dynamic variable is constructed that remains non-negative between any two communications of an agent. Based on this variable, a dynamic event-triggered algorithm is proposed to achieve FCC, which effectively reduces controller updates. In addition, we design a new event-triggered algorithm for FCC, called the team-trigger mechanism, which combines the self-triggering technique with the proposed dynamic event-trigger mechanism. It converges faster than the proposed dynamic event-triggering technique and achieves a tradeoff among communication cost, convergence time, and the number of triggers in MASs. Finally, Zeno behavior is excluded, and the validity of the proposed theory is confirmed by simulation.
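The abstract gives no explicit controller equations; as a rough illustration (the threshold parameters, the trigger inequality, and the variable names below are assumptions, not the paper's design), the sketch contrasts a continuous saturation term with the discontinuous sign term and shows a generic dynamic event-trigger test driven by a non-negative internal variable.

```python
import numpy as np

def sat(x, delta=0.1):
    """Saturation function: linear inside |x| <= delta, +/-1 outside.
    Using it instead of sign(x) keeps the control term continuous."""
    return np.clip(x / delta, -1.0, 1.0)

def dynamic_trigger(error, eta, theta=0.5, sigma=0.2):
    """Generic dynamic event-trigger test (illustrative only): an event fires
    when the measurement error outweighs a static threshold plus a share of
    the internal dynamic variable eta, which stays non-negative by design."""
    return error ** 2 > sigma + eta / theta

eta = 0.05      # internal dynamic variable of one agent
error = 0.6     # norm of the measurement error since the last broadcast
if dynamic_trigger(error, eta):
    print("trigger: broadcast state, control term =", sat(error))
```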
Funding: the National Natural Science Foundation of China (Grant No. 71961003).
Abstract: In public goods games, punishments and rewards have been shown to be effective mechanisms for maintaining individual cooperation. However, punishments and rewards are costly means of incentivizing cooperation, so how such costly penalties and rewards are generated has been a difficult problem in promoting the development of cooperation. In real society, specialized institutions exist that collect taxes to punish bad actors or reward good ones. Motivated by this phenomenon, we propose a tax-based strong altruistic punishment or reward strategy in the public goods game. Through theoretical analysis and numerical calculation, we find that tax-based strong altruistic punishment (reward) has a greater evolutionary advantage than traditional strong altruistic punishment (reward) in maintaining cooperation, and that tax-based strong altruistic reward leads to a higher level of cooperation than tax-based strong altruistic punishment.
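As a toy illustration of the setting (the multiplication factor, contribution cost, tax rate, and fine below are assumed values, not the paper's), the following sketch computes per-round payoffs in a public goods game where a tax-funded institution fines defectors:

```python
# Minimal payoff sketch for a public goods game with a tax-funded punishment
# institution. All parameters (r, cost, tax, fine) are illustrative assumptions.
def payoffs(n_cooperators, n_defectors, r=3.0, cost=1.0, tax=0.1, fine=0.6):
    n = n_cooperators + n_defectors
    pool = r * cost * n_cooperators / n          # shared public good per player
    coop_payoff = pool - cost - tax              # every player pays the tax
    defect_payoff = pool - tax - fine            # institution fines defectors
    return coop_payoff, defect_payoff

print(payoffs(n_cooperators=3, n_defectors=2))
```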
Funding: supported by the National Natural Science Foundation of China (Grant No. 71973001).
Abstract: To explore the green development of automobile enterprises and promote the achievement of the “dual carbon” target, this study, based on bounded rationality assumptions, constructed a tripartite evolutionary game model of government, commercial banks, and automobile enterprises; introduced a dynamic reward and punishment mechanism; and analyzed how the three parties' strategic behavior evolves under static and dynamic reward and punishment mechanisms. Vensim PLE was used for numerical simulation analysis. Our results indicate that the system cannot reach a stable state under the static reward and punishment mechanism, whereas a dynamic reward and punishment mechanism can effectively improve system stability and better fit real situations. Under the dynamic mechanism, an increase in the initial probabilities of the three parties promotes system stability, and the government can implement effective supervision by adjusting the upper limit of the reward and punishment intensity. Finally, the implementation of green credit by commercial banks plays a significant role in promoting the green development of automobile enterprises.
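The paper's model is built and simulated in Vensim PLE; purely as an illustrative stand-in (the payoff differences and coefficients are placeholders, not the paper's calibrated model), the sketch below iterates three coupled replicator equations with a reward and a penalty that shrink as green adoption spreads:

```python
# Illustrative replicator-dynamics sketch for a three-population game
# (government x, banks y, enterprises z). The linear payoff differences and the
# dynamic reward/penalty forms are assumptions for demonstration only.
def step(x, y, z, dt=0.01, F=2.0, R=1.5):
    dyn_fine = F * (1 - z)            # penalty shrinks as more firms go green
    dyn_reward = R * (1 - z)          # reward shrinks as green adoption spreads
    dx = x * (1 - x) * (dyn_fine - 0.8)                       # government: regulate or not
    dy = y * (1 - y) * (dyn_reward - 0.5)                     # banks: green credit or not
    dz = z * (1 - z) * (x * dyn_fine + y * dyn_reward - 1.0)  # firms: go green or not
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 0.3, 0.4, 0.2
for _ in range(5000):
    x, y, z = step(x, y, z)
print(round(x, 3), round(y, 3), round(z, 3))
```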
Funding: funded by the National Natural Science Foundation of China (No. 62063006), the Guangxi Science and Technology Major Program (No. 2022AA05002), the Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region (No. 2022GXZDSY003), and the Central Leading Local Science and Technology Development Fund Project of Wuzhou (No. 202201001).
Abstract: By integrating deep neural networks with reinforcement learning, the Double Deep Q Network (DDQN) algorithm overcomes the limitations of Q-learning in handling continuous spaces and is widely applied in the path planning of mobile robots. However, the traditional DDQN algorithm suffers from sparse rewards and inefficient utilization of high-quality data. To address these problems, an improved DDQN algorithm based on average Q-value estimation and reward redistribution is proposed. First, to enhance the precision of the target Q-value, the average of multiple previously learned Q-values from the target Q network replaces the single Q-value from the current target Q network. Next, a reward redistribution mechanism is designed to overcome the sparse-reward problem by adjusting the final reward of each action using the round reward from trajectory information. Additionally, a reward-prioritized experience selection method is introduced, which ranks experience samples according to reward values to ensure that high-quality data are used frequently. Finally, simulation experiments verify the effectiveness of the proposed algorithm in a fixed-position scenario and in random environments. The experimental results show that, compared with the traditional DDQN algorithm, the proposed algorithm achieves shorter average running time, higher average return, and fewer average steps; its performance is improved by 11.43% in the fixed scenario and 8.33% in random environments. It not only plans economical and safe paths but also significantly improves efficiency and generalization in path planning, making it suitable for widespread application in autonomous navigation and industrial automation.
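A minimal sketch of the averaged-target idea described above (the window size K, the discount factor, and the buffer handling are assumptions):

```python
# The DDQN target uses an average over the K most recent target-network
# outputs instead of a single one; the online network still selects the action.
from collections import deque
import numpy as np

K, gamma = 5, 0.99
recent_target_q = deque(maxlen=K)   # stores target-network Q(s', .) vectors

def ddqn_target(reward, next_q_online, next_q_target, done):
    recent_target_q.append(next_q_target)
    avg_q = np.mean(np.stack(recent_target_q), axis=0)   # averaged estimate
    a_star = int(np.argmax(next_q_online))               # DDQN: online net picks action
    return reward + (0.0 if done else gamma * avg_q[a_star])

# toy usage with random Q vectors for a 4-action problem
print(ddqn_target(1.0, np.random.rand(4), np.random.rand(4), done=False))
```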
Abstract: Mobile ad hoc networks have grown in prominence in recent years, and they are now utilized in a broader range of applications. The main challenges relate to the routing techniques generally employed in them. Mobile ad hoc system management, on the other hand, requires further testing and improvement in terms of security. Traditional routing protocols, such as Ad hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR), employ the hop count to calculate the distance between two nodes. The main aim of this research work is to determine the optimum method for sending packets while also extending the lifetime of the network. This is achieved by taking the changing residual energy of each network node into account. In addition, this paper proposes various algorithms for optimal routing based on parameters such as energy, distance, mobility, and the pheromone value. Moreover, an approach based on a reward and penalty system is given to evaluate the efficiency of the proposed algorithms under the impact of these parameters. The simulation results reveal that the reward-penalty-based approach is quite effective for selecting an optimal routing path when the algorithms are implemented under the parameters of interest, which helps achieve less packet drop and lower energy consumption of the nodes while enhancing network efficiency.
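As a hedged illustration of a reward/penalty route evaluation of this kind (the weights and the update step are assumptions, not the paper's algorithms):

```python
# Each candidate path is scored from residual energy, hop distance, node
# mobility and a pheromone value, and the score is nudged by a reward or
# penalty after each delivery attempt. Weights and step size are illustrative.
def path_score(energy, distance, mobility, pheromone, w=(0.4, 0.3, 0.2, 0.1)):
    # higher energy/pheromone favour the path; longer distance and high mobility penalize it
    return w[0] * energy + w[3] * pheromone - w[1] * distance - w[2] * mobility

def reward_penalty(score, delivered, step=0.05):
    return score + step if delivered else score - step

s = path_score(energy=0.8, distance=0.3, mobility=0.2, pheromone=0.6)
print(reward_penalty(s, delivered=True))
```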
Funding: supported in part by the National Natural Science Foundation of China (62006111, 62073160) and the Natural Science Foundation of Jiangsu Province of China (BK20200330).
Abstract: Goal-conditioned reinforcement learning (RL) is an interesting extension of the traditional RL framework, where the dynamic environment and reward sparsity can cause conventional learning algorithms to fail. Reward shaping is a practical approach to improving sample efficiency by embedding human domain knowledge into the learning process. Existing reward shaping methods for goal-conditioned RL are typically built on distance metrics with a linear and isotropic distribution, which may fail to provide sufficient information about an ever-changing, highly complex environment. This paper proposes a novel magnetic field-based reward shaping (MFRS) method for goal-conditioned RL tasks with a dynamic target and obstacles. Inspired by the physical properties of magnets, we treat the target and obstacles as permanent magnets and establish the reward function according to the intensity values of the magnetic field generated by these magnets. The nonlinear and anisotropic distribution of the magnetic field intensity provides more accessible and informative cues about the optimization landscape, yielding a more sophisticated magnetic reward than the distance-based setting. Further, we cast the magnetic reward in the form of potential-based reward shaping by concurrently learning a secondary potential function, which ensures the optimal policy invariance of our method. Experimental results in both simulated and real-world robotic manipulation tasks demonstrate that MFRS outperforms relevant existing methods and effectively improves the sample efficiency of RL algorithms in goal-conditioned tasks with various dynamics of the target and obstacles.
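Potential-based shaping with a field-like potential can be sketched as follows; the dipole-style 1/d^3 intensity here is only an assumed stand-in for the paper's magnetic-field model, which additionally learns a secondary potential function:

```python
# Potential-based reward shaping: r_shaped = r_env + gamma*Phi(s') - Phi(s),
# which leaves the optimal policy unchanged. Phi here is a toy field intensity
# attracting the agent to the target and repelling it from the obstacle.
import numpy as np

def field_intensity(pos, target, obstacle, eps=1e-3):
    d_t = np.linalg.norm(pos - target) + eps
    d_o = np.linalg.norm(pos - obstacle) + eps
    return 1.0 / d_t**3 - 0.5 / d_o**3

def shaped_reward(r_env, s, s_next, target, obstacle, gamma=0.99):
    phi, phi_next = (field_intensity(p, target, obstacle) for p in (s, s_next))
    return r_env + gamma * phi_next - phi

s, s_next = np.array([0.0, 0.0]), np.array([0.2, 0.1])
print(shaped_reward(0.0, s, s_next, target=np.array([1.0, 1.0]),
                    obstacle=np.array([0.5, -0.5])))
```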
Abstract: To effectively improve the rationality of carbon emission quota allocation and to avoid aggravated environmental pollution caused by emissions exceeding the quota at annual settlement, a seasonal carbon trading mechanism based on reward and punishment factors is proposed, and low-carbon economic dispatch is performed for a park integrated energy system (PIES). First, an integrated energy system (IES) operating framework comprising an energy layer, a carbon-flow layer, and a management layer is constructed, and a dynamic supply-demand consistency model of electricity, gas, and heat multi-energy flows is established. Second, the daily, seasonal, and annual carbon emission characteristics of the system are analyzed; departing from the traditional index-based allocation method, a carbon emission quota allocation model is built using grey relational analysis, and a seasonal carbon trading mechanism is formulated based on a reward-and-punishment tiered carbon price. Finally, with the objective of minimizing the whole-life-cycle operating cost and carbon trading cost of the system, low-carbon economic dispatch is carried out for the PIES implementing the seasonal carbon trading mechanism, and the carbon reduction contributed by seasonal energy storage participating in dispatch over long time scales is analyzed. A PIES coupling an IEEE 33-bus power network, a 5-node gas network, and a 7-node heat network is built, and multi-scenario case studies verify that the proposed dispatch method can achieve zero-carbon economic operation, guarantee the reliability of energy supply, and lay a theoretical foundation for building zero-carbon parks.
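A minimal sketch of a reward-and-punishment tiered ("ladder") carbon price of the kind described (the base price, interval length, and growth rate are assumed values, not the paper's parameters):

```python
# Emissions above the allocated quota pay stepped penalties; emissions below it
# earn stepped rewards (a negative cost). All parameters are illustrative.
def carbon_trading_cost(emission, quota, price=0.25, interval=100.0, growth=0.25):
    gap = emission - quota
    sign = 1.0 if gap > 0 else -1.0          # >0: pay penalty, <0: receive reward
    remaining, cost, tier = abs(gap), 0.0, 0
    while remaining > 0:
        block = min(remaining, interval)
        cost += sign * price * (1 + growth * tier) * block
        remaining -= block
        tier += 1
    return cost

print(carbon_trading_cost(emission=1250.0, quota=1000.0))   # penalty
print(carbon_trading_cost(emission=820.0, quota=1000.0))    # reward (negative cost)
```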
Abstract: As assessment outcomes provide students with a sense of accomplishment that is boosted by the reward system, learning becomes more effective. This research aims to determine the effects of a reward system applied prior to assessment in Mathematics. A quasi-experimental research design was used to examine whether there was a significant difference between the use of the reward system and students' level of performance in Mathematics. Through purposive sampling, the respondents of the study were 80 Grade 9 students belonging to two sections of Gaudencio B. Lontok Memorial Integrated School. Based on similar demographics and pre-test results, a control group and a study group were formed as participants of the study. Data were treated and analyzed using statistical treatments such as the mean and the t-test for independent samples. A significant finding revealed the advantage of the reward system over the non-reward system in increasing students' level of performance in Mathematics. It is concluded that the use of the reward system is effective in improving assessment outcomes in Mathematics. It is recommended that the reward system be applied consistently prior to assessment so that assessment outcomes reflect the intended outcomes in Mathematics.
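The statistical treatment named above (means and an independent-samples t-test) can be reproduced with standard tooling; the score lists below are hypothetical, not the study's data:

```python
# Independent-samples t-test comparing a reward group and a non-reward group
# on made-up post-test scores (illustrative only).
from scipy import stats

reward_group = [78, 85, 90, 82, 88, 91, 84, 87]
non_reward_group = [72, 75, 80, 70, 78, 74, 77, 73]

t_stat, p_value = stats.ttest_ind(reward_group, non_reward_group)
print(f"mean diff = {sum(reward_group)/8 - sum(non_reward_group)/8:.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```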
Abstract: In the Proof of Work (POW) consensus mechanism, computing power dominates the right to record blocks during the search for the nonce, which wastes computing resources and memory and carries the risk of a 51% attack. To address this defect, an improved POW-based blockchain consensus mechanism, IPOW (Improved Proof of Work), is proposed. It introduces a control weight, an incentive threshold, an effective time, and a reward factor, gives the corresponding algorithm, and derives the final accounting right R from the control weight and the other quantities. Experimental results show that, compared with POW, the IPOW consensus mechanism weakens the dominance of computing power in a node's chance of obtaining the accounting right: the larger the control weight, the easier it is to obtain the accounting right. It also reduces the probability of node misbehavior and mitigates the rich-get-richer phenomenon.
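The paper defines its own algorithm for the final accounting right R; the sketch below only illustrates, under an assumed combination rule and assumed weights, how a control weight, incentive threshold, effective time, and reward factor could jointly determine R:

```python
# Hedged sketch of an IPOW-style accounting-right score. The formula and the
# 0.4/0.6 split are assumptions for illustration, not the paper's algorithm.
def accounting_right(hash_power, control_weight, reward_factor,
                     contribution, incentive_threshold=0.3, effective=True):
    if not effective:                       # outside the effective time window
        return 0.0
    bonus = reward_factor if contribution >= incentive_threshold else 0.0
    # the control weight dilutes the dominance of raw hash power
    return control_weight * (0.4 * hash_power + 0.6 * contribution) + bonus

nodes = {
    "A": accounting_right(hash_power=0.9, control_weight=0.5, reward_factor=0.1, contribution=0.2),
    "B": accounting_right(hash_power=0.3, control_weight=0.9, reward_factor=0.1, contribution=0.5),
}
print(max(nodes, key=nodes.get), nodes)   # node B can win despite less hash power
```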