Abstract: In this paper, we characterize lower semicontinuous pseudoconvex functions f : X → R ∪ {+∞} on a convex subset K ⊂ X of a real Banach space in terms of the pseudomonotonicity of their Clarke-Rockafellar subdifferentials. We extend the known characterizations of nonsmooth convex functions f : X → R ∪ {+∞} on convex subsets of real Banach spaces via the monotonicity of their subdifferentials to lower semicontinuous pseudoconvex functions on real Banach spaces.
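For reference, the two notions being connected can be stated as follows; these are the standard textbook definitions, and the paper's exact formulation may differ in detail:

```latex
% f is pseudoconvex with respect to a subdifferential \partial f if
\forall x, y \in K,\ \forall x^* \in \partial f(x):\quad
\langle x^*, y - x \rangle \ge 0 \;\Longrightarrow\; f(y) \ge f(x).

% A multivalued operator T : X \rightrightarrows X^* is pseudomonotone if
\forall x, y \in K,\ \forall x^* \in T(x),\ \forall y^* \in T(y):\quad
\langle x^*, y - x \rangle \ge 0 \;\Longrightarrow\; \langle y^*, y - x \rangle \ge 0.
```

The characterization asserts that, for lower semicontinuous f, pseudoconvexity of f is equivalent to pseudomonotonicity of the Clarke-Rockafellar subdifferential map x ↦ ∂f(x), mirroring the classical equivalence between convexity of f and monotonicity of ∂f.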
Funding: Joint Funds of the National Natural Science Foundation of China, Grant/Award Number: U21A20518; National Natural Science Foundation of China, Grant/Award Numbers: 62106279, 61903372.
Abstract: Policy evaluation (PE) is a critical sub-problem in reinforcement learning: it estimates the value function of a given policy and can be used for policy improvement. However, current PE methods still have limitations, such as low sample efficiency and local convergence, especially on complex tasks. In this study, a novel PE algorithm called Least-Squares Truncated Temporal-Difference learning (LST2D) is proposed. In LST2D, an adaptive truncation mechanism is designed that effectively combines the fast convergence of Least-Squares Temporal-Difference (LSTD) learning with the asymptotic convergence of Temporal-Difference (TD) learning. Two feature pre-training methods are then utilised to improve the approximation ability of LST2D. Furthermore, an Actor-Critic algorithm based on LST2D and pre-trained feature representations (ACLPF) is proposed, in which LST2D is integrated into the critic network to improve learning and prediction efficiency. Comprehensive simulation studies were conducted on four robotic tasks, and the results illustrate the effectiveness of LST2D. The proposed ACLPF algorithm outperformed DQN, ACER and PPO in sample efficiency and stability, demonstrating that LST2D can be applied to online learning-control problems by incorporating it into the actor-critic architecture.
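The two ingredients LST2D combines can be seen on a toy example. The sketch below is not the paper's algorithm; it merely contrasts batch LSTD (one solve over the data, fast convergence) with incremental TD(0) (many small updates, but usable online) on a hypothetical two-state deterministic chain with one-hot features:

```python
import numpy as np

gamma = 0.9
# Transitions (s, r, s_next); s_next = None marks the terminal state.
episode = [(0, 1.0, 1), (1, 1.0, None)]

def phi(s):
    """One-hot feature vector; the zero vector at the terminal state."""
    v = np.zeros(2)
    if s is not None:
        v[s] = 1.0
    return v

# --- LSTD: solve A w = b from a single batch of transitions ---
A = np.zeros((2, 2))
b = np.zeros(2)
for s, r, s_next in episode:
    A += np.outer(phi(s), phi(s) - gamma * phi(s_next))
    b += r * phi(s)
w_lstd = np.linalg.solve(A, b)   # exact fixpoint: V(0)=1.9, V(1)=1.0

# --- TD(0): incremental approximation of the same fixpoint ---
w_td = np.zeros(2)
alpha = 0.1
for _ in range(2000):            # many sweeps over the same data
    for s, r, s_next in episode:
        delta = r + gamma * phi(s_next) @ w_td - phi(s) @ w_td
        w_td += alpha * delta * phi(s)
```

Both estimators converge to the same linear fixpoint; LST2D's truncation mechanism, as described in the abstract, adaptively switches between these two regimes.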
Abstract: A peak norm is defined for Lp spaces of E-valued Bochner integrable functions, where E is a Banach space, and best approximations from a sun to elements of the space are characterized. Applications are given to some families of simultaneous best approximation problems.
Abstract: Value function approximation plays an important role in reinforcement learning (RL) with continuous state spaces, which is widely used to build decision models in practice. Many traditional approaches require experienced designers to manually specify the form of the approximating function, leading to a rigid, non-adaptive representation of the value function. To address this problem, a novel Q-value function approximation method named Hierarchical fuzzy Adaptive Resonance Theory (HiART) is proposed in this paper. HiART is based on Fuzzy ART, an adaptive classification network that learns to segment the state space by classifying training inputs automatically. HiART begins with a highly generalized structure in which the number of category nodes is limited, which helps speed up learning at the early stage. The network is then refined gradually by creating attached subnetworks, forming a layered network structure. Based on this adaptive structure, HiART alleviates the dependence on expert experience for designing the network parameters. The effectiveness and adaptivity of HiART are demonstrated on the Mountain Car benchmark problem, with both fast learning and low computation time. Finally, a simulation example of a one-versus-one air combat decision problem illustrates the applicability of HiART.
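The base mechanism HiART builds on can be sketched briefly. The following is a minimal Fuzzy ART category learner, not the hierarchical algorithm itself: inputs in [0,1]^d are complement-coded, a vigilance parameter rho controls how finely the state space is segmented, and a new category node is created whenever no existing one resonates. Parameter names follow common Fuzzy ART notation and are an assumption, not the paper's:

```python
import numpy as np

class FuzzyART:
    def __init__(self, dim, rho=0.75, alpha=0.001, beta=1.0):
        self.dim = dim
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []                      # one weight vector per category node

    def _code(self, x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])   # complement coding

    def train(self, x):
        I = self._code(x)
        # Rank existing categories by the choice function T_j.
        scores = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                  for w in self.w]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:        # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(I.copy())          # no resonance: grow a new category
        return len(self.w) - 1

art = FuzzyART(dim=1, rho=0.8)
labels = [art.train([x]) for x in [0.10, 0.12, 0.90, 0.88, 0.11]]
# Points near 0.1 and near 0.9 fall into two distinct categories.
```

HiART, per the abstract, extends this by attaching refining subnetworks to coarse category nodes, so the segmentation starts coarse and becomes layered over time.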