Journal Articles
4 articles found
1. ON VECTOR VALUED FUNCTION APPROXIMATION USING A PEAK NORM
Author: G.A. Watson, Analysis in Theory and Applications, 1996, Issue 2, pp. 1-12 (12 pages)
A peak norm is defined for Lp spaces of E-valued Bochner integrable functions, where E is a Banach space, and best approximations from a sun to elements of the space are characterized. Applications are given to some families of simultaneous best approximation problems.
Keywords: vector valued function approximation, peak norm
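For context, the underlying space here is the Bochner space L_p(μ, E) of E-valued Bochner integrable functions; its standard norm is recalled below. The peak norm studied in the paper is a further functional built on such integrals, and its precise definition is not reproduced here.

```latex
% Standard norm on the Bochner space L_p(\mu, E), 1 <= p < \infty,
% for a measure space (T, \Sigma, \mu) and a Banach space (E, \|\cdot\|_E).
\[
  \|f\|_p \;=\; \Bigl( \int_T \|f(t)\|_E^{\,p} \, d\mu(t) \Bigr)^{1/p},
  \qquad f \in L_p(\mu, E).
\]
```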
2. Sub-Differential Characterizations of Non-Smooth Lower Semi-Continuous Pseudo-Convex Functions on Real Banach Spaces
Authors: Akachukwu Offia, Ugochukwu Osisiogu, Theresa Efor, Friday Oyakhire, Monday Ekhator, Friday Nkume, Sunday Aloke, Open Journal of Optimization, 2023, Issue 3, pp. 99-108 (10 pages)
In this paper, we characterize lower semi-continuous pseudo-convex functions f : X → R ∪ {+∞} on a convex subset K ⊂ X of a real Banach space in terms of the pseudo-monotonicity of their Clarke-Rockafellar sub-differentials. We extend the results characterizing non-smooth convex functions f : X → R ∪ {+∞} on a convex subset K ⊂ X via the monotonicity of their sub-differentials to lower semi-continuous pseudo-convex functions on real Banach spaces.
Keywords: Real Banach Spaces, Pseudo-Convex Functions, Pseudo-Monotone Maps, Sub-Differentials, Lower Semi-Continuous Functions, Approximate Mean Value Inequality
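For readers unfamiliar with the two notions linked by this characterization, one common non-smooth formulation is sketched below; the paper works with the Clarke-Rockafellar sub-differential ∂f specifically, and the exact conditions used there may differ in detail.

```latex
% f : X -> R \cup \{+\infty\} is pseudo-convex on a convex set K (non-smooth sense)
% if, for all x, y in K,
\[
  \exists\, x^{*} \in \partial f(x):\ \langle x^{*},\, y - x \rangle \ge 0
  \;\Longrightarrow\; f(y) \ge f(x).
\]
% The set-valued map \partial f is pseudo-monotone on K if, for all x, y in K
% and all x^{*} \in \partial f(x), y^{*} \in \partial f(y),
\[
  \langle x^{*},\, y - x \rangle \ge 0
  \;\Longrightarrow\; \langle y^{*},\, y - x \rangle \ge 0 .
\]
```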
3. Deep reinforcement learning using least-squares truncated temporal-difference
Authors: Junkai Ren, Yixing Lan, Xin Xu, Yichuan Zhang, Qiang Fang, Yujun Zeng, CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 2, pp. 425-439 (15 pages)
Policy evaluation (PE) is a critical sub-problem in reinforcement learning, which estimates the value function for a given policy and can be used for policy improvement. However, there still exist some limitations in current PE methods, such as low sample efficiency and local convergence, especially on complex tasks. In this study, a novel PE algorithm called Least-Squares Truncated Temporal-Difference learning (LST2D) is proposed. In LST2D, an adaptive truncation mechanism is designed, which effectively takes advantage of the fast convergence property of Least-Squares Temporal Difference learning and the asymptotic convergence property of Temporal Difference learning (TD). Then, two feature pre-training methods are utilised to improve the approximation ability of LST2D. Furthermore, an Actor-Critic algorithm based on LST2D and pre-trained feature representations (ACLPF) is proposed, where LST2D is integrated into the critic network to improve learning-prediction efficiency. Comprehensive simulation studies were conducted on four robotic tasks, and the corresponding results illustrate the effectiveness of LST2D. The proposed ACLPF algorithm outperformed DQN, ACER and PPO in terms of sample efficiency and stability, which demonstrated that LST2D can be applied to online learning control problems by incorporating it into the actor-critic architecture.
Keywords: deep reinforcement learning, policy evaluation, temporal difference, value function approximation
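As a point of reference for the least-squares component of LST2D, a minimal sketch of plain LSTD(0) with linear features is given below. The function name lstd_weights, the toy chain and all parameter values are illustrative assumptions, not the paper's ACLPF algorithm or its adaptive truncation mechanism.

```python
import numpy as np

def lstd_weights(transitions, phi, gamma=0.99, reg=1e-6):
    """Closed-form LSTD(0) estimate of linear value-function weights.

    transitions: list of (state, reward, next_state) tuples collected under a fixed policy
    phi:         feature map, state -> 1-D numpy array of length d
    Solves A w = b with A = sum phi(s)(phi(s) - gamma*phi(s'))^T and b = sum r*phi(s).
    """
    d = len(phi(transitions[0][0]))
    A = reg * np.eye(d)               # small ridge term keeps A invertible
    b = np.zeros(d)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)      # V(s) is approximated by phi(s) @ w

# Toy two-state chain with one-hot features (illustrative only).
phi = lambda s: np.eye(2)[s]
data = [(0, 0.0, 1), (1, 1.0, 0), (0, 0.0, 1), (1, 1.0, 0)]
print("estimated values:", lstd_weights(data, phi))
```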
4. Hierarchical fuzzy ART for Q-learning and its application in air combat simulation (Cited by: 3)
Authors: Yanan Zhou, Yaofei Ma, Xiao Song, Guanghong Gong, International Journal of Modeling, Simulation, and Scientific Computing (EI), 2017, Issue 4, pp. 205-223 (19 pages)
Value function approximation plays an important role in reinforcement learning (RL) with continuous state space, which is widely used to build decision models in practice. Many traditional approaches require experienced designers to manually specify the formulation of the approximating function, leading to a rigid, non-adaptive representation of the value function. To address this problem, a novel Q-value function approximation method named 'Hierarchical fuzzy Adaptive Resonance Theory' (HiART) is proposed in this paper. HiART is based on the Fuzzy ART method and is an adaptive classification network that learns to segment the state space by classifying the training input automatically. HiART begins with a highly generalized structure where the number of category nodes is limited, which is beneficial to speeding up the learning process at the early stage. Then, the network is refined gradually by creating attached subnetworks, and a layered network structure is formed during this process. Based on this adaptive structure, HiART alleviates the dependence on expert experience to design the network parameters. The effectiveness and adaptivity of HiART are demonstrated on the Mountain Car benchmark problem with both fast learning speed and low computation time. Finally, a simulation application example of the one-versus-one air combat decision problem illustrates the applicability of HiART.
Keywords: Fuzzy ART, Q-learning, value function approximation, air combat simulation
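A minimal single-layer Fuzzy ART sketch, using the standard choice, vigilance and learning equations that HiART builds on, is given below; the class name, parameter values and usage are illustrative assumptions, and the hierarchical sub-network refinement described in the paper is not reproduced.

```python
import numpy as np

class FuzzyART:
    """Single-layer Fuzzy ART with the standard choice, vigilance and learning rules."""

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []                                  # committed category weight vectors

    @staticmethod
    def _complement_code(x):
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])          # standard complement coding

    def learn(self, x):
        """Return the index of the category that absorbs x, committing a new one if needed."""
        i = self._complement_code(x)
        # Choice function T_j = |i ^ w_j| / (alpha + |w_j|), with ^ the element-wise minimum.
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(scores)[::-1]:
            if np.minimum(i, self.w[j]).sum() / i.sum() >= self.rho:   # vigilance test
                self.w[j] = self.beta * np.minimum(i, self.w[j]) + (1 - self.beta) * self.w[j]
                return j
        self.w.append(i.copy())                      # no resonance: commit a new category
        return len(self.w) - 1

# Illustrative use: segmenting a continuous 2-D state space into discrete categories.
art = FuzzyART(rho=0.8)
print(art.learn([0.1, 0.9]), art.learn([0.12, 0.88]), art.learn([0.9, 0.1]))
```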