Funding: Supported by the Royal Academy of Engineering and the Office of the Chief Science Adviser for National Security under the UK Intelligence Community Postdoctoral Research Fellowship programme.
Abstract: Safety-critical control is often trained in a simulated environment to mitigate risk. Subsequent migration of the biased controller requires further adjustments. In this paper, an experience-inference human-behavior learning approach is proposed to solve the migration problem of optimal controllers applied to real-world nonlinear systems. The approach is inspired by the complementary properties exhibited by the hippocampus, neocortex, and striatum learning systems in the brain. The hippocampus defines a physics-informed reference model of the real-world nonlinear system for experience inference, and the neocortex is the adaptive dynamic programming (ADP) or reinforcement learning (RL) algorithm that ensures optimal performance of the reference model. This optimal performance is inferred to the real-world nonlinear system by means of an adaptive neocortex/striatum control policy that forces the nonlinear system to behave as the reference model. Stability and convergence of the proposed approach are analyzed using Lyapunov stability theory. Simulation studies are carried out to verify the approach.
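The abstract describes a three-component architecture: a physics-informed reference model (hippocampus), an ADP/RL algorithm that optimizes the reference model (neocortex), and an adaptive model-following policy that drives the real plant toward the reference behavior (striatum). The sketch below is only a minimal illustration of that structure, not the paper's algorithm: it uses a linear reference model with an LQR-style value-iteration recursion standing in for ADP, a hypothetical nonlinear plant `real_system`, and an assumed corrective gain `gamma` for the model-following term.

```python
import numpy as np

# --- Hypothetical example dynamics; the paper's actual systems differ. ---
# "Hippocampus": physics-informed reference model x_{k+1} = A x_k + B u_k.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state cost weight (assumed)
R = np.array([[0.1]])  # input cost weight (assumed)

# "Neocortex": value iteration on the reference model (an ADP stand-in),
# iterating the discrete-time Riccati recursion to an optimal gain K.
P = np.eye(2)
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)

def real_system(x, u):
    """Stand-in 'real-world' nonlinear plant (hypothetical)."""
    x1, x2 = x
    return np.array([x1 + 0.1 * x2,
                     0.9 * x2 + 0.1 * u + 0.05 * np.sin(x1)])

# "Striatum": adaptive model-following correction that pushes the real
# plant toward the reference trajectory; gamma is an assumed gain, and
# this simple error-feedback law is a sketch, not the paper's policy.
gamma = 2.0
x_real = np.array([1.0, -0.5])
x_ref = x_real.copy()
for k in range(100):
    u_opt = -K @ x_ref                 # optimal action on the reference model
    e = x_real - x_ref                 # model-following error
    u = u_opt - gamma * (B.T @ e)      # corrected input applied to the plant
    x_ref = A @ x_ref + (B @ u_opt).ravel()
    x_real = real_system(x_real, float(u[0]))

print("final model-following error:", np.linalg.norm(x_real - x_ref))
```

Under these assumptions, the optimal policy is learned entirely on the safe reference model, and only the corrective term interacts with the (here simulated) real plant, mirroring the migration idea in the abstract.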