Funding: funded by the National Key Research and Development Program of China under Grant 2022YFE0107300, the Chongqing Technology Innovation and Application Development Special Key Project under Grant CSTB2022TIAD-KPX0162, the National Natural Science Foundation of China under Grant U22A20101, the Chongqing Technology Innovation and Application Development Special Key Project under Grant CSTB2022TIAD-CUX0015, the Chongqing Postdoctoral Innovative Talents Support Program under Grant CQBX202205, and the China Postdoctoral Science Foundation under Grant 2023M730411.
Abstract: This paper focuses on reachable set estimation for Markovian jump neural networks with time delay. By allowing uncertainty in the transition probabilities, a unified framework enhances the generality and realism of these systems. To fully exploit the unified uncertain transition probabilities, an equivalent transformation technique is introduced as an alternative to traditional estimation methods, making effective use of the transition probability information. Furthermore, a vector Wirtinger-based summation inequality is proposed, which captures more system information than existing inequalities. Building on these components, a novel condition guaranteeing a reachable set estimate is presented for Markovian jump neural networks with unified uncertain transition probabilities. A numerical example is provided to demonstrate the superiority of the proposed approaches.
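As a hedged illustration of the system class described in this abstract (not the paper's equivalent-transformation technique or its Wirtinger-inequality-based condition), the following Python sketch simulates a two-mode Markovian jump neural network with a constant time delay, draws the mode sequence from a transition matrix perturbed within a small uncertainty band, and checks sampled trajectories against a candidate ellipsoidal reachable-set bound x^T P x <= 1. All matrices, the delay d, and the matrix P are made-up placeholders.

```python
# Illustrative sketch only (not the paper's method): simulate a two-mode
# Markovian jump neural network with a constant time delay, draw the mode
# sequence from a transition matrix perturbed within a small uncertainty band,
# and check sampled trajectories against a candidate ellipsoid x' P x <= 1.
# All matrices, the delay, and P below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Two modes; mode-dependent weights of the jump neural network.
A = [np.array([[0.5, 0.0], [0.1, 0.4]]), np.array([[0.4, 0.1], [0.0, 0.5]])]
B = [0.10 * np.eye(2), 0.20 * np.eye(2)]        # weights on f(x(k))
C = [0.05 * np.eye(2), 0.05 * np.eye(2)]        # weights on f(x(k - d))
D = [0.10 * np.eye(2), 0.10 * np.eye(2)]        # disturbance input matrices
d = 3                                           # constant time delay

Pi_nominal = np.array([[0.7, 0.3], [0.4, 0.6]]) # nominal transition matrix

def perturbed_transition_matrix(eps=0.05):
    """Draw a row-stochastic matrix whose entries lie within +/- eps of nominal."""
    Pi = np.clip(Pi_nominal + rng.uniform(-eps, eps, Pi_nominal.shape), 0.0, 1.0)
    return Pi / Pi.sum(axis=1, keepdims=True)

def simulate(steps=200):
    Pi = perturbed_transition_matrix()          # one realization of the uncertainty
    x_hist = [np.zeros(2)] * (d + 1)            # zero initial condition
    mode, traj = 0, []
    for _ in range(steps):
        x, x_delay = x_hist[-1], x_hist[-1 - d]
        w = rng.uniform(-1.0, 1.0, 2)           # bounded disturbance, |w_i| <= 1
        x_next = (A[mode] @ x + B[mode] @ np.tanh(x)
                  + C[mode] @ np.tanh(x_delay) + D[mode] @ w)
        x_hist.append(x_next)
        traj.append(x_next)
        mode = rng.choice(2, p=Pi[mode])        # Markovian mode switching
    return np.array(traj)

# Candidate ellipsoid {x : x' P x <= 1}; in the paper P would come from an
# LMI-type condition, whereas here it is simply a guessed value for the demo.
P = np.diag([2.0, 2.0])
worst = max(float(x @ P @ x) for _ in range(50) for x in simulate())
print("max x'Px over sampled trajectories:", worst)   # <= 1 means inside the ellipsoid
```

This Monte Carlo check only illustrates what a reachable-set bound asserts; the paper's contribution is an analytical condition that certifies such a bound for all admissible delays, disturbances, and uncertain transition probabilities.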
Funding: Supported by the National Natural Science Foundation of China (71571019).
Abstract: Optimal policies in Markov decision problems may be quite sensitive to the transition probabilities, and in practice some transition probabilities may be uncertain. The goals of this study are to find the robust range for a given optimal policy and to obtain value intervals for the exact transition probabilities. Our research offers practical contributions for Markov decision processes (MDPs) with uncertain transition probabilities. We first propose a maximum-likelihood method for estimating unknown transition probabilities. Since these estimates may be far from accurate, and the highest expected total reward of the MDP may be sensitive to the transition probabilities, we analyze the robustness of an optimal policy and propose an approach for robust analysis. After defining a robust optimal policy with uncertain transition probabilities represented as sets of numbers, we formulate a model to obtain the optimal policy. Finally, we define the value intervals of the exact transition probabilities and construct models to determine their lower and upper bounds. Numerical examples are given to show the practicability of our methods.
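As an illustrative sketch (not the authors' exact models), the Python code below shows the two ingredients named in this abstract: (i) the standard maximum-likelihood estimate of a Markov chain's transition matrix from an observed state sequence, and (ii) a worst-case ("robust") value iteration in which each transition probability is only known to lie in an interval. All function names, matrices, reward values, and interval widths are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' exact formulation): (1) maximum-
# likelihood estimation of a Markov chain's transition matrix from observed
# transitions, and (2) a worst-case ("robust") value iteration when each
# transition probability is only known to lie in an interval.
import numpy as np

def mle_transition_matrix(state_sequence, n_states):
    """MLE for a Markov chain: p_hat[i, j] = N_ij / sum_k N_ik, where N_ij is
    the number of observed i -> j transitions (unvisited rows are left as NaN)."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(state_sequence[:-1], state_sequence[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, np.nan), where=row_sums > 0)

def worst_case_expectation(v, p_lo, p_hi):
    """Adversarial inner problem: put as much probability as the intervals allow
    on the smallest values of v. Assumes sum(p_lo) <= 1 <= sum(p_hi)."""
    p, remaining = p_lo.copy(), 1.0 - p_lo.sum()
    for j in np.argsort(v):                     # worst (smallest) values first
        add = min(p_hi[j] - p_lo[j], remaining)
        p[j] += add
        remaining -= add
    return float(p @ v)

def robust_value_iteration(rewards, p_lo, p_hi, gamma=0.9, iters=500):
    """rewards[s, a] is the immediate reward; p_lo[s, a] and p_hi[s, a] are
    length-n_states interval bounds on the next-state distribution."""
    n_states, n_actions = rewards.shape
    v = np.zeros(n_states)
    for _ in range(iters):
        q = np.array([[rewards[s, a]
                       + gamma * worst_case_expectation(v, p_lo[s][a], p_hi[s][a])
                       for a in range(n_actions)] for s in range(n_states)])
        v = q.max(axis=1)
    return v, q.argmax(axis=1)                  # robust values and a robust policy

# Toy usage: 2 states, 2 actions, +/- 0.1 interval around a nominal model.
nominal = np.array([[[0.8, 0.2], [0.3, 0.7]],
                    [[0.5, 0.5], [0.9, 0.1]]])  # nominal[s, a] is a probability row
p_lo = np.clip(nominal - 0.1, 0.0, 1.0)
p_hi = np.clip(nominal + 0.1, 0.0, 1.0)
rewards = np.array([[1.0, 0.0], [0.0, 2.0]])
print(robust_value_iteration(rewards, p_lo, p_hi))
```

The worst-case inner step is the standard pessimistic construction used in robust MDPs; the interval model here merely stands in for the "sets of numbers" mentioned in the abstract, and the paper's robust-range and value-interval models go beyond this sketch.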