Statistical regression models are input-oriented estimation models that account for observation errors. In contrast, an output-oriented possibility regression model that accounts for system fluctuations is proposed. Furthermore, the possibility Markov chain, which has a disidentifiable state (posterior) and a non-discriminable state (prior), is proposed. In this paper, we first take up the entity efficiency evaluation problem as a case study of the posterior non-discriminable production possibility region and discuss Fuzzy DEA with fuzzy constraints. Next, a case study of the ex-ante non-discriminable event setting is discussed. Finally, we introduce the measure of a fuzzy number and the equality relation and attempt to model the possibility Markov chain mathematically. Furthermore, we show that under ergodic conditions, the direct sum state can be decomposed and reintegrated using fuzzy OR logic. We have already constructed the possibility Markov process based on the indifferent state of this world; in this paper, we try to extend it to the indifferent event in another world. It should be noted that the possibility transfer matrix can be obtained by making full use of possibility theory.
A nonhomogeneous Markov chain is applied to the study of air quality classification in Mexico City when the so-called criterion pollutants are used. We consider the indices associated with air quality under two regulations in which different classification schemes are taken into account. The parameters of the model are the initial and transition probabilities of the chain. They are estimated from the Bayesian point of view through samples generated directly from the corresponding posterior distributions. Using the estimated parameters, the probability of having a given air quality index at a given hour of the day is obtained.
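Since the abstract above estimates initial and transition probabilities by sampling directly from their posteriors, a minimal sketch of that idea for a plain finite-state chain may help: with an independent Dirichlet prior on each row of the transition matrix, the posterior given the observed transition counts is again Dirichlet and can be sampled directly. The three states and the simulated data below are hypothetical, not the paper's regulation-based indices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly air-quality states observed over several days
# (0 = good, 1 = regular, 2 = bad); not the paper's actual data.
chain = rng.integers(0, 3, size=500)
n_states = 3

# Transition counts n[i, j] = number of observed i -> j moves.
counts = np.zeros((n_states, n_states))
for s, t in zip(chain[:-1], chain[1:]):
    counts[s, t] += 1

# With independent Dirichlet(alpha) priors on each row, the posterior of
# row i is Dirichlet(alpha + counts[i]), so it can be sampled directly.
alpha = np.ones(n_states)              # uniform prior
posterior_draws = np.stack([
    rng.dirichlet(alpha + counts[i], size=2000) for i in range(n_states)
], axis=1)                             # shape (2000, n_states, n_states)

P_hat = posterior_draws.mean(axis=0)   # posterior-mean transition matrix
print(np.round(P_hat, 3))
```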
The properties of generalized flip Markov chains on connected regular digraphs are discussed. The 1-Flipper operation on Markov chains for undirected graphs is generalized to multi-digraphs. The generalized 1-Flipper operation preserves the regularity and weak connectivity of multi-digraphs, and it is proved to be symmetric. Moreover, it is shown that a series of random generalized 1-Flipper operations eventually leads to a uniform probability distribution over all connected d-regular multi-digraphs without loops.
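As a rough illustration of the kind of move involved, the sketch below applies a head-swap to two arcs of a multi-digraph, which preserves every vertex's in- and out-degree. This is only a toy stand-in for the generalized 1-Flipper: loop-creating moves are simply rejected, and weak connectivity is not re-checked.

```python
import random

def head_swap(arcs, rng=random):
    """Toy head-swap move on a multi-digraph given as a list of arcs (u, v):
    pick two arcs (u, v), (x, y) and replace them by (u, y), (x, v).
    In- and out-degrees are preserved; moves that would create a loop are
    rejected, and weak connectivity is not re-checked here."""
    i, j = rng.sample(range(len(arcs)), 2)
    (u, v), (x, y) = arcs[i], arcs[j]
    if u == y or x == v:          # would create a loop; reject the move
        return arcs
    arcs[i], arcs[j] = (u, y), (x, v)
    return arcs

# A 2-regular multi-digraph on 4 vertices.
arcs = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0), (1, 3), (3, 1)]
random.seed(0)
for _ in range(20):
    arcs = head_swap(arcs)
print(sorted(arcs))
```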
This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, based on the fact that the incidence course has many uncertain characteristics. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method takes the standardized self-correlation coefficients as weights, based on the fact that infectious disease incidence is a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of the decision. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology.
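A minimal sketch of a weighted Markov chain forecast in the spirit described above, assuming the weights are normalized absolute autocorrelation (self-correlation) coefficients of the state series and the prediction combines the k-step transition rows taken from the states observed k steps back. The state coding and the toy series are invented for illustration.

```python
import numpy as np

def weighted_markov_predict(states, n_states, max_lag=3):
    """Score the next state by combining k-step transition rows, weighted by
    the normalized absolute autocorrelation of the series at lag k."""
    x = np.asarray(states)

    # One-step transition matrix estimated from relative frequencies.
    P = np.zeros((n_states, n_states))
    for s, t in zip(x[:-1], x[1:]):
        P[s, t] += 1
    P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)

    # Autocorrelation-based weights (standardized self-correlation coefficients).
    xc = x - x.mean()
    acf = np.array([np.sum(xc[:-k] * xc[k:]) / np.sum(xc * xc)
                    for k in range(1, max_lag + 1)])
    w = np.abs(acf) / np.abs(acf).sum()

    # Combine k-step predictions made from the states observed k steps ago.
    score = np.zeros(n_states)
    for k in range(1, max_lag + 1):
        Pk = np.linalg.matrix_power(P, k)
        score += w[k - 1] * Pk[x[-k]]
    return int(np.argmax(score)), score

states = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1, 1, 2, 1]   # hypothetical incidence states
print(weighted_markov_predict(states, n_states=3))
```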
This paper studies the strong law of large numbers and the Shannon-McMillan theorem for Markov chain fields on a Cayley tree. The authors first prove the strong law of large numbers for the frequencies of states and of ordered couples of states for Markov chain fields on a Cayley tree. Then they prove the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on a Cayley tree. In the proof, a new technique for studying strong limit theorems in probability theory is applied.
A general framework of a stochastic model for a Markov chain in a space-time random environment is introduced; here the environment ξ* := {ξ_{t,x} : t ∈ N, x ∈ X} is a random field. We study the dependence relations between the environment and the original chain, especially the "feedback". Some equivalence theorems and laws of large numbers are obtained.
We investigate the convergence of nonhomogeneous Markov chains in a general state space by using the f-norm and the coupling method, and thus obtain a sufficient condition for the convergence of nonhomogeneous Markov chains in a general state space.
A novel land cover classification procedure is presented, utilizing the information content of fully polarimetric SAR images. The Cameron coherent target decomposition (CTD) is employed to characterize land cover pixel by pixel, since it provides a complete set of elementary scattering mechanisms to describe the physical properties of the scatterer. The novelty of the proposed land classification approach lies in the fact that the features used for classification are not the types of the elementary scatterers themselves, but the way these types of scatterers alternate from pixel to pixel on the SAR image. Thus, transition matrices that represent local Markov models are used as classification features for land cover classification. The classification rule employs only the most important transitions for decision making, and the Frobenius inner product is employed as the similarity measure. Ten different types of land cover are used for testing the proposed method, and the classification performance is significantly high.
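To make the "transition matrices as features" idea concrete, here is a small sketch: given a patch of per-pixel scatterer-type labels, count how types alternate between neighbouring pixels, normalize, and compare the resulting matrix with class prototypes via the Frobenius inner product ⟨A, B⟩ = Σ A_ij B_ij. The three scatterer types, the prototypes and the patch below are invented; Cameron's full set of elementary scatterers and the paper's "most important transitions" rule are not reproduced.

```python
import numpy as np

def local_transition_matrix(labels, n_types):
    """Normalized matrix of how scatterer types alternate between
    horizontally neighbouring pixels in a small image patch."""
    T = np.zeros((n_types, n_types))
    for row in labels:
        for a, b in zip(row[:-1], row[1:]):
            T[a, b] += 1
    s = T.sum()
    return T / s if s > 0 else T

def classify(patch_labels, prototypes, n_types):
    """Assign the class whose prototype matrix has the largest
    Frobenius inner product <A, B> = sum(A * B) with the patch matrix."""
    T = local_transition_matrix(patch_labels, n_types)
    scores = {name: float(np.sum(T * M)) for name, M in prototypes.items()}
    return max(scores, key=scores.get), scores

# Toy example with 3 hypothetical scatterer types (not Cameron's full set).
rng = np.random.default_rng(1)
patch = rng.integers(0, 3, size=(8, 8))
prototypes = {
    "forest": local_transition_matrix(rng.integers(0, 3, size=(8, 8)), 3),
    "urban":  local_transition_matrix(rng.choice(3, size=(8, 8), p=[0.7, 0.2, 0.1]), 3),
}
print(classify(patch, prototypes, 3))
```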
AIM: To study the natural progression of diabetic retinopathy in patients with type 2 diabetes. METHODS: This was an observational study of 153 cases with type 2 diabetes from 2010 to 2013. The state of each patient was noted at the end of each year, and transition matrices were developed to model movement between years. Patients who progressed to severe non-proliferative diabetic retinopathy (NPDR) were treated. Markov chains and the Chi-square test were used for statistical analysis. RESULTS: We modelled the transition of 153 patients from NPDR to blindness on an annual basis. At the end of year 3, we compared results from the Markov model with the actual data. The Chi-square test confirmed that there was no statistically significant difference (P = 0.70), which provided assurance that the model was robust enough to estimate mean sojourn times. The key finding was that a patient entering the system in the mild NPDR state is expected to stay in that state for 5 y, followed by 1.07 y in moderate NPDR, and to remain in the severe NPDR state for 1.33 y before moving into PDR for roughly 8 y. Such a patient entering the model in a state of mild NPDR is therefore expected to reach blindness after 15.29 y. CONCLUSION: Patients stay for long periods in mild NPDR before transitioning into moderate NPDR. However, they move rapidly from moderate NPDR to proliferative diabetic retinopathy (PDR) and stay in that state for long periods before transitioning into blindness.
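Mean sojourn times of this kind follow from the diagonal of the annual transition matrix (a geometric sojourn in state i has mean 1/(1 − p_ii)), and the time to blindness from the fundamental matrix of the absorbing chain. The sketch below illustrates both computations on a made-up five-state matrix; the numbers are not the fitted values from the study.

```python
import numpy as np

# Illustrative annual transition matrix over the states
# (mild NPDR, moderate NPDR, severe NPDR, PDR, blindness);
# the entries are invented, not the study's fitted values.
P = np.array([
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.10, 0.90, 0.00, 0.00],
    [0.00, 0.00, 0.25, 0.75, 0.00],
    [0.00, 0.00, 0.00, 0.88, 0.12],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # blindness is absorbing
])

# Mean sojourn time in a state with self-transition probability p is 1/(1-p).
sojourn = 1.0 / (1.0 - np.diag(P)[:-1])
print("mean sojourn (years):", np.round(sojourn, 2))

# Expected years until absorption (blindness) from each transient state,
# via the fundamental matrix N = (I - Q)^{-1} of the absorbing chain.
Q = P[:-1, :-1]
N = np.linalg.inv(np.eye(4) - Q)
print("expected years to blindness:", np.round(N.sum(axis=1), 2))
```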
A countable Markov chain in a Markovian environment is considered. A Poisson limit theorem for the chain recurring to small cylindrical sets is mainly achieved. In order to prove this theorem, the entropy function h is introduced and the Shannon-McMillan-Breiman theorem for the Markov chain in a Markovian environment is shown. It is well known that a Markov process in a Markovian environment is generally not a standard Markov chain, so an example of Poisson approximation for a process which is not a Markov process is given. On the other hand, when the environmental process degenerates to a constant sequence, a Poisson limit theorem for countable Markov chains is obtained, which generalizes Pitskel's result for finite Markov chains.
We consider Markov chains in stationary random environments. The conservative set C of the corresponding skew Markov chain of this process can be thought of as a recurrent set of a standard Markov chain. In some simpler cases, we give sufficient conditions under which the conservative set C can be decomposed into at most countably many minimal closed sets.
Leaf area in red tomato plants was evaluated to determine crop growth and development over two production cycles, spring-summer and autumn-winter, and to compare the influence of temperature on leaf area growth. Repeated weekly samples were taken, identifying the week, and leaf area growth and development were modelled using Markov chains, with a transition matrix describing the finite number of physiological states and representing them in a flowchart. From the steady-state analysis and the probability equations, leaf area growth is established from the seventh week in the first cycle (C1), with probabilities of 0.266, 0.264 and 0.263 in the last two weeks. An increase of 6% was observed in the autumn-winter cycle compared with spring-summer.
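Since the abstract relies on a steady-state analysis of the chain of physiological states, a minimal sketch of how a stationary distribution is obtained from a transition matrix may be useful; the three states and the matrix below are hypothetical, not the fitted values of the study.

```python
import numpy as np

# Hypothetical weekly transition matrix over three physiological states of
# leaf-area development (e.g. slow / moderate / fast growth); illustrative only.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.2, 0.3, 0.5],
])

# Steady state: solve pi P = pi together with sum(pi) = 1 as a linear system.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 3))
```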
Some basic equations and the relations among various Markov chains are established. These results form the basis for investigating the theory of Markov chains in random environments.
A novel method for detecting anomalous program behavior is presented, which is applicable to host-based intrusion detection systems that monitor system call activities. The method constructs a homogeneous Markov chain model to characterize the normal behavior of a privileged program, and associates the states of the Markov chain with the unique system calls in the training data. At the detection stage, the probabilities with which the Markov chain model supports the system call sequences generated by the program are computed. A low probability indicates an anomalous sequence that may result from intrusive activities. A decision rule based on the number of anomalous sequences in a locality frame is then adopted to classify the program's behavior. The method gives attention to both computational efficiency and detection accuracy, and is especially suitable for on-line detection. It has been applied to practical host-based intrusion detection systems.
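A minimal sketch of this kind of detector, assuming system calls are already encoded as integers: transition probabilities are estimated from normal traces, each new call sequence is scored by its log-probability under the chain, and the program is flagged when too many low-probability sequences fall inside a locality frame. The traces, the smoothing and the thresholds below are invented for illustration.

```python
import numpy as np

def train(traces, n_calls, eps=1e-6):
    """Estimate a homogeneous Markov chain over the encoded system calls
    seen in normal traces (add-eps smoothing for unseen transitions)."""
    P = np.full((n_calls, n_calls), eps)
    for tr in traces:
        for a, b in zip(tr[:-1], tr[1:]):
            P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)

def sequence_logprob(seq, P):
    return float(sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:])))

def is_intrusive(sequences, P, logprob_threshold, frame=20, max_anomalies=0):
    """Flag the program if more than `max_anomalies` low-probability call
    sequences fall inside a sliding locality frame (thresholds arbitrary)."""
    flags = [sequence_logprob(s, P) < logprob_threshold for s in sequences]
    return any(sum(flags[i:i + frame]) > max_anomalies
               for i in range(max(1, len(flags) - frame + 1)))

# Toy data: system calls encoded as small integers; not real audit traces.
normal = [[0, 1, 2, 1, 0, 3] * 5, [0, 1, 2, 3, 0, 1] * 5]
P = train(normal, n_calls=4)
test = [[0, 1, 2, 1, 0, 3], [3, 3, 3, 3, 3, 3]]
print([round(sequence_logprob(s, P), 1) for s in test])
print(is_intrusive(test, P, logprob_threshold=-20))
```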
In Section 1, the authors establish the models of two kinds of Markov chains in space-time random environments (MCSTRE and MCSTRE(+)) with abstract state space. In Section 2, the authors construct a MCSTRE and a MCSTRE(+) from an initial distribution Φ and a random Markov kernel (RMK) p(γ). In Section 3, the authors establish several equivalence theorems on MCSTRE and MCSTRE(+). Finally, the authors give two very important examples of MCSTRE, the random walk in a space-time random environment and the Markov br...
Stochastic modeling techniques have been widely applied to oil-gas reservoir lithofacies. Markov chain simulation, however, is still under development, mainly because of the difficulties in reasonably defining conditional probabilities for multi-dimensional Markov chains and determining transition probabilities for horizontal strike and dip directions. The aim of this work is to solve these problems. Firstly, the calculation formulae of conditional probabilities for multi-dimensional Markov chain models are proposed under the full independence and conditional independence assumptions. It is noted that multi-dimensional Markov models based on the conditional independence assumption are reasonable because these models avoid the small-class underestimation problem. Then, the methods for determining transition probabilities are given. The vertical transition probabilities are obtained by computing the transition frequencies from drilling data, while the horizontal transition probabilities are estimated by using well data and the elongation ratios according to Walther's law. Finally, these models are used to simulate the reservoir lithofacies distribution of the Tahe oilfield in China. The results show that the conditional independence method performs better than the full independence counterpart in maintaining the true percentage composition and reproducing lithofacies spatial features.
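As a small illustration of the vertical step described above, the sketch below estimates a vertical transition probability matrix from upward transition frequencies in a single coded lithofacies log. The facies codes and the log are hypothetical, not Tahe oilfield data, and the horizontal matrices (scaled by elongation ratios via Walther's law) are not reproduced.

```python
import numpy as np

# Hypothetical vertical lithofacies log from one well, coded as integers
# (0 = mudstone, 1 = siltstone, 2 = sandstone); illustrative only.
log = [0, 0, 1, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0]
n_facies = 3

# Vertical transition probability matrix from upward transition frequencies.
counts = np.zeros((n_facies, n_facies))
for a, b in zip(log[:-1], log[1:]):
    counts[a, b] += 1
P_vertical = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P_vertical, 2))
```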
Suppose that C is a finite collection of patterns. Observe a Markov chain until one of the patterns in C occurs as a run; this time is denoted by τ. In this paper, we aim to give an easy way to calculate the mean waiting time E(τ) and the stopping probabilities P(τ = τ_A) with A ∈ C, where τ_A is the waiting time until the pattern A appears as a run.
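The paper derives E(τ) and the stopping probabilities analytically; as a purely illustrative counterpart, the sketch below estimates both quantities by simulating the chain until one pattern in C occurs as a run. The chain, the patterns and the sample size are arbitrary choices, not taken from the paper.

```python
import numpy as np

def waiting_time(P, patterns, start, rng, max_steps=10_000):
    """Simulate the chain from `start` until one of the patterns occurs as a
    run of consecutive states; return (tau, index of the winning pattern)."""
    path = [start]
    for t in range(1, max_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
        for k, pat in enumerate(patterns):
            if len(path) >= len(pat) and tuple(path[-len(pat):]) == tuple(pat):
                return t, k
    return max_steps, -1

# Two-state chain and a small collection C of patterns (runs of states).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
C = [(0, 0, 0), (1, 1)]
rng = np.random.default_rng(0)
results = [waiting_time(P, C, start=0, rng=rng) for _ in range(5_000)]
taus = np.array([t for t, _ in results])
wins = np.array([k for _, k in results])
print("E(tau) approx:", round(float(taus.mean()), 2))
print("P(tau = tau_A):", [round(float(np.mean(wins == k)), 3) for k in range(len(C))])
```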
The original modified method of the direct delayed reaction has been used to evaluate the food-obtaining strategy across spatial learning tasks in T-maze alternation. The optimal behavioral algorithms for each experimental day have been identified so that the animals obtain the maximum possible amount of food with the minimal number of mistakes. The Markov chain method has been used to predict the rat's behavioral strategy during the spatial learning task. Learning and decision-making represent a probabilistic transition process in which the animal's choice at each step (state) depends on the learning experience from the previous step (state).
The first passage time in Markov chains is defined as the first time that a chain reaches a specified state or set of lumped states. This state or lumped set may mark the first passage time of an interesting or rare event. In this study, we consider how to obtain the distribution of the first passage time to lumped states, constructed by gathering states together via the lumping method, for an irreducible Markov chain with a finite state space. Thanks to the lumping method, the chain's Markov property is preserved. Another practical benefit of the lumping method is the reduction of the state space achieved by gathering states together. As the obtained first passage distributions are continuous, they may be used in many fields such as reliability and risk analysis.
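For a discrete-time analogue of the quantity discussed above, the sketch below computes the distribution of the first time a chain enters a lumped set of states by restricting the transition matrix to the complement of that set, using P(T > n) = e_start^T Q^n 1. The four-state chain and the lumped set are invented for illustration.

```python
import numpy as np

def first_passage_distribution(P, start, target_set, n_max=200):
    """Distribution of the first time the chain enters `target_set`:
    P(T = n) = P(T > n-1) - P(T > n), with P(T > n) obtained by iterating
    the sub-matrix Q of P restricted to the complement of the target set."""
    keep = [i for i in range(P.shape[0]) if i not in target_set]
    Q = P[np.ix_(keep, keep)]
    v = np.zeros(len(keep))
    v[keep.index(start)] = 1.0
    tail = [1.0]                       # P(T > 0), assuming start is not in the target set
    for _ in range(n_max):
        v = v @ Q
        tail.append(v.sum())
    tail = np.array(tail)
    return tail[:-1] - tail[1:]        # pmf of T = 1, 2, ..., n_max

# Small illustrative chain; states {2, 3} are lumped into one target event.
P = np.array([
    [0.6, 0.3, 0.1, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.6, 0.2],
    [0.0, 0.2, 0.3, 0.5],
])
pmf = first_passage_distribution(P, start=0, target_set={2, 3})
print("P(T = 1..5):", np.round(pmf[:5], 3))
print("E[T] approx:", round(float(np.sum(np.arange(1, len(pmf) + 1) * pmf)), 2))
```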
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P. Let K denote the set of probability vectors on S. With every partition M of P we can associate a transition probability function P_M on K, defined in such a way that if p ∈ K and M ∈ M are such that ||pM|| > 0, then, with probability ||pM||, the vector p is transferred to the vector pM/||pM||. Here ||·|| denotes the l1-norm. In this paper we investigate convergence in distribution for Markov chains generated by transition probability functions induced by partitions of transition probability matrices. The main motivation for this investigation is the application of the convergence results obtained to filtering processes of partially observed Markov chains with a denumerable state space.
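A minimal finite-state sketch of the induced chain on probability vectors described above: split P into nonnegative matrices summing to P (interpreted, as in filtering, as the parts of P compatible with each possible observation), then repeatedly pick M with probability ||pM||_1 and renormalize. The two-state matrices below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-state transition matrix P split into a partition {M0, M1}, M0 + M1 = P.
# Think of M_y as the part of P compatible with observing output y.
M = [
    np.array([[0.5, 0.1],
              [0.2, 0.2]]),
    np.array([[0.1, 0.3],
              [0.3, 0.3]]),
]
P = M[0] + M[1]
assert np.allclose(P.sum(axis=1), 1.0)

def filter_step(p, M, rng):
    """One step of the induced chain on probability vectors: pick M_y with
    probability ||p M_y||_1 and move to p M_y / ||p M_y||_1."""
    weights = np.array([np.sum(p @ My) for My in M])
    y = rng.choice(len(M), p=weights)
    q = p @ M[y]
    return q / q.sum()

p = np.array([0.7, 0.3])
for _ in range(5):
    p = filter_step(p, M, rng)
print(np.round(p, 3))
```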