Journal Articles (422 articles found)
1. Mathematical Modeling of Possibility Markov Chains by Possibility Theory
Authors: Yoshiki Uemura, Takemura Kazuhisa, Kenji Kita. Applied Mathematics, 2024, No. 8, pp. 499-507 (9 pages).
Statistical regression models are input-oriented estimation models that account for observation errors. On the other hand, an output-oriented possibility regression model that accounts for system fluctuations is proposed. Furthermore, the possibility Markov chain is proposed, which has a disidentifiable state (posterior) and a nondiscriminable state (prior). In this paper, we first take up the entity efficiency evaluation problem as a case study of the posterior non-discriminable production possibility region and mention Fuzzy DEA with fuzzy constraints. Next, the case study of the ex-ante non-discriminable event setting is discussed. Finally, we introduce the measure of the fuzzy number and the equality relation and attempt to model the possibility Markov chain mathematically. Furthermore, we show that under ergodic conditions, the direct sum state can be decomposed and reintegrated using fuzzy OR logic. We had already constructed the Possibility Markov process based on the indifferent state of this world. In this paper, we try to extend it to the indifferent event in another world. It should be noted that we can obtain the possibility transfer matrix by full use of possibility theory.
Keywords: Possibility Markov Chain; Ergodic Condition; Direct Sum State; Prior Indiscriminate State; Posterior Discriminatory State
2. Air Quality Estimation Using Nonhomogeneous Markov Chains: A Case Study Comparing Two Rules Applied to Mexico City Data
Authors: Eliane R. Rodrigues, Juan A. Cruz-Juárez, Hortensia J. Reyes-Cervantes, Guadalupe Tzintzun. Journal of Environmental Protection, 2023, No. 7, pp. 561-582 (22 pages).
A nonhomogeneous Markov chain is applied to the study of the air quality classification in Mexico City when the so-called criterion pollutants are used. We consider the indices associated with air quality using two regulations where different ways of classification are taken into account. Parameters of the model are the initial and transition probabilities of the chain. They are estimated under the Bayesian point of view through samples generated directly from the corresponding posterior distributions. Using the estimated parameters, the probability of having an air quality index in a given hour of the day is obtained.
Keywords: Air Quality Index; Air Pollution; Mexico City; Nonhomogeneous Markov Chains; Bayesian Inference
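The abstract above describes Bayesian estimation of hour-dependent initial and transition probabilities. The paper's actual model and data are not reproduced here; the sketch below is a minimal illustration of that general idea, assuming synthetic hourly state data, a three-state coding, and independent Dirichlet(1) priors on each row of each hourly transition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly air-quality states over several days,
# coded 0 = good, 1 = regular, 2 = bad (hypothetical coding).
days, hours, n_states = 60, 24, 3
states = rng.integers(0, n_states, size=(days, hours))

# Count transitions separately for each hour of the day
# (nonhomogeneous chain: one transition matrix per hour).
counts = np.zeros((hours - 1, n_states, n_states))
for d in range(days):
    for h in range(hours - 1):
        counts[h, states[d, h], states[d, h + 1]] += 1

def sample_transition_matrices(counts, alpha=1.0):
    """Draw one transition matrix per hour from independent Dirichlet posteriors."""
    n_hours, n, _ = counts.shape
    P = np.empty_like(counts)
    for h in range(n_hours):
        for i in range(n):
            P[h, i] = rng.dirichlet(counts[h, i] + alpha)
    return P

P = sample_transition_matrices(counts)

# Probability of each state at a given hour: start from the empirical
# distribution at hour 0 and propagate through the hourly matrices.
p = np.bincount(states[:, 0], minlength=n_states) / days
for h in range(12):                  # distribution at hour 12
    p = p @ P[h]
print("state probabilities at hour 12:", np.round(p, 3))
```

Drawing many such posterior samples instead of a single one would yield credible intervals for the hourly state probabilities rather than a point estimate.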
3. Markov Chains Based on Random Generalized 1-Flipper Operations for Connected Regular Multi-digraphs
Authors: 邓爱平, 伍陈晨, 王枫杰, 胡宇庭. Journal of Donghua University (English Edition), 2023, No. 1, pp. 110-115 (6 pages).
The properties of generalized flip Markov chains on connected regular digraphs are discussed. The 1-Flipper operation on Markov chains for undirected graphs is generalized to that for multi-digraphs. The generalized 1-Flipper operation preserves the regularity and weak connectivity of multi-digraphs. The generalized 1-Flipper operation is proved to be symmetric. Moreover, it is presented that a series of random generalized 1-Flipper operations eventually lead to a uniform probability distribution over all connected d-regular multi-digraphs without loops.
Keywords: random graph transformation; regular multi-digraph; Markov chain; 1-Flipper; triangle reverse
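The generalized 1-Flipper is defined precisely in the paper; purely as an illustration of the kind of degree-preserving random rewiring such chains are built from, the sketch below applies a generic double arc swap to a small regular multi-digraph. The swap preserves every in- and out-degree but is not claimed to be the paper's operation, and unlike the generalized 1-Flipper it does not guarantee that weak connectivity is preserved.

```python
import random

def random_arc_swap(arcs, rng=random):
    """Degree-preserving double arc swap on a multi-digraph given as a list of arcs.

    Pick two arcs (a, b) and (c, d) and replace them with (a, d) and (c, b).
    This keeps every in-degree and out-degree unchanged. It is a generic
    rewiring move, not necessarily the generalized 1-Flipper of the paper.
    """
    i, j = rng.sample(range(len(arcs)), 2)
    (a, b), (c, d) = arcs[i], arcs[j]
    if a == d or c == b:            # avoid creating loops
        return arcs
    new = arcs.copy()
    new[i], new[j] = (a, d), (c, b)
    return new

# A small 2-regular multi-digraph on 4 vertices (every vertex has
# in-degree = out-degree = 2); repeated arcs are allowed.
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 0), (3, 0), (3, 1)]
for _ in range(1000):
    arcs = random_arc_swap(arcs)
print(arcs)
```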
4. Weighted Markov chains for forecasting and analysis in incidence of infectious diseases in Jiangsu Province, China (Cited: 10)
Authors: Zhihang Peng, Changjun Bao, Yang Zhao, Honggang Yi, Letian Xia, Hao Yu, Hongbing Shen, Feng Chen. The Journal of Biomedical Research, 2010, No. 3, pp. 207-214 (8 pages).
This paper first applies the sequential cluster method to set up the classification standard of infectious disease incidence state based on the fact that there are many uncertainty characteristics in the incidence course. Then the paper presents a weighted Markov chain, a method which is used to predict the future incidence state. This method assumes the standardized self-coefficients as weights based on the special characteristics of infectious disease incidence being a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to make the long-term benefit of the decision optimal. Our method is successfully validated using existing incidence data of infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology.
Keywords: weighted Markov chains; sequential cluster; infectious diseases; forecasting and analysis; Markov chain Monte Carlo
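A common textbook formulation of weighted Markov chain forecasting, which may differ in detail from the scheme used in the paper, combines lag-k transition-matrix predictions with weights derived from the normalized absolute autocorrelation coefficients of the categorized incidence series. A minimal sketch with hypothetical incidence states:

```python
import numpy as np

def lag_transition_matrix(seq, n_states, lag):
    """Empirical transition matrix for a given lag, with uniform fallback rows."""
    P = np.zeros((n_states, n_states))
    for i, j in zip(seq[:-lag], seq[lag:]):
        P[i, j] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    return np.where(row_sums > 0, P / np.maximum(row_sums, 1), 1.0 / n_states)

def weighted_markov_forecast(seq, n_states, max_lag=3):
    """Combine lag-k predictions with weights from normalized |autocorrelation|."""
    seq = np.asarray(seq)
    x = seq.astype(float)
    acf = np.array([abs(np.corrcoef(x[:-k], x[k:])[0, 1]) for k in range(1, max_lag + 1)])
    w = acf / acf.sum()
    probs = np.zeros(n_states)
    for k in range(1, max_lag + 1):
        P_k = lag_transition_matrix(seq, n_states, k)
        probs += w[k - 1] * P_k[seq[-k]]       # lag-k prediction from state k steps back
    return probs

# Hypothetical yearly incidence states (0 = low, 1 = medium, 2 = high)
history = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1, 1, 0, 1, 1, 2]
print("next-state probabilities:", np.round(weighted_markov_forecast(history, 3), 3))
```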
5. STRONG LAW OF LARGE NUMBERS AND SHANNON-MCMILLAN THEOREM FOR MARKOV CHAINS FIELD ON CAYLEY TREE (Cited: 2)
Authors: 杨卫国, 刘文. Acta Mathematica Scientia, 2001, No. 4, pp. 495-502 (8 pages).
This paper studies the strong law of large numbers and the Shannon-McMillan theorem for Markov chains field on Cayley tree. The authors first prove the strong law of large numbers on the frequencies of states and ordered couples of states for Markov chains field on Cayley tree. Then they prove the Shannon-McMillan theorem with a.e. convergence for Markov chains field on Cayley tree. In the proof, a new technique in the study of strong limit theorems in probability theory is applied.
Keywords: Cayley tree; random field; Markov chains field; strong law of large numbers; Shannon-McMillan theorem
6. Model of Markov Chains in Space-Time Random Environments (Cited: 2)
Authors: YANG Guangyu, HU Dihe. Wuhan University Journal of Natural Sciences, 2007, No. 2, pp. 225-229 (5 pages).
A general framework of stochastic model for a Markov chain in a space-time random environment is introduced, where the environment ξ* := {ξ_{t,x} : t ∈ N, x ∈ X} is a random field. We study the dependence relations between the environment and the original chain, especially the "feedback". Some equivalence theorems and laws of large numbers are obtained.
Keywords: Markov chains in space-time random environments; feedback; Markovian environments
7. THE CONVERGENCE OF NONHOMOGENEOUS MARKOV CHAINS IN GENERAL STATE SPACES BY THE COUPLING METHOD (Cited: 1)
Authors: Zhifeng ZHU, Shaoyi ZHANG, Fanji TIAN. Acta Mathematica Scientia, 2021, No. 5, pp. 1777-1787 (11 pages).
We investigate the convergence of nonhomogeneous Markov chains in general state space by using the f-norm and the coupling method, and thus, a sufficient condition for the convergence of nonhomogeneous Markov chains in general state space is obtained.
Keywords: f-norm; coupling; nonhomogeneous Markov chains; convergence
8. Fully Polarimetric Land Cover Classification Based on Markov Chains (Cited: 2)
Authors: Georgia Koukiou, Vassilis Anastassopoulos. Advances in Remote Sensing, 2021, No. 3, pp. 47-65 (19 pages).
A novel land cover classification procedure is presented utilizing the information content of fully polarimetric SAR images. The Cameron coherent target decomposition (CTD) is employed to characterize land cover pixel by pixel. Cameron's CTD is employed since it provides a complete set of elementary scattering mechanisms to describe the physical properties of the scatterer. The novelty of the proposed land classification approach lies in the fact that the features used for classification are not the types of the elementary scatterers themselves, but the way these types of scatterers alternate from pixel to pixel on the SAR image. Thus, transition matrices that represent local Markov models are used as classification features for land cover classification. The classification rule employs only the most important transitions for decision making. The Frobenius inner product is employed as the similarity measure. Ten different types of land cover are used for testing the proposed method. In this aspect, the classification performance is significantly high.
Keywords: Fully Polarimetric SAR; Coherent Decomposition; Elementary Scatterers; Markov Chains; Land Cover Classification
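The abstract states that local transition matrices of scatterer types serve as classification features and that the Frobenius inner product is the similarity measure. The sketch below illustrates only this matching step with random stand-in data; the scatterer types, class names, and prototype construction are assumptions, not the paper's pipeline (which works on Cameron CTD output pixel by pixel).

```python
import numpy as np

def transition_matrix(labels, n_types):
    """Row-normalized matrix of how scatterer types alternate along a pixel sequence."""
    T = np.zeros((n_types, n_types))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.full_like(T, 1.0 / n_types), where=rows > 0)

def frobenius_similarity(A, B):
    """Normalized Frobenius inner product <A, B> / (||A|| ||B||)."""
    return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B) + 1e-12)

rng = np.random.default_rng(1)
n_types = 4                                # e.g. 4 elementary scatterer types (illustrative)

# Hypothetical class prototypes: one transition matrix per land-cover class
prototypes = {c: transition_matrix(rng.integers(0, n_types, 500), n_types)
              for c in ["urban", "forest", "water"]}

# Classify an unknown patch by comparing its own transition matrix with each prototype
patch = transition_matrix(rng.integers(0, n_types, 200), n_types)
best = max(prototypes, key=lambda c: frobenius_similarity(prototypes[c], patch))
print("assigned class:", best)
```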
9. Using Markov chains to predict the natural progression of diabetic retinopathy
Authors: Priyanka Srikanth. International Journal of Ophthalmology (English edition), 2015, No. 1, pp. 132-137 (6 pages).
AIM: To study the natural progression of diabetic retinopathy in patients with type 2 diabetes. METHODS: This was an observational study of 153 cases with type 2 diabetes from 2010 to 2013. The state of each patient was noted at the end of each year and transition matrices were developed to model movement between years. Patients who progressed to severe non-proliferative diabetic retinopathy (NPDR) were treated. Markov chains and the Chi-square test were used for statistical analysis. RESULTS: We modelled the transition of 153 patients from NPDR to blindness on an annual basis. At the end of year 3, we compared results from the Markov model versus actual data. The results from the Chi-square test confirmed that there was statistically no significant difference (P = 0.70), which provided assurance that the model was robust enough to estimate mean sojourn times. The key finding was that a patient entering the system in the mild NPDR state is expected to stay in that state for 5 y, followed by 1.07 y in moderate NPDR, and to remain in the severe NPDR state for 1.33 y before moving into PDR for roughly 8 y. It is therefore expected that such a patient entering the model in a state of mild NPDR will enter blindness after 15.29 y. CONCLUSION: Patients stay for long time periods in mild NPDR before transitioning into moderate NPDR. However, they move rapidly from moderate NPDR to proliferative diabetic retinopathy (PDR) and stay in that state for long periods before transitioning into blindness.
Keywords: diabetes mellitus; diabetic retinopathy; Markov chains; Chi-square test; transition probability matrix
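For a progressive chain like the one described, mean sojourn times follow directly from an annual transition matrix: the expected number of years spent in each transient state before absorption is given by the fundamental matrix N = (I - Q)^(-1). The matrix below is illustrative only, with diagonal entries chosen so that the sojourn times roughly echo the figures quoted in the abstract; it is not the matrix estimated in the study.

```python
import numpy as np

# Hypothetical annual transition matrix over the states
# [mild NPDR, moderate NPDR, severe NPDR, PDR, blindness];
# blindness is treated as absorbing. Values are illustrative only.
P = np.array([
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.07, 0.93, 0.00, 0.00],
    [0.00, 0.00, 0.25, 0.75, 0.00],
    [0.00, 0.00, 0.00, 0.875, 0.125],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

Q = P[:4, :4]                      # transitions among transient states
N = np.linalg.inv(np.eye(4) - Q)   # fundamental matrix

# Expected years spent in each state when starting in mild NPDR,
# and expected total time until blindness (absorption).
sojourn = N[0]
print("expected years per state:", np.round(sojourn, 2))
print("expected years to blindness:", round(sojourn.sum(), 2))
```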
10. POISSON LIMIT THEOREM FOR COUNTABLE MARKOV CHAINS IN MARKOVIAN ENVIRONMENTS
Authors: 方大凡, 王汉兴, 唐矛宁. Applied Mathematics and Mechanics (English Edition), 2003, No. 3, pp. 298-306 (9 pages).
A countable Markov chain in a Markovian environment is considered. A Poisson limit theorem for the chain recurring to small cylindrical sets is mainly achieved. In order to prove this theorem, the entropy function h is introduced and the Shannon-McMillan-Breiman theorem for the Markov chain in a Markovian environment is shown. It is well known that a Markov process in a Markovian environment is generally not a standard Markov chain, so an example of Poisson approximation for a process which is not a Markov process is given. On the other hand, when the environmental process degenerates to a constant sequence, a Poisson limit theorem for countable Markov chains, which generalizes Pitskel's result for finite Markov chains, is obtained.
Keywords: Poisson distributions; Markov chains; random environments
11. A Note on Markov Chains in Stationary Random Environments
Authors: Wang Hanxing (College of Sciences, Shanghai University), Fang Dafan (Yueyang Normal College). Advances in Manufacturing, 1999, No. 2, pp. 16-20 (5 pages).
We consider Markov chains in stationary random environments. The conservative set C of the corresponding skew Markov chain of this process can be thought of as a recurrent set of a standard Markov chain. In some simpler cases, we give some sufficient conditions under which the conservative set C can be decomposed into at most countably many minimal closed sets.
Keywords: Markov chains; environmental processes; conservative sets
12. Evaluation of Growth Area Foliar Red Tomato Crop Using Markov Chains
Authors: Luz E. Marín Vaca, Miguel Aguilar Cortes, Oscar G. Villegas Torres, Nadia Lara Ruiz, Martha Lilia Domínguez Patiño. Agricultural Sciences, 2016, No. 6, pp. 420-424 (5 pages).
Evaluating the leaf area of red tomato plants aims to determine the growth and development of crops established in two production cycles, spring-summer and autumn-winter, in order to compare the influence of temperature on leaf-area growth. Repeated weekly samples were taken, identifying the week and determining leaf-area growth and development using Markov chains, with a transition matrix used to describe and represent the finite number of physiological states in a flowchart. From the steady-state analysis and the probability equations, leaf-area growth is established from the seventh week in the first cycle (C1), with probabilities of 0.266, 0.264 and 0.263 in the last weeks. An increase of 6% was observed in the autumn-winter cycle compared with spring-summer.
Keywords: Leaf Area; Tomato; Markov Chains
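The abstract refers to a steady-state analysis of the transition matrix. As a minimal sketch of that computation, assuming a hypothetical three-state matrix of weekly growth states (not the matrix estimated in the study), the stationary distribution can be obtained from the left eigenvector for eigenvalue 1:

```python
import numpy as np

# Illustrative 3-state transition matrix over hypothetical growth states
# (slow, moderate, fast leaf-area growth); values are not from the paper.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.4, 0.4],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. pi = pi @ P with pi summing to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()
print("stationary distribution:", np.round(pi, 3))

# Cross-check by iterating the chain from a uniform start.
p = np.full(3, 1 / 3)
for _ in range(200):
    p = p @ P
print("after 200 steps:       ", np.round(p, 3))
```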
13. The Relations Among Various Markov Chains (Cited: 8)
Authors: Hu Dihe (College of Mathematics and Statistics, Wuhan University, Wuhan 430072, China). Wuhan University Journal of Natural Sciences, 2001, No. 3, pp. 643-648 (6 pages).
Some basic equations and the relations among various Markov chains are established. These results form the basis for investigating the theory of Markov chains in random environments.
Keywords: Hopf Markov chain; Markov chain in random environment; skew product Markov chain; sample Markov chain in random environment
14. Intrusion detection based on system calls and homogeneous Markov chains (Cited: 8)
Authors: Tian Xinguang, Duan Miyi, Sun Chunlai, Li Wenfa. Journal of Systems Engineering and Electronics, 2008, No. 3, pp. 598-605 (8 pages).
A novel method for detecting anomalous program behavior is presented, which is applicable to host-based intrusion detection systems that monitor system call activities. The method constructs a homogeneous Markov chain model to characterize the normal behavior of a privileged program, and associates the states of the Markov chain with the unique system calls in the training data. At the detection stage, the probabilities that the Markov chain model supports the system call sequences generated by the program are computed. A low probability indicates an anomalous sequence that may result from intrusive activities. Then a decision rule based on the number of anomalous sequences in a locality frame is adopted to classify the program's behavior. The method gives attention to both computational efficiency and detection accuracy, and is especially suitable for on-line detection. It has been applied to practical host-based intrusion detection systems.
Keywords: intrusion detection; Markov chain; anomaly detection; system call
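A rough sketch of the general approach described above, with made-up system call names and an arbitrary threshold: transition probabilities between system calls are estimated from normal traces, short sequences are scored by their average log-probability under the chain, and the number of low-probability sequences in a locality frame drives the decision. The specific probabilities, thresholds, and decision rule of the paper are not reproduced here.

```python
import numpy as np
from collections import defaultdict

def train_chain(traces):
    """Estimate transition probabilities between distinct system calls seen in training."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace[:-1], trace[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def sequence_logprob(chain, seq, floor=1e-6):
    """Average log-probability of a short call sequence under the trained chain."""
    lp = sum(np.log(chain.get(a, {}).get(b, floor)) for a, b in zip(seq[:-1], seq[1:]))
    return lp / max(len(seq) - 1, 1)

# Hypothetical training traces of a privileged program (call names are made up).
normal = [["open", "read", "mmap", "read", "close"],
          ["open", "read", "read", "close"],
          ["open", "mmap", "read", "close"]]
chain = train_chain(normal)

threshold = -3.0          # illustrative log-probability threshold
window = [["open", "read", "close"], ["open", "execve", "socket", "close"]]
anomalous = sum(sequence_logprob(chain, s) < threshold for s in window)
print("anomalous sequences in frame:", anomalous, "of", len(window))
```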
15. ON MARKOV CHAINS IN SPACE-TIME RANDOM ENVIRONMENTS (Cited: 7)
Authors: 胡迪鹤, 胡晓予. Acta Mathematica Scientia, 2009, No. 1, pp. 1-10 (10 pages).
In Section 1, the authors establish the models of two kinds of Markov chains in space-time random environments (MCSTRE and MCSTRE(+)) with abstract state space. In Section 2, the authors construct a MCSTRE and a MCSTRE(+) by an initial distribution Φ and a random Markov kernel (RMK) p(γ). In Section 3, the authors establish several equivalence theorems on MCSTRE and MCSTRE(+). Finally, the authors give two very important examples of MCSTRE, the random walk in space-time random environment and the Markov branching chain in space-time random environment.
Keywords: Random Markov kernel; Markov chain in space-time random environment; random walk in space-time random environment; Markov branching chain in space-time random environment
16. Oil-gas reservoir lithofacies stochastic modeling based on one- to three-dimensional Markov chains (Cited: 2)
Authors: WANG Zhi-zhong, HUANG Xiang, LIANG Yu-ru. Journal of Central South University, 2018, No. 6, pp. 1399-1408 (10 pages).
Stochastic modeling techniques have been widely applied to oil-gas reservoir lithofacies. Markov chain simulation, however, is still under development, mainly because of the difficulties in reasonably defining conditional probabilities for multi-dimensional Markov chains and determining transition probabilities for horizontal strike and dip directions. The aim of this work is to solve these problems. Firstly, the calculation formulae of conditional probabilities for multi-dimensional Markov chain models are proposed under the full independence and conditional independence assumptions. It is noted that multi-dimensional Markov models based on the conditional independence assumption are reasonable because these models avoid the small-class underestimation problem. Then, the methods for determining transition probabilities are given. The vertical transition probabilities are obtained by computing the transition frequencies from drilling data, while the horizontal transition probabilities are estimated by using well data and the elongation ratios according to Walther's law. Finally, these models are used to simulate the reservoir lithofacies distribution of Tahe oilfield in China. The results show that the conditional independence method performs better than the full independence counterpart in maintaining the true percentage composition and reproducing lithofacies spatial features.
Keywords: independence assumption; Markov chain; reservoir lithofacies; small-class underestimation; transition probability
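The vertical transition probabilities described in the abstract come from transition frequencies in drilling data. The sketch below estimates such a matrix from a hypothetical lithofacies log; the final block is only a crude placeholder for the horizontal estimation via well data and elongation ratios under Walther's law, which the paper handles differently.

```python
import numpy as np

# Hypothetical lithofacies column from one well, sampled at regular depth
# intervals: 0 = mudstone, 1 = siltstone, 2 = sandstone (codes are illustrative).
log = [0, 0, 1, 2, 2, 2, 1, 0, 0, 1, 1, 2, 2, 1, 0, 0, 0, 1, 2, 2]

def vertical_transition_matrix(log, n_facies):
    """Vertical transition probabilities from upward transition frequencies."""
    counts = np.zeros((n_facies, n_facies))
    for a, b in zip(log[:-1], log[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)

P_vertical = vertical_transition_matrix(log, 3)
print(np.round(P_vertical, 2))

# A crude stand-in for the horizontal direction: stretch persistence on the
# diagonal by an assumed elongation ratio and renormalize. This is only a
# placeholder for the Walther's-law-based estimation described in the paper.
elongation_ratio = 10.0
P_horizontal = P_vertical.copy()
np.fill_diagonal(P_horizontal, np.diag(P_vertical) * elongation_ratio)
P_horizontal /= P_horizontal.sum(axis=1, keepdims=True)
print(np.round(P_horizontal, 2))
```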
17. Waiting times and stopping probabilities for patterns in Markov chains
Authors: ZHAO Min-zhi, XU Dong, ZHANG Hui-zeng. Applied Mathematics (A Journal of Chinese Universities), 2018, No. 1, pp. 25-34 (10 pages).
Suppose that C is a finite collection of patterns. Observe a Markov chain until one of the patterns in C occurs as a run. This time is denoted by τ. In this paper, we aim to give an easy way to calculate the mean waiting time E(τ) and the stopping probabilities P(τ = τ_A) with A ∈ C, where τ_A is the waiting time until the pattern A appears as a run.
Keywords: pattern; Markov chain; stopping probability; waiting time
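The paper derives E(τ) and the stopping probabilities analytically. As a quick plausibility check rather than the paper's method, the same quantities can be estimated by simulating the chain until some pattern in C occurs as a run; the chain, patterns, and starting state below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state chain and a collection C of patterns over its states.
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
patterns = {"A": (0, 0, 1), "B": (1, 1)}   # illustrative patterns

def first_pattern_time(P, patterns, start=0, max_steps=10_000):
    """Run the chain until one pattern in C occurs as a run; return (pattern, time)."""
    path = [start]
    for t in range(1, max_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
        for name, pat in patterns.items():
            if len(path) >= len(pat) and tuple(path[-len(pat):]) == pat:
                return name, t
    return None, max_steps

runs = [first_pattern_time(P, patterns) for _ in range(20_000)]
times = np.array([t for _, t in runs])
print("estimated E(tau):", times.mean().round(2))
for name in patterns:
    print(f"estimated P(tau = tau_{name}):",
          round(sum(n == name for n, _ in runs) / len(runs), 3))
```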
18. The Prognosis of Delayed Reactions in Rats Using Markov Chains Method
Authors: Sulkhan N. Tsagareli, Nino G. Archvadze, Otar Tavdishvili, Marika Gvajaia. Journal of Behavioral and Brain Science, 2016, No. 1, pp. 19-27 (9 pages).
The original modified method of the direct delayed reaction has been used for the evaluation of the food-obtaining strategy across spatial learning tasks in T-maze alternation. The optimal behavioral algorithms for each experimental day have been identified so that the animals obtain the maximum possible food amount with a minimal number of mistakes. The Markov chain method has been used for the prognosis of the rat's behavioral strategy during the spatial learning task. Learning and decision-making represent a probabilistic transition process where the animal's choice at each step (state) depends on the learning experience from the previous step (state).
Keywords: Delayed Reactions; Behavioral Algorithm; Markov Chain
19. Distribution of First Passage Times for Lumped States in Markov Chains
Authors: Murat Gul, Salih Celebioglu. Journal of Mathematics and System Science, 2015, No. 8, pp. 315-329 (15 pages).
First passage time in Markov chains is defined as the first time that a chain reaches a specified state or set of lumped states. This state or set of lumped states may correspond to an interesting, rare or otherwise notable event. In this study, we consider obtaining the distribution of the first passage time to lumped states, which are constructed by gathering states through the lumping method, for an irreducible Markov chain with finite state space. Thanks to the lumping method, the chain's Markov property is preserved. Another practical benefit of the lumping method is the reduction of the state space achieved by gathering states together. As the obtained first passage distributions are continuous, they may be used in many fields such as reliability and risk analysis.
Keywords: Markov chain; distribution of first passage time; lumped states
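For a finite chain, the distribution of the first passage time into a set of lumped states can be computed by treating that set as absorbing: with Q the transitions among the remaining states and R the one-step entries into the set, P(τ = n) = (Q^(n-1) R 1)[start]. The sketch below uses an arbitrary four-state chain and target set, not an example from the paper, and does not reproduce the paper's lumping construction.

```python
import numpy as np

# Illustrative 4-state chain; the "lumped" target is the set {2, 3}.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.6, 0.1],
    [0.3, 0.1, 0.1, 0.5],
])
target = [2, 3]
others = [0, 1]

Q = P[np.ix_(others, others)]       # transitions that avoid the lumped set
R = P[np.ix_(others, target)]       # one-step entries into the lumped set

# First passage distribution from state 0:
#   P(tau = n) = (Q^{n-1} R 1)[start], n = 1, 2, ...
start = 0
dist, Qpow = [], np.eye(len(others))
for n in range(1, 21):
    dist.append((Qpow @ R).sum(axis=1)[start])
    Qpow = Qpow @ Q
print("P(tau = n), n = 1..20:", np.round(dist, 4))
print("mass captured:", round(sum(dist), 4))
```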
20. On Markov Chains Induced by Partitioned Transition Probability Matrices (Cited: 1)
Authors: Thomas KAIJSER. Acta Mathematica Sinica, English Series, 2011, No. 3, pp. 441-476 (36 pages).
Let S be a denumerable state space and let P be a transition probability matrix on S. If a denumerable set M of nonnegative matrices is such that the sum of the matrices is equal to P, then we call M a partition of P. Let K denote the set of probability vectors on S. With every partition M of P we can associate a transition probability function PM on K defined in such a way that if p ∈ K and M ∈ M are such that ||pM|| > 0, then, with probability ||pM||, the vector p is transferred to the vector pM/||pM||. Here ||·|| denotes the l1-norm. In this paper we investigate the convergence in distribution for Markov chains generated by transition probability functions induced by partitions of transition probability matrices. The main motivation for this investigation is the application of the convergence results obtained to filtering processes of partially observed Markov chains with denumerable state space.
Keywords: Markov chains on nonlocally compact spaces; filtering processes; hidden Markov chains; Kantorovich metric; barycenter
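The abstract defines the induced chain on probability vectors explicitly: pick M ∈ M with probability ||pM|| and move p to pM/||pM||. The sketch below simulates this for a finite two-state example in which the partition comes from hypothetical observation likelihoods, which is the filtering-process setting mentioned as the motivation; the matrices are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# A finite example: P is partitioned as P = M0 + M1 by splitting each
# column according to hypothetical observation likelihoods, which is how
# such partitions arise from partially observed (hidden) Markov chains.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # B[j, k] = P(observation k | next state j), illustrative
              [0.2, 0.8]])
partition = [P * B[:, k] for k in range(2)]   # M_k[i, j] = P[i, j] * B[j, k]
assert np.allclose(sum(partition), P)

# Induced chain on probability vectors: choose M_k with probability ||p M_k||_1,
# then replace p by the normalized vector p M_k / ||p M_k||_1.
p = np.array([0.5, 0.5])
for step in range(10):
    weights = np.array([(p @ M).sum() for M in partition])
    k = rng.choice(len(partition), p=weights)
    p = (p @ partition[k]) / weights[k]
    print(f"step {step + 1}: observation {k}, filter {np.round(p, 3)}")
```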