The analysis of bird ringing data often comes with potential sources of error and bias, as ring wear and/or loss can affect mark-recapture analyses and produce erroneous estimates of survival. Furthermore, ring wear and loss rates may differ between and within species based on the habitat they use, the species' life-history traits and behaviour, and the type of ring. In this study we use resighting data from a long-term double-marking experiment to directly estimate the rate of colour-ring loss among different Dalmatian Pelican colonies over time, evaluate possible factors that could contribute to differential ring loss, and assess how it may bias the results of mark-resighting analyses. Based on 14,849 resightings from 1275 individuals and using multi-state continuous-time hidden Markov models (HMMs), we showed that the probability of ring loss differed markedly among colonies, ranging from 0.10 to 0.42 within the first year of marking, whereas the cumulative probability of losing a ring after ten years ranged from 0.64 to 0.99. These rates are among the highest estimated when compared with previous studies in waterbirds. Our approach to assessing the intra-specific variance in ring loss identified several potentially involved factors, such as the use of glue and the accuracy of fledgling age, and allowed us to further hypothesise an effect of environmental factors. Finally, our results showed that ring loss can be a significant challenge for the assessment of the species' population dynamics using mark-recapture methods, as survival was consistently underestimated when ring loss was not accounted for and varied significantly among colonies.
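The bias mechanism can be sketched with a constant-hazard toy model (an illustrative simplification, not the paper's multi-state continuous-time HMM): back out a loss rate from a first-year loss probability, project it to ten years, and note that apparent survival is true survival times ring retention. The true-survival value below is invented for illustration.

```python
import math

def cumulative_loss(first_year_loss, years):
    # Constant-hazard sketch: infer a loss rate from the
    # first-year loss probability, then project it forward
    rate = -math.log(1.0 - first_year_loss)
    return 1.0 - math.exp(-rate * years)

# Ten-year cumulative loss for the reported first-year extremes
print(round(cumulative_loss(0.10, 10), 2))  # → 0.65
print(round(cumulative_loss(0.42, 10), 2))  # → 1.0

# Apparent survival = true survival x ring retention (illustrative values)
true_survival, retention = 0.90, 1.0 - 0.42
print(round(true_survival * retention, 3))  # → 0.522
```

The projected ten-year losses from the reported first-year extremes land close to the 0.64-0.99 range in the abstract, which is what makes the constant-hazard sketch a reasonable intuition pump here.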
Epilepsy is one of the most prevalent neurological disorders, affecting 70 million people worldwide. The present work focuses on designing an efficient algorithm for automatic seizure detection using the electroencephalogram (EEG), a noninvasive procedure for recording neuronal activity in the brain. The underlying dynamics of EEG signals are extracted to differentiate healthy and seizure EEG signals. Shannon entropy, collision entropy, transfer entropy, conditional probability, and Hjorth parameter features are extracted from subbands of the tunable Q wavelet transform. The efficient decomposition level for each feature vector is selected using the Kruskal-Wallis test to achieve good classification. The different features are combined using the discriminant correlation analysis fusion technique to form a single fused feature vector. The accuracy of the proposed approach is highest for Q=2 and J=10. Transfer entropy is observed to be significant for different class combinations. The proposed approach achieved 100% accuracy in classifying healthy versus seizure EEG signals using simple and robust features and a hidden Markov model with little computation time. The efficiency of the proposed approach is also evaluated in classifying seizure and non-seizure surface EEG signals, where the system achieved 96.87% accuracy in classifying surface seizure and non-seizure EEG segments using efficient features extracted from different J levels.
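Of the listed features, Shannon entropy is the simplest to sketch. A minimal version over a discretized signal segment (symbol counts, not necessarily the paper's exact estimator) is:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    # Shannon entropy in bits of a discretized signal segment
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A balanced segment is maximally uncertain for two symbols
print(shannon_entropy([0, 0, 1, 1]))  # → 1.0
```

Seizure and healthy subbands differ in how concentrated their amplitude distributions are, which is exactly what such entropy features quantify.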
This paper presents an anomaly detection approach to detect intrusions into computer systems. In this approach, a hierarchical hidden Markov model (HHMM) is used to represent a temporal profile of normal behavior in a computer system. The HHMM of the norm profile is learned from historic data of the system's normal behavior. The observed behavior of the system is analyzed to infer the probability that the HHMM of the norm profile supports the observed behavior. A low probability of support indicates an anomalous behavior that may result from intrusive activities. The model was implemented and tested on the UNIX system call sequences collected by the University of New Mexico group. The testing results showed that the model can clearly identify anomalous activities and performs better than a flat hidden Markov model.
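The scoring step can be sketched with a flat discrete HMM (the paper's model is hierarchical, and the transition/emission numbers below are made up): compute the scaled forward log-likelihood of an observed call sequence and flag it when the per-symbol score falls below that of typical normal behavior.

```python
import math

def forward_loglik(obs, pi, A, B):
    # Scaled forward algorithm: log P(obs | HMM) for a discrete HMM
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    ll = 0.0
    for o in obs[1:]:
        c = sum(alpha)                      # scaling avoids underflow
        ll += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return ll + math.log(sum(alpha))

# Toy two-state norm profile (illustrative numbers)
pi = [1.0, 0.0]
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.9, 0.1], [0.1, 0.9]]

normal, odd = [0, 0, 0, 0], [1, 1, 1, 1]
score = lambda s: forward_loglik(s, pi, A, B) / len(s)  # per-symbol score
print(score(normal) > score(odd))  # → True: the odd sequence looks anomalous
```

Normalizing by sequence length makes scores comparable across traces of different lengths, which matters when thresholding system-call streams.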
Aiming at solving the problems of machine learning in fault diagnosis, a diagnosis approach is proposed based on the hidden Markov model (HMM) and the support vector machine (SVM). An HMM usually describes intra-class measures well and is good at dealing with continuous dynamic signals, while an SVM expresses inter-class differences effectively and has strong classification ability. This approach builds on the merits of both. An experiment was conducted on the transmission system of a helicopter: with features extracted from vibration signals of the gearbox, the HMM-SVM based diagnostic approach was trained and used to monitor and diagnose the gearbox's faults. The results show that this method achieves higher diagnostic accuracy with small training samples than either HMM-based or SVM-based diagnosis alone.
With the increasing availability of precipitation radar data from space, enhancement of the resolution of spaceborne precipitation observations is important, particularly for hazard prediction and climate modeling at local scales relevant to extreme precipitation intensities and gradients. In this paper, the statistical characteristics of radar precipitation reflectivity data are studied and modeled using a hidden Markov tree (HMT) in the wavelet domain. Then, a high-resolution interpolation algorithm is proposed for spaceborne radar reflectivity using the HMT model as prior information. Owing to the small and transient storm elements embedded in the larger and slowly varying elements, the radar precipitation data exhibit distinct multiscale statistical properties, including a non-Gaussian structure and scale-to-scale dependency. An HMT model can capture these statistical properties well: the wavelet coefficients in each sub-band are characterized by a Gaussian mixture model (GMM), and the wavelet coefficients from the coarse scale to the fine scale are described using a multiscale Markov process. The state probabilities of the GMM are determined using the expectation maximization method, and other parameters, for instance the variance decay parameters in the HMT model, are learned and estimated from high-resolution ground radar reflectivity images. Using the prior model, the wavelet coefficients at finer scales are estimated using local Wiener filtering. The interpolation algorithm is validated using data from the precipitation radar onboard the Tropical Rainfall Measuring Mission satellite, and the reconstructed results are found to enhance the spatial resolution while optimally reproducing the local extremes and gradients.
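The per-coefficient observation model can be sketched as a zero-mean two-component Gaussian mixture, the "small"/"large" hidden states of the HMT. The weights and variances below are illustrative, not learned parameters:

```python
import math

def gmm_pdf(x, weights, sigmas):
    # Zero-mean Gaussian mixture density over a wavelet coefficient
    return sum(w * math.exp(-x * x / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))
               for w, s in zip(weights, sigmas))

# "Small" state: narrow Gaussian; "large" state: wide Gaussian
weights, sigmas = [0.8, 0.2], [0.5, 3.0]
print(gmm_pdf(0.1, weights, sigmas) > gmm_pdf(5.0, weights, sigmas))  # → True
```

The heavy tail contributed by the wide component is what lets the mixture model the non-Gaussian, peaky distribution of precipitation wavelet coefficients.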
Translation software has become an important tool for communication between different languages. People's requirements for translation keep rising, mainly reflected in the desire for barrier-free cultural exchange. Even with a large corpus, the performance of statistical machine translation based on words and phrases is limited by the small size of the modeling units. Previous statistical methods rely primarily on the size of the corpus and the number of its statistical results to avoid ambiguity in translation, ignoring context. To support the ongoing improvement of translation methods built upon deep learning, we propose a translation algorithm based on the hidden Markov model to improve the use of context in the translation process. During translation, our hidden Markov model prediction chain selects a number of phrases with the highest resulting probability to form a sentence. The collection of all generated sentences forms a topic sequence. Using probabilities and article sequences determined from the training set, our method again applies the hidden Markov model to form the final translation, improving context relevance in the process of translation. This algorithm improves the accuracy of translation, avoids combinations of invalid words, and enhances the readability and meaning of the resulting translation.
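Selecting the highest-probability phrase chain is, at its core, a Viterbi decode. A minimal discrete-HMM version (toy states and probabilities, not the paper's translation model) is:

```python
def viterbi(obs, pi, A, B):
    # Most probable hidden-state path for a discrete HMM
    n = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]
    back = []
    for o in obs[1:]:
        prev, step, delta = delta, [], []
        for j in range(n):
            best = max(range(n), key=lambda i: prev[i] * A[i][j])
            step.append(best)
            delta.append(prev[best] * A[best][j] * B[j][o])
        back.append(step)
    path = [max(range(n), key=lambda j: delta[j])]
    for step in reversed(back):
        path.append(step[path[-1]])
    path.reverse()
    return path

# Deterministic emissions make the decode easy to check by eye
print(viterbi([0, 1, 0], [1.0, 0.0], [[0.5, 0.5], [0.5, 0.5]],
              [[1.0, 0.0], [0.0, 1.0]]))  # → [0, 1, 0]
```

In a translation setting the "states" would be candidate phrases and the backpointers recover the chosen phrase sequence; a production decoder would also work in log space to avoid underflow on long sentences.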
In recent years, the accuracy of speech recognition (SR) has been one of the most active areas of research. Although SR systems work reasonably well in quiet conditions, they still suffer severe performance degradation in noisy conditions or over distorted channels. It is necessary to search for more robust feature extraction methods to gain better performance in adverse conditions. This paper investigates the performance of conventional and new hybrid speech feature extraction algorithms, namely Mel frequency cepstrum coefficients (MFCC), linear prediction coding coefficients (LPCC), perceptual linear prediction (PLP), and RASTA-PLP, in noisy conditions using a multivariate hidden Markov model (HMM) classifier. The behavior of the proposed system is evaluated using the TIDIGIT human voice corpus, recorded from 208 different adult speakers, in both training and testing. The theoretical basis for the speech processing and classifier procedures is presented, and the recognition results are reported as word recognition rates.
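The front end shared by MFCC, LPCC, and PLP starts with pre-emphasis and overlapping analysis frames. A minimal sketch (the 0.97 coefficient and frame sizes are conventional defaults, not values stated in the paper):

```python
def preemphasis(signal, alpha=0.97):
    # High-pass pre-emphasis: y[n] = x[n] - alpha * x[n-1]
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def frame(signal, size, step):
    # Overlapping analysis frames, e.g. 25-sample windows with a 10-sample hop
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

emphasized = preemphasis([1.0, 1.0, 1.0, 1.0])
print(len(frame(list(range(100)), 25, 10)))  # → 8
```

Each frame would then be windowed and transformed (filterbank for MFCC, LPC analysis for LPCC/PLP) before being fed to the HMM classifier.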
The vibration signals of an aeroengine are a very important information source for fault diagnosis and condition monitoring. Considering the nonstationarity and low repeatability of the vibration signals, it is necessary to find a corresponding method for feature extraction and fault recognition. In this paper, based on independent component analysis (ICA) and the discrete hidden Markov model (DHMM), a new fault diagnosis approach named ICA-DHMM is proposed. In this method, ICA separates the source signals from the mixed vibration signals and then extracts features from them, while the DHMM works as a classifier to recognize the conditions of the aeroengine. Compared with a DHMM that uses the amplitude spectrum of the mixed signals as feature parameters, experimental results show that this method has higher diagnostic accuracy.
With the emergence of the Internet of Things (IoT), there has been a proliferation of urban studies using big data. Yet another type of urban research innovation, involving interdisciplinary thinking and methods, remains underdeveloped. This paper represents an attempt to adopt a hidden Markov model (HMM) toolbox developed in computer science for the analysis of eye movement patterns in psychology to answer urban mobility questions in geography. The main idea is that both people's eye movements and travel behavior follow a stop-travel-stop pattern, which can be summarized using an HMM. Methodological challenges were addressed by adjusting the HMM to analyze territory-wide travel survey data in Hong Kong, China. By using the adjusted toolbox to identify the activity-travel patterns of working adults in Hong Kong, two distinctive groups with balanced (38.4%) and work-oriented (61.6%) lifestyles were identified. With some notable exceptions, working adults living in the urban core had a more work-oriented lifestyle. Those with a balanced lifestyle had a relatively compact zone of non-work activities around their homes but a relatively long commuting distance. Furthermore, working females tend to spend more time at home than their counterparts, regardless of their marital status and lifestyle. Overall, this interdisciplinary research demonstrates an attempt to integrate spatial, temporal, and sequential information for understanding people's behavior in urban mobility research.
Ad hoc mobile cloud computing networks are affected by various issues, like delay, energy consumption, flexibility, infrastructure, network lifetime, security, stability, data transition, and link accomplishment. Given the issues above, route failure is prevalent in ad hoc mobile cloud computing networks, which increases energy consumption and delay and reduces stability. These issues may affect several interconnected nodes in an ad hoc mobile cloud computing network. To address these weaknesses, which raise many concerns about privacy and security, this study formulated clustering-based storage and search optimization approaches using cross-layer analysis. The proposed approaches were formed by cross-layer analysis based on intrusion detection methods. First, the clustering process based on storage and search optimization was formulated for clustering and route maintenance in ad hoc mobile cloud computing networks. Moreover, delay, energy consumption, network lifetime, and link accomplishment are directly addressed by the proposed algorithm. The hidden Markov model is used to maintain the data transitions and distributions in the network. Every data communication network, like ad hoc mobile cloud computing, faces security and confidentiality issues; the main security issues in this article are addressed using the storage and search optimization approach. Hence, the newly developed algorithm helps detect intruders through intelligent cross-layer analysis with the Markov model. The proposed model was simulated in Network Simulator 3, and the outcomes were compared with those of prevailing methods on evaluation parameters like accuracy, end-to-end delay, energy consumption, network lifetime, packet delivery ratio, and throughput.
This paper investigates the feedback control of a hidden Markov process (HMP) in the face of loss of some observation processes. The control action facilitates or impedes particular transitions from an inferred current state in an attempt to maximize the probability that the HMP is driven to a desirable absorbing state. This control problem is motivated by the need for judicious resource allocation to win an air operation involving two opposing forces. The effectiveness of a receding horizon control scheme based on the inferred discrete state is examined. Tolerance to loss of sensors that help determine the state of the air operation is achieved through a decentralized scheme that estimates a continuous state from measurements of linear models with additive noise. The discrete state of the HMP is identified using three well-known detection schemes. The sub-optimal control policy based on the detected state is implemented on-line in a closed loop, where the air operation is simulated as a stochastic process with SimEvents, and the measurement process is simulated for a range of single-sensor loss rates.
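The objective, driving the process into a desirable absorbing state, can be illustrated by computing absorption probabilities for a fixed (uncontrolled) Markov chain; a controller would pick actions that raise this quantity. The transition matrix below is invented for illustration:

```python
def absorption_prob(A, target, iters=500):
    # Probability of eventually reaching the absorbing `target` state
    # from each state, by fixed-point iteration (target must be absorbing)
    n = len(A)
    p = [1.0 if i == target else 0.0 for i in range(n)]
    for _ in range(iters):
        p = [1.0 if i == target else sum(A[i][j] * p[j] for j in range(n))
             for i in range(n)]
    return p

# State 0 transient; state 1 ("win") and state 2 ("lose") absorbing
A = [[0.2, 0.5, 0.3],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(round(absorption_prob(A, 1)[0], 4))  # → 0.625
```

The closed-form check: p0 = 0.2 p0 + 0.5, so p0 = 0.5/0.8 = 0.625; a control action that shifts mass from the 0.3 "lose" transition to the 0.5 "win" transition raises it.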
In this letter, we briefly describe a program implementing a self-adapting hidden Markov model (SA HMM) and its application to multiple sequence alignment. The program consists of a two-stage optimisation algorithm.
A novel algorithm for Bayesian document segmentation is proposed based on the wavelet-domain hidden Markov tree (HMT) model. Once the parameters of the model are known, according to the sequential maximum a posteriori probability (SMAP) rule, the likelihood of the HMT model for each pattern is first computed in a fine-to-coarse procedure. Then, the interscale state transition probabilities are solved using the expectation maximization (EM) algorithm based on a hybrid quadtree, and multiscale context information is fused in a coarse-to-fine procedure. To obtain pixel-level segmentation, a redundant wavelet-domain Gaussian mixture model (GMM) is employed to formulate pixel-level statistical properties. The experimental results show that the proposed scheme is feasible and robust.
Because the performance parameters of gears degrade, a method is proposed to recognize and analyze gear faults using the hidden Markov model (HMM). In this method, the delayed correlation-envelope method is first used to extract features from vibration signals. Then, HMMs are trained respectively on data from the normal condition, the gear root crack condition, and the gear root breakage condition. The trained HMMs are then used for pattern recognition and model assessment. Finally, the results from a standard HMM and the proposed method are compared, showing that the proposed methodology is feasible and effective.
In this paper, we present a comparison of Khasi speech representations with four different spectral features and a novel extension towards the development of Khasi speech corpora. The four features are linear predictive coding (LPC), linear prediction cepstrum coefficients (LPCC), perceptual linear prediction (PLP), and Mel frequency cepstral coefficients (MFCC). Ten hours of speech data were used for training and three hours for testing. For each spectral feature, different hidden Markov model (HMM) based recognizers with variations in HMM states and different Gaussian mixture models (GMMs) were built. Performance was evaluated using the word error rate (WER). The experimental results show that MFCC provides a better representation for Khasi speech than the other three spectral features.
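WER is the word-level edit distance (substitutions, insertions, deletions) between the recognizer output and the reference transcript, divided by the reference length. A minimal sketch:

```python
def wer(reference, hypothesis):
    # Word error rate: Levenshtein distance over words / reference length
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # delete every reference word
    for j in range(len(h) + 1):
        d[0][j] = j                      # insert every hypothesis word
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("one two three", "one too three"))  # → 0.3333333333333333
```

Because insertions are counted, WER can exceed 1.0 on badly over-generated hypotheses, which is worth remembering when comparing recognizers.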
The links between low temperature and the incidence of disease have been studied by many researchers. What remains unclear is the exact nature of the relation, especially the mechanism by which a change of weather affects the onset of diseases. The existence of a lag period between exposure to temperature and its effect on mortality may reflect the nature of the onset of diseases; assessing lagged effects therefore becomes potentially important. Most studies of lags have used distributed-lag Poisson regression and neglected extreme cases as random noise in order to obtain correlations. To assess the lagged effect, we propose a new approach, a hidden Markov model with states given by a self-organizing map (HMM by SOM), apart from the well-known regression models. HMM by SOM includes randomness in its nature and encompasses the extreme cases that are neglected by auto-regression models. Daily data on the number of patients transported by ambulance in Nagoya, Japan, were used. SOM was carried out to classify the meteorological elements into six classes. These classes were used as "states" of the HMM. The HMM was used to describe a background process that might produce the time series of the incidence of diseases. The background process was considered to change randomly among the weather states classified by SOM. We estimated the lagged effects of weather change on the onset of both cerebral infarction and ischemic heart disease. This is potentially important in that if one could trace a path in the chain of events leading from temperature change to death, one might be able to prevent it and avert the fatal outcome.
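The SOM's role here, clustering daily weather vectors into a handful of classes that then serve as HMM states, can be sketched with a one-dimensional map and a single online update. The learning rate, radius, and prototype values are illustrative:

```python
def bmu(weights, x):
    # Index of the best-matching unit (closest prototype)
    return min(range(len(weights)),
               key=lambda k: sum((w - v) ** 2 for w, v in zip(weights[k], x)))

def som_step(weights, x, lr=0.5, radius=1):
    # Pull the BMU and its grid neighbours toward the input vector
    b = bmu(weights, x)
    for k in range(len(weights)):
        if abs(k - b) <= radius:
            weights[k] = [w + lr * (v - w) for w, v in zip(weights[k], x)]
    return weights

# Three 1-D prototypes; the input is nearest to the last one
weights = [[0.0], [1.0], [2.0]]
som_step(weights, [1.9])
print(weights)  # units 1 and 2 moved toward 1.9; unit 0 untouched
```

After training, each day is assigned the index of its BMU, giving the six discrete weather "states" that feed the HMM.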
The stage of a tumor is sometimes hard to predict, especially early in its development. The size and complexity of its observations are the major problems that lead to false diagnoses; even experienced doctors can make mistakes, with terrible consequences for the patient. We propose a mathematical tool for the diagnosis of breast cancer. The aim is to help specialists decide on the likelihood of a patient's condition given the series of available observations, which may increase the patient's chances of recovery. With a multivariate-observation hidden Markov model, we describe the evolution of the disease by taking the geometric properties of the tumor as observable variables. The latent variable corresponds to the type of tumor: malignant or benign. Analysis of the covariance matrix makes it possible to delineate the zones of occurrence for each group belonging to a type of tumor. It is therefore possible to summarize the properties that characterize each of the tumor categories using the parameters of the model; these parameters highlight the differences between the types of tumors.
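The per-state observation model is a multivariate Gaussian; for two geometric features the log-density can be written in closed form by inverting the 2x2 covariance directly. The feature names, means, and covariances below are invented for illustration:

```python
import math

def bvn_logpdf(x, mu, cov):
    # Log-density of a bivariate Gaussian; the 2x2 covariance is
    # inverted in closed form via its determinant
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    quad = (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det
    return -0.5 * quad - math.log(2.0 * math.pi) - 0.5 * math.log(det)

# Toy "benign" vs "malignant" states over (radius, texture) features
benign = ([12.0, 15.0], [[4.0, 1.0], [1.0, 9.0]])
malignant = ([18.0, 22.0], [[9.0, 2.0], [2.0, 16.0]])
obs = [12.5, 14.0]
print(bvn_logpdf(obs, *benign) > bvn_logpdf(obs, *malignant))  # → True
```

Comparing the two state log-densities for an observation is exactly how the covariance parameters delineate the zones of occurrence of each tumor type.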
A land cover classification procedure is presented utilizing the information content of fully polarimetric SAR images. The Cameron coherent target decomposition (CTD) is employed to characterize each pixel, using a set of canonical scattering mechanisms to describe the physical properties of the scatterer. The novelty of the proposed classification approach lies in the use of hidden Markov models (HMMs) to uniquely characterize each type of land cover. The motivation for this approach is the investigation of the alternation between scattering mechanisms from SAR pixel to pixel. Depending on the observed scattering mechanisms, and exploiting the transitions between them, we decide upon the HMM land cover type. The classification process is based on the likelihood of observation sequences as evaluated by each model. The performance of the classification approach was assessed by means of fully polarimetric SLC SAR data from the broader area of Vancouver, Canada, and was found satisfactory, reaching success rates from 87% to over 99%.
Several studies have been devoted to investigating the effects of meteorological factors on the occurrence of stroke. Regression models have mostly been used to assess the correlation between weather and stroke incidence. However, these methods cannot describe the process proceeding in the background of stroke incidence. The purpose of this study was to provide a new approach based on hidden Markov models (HMMs) and self-organizing maps (SOM), interpreting the background from the viewpoint of weather variability. Based on meteorological data, SOM was performed to classify weather patterns. Using these SOM classes as randomly changing "states", our hidden Markov models were constructed with "observation data" extracted from the daily records of emergency transport in Nagoya City, Japan. We showed that SOM was an effective method for obtaining weather patterns that could serve as "states" of hidden Markov models. Our hidden Markov models provided effective models to clarify the background process of stroke incidence. Their effectiveness was estimated by a stochastic test on root mean square errors (RMSE). "HMMs with states by SOM" serve as a description of the background process of stroke incidence and are useful for showing the influence of weather on stroke onset. This finding will contribute to an improved understanding of the links between weather variability and stroke incidence.
Text labeling is a very important tool for the automatic processing of language. It is used in several applications such as morphological and syntactic text analysis, indexing, retrieval, deterministic finite-state networks (in which all combinations of words accepted by the grammar are listed), and statistical grammars (e.g., an n-gram model in which the probabilities of sequences of n words in a specific order are given). In this article, we develop a morphosyntactic labeling system for the Baoule language using hidden Markov models. This will allow us to build a tagged reference corpus and represent the major grammatical rules of the Baoule language in general. To estimate the parameters of this model, we used a training corpus manually labeled with a set of morphosyntactic labels. We then improved the system through the parameter re-estimation procedure of this model.
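Estimating HMM tagger parameters from a hand-labeled corpus reduces to normalized counts of tag-to-tag transitions and tag-to-word emissions. A minimal sketch, where the toy tagged sentence and tag names are invented:

```python
from collections import Counter, defaultdict

def estimate_hmm(tagged_sentences):
    # Maximum-likelihood transition and emission probabilities
    # from (word, tag) sequences; "<s>" marks sentence start
    trans, emit = defaultdict(Counter), defaultdict(Counter)
    for sentence in tagged_sentences:
        prev = "<s>"
        for word, tag in sentence:
            trans[prev][tag] += 1
            emit[tag][word] += 1
            prev = tag
    norm = lambda table: {k: {x: n / sum(c.values()) for x, n in c.items()}
                          for k, c in table.items()}
    return norm(trans), norm(emit)

corpus = [[("the", "DET"), ("cat", "N"), ("sleeps", "V")]]
trans, emit = estimate_hmm(corpus)
print(trans["<s>"]["DET"], emit["N"]["cat"])  # → 1.0 1.0
```

Re-estimation (e.g., Baum-Welch over unlabeled text) then refines these counted probabilities, which matches the improvement step the abstract describes.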
Funding: supported by the MAVA Foundation and Tour du Valat; supported financially by the MAVA Foundation and by the Prespa Ohrid Nature Trust (PONT).
Funding: Supported by the Science and Technology Development Project Foundation of Tianjin (033800611, 05YFGZGX24200).
Funding: This project is supported by the National Natural Science Foundation of China (No. 50375153).
Funding: This study was funded by the National Natural Science Foundation of China (Grant No. 41975027), the Natural Science Foundation of Jiangsu Province (Grant No. BK20171457), and the National Key R&D Program on Monitoring, Early Warning and Prevention of Major Natural Disasters (Grant No. 2017YFC1501401).
Abstract: With the increasing availability of precipitation radar data from space, enhancing the resolution of spaceborne precipitation observations is important, particularly for hazard prediction and climate modeling at local scales relevant to extreme precipitation intensities and gradients. In this paper, the statistical characteristics of radar precipitation reflectivity data are studied and modeled using a hidden Markov tree (HMT) in the wavelet domain. A high-resolution interpolation algorithm is then proposed for spaceborne radar reflectivity, using the HMT model as prior information. Owing to the small and transient storm elements embedded in larger and slowly varying elements, radar precipitation data exhibit distinct multiscale statistical properties, including a non-Gaussian structure and scale-to-scale dependency. An HMT model can capture these statistical properties well: the wavelet coefficients in each sub-band are characterized by a Gaussian mixture model (GMM), and the wavelet coefficients from coarse to fine scales are described by a multiscale Markov process. The state probabilities of the GMM are determined using the expectation-maximization method, and other parameters, for instance the variance decay parameters of the HMT model, are learned and estimated from high-resolution ground radar reflectivity images. Using the prior model, the wavelet coefficients at finer scales are estimated by local Wiener filtering. The interpolation algorithm is validated using data from the precipitation radar onboard the Tropical Rainfall Measuring Mission satellite, and the reconstructed results are found to enhance the spatial resolution while optimally reproducing local extremes and gradients.
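The per-coefficient building block of an HMT is the two-state ("small"/"large") zero-mean GMM for each wavelet sub-band. A minimal sketch, fitting that mixture by plain EM on synthetic sparse coefficients (no tree coupling across scales, which is where the full HMT differs):

```python
import numpy as np

def fit_two_state_gmm(w, iters=100):
    """EM for a zero-mean two-component Gaussian mixture, the per-coefficient
    model an HMT uses for wavelet sub-bands ("small" vs "large" state)."""
    p = 0.5                                     # prob. of the "large" state
    var = np.array([0.5 * w.var(), 2.0 * w.var()])
    for _ in range(iters):
        # E-step: responsibility of the "large" component per coefficient
        lik_s = np.exp(-w**2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
        lik_l = np.exp(-w**2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
        r = p * lik_l / ((1 - p) * lik_s + p * lik_l)
        # M-step: update mixture weight and the two variances
        p = r.mean()
        var[0] = ((1 - r) * w**2).sum() / (1 - r).sum()
        var[1] = (r * w**2).sum() / r.sum()
    return p, var

rng = np.random.default_rng(42)
# Synthetic "wavelet coefficients": mostly small, a few large (sparse detail)
w = np.concatenate([rng.normal(0, 0.1, 1600), rng.normal(0, 2.0, 400)])
p_large, variances = fit_two_state_gmm(w)
```

In the full HMT the hidden small/large states are additionally linked parent-to-child across scales by a Markov chain, which is what captures the scale-to-scale dependency described above.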
基金support provided from the Cooperative Education Fund of China Ministry of Education(201702113002 and 201801193119)Hunan Natural Science Foundation(2018JJ2138)Degree and Graduate Education Reform Project of Hunan Province(JG2018B096)are greatly appreciated by the authors.
Abstract: Translation software has become an important tool for communication between different languages. People's requirements for translation are ever higher, mainly reflected in the desire for barrier-free cultural exchange. Even with a large corpus, the performance of statistical machine translation based on words and phrases is limited by the small size of the modeling units. Previous statistical methods rely primarily on the size of the corpus and the number of its statistical results to avoid ambiguity in translation, ignoring context. To support the ongoing improvement of translation methods built upon deep learning, we propose a translation algorithm based on the hidden Markov model that improves the use of context in the translation process. During translation, our hidden Markov model prediction chain selects the phrases with the highest resulting probability to form a sentence; the collection of all generated sentences forms a topic sequence. Using probabilities and article sequences determined from the training set, our method applies the hidden Markov model again to form the final translation, improving contextual relevance. This algorithm improves the accuracy of translation, avoids combinations of invalid words, and enhances the readability and meaning of the resulting translation.
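Selecting the most probable phrase sequence from an HMM is the job of the Viterbi algorithm. A self-contained sketch (the two "topic" states and phrase-class emissions below are toy assumptions, not the paper's trained model):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden-state path for a discrete observation sequence."""
    T, n = len(obs), len(pi)
    delta = np.log(pi) + np.log(B[:, obs[0]])  # best log-score per end state
    back = np.zeros((T, n), dtype=int)         # backpointers
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)    # scores[i, j]: best path ...i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model: two hypothetical "topic" states, each favouring one phrase class
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.1, 0.9]])
```

Because transitions favour staying in the same state, the decoded path prefers contextually coherent runs over isolated per-symbol best guesses, which is the sense in which the chain "uses context".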
Abstract: In recent years, the accuracy of speech recognition (SR) has been one of the most active areas of research. Although SR systems work reasonably well in quiet conditions, they still suffer severe performance degradation in noisy conditions or over distorted channels. It is therefore necessary to search for more robust feature extraction methods for better performance in adverse conditions. This paper investigates the performance of conventional and new hybrid speech feature extraction algorithms, namely the Mel frequency cepstrum coefficient (MFCC), linear prediction coding coefficient (LPCC), perceptual linear prediction (PLP), and RASTA-PLP, in noisy conditions using a multivariate hidden Markov model (HMM) classifier. The behavior of the proposed system is evaluated on the TIDIGITS human voice corpus, recorded from 208 different adult speakers, in both training and testing. The theoretical basis for the speech processing and classifier procedures is presented, and recognition results are reported in terms of word recognition rate.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60672184.
Abstract: The vibration signals of an aeroengine are a very important information source for fault diagnosis and condition monitoring. Considering the nonstationarity and low repeatability of vibration signals, a suitable method for feature extraction and fault recognition is needed. In this paper, based on independent component analysis (ICA) and the discrete hidden Markov model (DHMM), a new fault diagnosis approach named ICA-DHMM is proposed. In this method, ICA separates the source signals from the mixed vibration signals and then extracts features from them, while the DHMM works as a classifier to recognize the conditions of the aeroengine. Compared with a DHMM that uses the amplitude spectrum of the mixed signals as feature parameters, experimental results show that this method has higher diagnostic accuracy.
Abstract: With the emergence of the Internet of Things (IoT), there has been a proliferation of urban studies using big data. Yet another type of urban research innovation, involving interdisciplinary thinking and methods, remains underdeveloped. This paper represents an attempt to adopt a hidden Markov model (HMM) toolbox, developed in computer science for the analysis of eye movement patterns in psychology, to answer urban mobility questions in geography. The main idea is that both people's eye movements and their travel behavior follow a stop-travel-stop pattern, which can be summarized using an HMM. Methodological challenges were addressed by adjusting the HMM to analyze territory-wide travel survey data in Hong Kong, China. Using the adjusted toolbox to identify the activity-travel patterns of working adults in Hong Kong, two distinctive groups with balanced (38.4%) and work-oriented (61.6%) lifestyles were identified. With some notable exceptions, working adults living in the urban core had a more work-oriented lifestyle. Those with a balanced lifestyle had a relatively compact zone of non-work activities around their homes but a relatively long commuting distance. Furthermore, working females tended to spend more time at home than their counterparts, regardless of their marital status and lifestyle. Overall, this interdisciplinary research demonstrates an attempt to integrate spatial, temporal, and sequential information for understanding people's behavior in urban mobility research.
Funding: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialists) and by the Soonchunhyang University Research Fund.
Abstract: Ad hoc mobile cloud computing networks are affected by various issues, such as delay, energy consumption, flexibility, infrastructure, network lifetime, security, stability, data transition, and link accomplishment. Given the issues above, route failure is prevalent in ad hoc mobile cloud computing networks, which increases energy consumption and delay and reduces stability. These issues may affect several interconnected nodes in such a network. To address these weaknesses, which raise many concerns about privacy and security, this study formulated clustering-based storage and search optimization approaches using cross-layer analysis. The proposed approaches were formed by cross-layer analysis based on intrusion detection methods. First, the clustering process based on storage and search optimization was formulated for clustering and route maintenance in ad hoc mobile cloud computing networks. Delay, energy consumption, network lifetime, and link accomplishment are also addressed by the proposed algorithm. A hidden Markov model is used to maintain the data transitions and distributions in the network. Every data communication network, including ad hoc mobile cloud computing, faces security and confidentiality issues; the main security issues in this article are addressed using the storage and search optimization approach. Hence, the newly developed algorithm helps detect intruders through intelligent cross-layer analysis with the Markov model. The proposed model was simulated in Network Simulator 3, and the outcomes were compared with those of prevailing methods on evaluation parameters such as accuracy, end-to-end delay, energy consumption, network lifetime, packet delivery ratio, and throughput.
Abstract: This paper investigates the feedback control of a hidden Markov process (HMP) in the face of loss of some observation processes. The control action facilitates or impedes particular transitions from an inferred current state in an attempt to maximize the probability that the HMP is driven to a desirable absorbing state. This control problem is motivated by the need for judicious resource allocation to win an air operation involving two opposing forces. The effectiveness of a receding horizon control scheme based on the inferred discrete state is examined. Tolerance to the loss of sensors that help determine the state of the air operation is achieved through a decentralized scheme that estimates a continuous state from measurements of linear models with additive noise. The discrete state of the HMP is identified using three well-known detection schemes. The sub-optimal control policy based on the detected state is implemented on-line in a closed loop, where the air operation is simulated as a stochastic process with SimEvents, and the measurement process is simulated for a range of single-sensor loss rates.
Abstract: In this letter, we briefly describe a program implementing a self-adapting hidden Markov model (SA-HMM) and its application to multiple sequence alignment. The program consists of a two-stage optimisation algorithm.
Abstract: A novel algorithm for Bayesian document segmentation is proposed based on the wavelet-domain hidden Markov tree (HMT) model. Once the parameters of the model are known, according to the sequential maximum a posteriori probability (SMAP) rule, the likelihood of the HMT model for each pattern is first computed in a fine-to-coarse procedure. Then, the interscale state transition probabilities are solved using the expectation-maximization (EM) algorithm based on a hybrid quadtree, and multiscale context information is fused in a coarse-to-fine procedure. To obtain a pixel-level segmentation, a redundant wavelet-domain Gaussian mixture model (GMM) is employed to formulate pixel-level statistical properties. The experimental results show that the proposed scheme is feasible and robust.
Abstract: Because the performance parameters of gears degrade, a method is proposed to recognize and analyze gear faults using the hidden Markov model (HMM). In this method, the delayed correlation-envelope method is first used to extract features from vibration signals. Then, HMMs are trained separately on data under the normal condition, the gear-root crack condition, and the gear-root breakage condition. The trained HMMs are further used for pattern recognition and model assessment. Finally, the results from a standard HMM and the proposed method are compared, which shows that the proposed methodology is feasible and effective.
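The envelope idea behind such fault features can be sketched with the Hilbert transform (a common substitute here; the paper's delayed correlation-envelope method differs in detail). The 120 Hz "mesh" carrier and 3 Hz modulation below are made-up values for a synthetic gear vibration:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic gear vibration: a mesh-frequency carrier whose amplitude is
# modulated at a slow rate, as a localized gear fault would produce
fs = 1000
t = np.arange(0, 1, 1 / fs)
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)   # slow fault modulation
signal = modulation * np.sin(2 * np.pi * 120 * t)    # 120 Hz "mesh" carrier

# The analytic-signal (Hilbert) envelope recovers the modulation, which is
# the kind of feature an envelope-based front end would pass to the HMMs
envelope = np.abs(hilbert(signal))
```

Condition-specific HMMs would then be trained on sequences of such envelope features, and a new signal assigned to the model with the highest likelihood.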
基金supported by the Visvesvaraya Ph.D.Scheme for Electronics and IT students launched by the Ministry of Electronics and Information Technology(MeiTY),Government of India under Grant No.PhD-MLA/4(95)/2015-2016.
Abstract: In this paper, we present a comparison of Khasi speech representations using four different spectral features, together with a novel extension towards the development of Khasi speech corpora. The four features are linear predictive coding (LPC), linear prediction cepstrum coefficients (LPCC), perceptual linear prediction (PLP), and Mel frequency cepstral coefficients (MFCC). Ten hours of speech data were used for training and three hours for testing. For each spectral feature, different hidden Markov model (HMM) based recognizers were built, with variations in the number of HMM states and different Gaussian mixture models (GMMs). Performance was evaluated using the word error rate (WER). The experimental results show that MFCC provides a better representation for Khasi speech than the other three spectral features.
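The WER metric used above is the word-level Levenshtein distance normalized by the reference length; a minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed as a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is reported as an error rate rather than an accuracy.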
Abstract: The links between low temperature and the incidence of disease have been studied by many researchers. What remains unclear is the exact nature of the relation, especially the mechanism by which a change of weather affects the onset of disease. The existence of a lag period between exposure to temperature and its effect on mortality may reflect the nature of disease onset, so assessing lagged effects becomes potentially important. Most studies on lags used lag-distributed Poisson regression and neglected extreme cases as random noise when deriving correlations. To assess the lagged effect, we propose a new approach, a hidden Markov model with states given by a self-organizing map (HMM by SOM), as an alternative to well-known regression models. HMM by SOM includes randomness in its nature and encompasses the extreme cases neglected by auto-regression models. Daily data on the number of patients transported by ambulance in Nagoya, Japan, were used. SOM was carried out to classify the meteorological elements into six classes, which were used as the "states" of the HMM. The HMM was used to describe a background process that might produce the time series of disease incidence; the background process was considered to change randomly among the weather states classified by SOM. We estimated the lagged effects of weather change on the onset of both cerebral infarction and ischemic heart disease. This is potentially important in that if one could trace a path in the chain of events leading from temperature change to death, one might be able to prevent it and avert the fatal outcome.
Abstract: The stage of a tumor is sometimes hard to predict, especially early in its development. The size and complexity of its observations are the major problems that lead to false diagnoses; even experienced doctors can make a mistake, with terrible consequences for the patient. We propose a mathematical tool for the diagnosis of breast cancer. The aim is to help specialists decide on the likelihood of a patient's condition given the series of available observations, which may increase the patient's chances of recovery. With a hidden Markov model with multivariate observations, we describe the evolution of the disease, taking the geometric properties of the tumor as observable variables. The latent variable corresponds to the type of tumor: malignant or benign. Analysis of the covariance matrix makes it possible to delineate the zones of occurrence for each group belonging to a type of tumor. It is therefore possible to summarize the properties that characterize each tumor category using the parameters of the model; these parameters highlight the differences between the types of tumor.
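The covariance-based "zones of occurrence" idea can be sketched without the full HMM: fit a mean vector and covariance matrix per class and assign a new observation to the nearest class in Mahalanobis distance. The two feature dimensions and class parameters below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

def fit_gaussian(X):
    """Per-class mean vector and covariance matrix of feature rows."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of x from a class's Gaussian."""
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(1)
# Hypothetical tumour features (e.g. radius, texture) for two classes
benign = rng.multivariate_normal([10.0, 15.0], [[1.0, 0.2], [0.2, 1.0]], 200)
malignant = rng.multivariate_normal([18.0, 22.0], [[2.0, 0.5], [0.5, 2.0]], 200)
params = {"benign": fit_gaussian(benign), "malignant": fit_gaussian(malignant)}

x = np.array([17.5, 21.0])  # a new observation
label = min(params, key=lambda k: mahalanobis_sq(x, *params[k]))
```

In the paper's setting, these per-class Gaussians play the role of the emission distributions of the malignant/benign latent states.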
Abstract: A land cover classification procedure is presented that utilizes the information content of fully polarimetric SAR images. The Cameron coherent target decomposition (CTD) is employed to characterize each pixel using a set of canonical scattering mechanisms that describe the physical properties of the scatterer. The novelty of the proposed classification approach lies in the use of hidden Markov models (HMMs) to uniquely characterize each type of land cover. The motivation for this approach is the investigation of the alternation between scattering mechanisms from SAR pixel to pixel. Depending on the observed scattering mechanisms, and exploiting the transitions between them, we decide upon the HMM land cover type. The classification process is based on the likelihood of observation sequences evaluated by each model. The performance of the classification approach was assessed by means of fully polarimetric SLC SAR data from the broader area of Vancouver, Canada, and was found satisfactory, reaching success rates from 87% to over 99%.
Abstract: Several studies have investigated the effects of meteorological factors on the occurrence of stroke. Regression models have mostly been used to assess the correlation between weather and stroke incidence; however, these methods cannot describe the process proceeding in the background of stroke incidence. The purpose of this study was to provide a new approach based on hidden Markov models (HMMs) and self-organizing maps (SOM), interpreting this background from the viewpoint of weather variability. Based on meteorological data, SOM was performed to classify weather patterns. Using these SOM classes as randomly changing "states", hidden Markov models were constructed with "observation data" extracted from the daily records of emergency transport in Nagoya City, Japan. We show that SOM is an effective method for obtaining weather patterns that can serve as the "states" of hidden Markov models, and that the resulting models clarify the background process of stroke incidence. The effectiveness of these hidden Markov models was estimated by a stochastic test on root mean square errors (RMSE). "HMMs with states by SOM" serve as a description of the background process of stroke incidence and are useful for showing the influence of weather on stroke onset. This finding will contribute to an improved understanding of the links between weather variability and stroke incidence.
Abstract: Text labeling is a very important tool for the automatic processing of language. It is used in several applications, such as morphological and syntactic text analysis, indexing, retrieval, deterministic finite-state networks (in which all combinations of words accepted by the grammar are listed), statistical grammars (e.g., an n-gram model in which the probabilities of sequences of n words in a specific order are given), etc. In this article, we develop a morphosyntactic labeling system for the Baoule language using hidden Markov models. This will allow us to build a tagged reference corpus and represent the major grammatical rules of the Baoule language in general. To estimate the parameters of this model, we used a training corpus manually labeled with a set of morphosyntactic labels. We then improved the system through the parameter re-estimation procedure of this model.