Efficient iterative unsupervised machine learning involving probabilistic clustering analysis with the expectation-maximization (EM) clustering algorithm is applied to categorize reservoir facies by exploiting latent and observable well-log variables from a clastic reservoir in the Majnoon oilfield, southern Iraq. The observable well-log variables consist of conventional open-hole well-log data and the computer-processed interpretations of gamma ray, bulk density, neutron porosity, compressional sonic, deep resistivity, shale volume, total porosity, and water saturation from three wells located in the Nahr Umr reservoir. The latent variables include shale volume and water saturation. The EM algorithm efficiently characterizes electrofacies through iterative machine learning to identify the local maximum likelihood estimates (MLE) of the observable and latent variables in the studied dataset. The optimized EM model successfully predicts the core-derived facies classification in two of the studied wells. The EM model clusters the data into three distinctive reservoir electrofacies (F1, F2, and F3). F1 represents a gas-bearing electrofacies with low shale volume (Vsh) and water saturation (Sw) and high porosity and permeability, identifying it as an attractive reservoir target. The results of the EM model are validated using nuclear magnetic resonance (NMR) data from the third studied well, for which no cores were recovered. The NMR results confirm the effectiveness and accuracy of the EM model in predicting electrofacies. The use of the EM algorithm for electrofacies classification and cluster analysis is innovative: the clusters it establishes are less rigidly constrained than those derived from the more commonly used K-means clustering method. The EM methodology generates dependable electrofacies estimates in the studied reservoir intervals where core samples are not available. Therefore, once calibrated with core data in some wells, the model is suitable for application to other wells that lack core data.
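The EM clustering described above fits a mixture model by alternating between assigning soft cluster responsibilities (E-step) and re-estimating cluster parameters (M-step). The following is a minimal sketch in plain Python, assuming a two-component, one-dimensional Gaussian mixture; the study's actual model is multivariate (several log curves jointly) with three facies clusters, and the initialisation shown here is a simplification.

```python
import math

def em_gmm_1d(data, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns (weights, means, variances). A minimal sketch:
    real electrofacies work would use several log curves jointly.
    """
    mu = [min(data), max(data)]  # crude initialisation at the extremes
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var
```

Unlike K-means, each sample retains a probability of membership in every cluster, which is what makes the resulting electrofacies boundaries less rigid.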
Based on the major gene and polygene mixed inheritance model for multiple correlated quantitative traits, the authors propose a new joint segregation analysis method for a major gene controlling multiple correlated quantitative traits, comprising major-gene detection and the estimation of its effect and variation. The effect and variation of the major gene are estimated by the maximum likelihood method implemented via the expectation-maximization (EM) algorithm, and the major gene is tested with the likelihood ratio (LR) test statistic. Extensive simulation studies showed that joint analysis not only increases the statistical power of major-gene detection but also improves the precision and accuracy of major-gene effect estimates. An example of plant height and number of tillers in an F2 population of the rice cross Duonieai x Zhonghua 11 is used as an illustration. The results indicate that the genetic difference in these two traits in this cross is attributable to a single pleiotropic major gene. The additive and dominance effects of the major gene are estimated as -21.3 and 40.6 cm for plant height, and 22.7 and -25.3 for number of tillers, respectively. The major gene shows overdominance for plant height and close to complete dominance for number of tillers.
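The LR test mentioned above compares the maximized log-likelihoods of the full model (with the major gene) and the reduced model (without it). A hedged sketch of that comparison, assuming the standard asymptotic chi-square approximation and using closed-form survival functions for one and two degrees of freedom:

```python
import math

def lr_test(loglik_full, loglik_reduced, df):
    """Likelihood-ratio test for a major-gene effect (sketch).

    LR = 2 * (lnL_full - lnL_reduced) is asymptotically chi-square
    with `df` degrees of freedom; closed-form survival functions
    exist for df = 1 and df = 2.
    """
    lr = 2.0 * (loglik_full - loglik_reduced)
    if df == 1:
        # P(chi2_1 > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))
        p = math.erfc(math.sqrt(lr / 2.0))
    elif df == 2:
        # P(chi2_2 > x) = exp(-x / 2)
        p = math.exp(-lr / 2.0)
    else:
        raise ValueError("only df = 1 or 2 handled in this sketch")
    return lr, p
```

The log-likelihoods themselves would come from the EM fit of the mixed inheritance model; they are inputs here, not computed by the sketch.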
The uncertainty during software project development often brings huge risks to contractors and clients. If we can find an effective method to predict the cost and quality of a software project from facts such as the project character and the two sides' cooperating capability at the beginning of the project, we can reduce the risk. A Bayesian Belief Network (BBN) is a good tool for analyzing uncertain consequences, but it is difficult to produce a precise network structure and conditional probability table. In this paper, we build the network structure by the Delphi method for conditional-probability-table learning, and continuously update the probability tables and node confidence levels according to application cases, which gives the evaluation network learning ability and lets it evaluate the software development risk of an organization more accurately. This paper also introduces the EM algorithm, which enhances the ability to handle hidden nodes arising from the variety of software projects.
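Updating a conditional probability table (CPT) from accumulating project cases can be done by counting outcomes under each parent configuration with a smoothing pseudo-count, so the table keeps learning as cases arrive. This is a hypothetical sketch (the node names and the Dirichlet-style smoothing are illustrative assumptions, not taken from the paper):

```python
from collections import defaultdict

def learn_cpt(cases, child, parents, child_states, alpha=1.0):
    """Estimate a CPT for `child` given `parents` from observed cases.

    Each case is a dict mapping node name -> observed state. Counts
    are smoothed with pseudo-count `alpha` so that unseen states keep
    nonzero probability and the table can be updated incrementally.
    """
    counts = defaultdict(lambda: defaultdict(float))
    for case in cases:
        key = tuple(case[p] for p in parents)
        counts[key][case[child]] += 1.0
    cpt = {}
    for key, c in counts.items():
        total = sum(c.values()) + alpha * len(child_states)
        cpt[key] = {s: (c.get(s, 0.0) + alpha) / total for s in child_states}
    return cpt
```

Handling hidden (unobserved) nodes, as the paper does with EM, would replace the hard counts above with expected counts computed from the current network parameters.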
Introduction: Disaster damage to health systems is a human and health tragedy: it results in huge economic losses, deals devastating blows to development goals, and shakes social confidence. Hospital disaster preparedness is a complex clinical, operational, and philosophical challenge: it is difficult to determine how much time, money, and effort should be spent preparing for an event that may never occur. Health facilities, whether hospitals or rural health clinics, should be a source of strength during emergencies and disasters, ready to save lives and to continue providing essential services. Jeddah has a relatively high level of disaster risk attributable to its geographical location, climate variability, topography, and other factors. This study investigates hospital disaster preparedness (HDP) in Jeddah. Methods: A questionnaire was designed using a five-point Likert scale and divided into eight fields covering 33 indicators: structure, architecture and furnishings, lifeline facilities' safety, hospital location, utilities maintenance, surge capacity, emergency and disaster planning, and control of communication and coordination. A sample of six hospitals participated in the study, each rated on the extent of its disaster preparedness against these indicators. Two hazard tools were used to identify the hazards facing each hospital, and an assessment tool was designed to monitor the progress and effectiveness of the hospitals' improvement. Weaknesses were found in the HDP level of the surveyed hospitals. Disaster mitigation needs more action, including risk assessment, structural and non-structural prevention, and preparedness through contingency planning, warning, and evacuation. Conclusion: The findings show that the hospitals included in this study have preparedness tools and indicators in place but lack training and management during disasters, so the research sheds light on hospital disaster preparedness. Considering the importance of preparedness, hospitals must understand that most disaster preparedness is built into the hospital system itself.
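Aggregating five-point Likert ratings across fields into comparable scores can be sketched as follows. This is a hypothetical scoring scheme (the paper does not state its exact weighting): each field maps to its list of 1–5 indicator ratings, and the field score is the mean rating rescaled to 0–100.

```python
def preparedness_score(ratings_by_field):
    """Aggregate five-point Likert ratings into field and overall scores.

    Hypothetical scheme: field score = mean of its 1-5 ratings,
    rescaled so that all-1s -> 0 and all-5s -> 100; the overall
    score is the unweighted mean of the field scores.
    """
    field_scores = {}
    for field, ratings in ratings_by_field.items():
        mean = sum(ratings) / len(ratings)
        field_scores[field] = round((mean - 1) / 4 * 100, 1)
    overall = round(sum(field_scores.values()) / len(field_scores), 1)
    return field_scores, overall
```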
Biology is a challenging and complicated mess. Understanding this complexity is the realm of the biological sciences: trying to make sense of massive, messy data by discovering patterns and revealing the underlying general rules. Among the most powerful mathematical tools for organizing and helping to structure complex, heterogeneous, and noisy data are those provided by multivariate statistical analysis (MSA). These eigenvector/eigenvalue data-compression approaches were first introduced to electron microscopy (EM) in 1980 to help sort out different views of macromolecules in a micrograph. After 35 years of continuous use and development, new MSA applications are still being proposed regularly. The speed of computing has increased dramatically in the decades since their first use in electron microscopy; however, we have also seen a possibly even more rapid increase in the size and complexity of the EM data sets to be studied. MSA computations had thus become a serious bottleneck limiting their general use. The parallelization of our programs, speeding up the process by orders of magnitude, has opened whole new avenues of research. The speed of automatic classification in the compressed eigenvector space had also become a bottleneck that needed to be removed. In this paper we explain the basic principles of multivariate statistical eigenvector-eigenvalue data compression; we provide practical tips and application examples for those working in structural biology; and we provide the more experienced researcher in this and other fields with the formulas associated with these powerful MSA approaches.
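At the core of eigenvector/eigenvalue data compression is finding the dominant eigenvectors of a (co)variance matrix so the data can be projected into a low-dimensional space. A minimal sketch of one such computation, using power iteration for the leading eigenvector only; production EM software works with far larger matrices and extracts many eigenvectors at once with more sophisticated solvers:

```python
import math

def leading_eigenvector(cov, n_iter=200):
    """Power iteration for the leading eigenvector of a covariance matrix.

    `cov` is a symmetric matrix as a list of row lists. Repeatedly
    multiplying a vector by the matrix and renormalising converges to
    the eigenvector with the largest eigenvalue.
    """
    n = len(cov)
    v = [1.0] * n
    for _ in range(n_iter):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient gives the corresponding eigenvalue
    eigval = sum(v[i] * sum(cov[i][j] * v[j] for j in range(n))
                 for i in range(n))
    return eigval, v
```

Projecting each image onto the first few such eigenvectors is the "compression" step; classification then runs in that much smaller eigenvector space.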
Funding: This research was supported by the National Natural Science Foundation of China to Xu Chenwu (grants 39900080, 30270724 and 30370758).