The study of zero-failure data is a relatively new field, yet it is urgently needed in practical projects, so the work has both theoretical and practical value. In this paper, for zero-failure data (t_i, n_i) at moment t_i, if the prior distribution of the failure probability p_i = P{T < t_i} is a quasi-exponential distribution, the author gives the Bayesian estimation and hierarchical Bayesian estimation of p_i, and the reliability under the zero-failure data condition is also obtained.
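The Bayesian point estimate of a failure probability from zero-failure counts can be sketched with a conjugate Beta prior. This is an illustrative substitution: the paper uses a quasi-exponential prior, and the sample sizes and function name below are assumptions.

```python
# Bayesian point estimate of a failure probability p_i from zero-failure
# data, sketched with a conjugate Beta(a, b) prior and a Binomial
# likelihood (the paper's quasi-exponential prior is replaced by Beta
# purely for illustration).

def posterior_mean_failure_prob(n_tested, n_failed=0, a=1.0, b=1.0):
    """Posterior mean of p under a Beta(a, b) prior."""
    return (a + n_failed) / (a + b + n_tested)

# With 20 units tested to time t_i and zero failures, a uniform
# Beta(1, 1) prior gives p_hat = 1/22 and reliability 21/22.
p_hat = posterior_mean_failure_prob(20)
reliability = 1.0 - p_hat
```

The hierarchical variant in the paper additionally places a prior on the hyperparameters instead of fixing them as above.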
BACKGROUND Chronic heart failure is a complex clinical syndrome. The Chinese herbal compound preparation Jianpi Huatan Quyu recipe has been used to treat chronic heart failure; however, the underlying molecular mechanism is still not clear. AIM To identify the effective active ingredients of Jianpi Huatan Quyu recipe and explore its molecular mechanism in the treatment of chronic heart failure. METHODS The effective active ingredients of the eight herbs composing Jianpi Huatan Quyu recipe were identified using the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform. The target genes of chronic heart failure were searched in the GeneCards database. The target proteins of the active ingredients were mapped to chronic heart failure target genes to obtain the common drug-disease targets, which were then used to construct a key chemical component-target network using Cytoscape 3.7.2 software. The protein-protein interaction network was constructed using the STRING database. Gene Ontology and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were performed through the Metascape database. Finally, our previously published articles were searched to verify the results obtained via network pharmacology. RESULTS A total of 227 effective active ingredients of Jianpi Huatan Quyu recipe were identified, of which quercetin, kaempferol, 7-methoxy-2-methyl isoflavone, formononetin, and isorhamnetin may be key active ingredients involved in the therapeutic effects of the TCM, acting on STAT3, MAPK3, AKT1, JUN, MAPK1, TP53, TNF, HSP90AA1, p65, MAPK8, MAPK14, IL6, EGFR, EDN1, FOS, and other proteins. The pathways identified by KEGG enrichment analysis include pathways in cancer, the IL-17 signaling pathway, PI3K-Akt signaling pathway, HIF-1 signaling pathway, calcium signaling pathway, cAMP signaling pathway, NF-kappaB signaling pathway, AMPK signaling pathway, etc. Previous studies on Jianpi Huatan Quyu recipe suggested that this compound preparation can regulate the TNF-α, IL-6, MAPK, cAMP, and AMPK pathways to affect the mitochondrial structure of myocardial cells, oxidative stress, and energy metabolism, thus achieving therapeutic effects on chronic heart failure. CONCLUSION The Chinese medicine compound preparation Jianpi Huatan Quyu recipe exerts therapeutic effects on chronic heart failure possibly by influencing the mitochondrial structure of cardiomyocytes, oxidative stress, energy metabolism, and other processes. Future studies are warranted to investigate the role of the IL-17, PI3K-Akt, HIF-1, and other signaling pathways in mediating the therapeutic effects of Jianpi Huatan Quyu recipe on chronic heart failure.
Remaining useful life (RUL) prediction is one of the most crucial elements in prognostics and health management (PHM). Aiming at imperfect prior information, this paper proposes an RUL prediction method based on a nonlinear random coefficient regression (RCR) model that fuses failure time data. Firstly, some useful properties of parameter estimation based on the nonlinear RCR model are given. Based on these properties, the failure time data can reasonably be fused as prior information. Specifically, the fixed parameters are calculated from the field degradation data of the evaluated equipment, and the prior information of the random coefficient is estimated by fusing the failure time data of congeneric equipment. Then, the prior information of the random coefficient is updated online under the Bayesian framework, and the probability density function (PDF) of the RUL, accounting for the limitation of the failure threshold, is derived. Finally, two case studies are used for experimental verification. Compared with the traditional Bayesian method, the proposed method can effectively reduce the influence of imperfect prior information and improve the accuracy of RUL prediction.
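The online Bayesian update of a random coefficient can be illustrated on the simplest linear special case: a Wiener degradation process with random drift. The model form, function name, and parameter values below are illustrative assumptions, not the paper's nonlinear RCR model.

```python
import numpy as np

# Sketch of the conjugate Bayesian update of a random drift coefficient
# theta in X(t) = theta * t + sigma * B(t), where theta ~ N(mu0, v0).
# The prior (mu0, v0) could be built from congeneric failure-time data;
# here all names and values are purely illustrative.

def update_theta(times, obs, mu0, v0, sigma):
    """Posterior mean and variance of theta given degradation readings."""
    t = np.asarray(times, float)
    x = np.asarray(obs, float)
    # Increments of X over increments of t are independent Gaussians:
    # dx_k ~ N(theta * dt_k, sigma^2 * dt_k)
    dt = np.diff(np.concatenate(([0.0], t)))
    dx = np.diff(np.concatenate(([0.0], x)))
    prec = 1.0 / v0 + np.sum(dt) / sigma**2      # posterior precision
    mean = (mu0 / v0 + np.sum(dx) / sigma**2) / prec
    return mean, 1.0 / prec
```

With a very diffuse prior the posterior mean approaches the empirical drift x(t_n)/t_n, as expected.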
Prediction of machine failure is challenging as the dataset is often imbalanced with a low failure rate. The common approach to handle classification involving imbalanced data is to balance the data using a sampling approach such as random undersampling, random oversampling, or the Synthetic Minority Oversampling Technique (SMOTE). This paper compared the classification performance of three popular classifiers (Logistic Regression, Gaussian Naïve Bayes, and Support Vector Machine) in predicting machine failure in the oil and gas industry. The original machine failure dataset consists of 20,473 hourly records and is imbalanced, with 19,945 (97%) 'non-failure' and 528 (3%) 'failure' records. The three independent variables used to predict machine failure were a pressure indicator, a flow indicator, and a level indicator. The accuracy of the classifiers is very high and close to 100%, but the sensitivity of all classifiers on the original dataset was close to zero. The performance of the three classifiers was then evaluated on data with different imbalance rates (10% to 50%) generated from the original data using SMOTE, SMOTE-Support Vector Machine (SMOTE-SVM), and SMOTE-Edited Nearest Neighbour (SMOTE-ENN). The classifiers were evaluated based on improvement in sensitivity and F-measure. Results showed that the sensitivity of all classifiers increases as the imbalance rate increases. SVM with a radial basis function (RBF) kernel has the highest sensitivity when the data is balanced (50:50) using SMOTE (sensitivity = 0.5686, F = 0.6927), compared to Naïve Bayes (sensitivity = 0.4033, F = 0.6218) and Logistic Regression (sensitivity = 0.4194, F = 0.621). Overall, the Gaussian Naïve Bayes model consistently improves in sensitivity and F-measure as the imbalance ratio increases, but its sensitivity stays below 50%. The classifiers performed better when the data was balanced using SMOTE-SVM compared to SMOTE and SMOTE-ENN.
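The SMOTE interpolation step used to rebalance such data can be sketched in a few lines of NumPy. This is a minimal illustration, not the imbalanced-learn API; the neighbour count and seed are arbitrary.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new minority samples by
    interpolating between a random minority sample and one of its
    k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]       # skip the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # interpolation weight in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(out)
```

Because each synthetic point lies on a segment between two real minority samples, the generated data never leaves the convex hull of the minority class.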
Current traffic signal split failure (SF) estimations derived from high-resolution controller event data rely on detector occupancy ratios and preset thresholds. The reliability of these techniques depends on the selected thresholds, detector lengths, and vehicle arrival patterns. Connected vehicle (CV) trajectory data can more definitively show when a vehicle split fails by evaluating the number of stops it experiences as it approaches an intersection, but such data has limited market penetration. This paper compares cycle-by-cycle SF estimations from both high-resolution controller event data and CV trajectory data, and evaluates the effect of data aggregation on SF agreement between the two techniques. Results indicate that, in general, split failure events identified from CV data are likely to also be captured from high-resolution data, but split failure events identified from high-resolution data are less likely to be captured from CV data. This is because the CV market penetration rate (MPR) of roughly 5% is too low to capture representative data for every controller cycle. However, data aggregation can increase the rate at which CV data captures split failure events. For example, day-of-week aggregation increased the percentage of split failures identified from high-resolution data that were also captured by CV data from 35% to 56%. It is recommended that aggregated CV data be used to estimate SF, as it provides conservative and actionable results without the limitations of intersection and detector configuration. As the CV MPR increases, the accuracy of CV-based SF estimation will also improve.
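The occupancy-ratio logic behind high-resolution SF estimation reduces to a per-cycle threshold rule: high detector occupancy during green followed by high occupancy at the start of red suggests a queue that failed to clear. The 80% thresholds below are illustrative defaults, not calibrated values from the paper.

```python
def split_failure(green_occ, red_occ, g_thresh=0.80, r_thresh=0.80):
    """Flag a cycle as a split failure when stop-bar detector occupancy
    is high both during the green phase and during the first seconds of
    red (thresholds are illustrative; deployments tune them per approach)."""
    return green_occ >= g_thresh and red_occ >= r_thresh

# A cycle where the queue clears during green is not flagged:
assert split_failure(0.95, 0.12) is False
```

CV-based estimation instead counts per-vehicle stops from trajectories, which is why the two techniques can disagree cycle by cycle at low MPR.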
In this paper, an estimation method for reliability parameters in the case of zero-failure data, the synthetic estimation method, is given. For zero-failure data from a two-parameter exponential distribution, a hierarchical Bayesian estimation of the failure probability is presented. After failure information is introduced, the hierarchical Bayesian estimation and synthetic estimation of the failure probability, as well as the synthetic estimation of reliability, are given. Calculation and analysis are performed for a practical problem in which the life distribution of an engine obeys a two-parameter exponential distribution.
For many products, life distributions mostly exhibit an increasing failure rate in average (IFRA). Aiming at these distributions, and using properties of the IFRA class, this paper gives a non-parametric method for processing zero-failure data. Estimates of reliability at any time are first obtained, and then, based on a regression model of failure rates, estimates of reliability indexes are given. Finally, a practical example is processed with this method.
Data obtained from accelerated life testing (ALT) when there are two or more failure modes, commonly referred to as competing failure modes, are often incomplete. The incompleteness is mainly due to censoring, as well as masking, which may occur when the failure time is observed but its corresponding failure mode is not identified, because identifying the failure mode may be expensive or very difficult due to a lack of appropriate diagnostics. A method is proposed for analyzing incomplete data from constant-stress ALT with competing failure modes. It is assumed that the failure modes have s-independent latent lifetimes and that the log lifetime of each failure mode can be written as a linear function of stress. The parameters of the model are estimated using the expectation maximization (EM) algorithm with incomplete data. Simulation studies are performed to check model validity and investigate the properties of the estimates. For further validation, the method is also illustrated by an example that shows the process of analyzing incomplete data from an ALT of an insulation system. Because the incompleteness of the data is considered in the modeling, and the EM algorithm is used in estimation, the method is flexible in ALT analysis.
The bearings of a certain type have lives following a Weibull distribution. In a life test with 20 sets of bearings, only one set failed within the specified time, and none of the remainder failed even after the test time was extended. With a set of testing data like that in Table 1, it is required to estimate the reliability at the mission time. In this paper, we first use a hierarchical Bayesian method to determine the prior distribution and the Bayesian estimates of the various probabilities of failure, p_i, and then use the method of least squares to estimate the parameters of the Weibull distribution and the reliability. Actual computation shows that the estimates so obtained are rather robust, and the results have been adopted for practical use.
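The least-squares step, fitting Weibull parameters to a set of failure probability estimates p_i, can be sketched via the usual linearization of the Weibull CDF: R(t) = exp(-(t/eta)^m) implies ln(-ln R) = m ln t - m ln eta, a straight line in ln t. The data points below are synthetic, not the paper's Table 1 values.

```python
import numpy as np

def fit_weibull(times, p_fail):
    """Least-squares Weibull fit from (time, failure probability) pairs
    via the linearization ln(-ln(1 - p)) = m*ln t - m*ln eta."""
    t = np.log(np.asarray(times, float))
    y = np.log(-np.log(1.0 - np.asarray(p_fail, float)))
    m, c = np.polyfit(t, y, 1)      # slope m, intercept c = -m*ln(eta)
    eta = np.exp(-c / m)
    return m, eta
```

In the paper the p_i fed into this regression come from the hierarchical Bayesian step, rather than from empirical failure counts.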
We show that an aggregated Interest in Named Data Networking (NDN) may fail to retrieve the desired data, since an Interest previously sent upstream for the same content may be judged a duplicate and dropped by an upstream node due to multipath forwarding. We then propose NDRUDAF, a NACK-based mechanism that enhances Interest forwarding and enables Detection of and fast Recovery from such Unanticipated Data Access Failures. In NDN enhanced with NDRUDAF, the router that aggregates the Interest detects the unanticipated data access failure based on a negative acknowledgement from the upstream node that judged the Interest a duplicate. The router then retransmits the Interest as soon as possible on behalf of the requester whose Interest was aggregated, to recover quickly from the data access failure. We qualitatively and quantitatively analyze the performance of NDN enhanced with our proposed NDRUDAF and compare it with that of the present NDN. Our experimental results validate that NDRUDAF improves system performance under such unanticipated data access failures in terms of data access delay and network resource utilization efficiency at routers.
The development of cloud computing and virtualization technology has brought great challenges to the reliability of data center services. Data centers typically contain a large number of compute and storage nodes which may fail and affect the quality of service. Failure prediction is an important means of ensuring service availability. Predicting node failure in cloud-based data centers is challenging because the observed failure symptoms have complex characteristics, and the distribution imbalance between failure samples and normal samples is widespread, resulting in inaccurate failure prediction. Targeting these challenges, this paper proposes a novel failure prediction method, FP-STE (Failure Prediction based on Spatio-Temporal feature Extraction). Firstly, an improved recurrent neural network, HW-GRU (improved GRU based on HighWay network), and a convolutional neural network (CNN) are used to extract the temporal and spatial features of multivariate data, respectively, to increase the discrimination of different types of failure symptoms, which improves the accuracy of prediction. Then the intermediate results of the two models are added as features into SCS-XGBoost to predict the possibility and the precise type of node failure in the future. SCS-XGBoost is an ensemble learning model improved by an integrated strategy of oversampling and cost-sensitive learning. Experimental results based on real data sets confirm the effectiveness and superiority of FP-STE.
Heart failure is now widespread throughout the world. Heart disease affects approximately 48% of the population. It is expensive and also difficult to cure the disease. This research paper presents machine learning models to predict heart failure. The fundamental idea is to compare the correctness of various machine learning (ML) algorithms and to use boosting algorithms to improve the models' prediction accuracy. Supervised algorithms such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Logistic Regression (LR) are considered to achieve the best results. Boosting algorithms such as Extreme Gradient Boosting (XGBoost) and CatBoost are also used to improve prediction, alongside Artificial Neural Networks (ANN). This research also focuses on data visualization to identify patterns, trends, and outliers in a massive data set. Python and scikit-learn are used for ML. TensorFlow and Keras, along with Python, are used for ANN model training. The DT and RF algorithms achieved the highest accuracy of 95% among the classifiers. Meanwhile, KNN obtained the second-highest accuracy of 93.33%. XGBoost achieved an accuracy of 91.67%; SVM, CatBoost, and ANN each had an accuracy of 90%; and LR had 88.33% accuracy.
Since a bladder accumulator is a highly reliable, long-life component of a hydraulic system, its cost is high and testing its reliability takes a long time; therefore, a reliability test with a small sample is performed, and no failure data are obtained using the method of fixed-time truncation. In the case of the Weibull distribution, a life-reliability model of the bladder accumulator is established by the Bayesian method using the optimal confidence interval method, and models of the one-sided lower confidence limit of the reliability and of the reliability life are established. Experimental results show that this evaluation method for zero-failure data under the Weibull distribution is a good way to evaluate the reliability of the accumulator and is convenient for engineering application; the work has both theoretical and practical significance.
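The flavor of a one-sided lower confidence limit from a zero-failure, fixed-time-truncated test can be sketched with the standard binomial zero-failure bound extended to Weibull time scaling. This assumes a known shape parameter and is a generic textbook-style bound, not the paper's optimal-confidence-interval construction; all numbers are illustrative.

```python
def weibull_zero_failure_rl(n, t0, t, m, conf=0.90):
    """One-sided lower confidence bound on reliability at mission time t
    when n units each survive a test of length t0 with zero failures,
    assuming a Weibull life with known shape m. Derived from the
    binomial zero-failure bound R_L(t0) = (1 - conf)**(1/n) mapped
    through the Weibull time scale (t/t0)**m."""
    return (1.0 - conf) ** ((t / t0) ** m / n)
```

For example, 20 survivors of a test equal in length to the mission time give, at 90% confidence, a lower reliability bound of 0.1**(1/20), roughly 0.89.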
This article introduces a novel variant of the generalized linear exponential (GLE) distribution, known as the sine generalized linear exponential (SGLE) distribution. The SGLE distribution utilizes the sine transformation to enhance its capabilities. The updated distribution is very adaptable and may be efficiently used in modeling survival data and reliability problems. The suggested model incorporates a hazard rate function (HRF) that may display a rising, J-shaped, or bathtub form, depending on its characteristics. This model includes many well-known lifespan distributions as sub-models. The suggested model is accompanied by a range of statistical features. The model parameters are examined using the techniques of maximum likelihood and Bayesian estimation with progressively censored data. To evaluate the effectiveness of these techniques, we provide a set of simulated data for testing purposes. The relevance of the newly presented model is shown via two real-world dataset applications, highlighting its superiority over other respected similar models.
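The sine transformation at the heart of such "sine-G" families maps a baseline CDF F to G(x) = sin((pi/2) * F(x)). A sketch with an exponential baseline; the paper's baseline is the GLE distribution, so the exponential choice and rate below are purely illustrative.

```python
import math

# Sine-G transformation of a baseline CDF: G(x) = sin((pi/2) * F(x)).
# Since sin((pi/2)u) >= u on [0, 1], the transformed distribution is
# stochastically smaller than its baseline (mass shifted toward 0).

def sine_g_cdf(x, base_cdf):
    return math.sin(0.5 * math.pi * base_cdf(x))

# Illustrative exponential baseline with rate lam (not the GLE baseline):
exp_cdf = lambda x, lam=1.0: 1.0 - math.exp(-lam * x)
```

G inherits the support and monotonicity of F while reshaping the hazard rate, which is what gives the SGLE family its extra flexibility.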
Objective Clinical medical record data associated with hepatitis B-related acute-on-chronic liver failure (HBV-ACLF) generally have small sample sizes and a class imbalance. However, most machine learning models are designed based on balanced data and lack interpretability. This study aimed to propose a traditional Chinese medicine (TCM) diagnostic model for HBV-ACLF based on the TCM syndrome differentiation and treatment theory that is clinically interpretable and highly accurate. Methods We collected medical records from 261 patients diagnosed with HBV-ACLF, covering three syndromes: Yang jaundice (214 cases), Yang-Yin jaundice (41 cases), and Yin jaundice (6 cases). To avoid overfitting of the machine learning model, we excluded the cases of Yin jaundice. After data standardization and cleaning, we obtained 255 relevant medical records of Yang jaundice and Yang-Yin jaundice. To address the class imbalance issue, we employed an oversampling method and five machine learning methods, including logistic regression (LR), support vector machine (SVM), decision tree (DT), random forest (RF), and extreme gradient boosting (XGBoost), to construct the syndrome diagnosis models. This study used precision, F1 score, the area under the receiver operating characteristic (ROC) curve (AUC), and accuracy as model evaluation metrics. The model with the best classification performance was selected to extract the diagnostic rules, and its clinical significance was thoroughly analyzed. Furthermore, we proposed a novel multiple-round stable rule extraction (MRSRE) method to obtain a stable rule set of features that can exhibit the model's clinical interpretability. Results The precision of the five machine learning models built using oversampled balanced data exceeded 0.90. Among these models, the accuracy of RF classification of syndrome types was 0.92, and the mean F1 scores for the two categories of Yang jaundice and Yang-Yin jaundice were 0.93 and 0.94, respectively. Additionally, the AUC was 0.98. The rules extracted from the RF syndrome differentiation model by the MRSRE method revealed that the common features of Yang jaundice and Yang-Yin jaundice were a wiry pulse; yellowing of the urine, skin, and eyes; a normal tongue body; healthy sublingual vessels; nausea; oil loathing; and poor appetite. The main features of Yang jaundice were a red tongue body and thickened sublingual vessels, whereas those of Yang-Yin jaundice were a dark, pale white, or light red tongue body, a white or slimy tongue coating, lack of strength, a slippery pulse, and abdominal distension. This is aligned with the classifications made by TCM experts based on the TCM syndrome differentiation and treatment theory. Conclusion Our model can be utilized for differentiating HBV-ACLF syndromes and has the potential to be applied to generate other clinically interpretable models with high accuracy on clinical data characterized by small sample sizes and a class imbalance.
In this letter, a distributed protocol for sampled-data synchronization of coupled harmonic oscillators with controller failure and communication delays is proposed, and a brief convergence analysis of the algorithm over undirected connected graphs is provided. Furthermore, a simple yet generic criterion is presented to guarantee synchronized oscillatory motions in coupled harmonic oscillators. Subsequently, simulation results are presented to demonstrate the efficiency and feasibility of the theoretical results.
This paper presents a procedure for purifying training data sets (i.e., past occurrences of slope failures) for the inverse estimation of unobserved trigger factors of "different types of simultaneous slope failures". Because pixel-by-pixel observation of trigger factors is difficult, the authors had previously proposed an inverse analysis algorithm for trigger factors based on structural equation modeling (SEM). Through a measurement equation, the trigger factor is inversely estimated, and a TFI (trigger factor influence) map can be produced. As a subsequent step, a purification procedure for the training data set should be constructed to improve the accuracy of the TFI map, which depends on how representative the given training data sets of the different types of slope failures are. The proposed procedure resamples the matched pixels between the original groups of past slope failures (i.e., surface slope failures, deep-seated slope failures, and landslides) and three groups obtained by K-means clustering of all pixels corresponding to those slope failures. For all three types of slope failures, an improvement in success rates with the resampled training data sets was confirmed. As a final outcome, the differences between the TFI maps produced using the original and resampled training data sets are delineated on a DIF map (difference map), which is useful for analyzing trigger factor influence in terms of "risky-side and safe-side assessment" sub-areas with respect to "different types of simultaneous slope failures".
It is now recognized that many geomaterials have nonlinear failure envelopes. This nonlinearity is most marked at lower stress levels, the failure envelope being of quasi-parabolic shape. It is not easy to calibrate these nonlinear failure envelopes from triaxial test data. Currently, only the power-type failure envelope has an established formal procedure for its determination from triaxial test data. In this paper, a simplified procedure is developed for fitting four different types of nonlinear envelopes. These are of invaluable assistance in evaluating true factors of safety in slope stability problems and in correctly computing lateral earth pressure and bearing capacity. Use of Mohr-Coulomb failure envelopes leads to an overestimation of the factors of safety and other geotechnical quantities.
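For the power-type envelope, which the abstract notes already has a formal calibration procedure, the fit reduces to log-log linear regression of shear strength against normal stress. The stress values below are synthetic, not triaxial data from the paper.

```python
import numpy as np

# Calibrating a power-type failure envelope tau = A * sigma_n**b from
# (normal stress, shear strength) pairs by log-log linear regression.
# The envelope form and all values here are illustrative.

def fit_power_envelope(sigma_n, tau):
    """Return (A, b) for tau = A * sigma_n**b via least squares in log space."""
    b, log_a = np.polyfit(np.log(sigma_n), np.log(tau), 1)
    return np.exp(log_a), b
```

The curvature parameter b < 1 captures the quasi-parabolic flattening at low stress that a linear Mohr-Coulomb fit misses.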
Failure mode and effects analysis (FMEA) offers a quick and easy way to identify a ranking order for all failure modes in a system or a product. In FMEA the ranking method is the so-called risk priority number (RPN), the mathematical product of severity (S), occurrence (O), and detection (D). One major disadvantage of this ranking order is that failure modes with different combinations of S, O, and D may generate the same RPN, making decision-making difficult. Another shortfall of FMEA is its inability to discern the contributing factors, which leaves insufficient information about how to scale the improvement effort. Through the data envelopment analysis (DEA) technique and its extension, the proposed approach refines the current rankings of failure modes by investigating S, O, and D directly in lieu of the RPN, and furnishes improvement scales for S, O, and D. The purpose of the present study is to propose a new approach, robust, structured, and useful in practice, that enhances the assessment capabilities of FMEA.
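The RPN tie problem that the DEA-based approach targets is easy to see numerically: quite different (S, O, D) combinations collapse to the same product. The failure modes and scores below are invented for illustration.

```python
def rpn(s, o, d):
    """Risk priority number used in traditional FMEA ranking."""
    return s * o * d

# Three hypothetical failure modes with very different risk profiles
# (a severe-but-detectable mode vs. a frequent moderate one) that all
# receive the same RPN of 90 and are therefore indistinguishable:
modes = {"mode A": (9, 2, 5), "mode B": (5, 6, 3), "mode C": (3, 5, 6)}
scores = {name: rpn(*sod) for name, sod in modes.items()}
```

A DEA-style treatment keeps S, O, and D as separate inputs, so modes that tie on RPN can still be discriminated and assigned different improvement targets.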
Considering the dependence and competitive relationship between traumatic failure and degradation, the reliability assessment of products based on competing failure analysis is studied. The hazard rate of traumatic failure is regarded as a Weibull function of the degradation performance, and a Wiener process is used to describe the degradation process. The parameters are estimated with the maximum likelihood estimation (MLE) method, and a reliability model based on competing failure analysis is proposed. A case study of GaAs lasers is given to validate the effectiveness of the model and its solution method. The results indicate that if only degradation failure is considered, the estimate will be comparatively optimistic. Meanwhile, the correlation between degradation and traumatic failure has a great influence on the accuracy of the reliability assessment.
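A dependent competing-failure model of this kind can be sketched by Monte Carlo: degradation follows a Wiener process, and the traumatic hazard grows with the current degradation level. The link function, all parameter values, and the threshold below are illustrative assumptions, not the paper's fitted GaAs laser model.

```python
import numpy as np

# Monte Carlo sketch of dependent competing failures: degradation
# X(t) = mu*t + sigma*B(t) fails on crossing threshold D, while the
# traumatic hazard a * X(t)**b rises with degradation (Weibull-like
# link; every number here is illustrative).

def reliability(t, mu=0.1, sigma=0.05, D=5.0, a=1e-3, b=2.0,
                dt=0.2, n_sim=500, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(t / dt)
    alive = 0
    for _ in range(n_sim):
        x, ok = 0.0, True
        for _ in range(steps):
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            # fail on threshold crossing OR a traumatic shock whose
            # per-step probability is hazard * dt at the current level
            if x >= D or rng.random() < a * max(x, 0.0) ** b * dt:
                ok = False
                break
        alive += ok
    return alive / n_sim
```

Dropping the traumatic term from the simulation reproduces the abstract's observation: a degradation-only model overstates reliability, because it ignores the shocks that become more likely as the unit degrades.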
Funding (chronic heart failure network pharmacology study): Supported by the 2021 Shenyang Science and Technology Program-Public Health R&D Special Project (Joint Project) of Shenyang Municipal Science and Technology Bureau, No. 21-174-9-04.
Funding: Supported by the National Natural Science Foundation of China (61703410, 61873175, 62073336, 61873273, 61773386, 61922089).
Abstract: Remaining useful life (RUL) prediction is one of the most crucial elements in prognostics and health management (PHM). To address the problem of imperfect prior information, this paper proposes an RUL prediction method based on a nonlinear random coefficient regression (RCR) model that fuses failure time data. First, several useful properties of parameter estimation under the nonlinear RCR model are derived. Based on these properties, the failure time data can reasonably be fused as prior information. Specifically, the fixed parameters are calculated from the field degradation data of the evaluated equipment, and the prior distribution of the random coefficient is estimated by fusing the failure time data of congeneric equipment. The prior of the random coefficient is then updated online under the Bayesian framework, and the probability density function (PDF) of the RUL, accounting for the failure threshold, is derived. Finally, two case studies are used for experimental verification. Compared with the traditional Bayesian method, the proposed method effectively reduces the influence of imperfect prior information and improves the accuracy of RUL prediction.
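The paper's exact RCR model is not given in the abstract, but the online Bayesian update it describes can be illustrated with a minimal sketch: a degradation path linear in a single random coefficient with a conjugate normal prior (the nonlinearity here enters only through a hypothetical power-law regressor; the exponent, noise level, and prior values below are all illustrative assumptions, not the paper's).

```python
import numpy as np

def update_random_coefficient(z, y, mu0, tau0_sq, sigma_sq):
    """Conjugate-normal Bayesian update of a random coefficient a in
    y_i = a * z_i + eps_i, with eps_i ~ N(0, sigma_sq) and
    prior a ~ N(mu0, tau0_sq). Returns posterior mean and variance."""
    z, y = np.asarray(z, float), np.asarray(y, float)
    prec = 1.0 / tau0_sq + np.sum(z ** 2) / sigma_sq   # posterior precision
    tau_n_sq = 1.0 / prec
    mu_n = tau_n_sq * (mu0 / tau0_sq + np.sum(z * y) / sigma_sq)
    return mu_n, tau_n_sq

# toy degradation path x(t) = a * t^b with the exponent b fixed (illustrative)
b = 1.2
t = np.array([1.0, 2.0, 3.0, 4.0])
z = t ** b                                # regressor after linearising in a
y = 0.8 * z + np.array([0.02, -0.01, 0.03, 0.00])   # noisy field observations
mu_n, var_n = update_random_coefficient(z, y, mu0=1.0, tau0_sq=0.25, sigma_sq=0.01)
```

As more field data arrive, the posterior mean moves from the fused prior (here 1.0) toward the value implied by the observed path (0.8), and the posterior variance shrinks, which is the mechanism by which the fused failure-time prior gets corrected online.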
Funding: Supported under a research grant (PO Number: 920138936) from the Institute of Technology PETRONAS Sdn Bhd, 32610 Bandar Seri Iskandar, Perak, Malaysia.
Abstract: Prediction of machine failure is challenging because the dataset is often imbalanced, with a low failure rate. The common approach to classification involving imbalanced data is to balance the data using a sampling approach such as random undersampling, random oversampling, or the Synthetic Minority Oversampling Technique (SMOTE). This paper compared the classification performance of three popular classifiers (Logistic Regression, Gaussian Naïve Bayes, and Support Vector Machine) in predicting machine failure in the oil and gas industry. The original machine failure dataset consists of 20,473 hourly records and is imbalanced, with 19,945 (97%) 'non-failure' and 528 (3%) 'failure' records. The three independent variables used to predict machine failure were a pressure indicator, a flow indicator, and a level indicator. The accuracy of the classifiers was very high and close to 100%, but the sensitivity of all classifiers on the original dataset was close to zero. The performance of the three classifiers was then evaluated on data with different imbalance rates (10% to 50%) generated from the original data using SMOTE, SMOTE-Support Vector Machine (SMOTE-SVM), and SMOTE-Edited Nearest Neighbour (SMOTE-ENN). The classifiers were evaluated based on improvement in sensitivity and F-measure. Results showed that the sensitivity of all classifiers increases as the imbalance rate increases. SVM with a radial basis function (RBF) kernel had the highest sensitivity when the data were balanced (50:50) using SMOTE (Sensitivity_test = 0.5686, F_test = 0.6927), compared to Naïve Bayes (Sensitivity_test = 0.4033, F_test = 0.6218) and Logistic Regression (Sensitivity_test = 0.4194, F_test = 0.621). Overall, the Gaussian Naïve Bayes model consistently improved sensitivity and F-measure as the imbalance ratio increased, but its sensitivity remained below 50%. The classifiers performed better when the data were balanced using SMOTE-SVM compared to SMOTE and SMOTE-ENN.
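The core phenomenon the abstract reports, near-zero sensitivity on raw imbalanced data that improves after oversampling, can be reproduced with a hand-rolled SMOTE-style interpolation on synthetic data (the dataset, means, and 97:3 ratio below are fabricated stand-ins; this naive resampler is a sketch of the SMOTE idea, not the library implementation the paper used).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def smote_naive(X_min, n_new, k=5):
    """Minimal SMOTE-style sketch: each synthetic point is interpolated
    between a random minority sample and one of its k nearest minority
    neighbours (Euclidean distance)."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nbrs)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

# imbalanced toy data: 97% 'non-failure' vs 3% 'failure', as in the paper
X0 = rng.normal(0.0, 1.0, size=(970, 3))       # three indicator variables
X1 = rng.normal(1.0, 1.0, size=(30, 3))
X = np.vstack([X0, X1]); y = np.r_[np.zeros(970), np.ones(30)]

clf = LogisticRegression(max_iter=1000).fit(X, y)
sens_before = recall_score(y, clf.predict(X))  # sensitivity on raw data

X_syn = smote_naive(X1, n_new=940)             # balance to roughly 50:50
Xb = np.vstack([X, X_syn]); yb = np.r_[y, np.ones(940)]
sens_after = recall_score(y, LogisticRegression(max_iter=1000).fit(Xb, yb).predict(X))
```

Balancing shifts the decision boundary toward the majority class, so sensitivity (minority recall) rises, typically at some cost in precision, which mirrors the sensitivity/F-measure trade-off the paper evaluates.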
Abstract: Current traffic signal split failure (SF) estimations derived from high-resolution controller event data rely on detector occupancy ratios and preset thresholds. The reliability of these techniques depends on the selected thresholds, detector lengths, and vehicle arrival patterns. Connected vehicle (CV) trajectory data can more definitively show when a vehicle split fails by evaluating the number of stops it experiences as it approaches an intersection, but such data have limited market penetration. This paper compares cycle-by-cycle SF estimations from both high-resolution controller event data and CV trajectory data, and evaluates the effect of data aggregation on SF agreement between the two techniques. Results indicate that, in general, split failure events identified from CV data are likely to also be captured from high-resolution data, but split failure events identified from high-resolution data are less likely to be captured from CV data. This is because the CV market penetration rate (MPR) of ~5% is too low to capture representative data for every controller cycle. However, data aggregation can increase the rate at which CV data capture split failure events. For example, day-of-week data aggregation increased the percentage of split failures identified with high-resolution data that were also captured with CV data from 35% to 56%. It is recommended that aggregated CV data be used to estimate SF, as it provides conservative and actionable results without the limitations of intersection and detector configuration. As the CV MPR increases, the accuracy of CV-based SF estimation will also improve.
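The CV-based criterion the abstract describes, flagging a split failure when a trajectory shows multiple stops on approach, can be sketched as a small stop-counting routine (the speed thresholds and the two-stop rule below are illustrative assumptions; the paper does not specify its exact values in the abstract).

```python
def count_stops(speeds, stop_thresh=2.0, move_thresh=5.0):
    """Count distinct stops in a speed trace (m/s): a stop begins when speed
    drops below stop_thresh and ends once it recovers above move_thresh.
    The two thresholds add hysteresis so jitter near zero is not double-counted."""
    stops, stopped = 0, False
    for v in speeds:
        if not stopped and v < stop_thresh:
            stops, stopped = stops + 1, True
        elif stopped and v > move_thresh:
            stopped = False
    return stops

def is_split_failure(speeds):
    """Two or more stops on approach suggest the vehicle failed to clear
    on its first green, i.e. it experienced a split failure."""
    return count_stops(speeds) >= 2

one_stop = [14, 10, 4, 0.5, 0, 0, 6, 12, 14]          # cleared on first green
two_stops = [14, 8, 1, 0, 7, 11, 1.5, 0, 0, 8, 13]    # stopped twice
```

A per-cycle SF flag is then obtained by applying `is_split_failure` to every CV trajectory observed during that cycle, which is also why a ~5% MPR leaves many cycles with no observation at all.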
Abstract: In this paper, an estimation method for reliability parameters in the case of zero-failure data, the synthetic estimation method, is given. For zero-failure data from a two-parameter exponential distribution, a hierarchical Bayesian estimate of the failure probability is presented. After failure information is introduced, hierarchical Bayesian and synthetic estimates of the failure probability, as well as a synthetic estimate of reliability, are given. Calculation and analysis are performed for a practical problem in which the life distribution of an engine obeys a two-parameter exponential distribution.
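The hierarchical Bayesian idea for zero-failure data can be sketched in a generic Beta-Binomial form: with n units tested and zero failures, the likelihood is (1-p)^n; placing a Beta(a, b) prior on p and a discrete uniform hyperprior on b gives a closed-form mixture posterior. The choices a = 1 and a grid of b values below are illustrative assumptions, not the paper's quasi-exponential prior, but the averaging structure is the same.

```python
from math import lgamma, exp

def betaln(a, b):
    """log of the Beta function B(a, b), via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def zero_failure_posterior_mean(n, a=1.0, b_grid=(1, 2, 3, 4, 5)):
    """Hierarchical Bayes estimate of the failure probability p when n units
    were tested with zero failures. Prior p ~ Beta(a, b), hyperprior on b
    uniform over b_grid (illustrative). Marginal likelihood of zero failures
    for each b is m(b) = B(a, b + n) / B(a, b)."""
    log_m = [betaln(a, b + n) - betaln(a, b) for b in b_grid]
    mx = max(log_m)
    w = [exp(v - mx) for v in log_m]
    s = sum(w)
    w = [x / s for x in w]                      # posterior weights over b
    # posterior mean of p given b is a / (a + b + n); average over b
    return sum(wi * a / (a + b + n) for wi, b in zip(w, b_grid))

p_hat = zero_failure_posterior_mean(n=20)
r_hat = 1.0 - p_hat          # point estimate of reliability at that moment
```

Note how the posterior automatically favours larger b (priors concentrated near p = 0), since those values explain a zero-failure outcome best, giving a smaller and more defensible failure-probability estimate than a flat prior would.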
Abstract: For many products, life distributions mostly comply with increasing failure rate in average (IFRA). For these distributions, using the properties of the IFRA class, this paper gives a non-parametric method for processing zero-failure data. Estimates of reliability at any time are first obtained, and based on a regression model of failure rates, estimates of reliability indexes are given. Finally, a practical example is processed with this method.
Funding: Supported by the Sustentation Program of National Ministries and Commissions of China (Grant No. 203020102).
Abstract: Data obtained from accelerated life testing (ALT) when there are two or more failure modes, commonly referred to as competing failure modes, are often incomplete. The incompleteness is mainly due to censoring, as well as masking, which is the case when the failure time is observed but its corresponding failure mode is not identified, because identification of the failure mode may be expensive or very difficult owing to a lack of appropriate diagnostics. A method is proposed for analyzing incomplete data from constant-stress ALT with competing failure modes. It is assumed that the failure modes have s-independent latent lifetimes and that the log lifetime of each failure mode can be written as a linear function of stress. The parameters of the model are estimated by using the expectation-maximization (EM) algorithm with the incomplete data. Simulation studies are performed to check model validity and investigate the properties of the estimates. For further validation, the method is also illustrated by an example, which shows the process of analyzing incomplete data from ALT of an insulation system. By considering the incompleteness of the data in modeling and making use of the EM algorithm for estimation, the method becomes more flexible in ALT analysis.
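The EM treatment of masked failure modes can be shown in its simplest form: two s-independent exponential competing risks, where for masked observations the E-step assigns each failure mode its posterior probability and the M-step re-estimates the rates. This strips out the paper's log-linear stress model and censoring, and the data below are fabricated, so it is a sketch of the EM mechanics only.

```python
def em_competing_exponentials(times, causes, iters=200):
    """EM sketch for two s-independent exponential failure modes (rates l1, l2)
    where some failure causes are masked (cause=None). Cause labels are 0 or 1.
    For exponentials, the posterior cause probability of a masked failure is
    l1 / (l1 + l2), independent of the failure time."""
    l1, l2 = 1.0, 1.0                       # initial rate guesses
    total_time = sum(times)                 # total exposure (all units failed)
    for _ in range(iters):
        w1 = 0.0
        for t, c in zip(times, causes):
            if c == 0:
                w1 += 1.0                   # cause observed directly
            elif c is None:
                w1 += l1 / (l1 + l2)        # E-step: expected cause count
        w2 = len(times) - w1
        l1, l2 = w1 / total_time, w2 / total_time   # M-step: rate updates
    return l1, l2

times  = [1.2, 0.7, 2.5, 0.4, 1.9, 3.1, 0.9, 1.1]   # fabricated failure times
causes = [0, 0, 1, None, 0, None, 1, None]          # three masked causes
l1, l2 = em_competing_exponentials(times, causes)
```

On this toy data the iteration has a closed-form fixed point (the expected count for mode 0 solves w1 = 3 + 3·w1/8, giving w1 = 4.8), so convergence is easy to verify, which is exactly the kind of check the paper's simulation studies perform at larger scale.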
Abstract: The bearings of a certain type have lives following a Weibull distribution. In a life test with 20 sets of bearings, only one set failed within the specified time, and none of the remainder failed even after the test time was extended. With a set of test data like that in Table 1, it is required to estimate the reliability at the mission time. In this paper, we first use the hierarchical Bayesian method to determine the prior distribution and the Bayesian estimates of the various failure probabilities p_i, then use the method of least squares to estimate the parameters of the Weibull distribution and the reliability. Actual computation shows that the estimates so obtained are rather robust, and the results have been adopted for practical use.
Funding: Supported in part by the National Natural Science Foundation of China (No. 61602114), in part by the National Key Research and Development Program of China (2017YFB0801703), in part by the CERNET Innovation Project (NGII20170406), and in part by the Jiangsu Provincial Key Laboratory of Network and Information Security (BM2003201).
Abstract: We show that an aggregated Interest in Named Data Networking (NDN) may fail to retrieve the desired data, since an Interest previously sent upstream for the same content can be judged a duplicate and dropped by an upstream node due to its multipath forwarding. We therefore propose NDRUDAF, a NACK-based mechanism that enhances Interest forwarding and enables Detection and fast Recovery from such Unanticipated Data Access Failures. In NDN enhanced with NDRUDAF, the router that aggregates the Interest detects such an unanticipated data access failure based on a negative acknowledgement from the upstream node that judged the Interest a duplicate. The router then retransmits the Interest as soon as possible on behalf of the requester whose Interest was aggregated, to recover quickly from the data access failure. We qualitatively and quantitatively analyze the performance of NDN enhanced with our proposed NDRUDAF and compare it with that of the present NDN. Our experimental results validate that NDRUDAF improves system performance in the case of such unanticipated data access failures in terms of data access delay and network resource utilization efficiency at routers.
Funding: Supported in part by the National Key Research and Development Program of China (2019YFB2103200), NSFC (61672108), the Open Subject Funds of the Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory (SKX182010049), the Fundamental Research Funds for the Central Universities (5004193192019PTB-019), and the Industrial Internet Innovation and Development Project 2018 of China.
Abstract: The development of cloud computing and virtualization technology has brought great challenges to the reliability of data center services. Data centers typically contain a large number of compute and storage nodes that may fail and affect the quality of service. Failure prediction is an important means of ensuring service availability. Predicting node failure in cloud-based data centers is challenging because the failure symptoms have complex characteristics, and distribution imbalance between failure samples and normal samples is widespread, resulting in inaccurate failure prediction. Targeting these challenges, this paper proposes a novel failure prediction method, FP-STE (Failure Prediction based on Spatio-Temporal feature Extraction). First, an improved recurrent neural network, HW-GRU (improved GRU based on HighWay network), and a convolutional neural network (CNN) are used to extract the temporal and spatial features of multivariate data, respectively, to increase the discrimination of different types of failure symptoms, which improves prediction accuracy. The intermediate results of the two models are then added as features into SCS-XGBoost to predict the possibility and the precise type of node failure in the future. SCS-XGBoost is an ensemble learning model improved by an integrated strategy of oversampling and cost-sensitive learning. Experimental results based on real data sets confirm the effectiveness and superiority of FP-STE.
Funding: Taif University Researchers Supporting Project Number (TURSP-2020/73), Taif University, Taif, Saudi Arabia.
Abstract: Heart failure is now widespread throughout the world. Heart disease affects approximately 48% of the population. The disease is expensive and also difficult to cure. This research paper presents machine learning models to predict heart failure. The fundamental concept is to compare the correctness of various machine learning (ML) algorithms and to use boosting algorithms to improve the models' prediction accuracy. Supervised algorithms including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and Logistic Regression (LR) are considered to achieve the best results. Boosting algorithms such as Extreme Gradient Boosting (XGBoost) and CatBoost are also used to improve prediction alongside Artificial Neural Networks (ANN). This research also uses data visualization to identify patterns, trends, and outliers in a massive data set. Python and scikit-learn are used for ML; TensorFlow and Keras, along with Python, are used for ANN model training. The DT and RF algorithms achieved the highest accuracy of 95% among the classifiers, while KNN obtained the second-highest accuracy of 93.33%. XGBoost had an accuracy of 91.67%; SVM, CatBoost, and ANN each had an accuracy of 90%; and LR had 88.33% accuracy.
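The classifier-comparison workflow the abstract describes (train several scikit-learn models, compare held-out accuracy) can be sketched as below. The paper's heart-failure dataset is not reproduced here, so a synthetic stand-in from `make_classification` is used; the feature counts and scores are therefore illustrative only and will not match the paper's reported accuracies.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# synthetic stand-in for a heart-failure dataset (binary outcome)
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "LR": LogisticRegression(max_iter=1000),
}
# held-out accuracy for each model, as in the paper's comparison table
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

On real clinical data one would add cross-validation and report sensitivity alongside accuracy, since accuracy alone is misleading for imbalanced medical outcomes, a point the machine-failure abstract above makes explicitly.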
Funding: Supported by the National Natural Science Foundation of China (Nos. 51405424, 51675461, 11673040).
Abstract: Because a bladder accumulator is a highly reliable, long-life component in a hydraulic system, its cost is high and testing its reliability takes a long time; therefore, a reliability test with a small sample is performed, and no-failure data are obtained using the method of fixed-time truncation. For the Weibull distribution, a life-reliability model of the bladder accumulator is established by the Bayesian method using the optimal confidence interval method, and models of the one-sided lower confidence interval of the reliability and of the reliable life are established. Experimental results show that this evaluation method for no-failure data under the Weibull distribution is a good way to evaluate the reliability of the accumulator and is convenient for engineering application; the work has theoretical and practical significance.
Funding: This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RG23142).
Abstract: This article introduces a novel variant of the generalized linear exponential (GLE) distribution, known as the sine generalized linear exponential (SGLE) distribution. The SGLE distribution utilizes the sine transformation to enhance its capabilities. The updated distribution is very adaptable and may be efficiently used in modeling survival data and reliability problems. The suggested model incorporates a hazard rate function (HRF) that may display an increasing, J-shaped, or bathtub form, depending on its parameters. This model includes many well-known lifespan distributions as sub-models. The suggested model is accompanied by a range of statistical features. The model parameters are examined using maximum likelihood and Bayesian estimation with progressively censored data. To evaluate the effectiveness of these techniques, we provide a set of simulated data for testing purposes. The relevance of the newly presented model is shown via two real-world dataset applications, highlighting its superiority over other respected similar models.
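The sine transformation mentioned here is, in the sine-G family commonly used in the distribution literature, G(x) = sin((π/2)·F(x)) applied to a baseline CDF F. The sketch below uses an exponential baseline in place of the GLE for brevity; the paper's SGLE may differ in detail, so treat this only as an illustration of how the transform reshapes a CDF while keeping it valid.

```python
import math

def base_cdf(x, lam=1.0):
    """Baseline exponential CDF, standing in for the GLE baseline."""
    return 1.0 - math.exp(-lam * x)

def sine_g_cdf(x, lam=1.0):
    """Sine-G transformation: G(x) = sin((pi/2) * F(x)).
    Since sin maps [0, pi/2] monotonically onto [0, 1], G inherits
    G(0) = 0 and G(inf) = 1, so it is still a proper CDF."""
    return math.sin(0.5 * math.pi * base_cdf(x, lam))

# sin((pi/2)u) >= u on [0, 1], so the transform shifts probability mass
# toward smaller lifetimes, which changes the hazard-rate shape
```

Because the transform adds no new parameters yet changes the shape of the density and hazard, it is a cheap way to extend a baseline family's flexibility, which is the motivation the abstract gives for the SGLE.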
Funding: Key Research Project of the Hunan Provincial Administration of Traditional Chinese Medicine (A2023048) and Key Research Foundation of the Education Bureau of Hunan Province, China (23A0273).
Abstract: Objective Clinical medical record data associated with hepatitis B-related acute-on-chronic liver failure (HBV-ACLF) generally have small sample sizes and class imbalance. However, most machine learning models are designed for balanced data and lack interpretability. This study aimed to propose a traditional Chinese medicine (TCM) diagnostic model for HBV-ACLF, based on TCM syndrome differentiation and treatment theory, that is clinically interpretable and highly accurate. Methods We collected medical records from 261 patients diagnosed with HBV-ACLF, covering three syndromes: Yang jaundice (214 cases), Yang-Yin jaundice (41 cases), and Yin jaundice (6 cases). To avoid overfitting of the machine learning models, we excluded the cases of Yin jaundice. After data standardization and cleaning, we obtained 255 relevant medical records of Yang jaundice and Yang-Yin jaundice. To address the class imbalance, we employed an oversampling method and five machine learning methods, including logistic regression (LR), support vector machine (SVM), decision tree (DT), random forest (RF), and extreme gradient boosting (XGBoost), to construct the syndrome diagnosis models. This study used precision, F1 score, the area under the receiver operating characteristic (ROC) curve (AUC), and accuracy as model evaluation metrics. The model with the best classification performance was selected to extract the diagnostic rules, and its clinical significance was thoroughly analyzed. Furthermore, we proposed a novel multiple-round stable rule extraction (MRSRE) method to obtain a stable rule set of features that exhibits the model's clinical interpretability. Results The precision of the five machine learning models built using oversampled balanced data exceeded 0.90. Among these models, the accuracy of RF classification of syndrome types was 0.92, the mean F1 scores of the two categories of Yang jaundice and Yang-Yin jaundice were 0.93 and 0.94, respectively, and the AUC was 0.98. The extraction rules of the RF syndrome differentiation model based on the MRSRE method revealed that the common features of Yang jaundice and Yang-Yin jaundice were wiry pulse; yellowing of the urine, skin, and eyes; normal tongue body; healthy sublingual vessels; nausea; oil loathing; and poor appetite. The main features of Yang jaundice were a red tongue body and thickened sublingual vessels, whereas those of Yang-Yin jaundice were a dark tongue body, pale white tongue body, white tongue coating, lack of strength, slippery pulse, light red tongue body, slimy tongue coating, and abdominal distension. This is aligned with the classifications made by TCM experts based on TCM syndrome differentiation and treatment theory. Conclusion Our model can be utilized for differentiating HBV-ACLF syndromes and has the potential to be applied to generate other clinically interpretable models with high accuracy on clinical data characterized by small sample sizes and class imbalance.
Funding: Partially supported by the National Natural Science Foundation of China (11272791, 61364003, and 61203006), the Innovation Program of the Shanghai Municipal Education Commission (10ZZ61 and 14ZZ151), and the Science and Technology Foundation of Guizhou Province (20122316).
Abstract: In this letter, a distributed protocol for sampled-data synchronization of coupled harmonic oscillators with controller failure and communication delays is proposed, and a brief convergence analysis of the algorithm over undirected connected graphs is provided. Furthermore, a simple yet generic criterion is presented to guarantee synchronized oscillatory motions in coupled harmonic oscillators. Simulation results are then worked out to demonstrate the efficiency and feasibility of the theoretical results.
Abstract: This paper presents a procedure for purifying training data sets (i.e., past occurrences of slope failures) for inverse estimation of unobserved trigger factors of different types of simultaneous slope failures. Owing to the difficulty of pixel-by-pixel observation of trigger factors, the authors previously proposed an inverse analysis algorithm for trigger factors based on structural equation modeling (SEM). Through a measurement equation, the trigger factor is inversely estimated, and a TFI (trigger factor influence) map can be produced. As a subsequent subject, a purification procedure for the training data sets should be constructed to improve the accuracy of the TFI map, which depends on the representativeness of the given training data sets of the different types of slope failures. The proposed procedure resamples the matched pixels between the original groups of past slope failures (i.e., surface slope failures, deep-seated slope failures, and landslides) and three groups obtained by K-means clustering of all pixels corresponding to those slope failures. For all three types of slope failures, an improvement in success rates with the resampled training data sets was confirmed. As a final outcome, the differences between the TFI maps produced using the original and resampled training data sets are delineated on a DIF map (difference map), which is useful for analyzing trigger factor influence in terms of risky- and safe-side assessment sub-areas with respect to different types of simultaneous slope failures.
Abstract: It is now recognized that many geomaterials have nonlinear failure envelopes. This nonlinearity is most marked at lower stress levels, where the failure envelope is of quasi-parabolic shape. It is not easy to calibrate these nonlinear failure envelopes from triaxial test data. Currently, only the power-type failure envelope has an established formal procedure for its determination from triaxial test data. In this paper, a simplified procedure is developed for fitting four different types of nonlinear envelopes. These are of invaluable assistance in the evaluation of true factors of safety in problems of slope stability and in the correct computation of lateral earth pressure and bearing capacity. Use of linear Mohr-Coulomb failure envelopes leads to an overestimation of factors of safety and other geotechnical quantities.
Abstract: Failure mode and effects analysis (FMEA) offers a quick and easy way to identify a ranking order for all failure modes in a system or product. In FMEA, the ranking method is the so-called risk priority number (RPN), which is the mathematical product of severity (S), occurrence (O), and detection (D). One major disadvantage of this ranking order is that failure modes with different combinations of S, O, and D may generate the same RPN, making decision-making difficult. Another shortfall of FMEA is the lack of discernment of contributing factors, which leads to insufficient information about how to scale improvement effort. Through the data envelopment analysis (DEA) technique and its extension, the proposed approach refines the current rankings of failure modes by investigating S, O, and D directly in lieu of the RPN, and furnishes improvement scales for S, O, and D. The purpose of the present study is to propose a new approach, robust, structured, and useful in practice, that enhances the assessment capabilities of FMEA.
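The RPN ambiguity the abstract criticizes is easy to demonstrate: many distinct (S, O, D) combinations on the standard 1-10 scales collide on the same RPN, so the product alone cannot rank them. The snippet below enumerates the collisions for one RPN value (the value 60 is an arbitrary illustrative choice).

```python
from itertools import product

def rpn(s, o, d):
    """Classic FMEA risk priority number: RPN = S * O * D,
    with severity, occurrence, and detection each rated 1-10."""
    return s * o * d

# distinct (S, O, D) combinations that all collide on RPN = 60;
# this many-to-one mapping is the ranking ambiguity that motivates
# the DEA-based approach of investigating S, O, D directly
combos = [(s, o, d) for s, o, d in product(range(1, 11), repeat=3)
          if rpn(s, o, d) == 60]
```

For example, (3, 4, 5) and (1, 6, 10) both yield RPN 60 yet describe very different risks (a moderate, fairly detectable failure versus a rare but essentially undetectable one), which is why treating S, O, and D as separate DEA inputs can recover distinctions the product erases.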
Funding: The National Natural Science Foundation of China (No. 50405021).
Abstract: Considering the dependence and competitive relationship between traumatic failure and degradation, the reliability assessment of products based on competing failure analysis is studied. The hazard rate of traumatic failure is regarded as a Weibull-form function of the degradation performance, and a Wiener process is used to describe the degradation process. The parameters are estimated with the maximum likelihood estimation (MLE) method, and a reliability model based on competing failure analysis is proposed. A case study of GaAs lasers is given to validate the effectiveness of the model and its solution method. The results indicate that if only degradation failure is considered, the estimated result will be comparatively optimistic. Meanwhile, the correlation between degradation and traumatic failure has a great influence on the accuracy of the reliability assessment.
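The Wiener-process half of this model has a well-known closed-form MLE that is worth sketching: for X(t) = μt + σB(t) observed on a single path, increments are independent N(μΔt, σ²Δt), so μ̂ and σ̂² follow directly. The simulated path below (drift 0.2, diffusion 0.04) is fabricated for illustration, and the traumatic-failure hazard coupling from the paper is omitted.

```python
import numpy as np

def wiener_mle(t, x):
    """MLE of drift mu and diffusion sigma^2 for X(t) = mu*t + sigma*B(t),
    from one degradation path sampled at times t. Increments dx over dt are
    independent N(mu*dt, sigma^2*dt), giving:
      mu_hat     = sum(dx) / sum(dt)
      sigma2_hat = mean((dx - mu_hat*dt)^2 / dt)
    """
    dt = np.diff(t)
    dx = np.diff(x)
    mu = dx.sum() / dt.sum()
    sigma2 = np.mean((dx - mu * dt) ** 2 / dt)
    return mu, sigma2

# simulate one degradation path with known parameters, then recover them
rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 501)
dt = np.diff(t)
true_mu, true_s2 = 0.2, 0.04
increments = true_mu * dt + np.sqrt(true_s2 * dt) * rng.standard_normal(len(dt))
x = np.r_[0.0, np.cumsum(increments)]
mu_hat, s2_hat = wiener_mle(t, x)
```

In the full competing-failure model these degradation estimates feed the traumatic-failure hazard, which is why, as the abstract notes, ignoring the traumatic mode makes the Wiener-only reliability estimate optimistic.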