Abstract: In the process of fault detection and classification, the operation mode usually drifts over time, which poses great challenges to classification algorithms. Because traditional machine-learning-based fault classification cannot dynamically update the trained model according to the probability distribution of the testing dataset, the accuracy of these traditional methods usually drops significantly under covariate shift. In this paper, an importance-weighted transfer learning method is proposed for fault classification in nonlinear multi-mode industrial processes. It effectively compensates for the distribution drift between the training and testing datasets. First, the mutual information method is used to perform feature selection on the original data, and a number of characteristic parameters associated with fault classification are selected according to their mutual information. Then, the importance-weighted least-squares probabilistic classifier (IWLSPC) is used for binary fault detection and multi-fault classification under covariate shift. Finally, experiments on the Tennessee Eastman (TE) benchmark confirm the effectiveness of the proposed method. The experimental results show that covariate shift adaptation based on importance-weighted sampling is superior to traditional machine-learning fault classification algorithms. Moreover, IWLSPC can be used not only for binary fault classification but also for multi-class classification in the process of fault diagnosis.
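The importance-weighting idea behind this abstract can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's IWLSPC: it estimates the density ratio p_test(x)/p_train(x) with kernel density estimates (IWLSPC instead learns the weighted classifier directly) and plugs the resulting weights into a regularized least-squares classifier on one-hot labels.

```python
import numpy as np
from scipy.stats import gaussian_kde

def importance_weights(X_train, X_test):
    # Density-ratio estimate w(x) = p_test(x) / p_train(x) via kernel
    # density estimation; a simple proxy for IWLSPC's importance weights,
    # reasonable only in low dimensions.
    kde_tr = gaussian_kde(X_train.T)
    kde_te = gaussian_kde(X_test.T)
    return kde_te(X_train.T) / np.maximum(kde_tr(X_train.T), 1e-12)

def fit_weighted_ls_classifier(X, y, w, lam=1e-3):
    # Importance-weighted regularized least squares on one-hot labels,
    # a crude stand-in for the least-squares probabilistic classifier:
    # training samples that look like test samples count more.
    Y = np.eye(int(y.max()) + 1)[y]               # one-hot targets
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append a bias column
    A = Xb.T @ (w[:, None] * Xb) + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ (w[:, None] * Y))

def predict(theta, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ theta, axis=1)
```

Under covariate shift (training data centered differently from test data), the weighted fit shifts the decision boundary toward the region where test samples actually lie.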
Abstract: To improve the computational performance of the fuzzy C-means (FCM) algorithm when clustering large datasets, the concepts of equivalent samples and weighted samples, based on the eigenvalue distribution of the samples in the feature space, were introduced, and a novel fast clustering algorithm named weighted fuzzy C-means (WFCM) was derived from the traditional FCM algorithm. It was proved that the two algorithms, WFCM and FCM, produce equivalent clustering results on the same dataset. Furthermore, the WFCM algorithm has better computational performance than the ordinary FCM algorithm. A gray-image segmentation experiment showed that WFCM is a fast and effective clustering algorithm.
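The core of a weighted FCM iteration can be sketched as follows; this is a generic weighted variant under the assumption that each retained sample carries a weight equal to the number of equivalent samples it represents, not the paper's exact derivation.

```python
import numpy as np

def weighted_fcm(X, w, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
    # Weighted fuzzy C-means: sample k carries weight w_k (e.g. how many
    # equivalent samples it stands for), which scales its contribution
    # to the cluster centers; the membership update is identical to
    # ordinary FCM.
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                               # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = (U ** m) * w                            # weighted fuzzy memberships
        V = (Um @ X) / Um.sum(axis=1, keepdims=True) # weighted cluster centers
        D = np.maximum(np.linalg.norm(X[None] - V[:, None], axis=2), 1e-12)
        inv = D ** (-2.0 / (m - 1.0))                # standard FCM membership update
        U_new = inv / inv.sum(axis=0)
        if np.abs(U_new - U).max() < tol:
            return V, U_new
        U = U_new
    return V, U
```

Because the weights only enter the center update, collapsing duplicate samples into one weighted representative leaves the fixed point unchanged while shrinking the per-iteration cost, which is the speedup the abstract describes.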
Funding: Supported by the Air Force Office of Scientific Research under Grant Nos. FA9550-20-1-0358 and FA9550-22-1-0004.
Abstract: Adaptive mesh refinement (AMR) is widely practiced in the context of high-dimensional, mesh-based computational models. However, it is in its infancy in that of low-dimensional, generalized-coordinate-based computational models such as projection-based reduced-order models. This paper presents a complete framework for projection-based model order reduction (PMOR) of nonlinear problems in the presence of AMR that builds on elements of existing methods and augments them with critical new contributions. In particular, it proposes an analytical algorithm for computing a pseudo-meshless inner product between adapted solution snapshots for the purpose of clustering and PMOR. It exploits hyperreduction, specifically the energy-conserving sampling and weighting method, to deliver the desired computational gains for nonlinear and/or parametric problems. Most importantly, the proposed framework for PMOR in the presence of AMR capitalizes on the concept of state-local reduced-order bases to make the most of the notion of a supermesh while achieving computational tractability. Its features are illustrated with CFD applications grounded in AMR, and its significance is demonstrated by the reported wall-clock speedup factors.
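The projection step at the heart of PMOR can be illustrated independently of AMR. The sketch below shows only the generic ingredients assumed from the abstract: a proper orthogonal decomposition (POD) basis built from solution snapshots, and a Galerkin projection of a linear operator onto that basis. The paper's snapshot inner products, clustering, and hyperreduction are not reproduced here.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    # Proper orthogonal decomposition: the leading left singular vectors
    # of the snapshot matrix form the reduced-order basis; keep enough
    # modes to capture all but a `tol` fraction of the snapshot energy.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol) + 1)
    return U[:, :r]

def reduce_operator(A, V):
    # Galerkin projection: the reduced model evolves the generalized
    # coordinates q_r with the r-by-r operator A_r = V^T A V.
    return V.T @ A @ V
```

For a high-dimensional state that actually lives on a low-dimensional subspace, the basis recovers that subspace exactly and the projected operator is tiny, which is where the reported speedups come from.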
Funding: Supported by the National Natural Science Foundation of China (Grant No. 90510017).
Abstract: The single safety-factor criteria for slope stability evaluation, derived from the rigid-body limit equilibrium method or the finite element method (FEM), may omit some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of the random variables, a minimal set of samples for each random variable is extracted according to a small-sample t-distribution under a given expected value, and the weight coefficient of each extracted sample is taken as its contribution to the random variable. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and the different sample combinations are taken as inputs for slope stability analysis. Using the one-to-one mapping between each input sample combination and its output safety factor, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the analysis results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.
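The combination-weighting step can be sketched as follows. This is a minimal illustration of the multiplication principle on independent weighted samples; the `safety_factor` function is hypothetical, and the paper's t-distribution sampling and Bayes-formula weighting are not reproduced.

```python
import itertools
import numpy as np

def reliability(samples, weights, safety_factor):
    # Enumerate every combination of the per-variable samples; the
    # weight of a combination is the product of its per-variable weight
    # coefficients (multiplication principle, assuming independence),
    # and the reliability is the total weight of the combinations whose
    # computed safety factor is at least 1.
    p_safe = 0.0
    for combo in itertools.product(*(range(len(s)) for s in samples)):
        w = np.prod([weights[i][j] for i, j in enumerate(combo)])
        x = [samples[i][j] for i, j in enumerate(combo)]
        if safety_factor(x) >= 1.0:
            p_safe += w
    return p_safe
```

With two variables, say a resisting term c and a driving term q each represented by two equally weighted samples, the reliability is simply the summed weight of the combinations with c/q >= 1.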
Funding: Supported by the Major Project of Yunnan Science and Technology under Project No. 202302AE09002003, “Research on the Integration of Key Technologies in Smart Agriculture.”
Abstract: This study proposed a weighted sampling hierarchical classification learning method based on an efficient backbone network model to address the high cost, low accuracy, and long processing time of traditional tea disease recognition methods. The method enhances feature extraction by conducting hierarchical classification learning on top of the EfficientNet model, effectively alleviating the impact of the high similarity between tea diseases on classification performance. To better handle the scarce and unevenly distributed tea disease samples, this study introduced a weighted sampling scheme to optimize data processing, which not only alleviates the overfitting caused by too few samples but also balances the probability of drawing samples from imbalanced classes. The experimental results show that the proposed method was effective in identifying both healthy tea leaves and four common tea leaf diseases (tea algal spot, tea white spot, tea anthracnose, and tea leaf blight). After applying the weighted sampling hierarchical classification learning method to train 7 different efficient backbone networks, the accuracy of most of them improved. The EfficientNet-B1 model proposed in this study achieved an accuracy of 99.21% after adopting this learning method, higher than EfficientNet-B2 (98.82%) and MobileNet-V3 (98.43%). In addition, to make the identification results easier to apply, this study developed a mini-program that runs on WeChat. Users can quickly obtain accurate identification results and the corresponding disease descriptions and prevention methods through simple operations. This intelligent tool for identifying tea diseases can serve as an auxiliary tool for farmers, consumers, and related researchers, and has practical value.
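The class-balancing part of such a weighted sampling scheme can be sketched generically: draw training samples with probability inversely proportional to their class frequency so that rare disease classes are seen as often as common ones. This NumPy sketch mirrors the behavior of samplers like `torch.utils.data.WeightedRandomSampler`; it is an assumption-level illustration, not the study's implementation.

```python
import numpy as np

def balanced_sample_weights(labels):
    # Inverse-frequency weights: a sample of a class with n_c examples
    # gets weight 1/n_c, so each class receives equal total probability.
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes.tolist(), counts.tolist()))
    w = np.array([1.0 / freq[y] for y in labels.tolist()])
    return w / w.sum()

def draw_epoch(labels, n, rng):
    # Sample one training epoch with replacement under these
    # probabilities, so minority-class images recur more often.
    p = balanced_sample_weights(labels)
    return rng.choice(len(labels), size=n, replace=True, p=p)
```

On a 9:1 imbalanced label set, an epoch drawn this way contains roughly equal numbers of each class, which is what lets the classifier learn the rare diseases without simply overfitting their few images.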
Funding: Supported by the Shenzhen Fundamental Research Program of China under Grant Nos. JCYJ20200109110420626 and JCYJ20200109110208764, the National Natural Science Foundation of China under Grant Nos. U1813204 and 61802385, the Natural Science Foundation of Guangdong of China under Grant No. 2021A1515012604, and the Clinical Research Project of Shenzhen Municipal Health Commission under Grant No. SZLY2017011.
Abstract: Segmentation of intracranial aneurysms (IAs) from computed tomography angiography (CTA) images is of significant importance for the quantitative assessment of IAs and further surgical treatment. Manual segmentation of IAs is a labor-intensive, time-consuming job and suffers from inter- and intra-observer variability. Training deep neural networks usually requires a large amount of labeled data, and annotating data for the IA segmentation task is very time-consuming. This paper presents a novel weight-perceptual self-ensembling model for semi-supervised IA segmentation, which exploits unlabeled data by encouraging the predictions for perturbed versions of an input sample to be consistent. Considering that consistency targets vary in quality, we introduce a novel sample weight perception module to quantify the quality of the different consistency targets. The proposed module evaluates the contributions of unlabeled samples during training, forcing the network to focus on well-predicted samples. We have conducted both horizontal and vertical comparisons on a clinical intracranial aneurysm CTA image dataset. Experimental results show that the proposed method improves the Dice coefficient by at least 3% over the fully supervised baseline and by at least 1.7% over other state-of-the-art semi-supervised methods.
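One plausible way to weight consistency targets by quality, in the spirit of the abstract, is to down-weight high-entropy (uncertain) teacher predictions. The sketch below is a hypothetical stand-in for the paper's sample weight perception module: it scores each unlabeled sample by the entropy of its consistency target and computes a weighted consistency loss.

```python
import numpy as np

def weighted_consistency_loss(student_p, teacher_p):
    # Per-sample weights from teacher confidence: low-entropy
    # (well-predicted) consistency targets contribute more, so the
    # network focuses on samples whose targets are trustworthy.
    eps = 1e-12
    entropy = -np.sum(teacher_p * np.log(teacher_p + eps), axis=1)
    w = np.exp(-entropy)                      # confident target -> weight near 1
    mse = np.mean((student_p - teacher_p) ** 2, axis=1)
    return np.sum(w * mse) / np.sum(w)        # weighted mean consistency term
```

A sample whose teacher prediction is uniform (maximally uncertain) thus pulls the student far less than one with a sharp, confident target, which is the qualitative behavior the quality-weighting module is meant to achieve.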