Funding: Supported by the National Natural Science Foundation of China (61273160), the Natural Science Foundation of Shandong Province of China (ZR2011FM014), and the Fundamental Research Funds for the Central Universities (10CX04046A).
Abstract: Locality preserving projection (LPP) is an emerging fault detection method that can discover the local manifold structure of the data set being analyzed, but its linear assumption may degrade monitoring performance for complicated nonlinear industrial processes. In this paper, an improved LPP method, referred to as sparse kernel locality preserving projection (SKLPP), is proposed for nonlinear process fault detection. Based on the LPP model, the kernel trick is applied to construct a nonlinear kernel model. Furthermore, to reduce the computational complexity of the kernel model, a feature-sample selection technique is adopted to make the kernel LPP model sparse. Finally, two monitoring statistics of the SKLPP model are built to detect process faults. Simulations on a continuous stirred tank reactor (CSTR) system show that SKLPP is more effective than LPP in terms of fault detection performance.
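As a reference point for the kernelized variant described above, the baseline LPP step can be sketched in plain NumPy: build a k-nearest-neighbor graph with heat-kernel weights, then solve the generalized eigenproblem that preserves local neighborhoods. This is a minimal sketch; the neighborhood size `k`, heat-kernel width `t`, and the regularization term are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lpp(X, n_components=2, k=5, t=4.0):
    """Locality preserving projection of X (n_samples x n_features)."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared distances
    np.fill_diagonal(d2, np.inf)                     # exclude self-neighbors
    # symmetric k-nearest-neighbor affinity with heat-kernel weights
    W = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, :k]
    for i in range(n):
        for j in nbrs[i]:
            W[i, j] = W[j, i] = np.exp(-d2[i, j] / t)
    D = np.diag(W.sum(axis=1))
    L = D - W                                        # graph Laplacian
    # generalized eigenproblem  X^T L X a = lam X^T D X a; the smallest
    # eigenvalues give the most locality-preserving directions
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])      # regularized for stability
    w, V = np.linalg.eigh(B)
    Bih = V @ np.diag(1.0 / np.sqrt(w)) @ V.T        # B^(-1/2)
    lam, U = np.linalg.eigh(Bih @ A @ Bih)           # symmetric reduction
    proj = Bih @ U[:, :n_components]                 # projection directions
    return X @ proj

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
Y = lpp(X)   # 2-D embedding of the 40 samples
```

The SKLPP method in the paper additionally kernelizes this model and sparsifies it via feature-sample selection, which this sketch does not attempt to reproduce.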
Funding: Supported by the National Natural Science Foundation of China (61273160), the Natural Science Foundation of Shandong Province of China (ZR2011FM014), the Doctoral Fund of Shandong Province (BS2012ZZ011), and the Postgraduate Innovation Funds of China University of Petroleum (CX2013060).
Abstract: Kernel independent component analysis (KICA) is an emerging nonlinear process monitoring method that can extract mutually independent latent variables, called independent components (ICs), from process variables. However, when more than one IC has a Gaussian distribution, KICA cannot extract the IC features effectively, and its monitoring performance degrades drastically. To solve this problem, a kernel time structure independent component analysis (KTSICA) method is proposed in this paper for monitoring nonlinear processes. The original process data are first mapped into a feature space nonlinearly, and the whitened data are then computed in the feature space by the kernel trick. Subsequently, a time structure independent component analysis algorithm, which imposes no requirement on the distribution of the ICs, is proposed to extract the IC features. Finally, two monitoring statistics are built to detect process faults. When a fault is detected, a nonlinear fault identification method based on sensitivity analysis is used to identify the fault variables. The proposed monitoring method is applied to the Tennessee Eastman benchmark process, and the applications demonstrate the superiority of KTSICA over KICA.
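The kernel-whitening step mentioned above, mapping the data nonlinearly and whitening it in the feature space via the kernel trick, can be sketched as follows: compute a kernel matrix, center it in feature space, and rescale its leading eigendirections to unit variance. The RBF kernel, its width `gamma`, and the number of retained components are illustrative assumptions rather than the paper's choices.

```python
import numpy as np

def kernel_whiten(X, n_components=3, gamma=0.5):
    """Whiten X in an RBF-kernel feature space; returns unit-variance scores."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # center the kernel matrix (equivalent to centering in feature space)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    lam, V = np.linalg.eigh(Kc)
    # keep the leading eigendirections (eigh returns ascending eigenvalues)
    V = V[:, ::-1][:, :n_components]
    # orthonormal columns scaled so each score has unit sample variance
    return np.sqrt(n) * V

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
Z = kernel_whiten(X)   # whitened scores: sample covariance is the identity
```

The subsequent time-structure ICA stage in the paper would then rotate these whitened scores using temporal second-order statistics, which this sketch omits.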
Funding: Supported by the National Hi-tech Research Development Program of China (863 Program, No. 2007AA04Z421) and the National Natural Science Foundation of China (No. 50475078, No. 50775035).
Abstract: A novel method based on an improved Laplacian eigenmap algorithm is proposed for fault pattern classification. By modifying the Laplacian eigenmap algorithm to replace the Euclidean distance with a kernel-based geometric distance during neighbor graph construction, the method preserves the consistency of local neighbor information and effectively extracts the low-dimensional manifold features embedded in high-dimensional nonlinear data sets. A nonlinear dimensionality reduction algorithm based on the improved Laplacian eigenmap is used to learn directly from high-dimensional fault signals and extract their intrinsic manifold features. The method largely preserves the global geometric structure embedded in the signals and markedly improves the classification performance of fault pattern recognition. Experimental results on both simulated and engineering data indicate the feasibility and effectiveness of the new method.
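The core modification described above, building the neighbor graph from a kernel-induced geometric distance rather than the Euclidean one, can be sketched as follows: the distance between two points is taken in the kernel feature space, d²(x, y) = k(x, x) − 2k(x, y) + k(y, y). The RBF kernel, the simple 0/1 edge weights, and all parameter values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def kernel_laplacian_eigenmap(X, n_components=2, k=6, gamma=0.1):
    """Laplacian eigenmap using a kernel-induced feature-space distance."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # kernel-based geometric distance: ||phi(x) - phi(y)||^2 in feature space
    d2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2.0 * K
    np.fill_diagonal(d2, np.inf)                 # exclude self-neighbors
    # symmetric k-nearest-neighbor graph with simple 0/1 weights
    W = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, :k]
    for i in range(n):
        for j in nbrs[i]:
            W[i, j] = W[j, i] = 1.0
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                         # graph Laplacian
    # solve L y = lam D y through the symmetric form D^-1/2 L D^-1/2
    dinv = 1.0 / np.sqrt(deg)
    lam, V = np.linalg.eigh(dinv[:, None] * L * dinv[None, :])
    Y = dinv[:, None] * V                        # back-transform y = D^-1/2 v
    # drop the trivial constant eigenvector (eigenvalue 0)
    return Y[:, 1:n_components + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
E = kernel_laplacian_eigenmap(X)   # 2-D manifold embedding
```

Swapping the RBF kernel for another positive-definite kernel changes only the distance matrix `d2`; the eigenmap machinery is unchanged.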
基金supported by the National Natural Science Foundation of China(5110505261173163)the Liaoning Provincial Natural Science Foundation of China(201102037)
Abstract: Driven by real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years, and much attention has been paid to multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and hamper the classification process, a dimensionality reduction algorithm named multi-label kernel discriminant analysis (MLKDA) is proposed to reduce the dimensionality of multi-label datasets. Using the kernel trick, MLKDA processes the multiple labels integrally and realizes nonlinear dimensionality reduction with an idea similar to that of linear discriminant analysis (LDA). For classifying the multi-label data, the extreme learning machine (ELM) is an efficient algorithm that also achieves good accuracy. Combined with ELM, MLKDA shows good performance in multi-label learning experiments on several datasets. Experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) on balanced datasets with stronger correlation between labels, and that ELM is a good choice for multi-label classification.
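The ELM classifier that the abstract pairs with MLKDA can be sketched as follows: the hidden-layer weights are random and fixed, and only the output weights are solved, in closed form, by least squares. The hidden-layer size, the sigmoid activation, the zero-threshold multi-label decision rule, and the toy data are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights obtained in closed form by least squares."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # random projection followed by a sigmoid activation
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, Y):
        # Y: (n_samples, n_labels) with entries in {-1, +1}
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        # least-squares output weights via the pseudoinverse
        self.beta = np.linalg.pinv(self._hidden(X)) @ Y
        return self

    def predict(self, X):
        # threshold each label output at zero
        return np.where(self._hidden(X) @ self.beta >= 0.0, 1, -1)

# toy multi-label task: label j is the sign of feature j
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
Y = np.where(X[:, :2] > 0, 1, -1)
acc = (ELM().fit(X, Y).predict(X) == Y).mean()   # per-label training accuracy
```

Because only `beta` is learned, training reduces to a single pseudoinverse, which is why the abstract can describe ELM as efficient while retaining good accuracy.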