Journal Articles
4 articles found
1. Sparse Kernel Locality Preserving Projection and Its Application in Nonlinear Process Fault Detection (Cited by: 28)
Authors: DENG Xiaogang, TIAN Xuemin. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2013, No. 2, pp. 163-170 (8 pages).
Abstract: Locality preserving projection (LPP) is a newly emerging fault detection method that can discover the local manifold structure of a data set, but its linear assumption may degrade monitoring performance for complicated nonlinear industrial processes. In this paper, an improved LPP method, referred to as sparse kernel locality preserving projection (SKLPP), is proposed for nonlinear process fault detection. Based on the LPP model, the kernel trick is applied to construct a nonlinear kernel model. Furthermore, to reduce the computational complexity of the kernel model, a feature-sample selection technique is adopted to make the kernel LPP model sparse. Lastly, two monitoring statistics of the SKLPP model are built to detect process faults. Simulations on a continuous stirred tank reactor (CSTR) system show that SKLPP is more effective than LPP in terms of fault detection performance.
Keywords: nonlinear, locality preserving projection, kernel trick, sparse model, fault detection
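The SKLPP recipe above is essentially kernel LPP plus a sparsification step. A minimal dense kernel LPP sketch (Python with NumPy/SciPy; the RBF kernel, heat-kernel neighbor weights, and the small ridge term are illustrative assumptions, and the feature-sample selection and the two monitoring statistics from the paper are omitted) could look like:

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, gamma=0.5):
    """Pairwise RBF (Gaussian) kernel matrix and squared distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2), d2

def kernel_lpp(X, n_components=2, n_neighbors=5, gamma=0.5):
    """Dense kernel LPP: solve K L K a = lam K D K a for the smallest
    generalized eigenvalues; returns the embedded training samples."""
    n = X.shape[0]
    K, d2 = rbf_kernel(X, gamma)
    # symmetric k-nearest-neighbor graph with heat-kernel weights
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        for j in idx[i]:
            W[i, j] = W[j, i] = np.exp(-d2[i, j])
    D = np.diag(W.sum(axis=1))
    L = D - W                          # graph Laplacian
    A = K @ L @ K
    B = K @ D @ K + 1e-6 * np.eye(n)   # small ridge for numerical stability
    vals, vecs = eigh(A, B)            # ascending generalized eigenvalues
    return K @ vecs[:, :n_components]
```

A new sample would be embedded through its kernel vector against the (sparse) training set; the monitoring statistics would then be built on those embedded scores.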
2. MODIFIED LAPLACIAN EIGENMAP METHOD FOR FAULT DIAGNOSIS (Cited by: 9)
Authors: JIANG Quansheng, JIA Minping, HU Jianzhong, XU Feiyun. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2008, No. 3, pp. 90-93 (4 pages).
Abstract: A novel method based on an improved Laplacian eigenmap algorithm for fault pattern classification is proposed. By modifying the Laplacian eigenmap algorithm to replace Euclidean distance with a kernel-based geometric distance in the neighbor-graph construction, the method preserves the consistency of local neighbor information and effectively extracts the low-dimensional manifold features embedded in high-dimensional nonlinear data sets. A nonlinear dimensionality reduction algorithm based on the improved Laplacian eigenmap directly learns from high-dimensional fault signals and extracts their intrinsic manifold features. The method largely preserves the global geometric structure embedded in the signals and clearly improves the classification performance of fault pattern recognition. Experimental results on both simulated and engineering data indicate the feasibility and effectiveness of the new method.
Keywords: Laplacian eigenmap, kernel trick, fault diagnosis, manifold learning
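The key modification, swapping Euclidean distance for a kernel-based distance when building the neighbor graph, can be sketched as follows (Python/NumPy; the kernel matrix is assumed given, and the distance formula d²(i,j) = K(i,i) − 2K(i,j) + K(j,j) is the standard kernel feature-space distance, not necessarily the exact geometric distance the paper uses):

```python
import numpy as np
from scipy.linalg import eigh

def kernel_distance(K):
    """Squared distance in the kernel feature space:
    d(i, j)^2 = K(i,i) - 2 K(i,j) + K(j,j)."""
    diag = np.diag(K)
    return np.maximum(diag[:, None] - 2.0 * K + diag[None, :], 0.0)

def laplacian_eigenmap(K, n_components=2, n_neighbors=5, t=1.0):
    """Laplacian eigenmap whose neighbor graph uses the
    kernel-induced distance instead of Euclidean distance."""
    n = K.shape[0]
    d2 = kernel_distance(K)
    # symmetric kNN graph with heat-kernel weights on kernel distances
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        for j in idx[i]:
            W[i, j] = W[j, i] = np.exp(-d2[i, j] / t)
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)   # generalized problem L y = lam D y
    # drop the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:n_components + 1]
```

The low-dimensional coordinates returned here would then feed an ordinary classifier for the fault-pattern recognition step.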
3. Multi-label dimensionality reduction and classification with extreme learning machines (Cited by: 9)
Authors: Lin Feng, Jing Wang, Shenglan Liu, Yao Xiao. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2014, No. 3, pp. 502-513 (12 pages).
Abstract: Driven by real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years, and much attention has been paid to multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and will hamper the classification process, a dimensionality reduction algorithm, named multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes the multiple labels integrally and realizes nonlinear dimensionality reduction with an idea similar to linear discriminant analysis (LDA). For the classification of multi-label data, the extreme learning machine (ELM) is an efficient algorithm with good accuracy. MLKDA, combined with ELM, shows good performance in multi-label learning experiments on several datasets. The experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) for balanced datasets with stronger correlation between tags, and that ELM is also a good choice for multi-label classification.
Keywords: multi-label, dimensionality reduction, kernel trick, classification
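The ELM classifier used downstream of MLKDA is simple enough to sketch in a few lines (Python/NumPy; the tanh activation, hidden-layer size, ridge term, and the plain 0.5 threshold for the multi-label decision are illustrative assumptions, not necessarily the paper's exact choices):

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine: input weights are
    random and fixed; only the output weights are trained, by
    ridge-regularized least squares."""
    def __init__(self, n_hidden=100, ridge=1e-6, seed=0):
        self.n_hidden, self.ridge = n_hidden, ridge
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        # random input weights and biases, never updated
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # closed-form output weights: (H'H + ridge I)^-1 H'Y
        self.beta = np.linalg.solve(
            H.T @ H + self.ridge * np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        # multi-label decision: threshold each tag's real-valued score
        return (self._hidden(X) @ self.beta > 0.5).astype(int)
```

In the paper's pipeline, `X` would be the MLKDA-reduced features and `Y` the binary label-indicator matrix; training is a single linear solve, which is why ELM is fast.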
4. Dimensionality reduction via kernel sparse representation (Cited by: 3)
Authors: Zhisong PAN, Zhantao DENG, Yibing WANG, Yanyan ZHANG. Frontiers of Computer Science (SCIE, EI, CSCD), 2014, No. 5, pp. 807-815 (9 pages).
Abstract: Dimensionality reduction (DR) methods based on sparse representation, as one of the hottest research topics, have achieved remarkable performance in many applications in recent years. However, it is a challenge for existing sparse representation based methods to solve nonlinear problems, due to the limitations of seeking sparse representations of data in the original space. Motivated by kernel tricks, we propose a new framework called empirical kernel sparse representation (EKSR) to solve nonlinear problems. In this framework, nonlinearly separable data are mapped into a kernel space in which the nonlinear similarity can be captured, and then the data in the kernel space are reconstructed by sparse representation to preserve the sparse structure, which is obtained by minimizing an ℓ1 regularization-related objective function. EKSR provides new insights into dimensionality reduction and extends two models: 1) empirical kernel sparsity preserving projection (EKSPP), a feature extraction method based on sparsity preserving projection (SPP); and 2) empirical kernel sparsity score (EKSS), a feature selection method based on sparsity score (SS). Both methods can choose neighborhoods automatically thanks to the natural discriminative power of sparse representation. Compared with several existing approaches, the proposed framework reduces computational complexity and is more convenient in practice.
Keywords: feature extraction, feature selection, sparse representation, kernel trick
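The ℓ1-regularized reconstruction at the heart of EKSR can be sketched with plain ISTA (iterative soft-thresholding) in Python/NumPy. Here each sample's kernel-space representation is reconstructed from the remaining samples' columns of the kernel matrix, which is one plausible reading of the empirical kernel map; the paper's exact map and solver may differ:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=300):
    """Solve min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
    by iterative soft-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))   # gradient step
        # soft-threshold: shrink toward zero by lam * step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
    return x

def sparse_kernel_codes(K, lam=0.1):
    """Reconstruct each kernel-space sample (column K[:, i]) from all
    other columns; returns the n x n sparse coefficient matrix."""
    n = K.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        S[others, i] = ista(K[:, others], K[:, i], lam)
    return S
```

The nonzero pattern of each column of `S` plays the role of an automatically chosen neighborhood, which is the property both EKSPP and EKSS exploit.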