Product quality and operation cost control are receiving increasing emphasis in modern chemical system engineering. To improve the fault detection power of the partial least squares (PLS) method for quality control, a new QRPV statistic is proposed in terms of the VP (variable importance in projection) indices of the monitored process variables; it differs significantly from, and improves upon, the conventional Q statistic. QRPV is calculated only from the residuals of the remarkable process variables (RPVs). Therefore, it is the dominant relation between quality and the RPVs, rather than all process variables (as in conventional PLS), that is monitored by this new VP-PLS (VPLS) method. The combination of the QRPV and T2 statistics is applied to the quality and cost control of the Tennessee Eastman (TE) process, and weak faults can be detected as quickly as possible. Consequently, the product quality of the TE process is guaranteed and operation costs are reduced.
The kernel principal component analysis (KPCA) method employs the first several kernel principal components (KPCs), which capture the most variance information of normal observations for process monitoring but may not reflect the fault information. In this study, sensitive kernel principal component analysis (SKPCA) is proposed to improve process monitoring performance, i.e., to deal with the discordance between the T2 statistic and the squared prediction error (SPE) statistic and to reduce missed detection rates. The T2 statistic can be used to measure the variation directly along each KPC, to analyze the detection performance, and to capture the most useful information in a process. By calculating the change rate of the T2 statistic along each KPC, SKPCA selects the sensitive kernel principal components for process monitoring. A simple simulated system and the Tennessee Eastman process are employed to demonstrate the efficiency of SKPCA in online monitoring. The results indicate that the monitoring performance is improved significantly.
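The component-selection idea can be sketched with scikit-learn's KernelPCA on synthetic data. The step fault, the RBF kernel settings, and the number of selected components below are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(300, 5))
X_fault = X_normal[:50] + np.array([0.0, 0.0, 3.0, 0.0, 0.0])   # step fault

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.1).fit(X_normal)
T_normal = kpca.transform(X_normal)
T_fault = kpca.transform(X_fault)

# per-KPC T2 contribution, averaged over samples
var = T_normal.var(axis=0, ddof=1)
t2_normal = np.mean(T_normal ** 2 / var, axis=0)
t2_fault = np.mean(T_fault ** 2 / var, axis=0)

# change rate of T2 along each KPC; the most reactive KPCs are the "sensitive" ones
change_rate = (t2_fault - t2_normal) / t2_normal
sensitive = np.argsort(change_rate)[::-1][:2]
```

Monitoring only the components that react most to abnormal variation is what lets the method keep high-variance but fault-irrelevant KPCs out of the statistic.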
Fault diagnosis and monitoring are very important for complex chemical processes. Numerous methods have been studied in this field, but effective visualization remains challenging. To obtain a better visualization effect, a novel fault diagnosis method that combines the self-organizing map (SOM) with Fisher discriminant analysis (FDA) is proposed. FDA reduces the dimensionality of the data while maximizing the separability of the classes. After feature extraction by FDA, the SOM can clearly distinguish the different states on the output map and can also be employed to monitor abnormal states. The Tennessee Eastman (TE) process is employed to illustrate the fault diagnosis and monitoring performance of the proposed method. The results show that the SOM integrated with FDA is efficient and capable of real-time monitoring and fault diagnosis in complex chemical processes.
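The FDA-then-SOM pipeline can be sketched as follows. As assumptions for illustration, scikit-learn's LDA stands in for FDA, the SOM is a minimal hand-rolled version, and the three synthetic operating states, grid size, and learning-rate schedule are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# three operating states in a 4-dimensional measurement space
X = np.vstack([rng.normal(m, 0.3, size=(60, 4)) for m in (0.0, 1.0, 2.0)])
labels = np.repeat([0, 1, 2], 60)

# FDA/LDA: project onto directions that maximize class separability
Z = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, labels)

# minimal 2-D SOM trained on the FDA scores
def train_som(data, rows=6, cols=6, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    r = np.random.default_rng(seed)
    w = r.normal(size=(rows * cols, data.shape[1]))
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for t in range(iters):
        x = data[r.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
        frac = 1.0 - t / iters
        lr, sigma = lr0 * frac, sigma0 * frac + 0.5
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)                   # pull neighbourhood toward x
    return w, coords

w, coords = train_som(Z)
# map every sample to its BMU; distinct states should occupy distinct map regions
bmus = np.argmin(((Z[:, None, :] - w[None, :, :]) ** 2).sum(axis=2), axis=1)
```

The visualization payoff is that each operating state settles into its own neighbourhood of the 2-D map, so abnormal samples show up as hits in unexpected map regions.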
For complex industrial processes with multiple operational conditions, it is important to develop effective monitoring algorithms to ensure the safety of production processes. This paper proposes a novel monitoring strategy based on fuzzy C-means. The high-dimensional historical data are projected onto a low-dimensional subspace spanned by locality preserving projection. The scores in the new subspace are then classified into several overlapping clusters, each representing an operational mode. The distance statistics of each cluster are integrated through the membership values into a novel BID (Bayesian inference distance) monitoring index. The efficiency and effectiveness of the proposed method are validated through the Tennessee Eastman benchmark process.
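The clustering-and-fusion step can be sketched with a minimal fuzzy C-means and a membership-weighted distance index. As assumptions: the 2-D synthetic score space stands in for the locality-preserving-projection subspace, and the simple weighted distance below is only a stand-in for the paper's Bayesian inference distance.

```python
import numpy as np

rng = np.random.default_rng(3)
# two operating modes in a 2-D score space
X = np.vstack([rng.normal([0, 0], 0.5, (150, 2)),
               rng.normal([4, 4], 0.5, (150, 2))])

def fuzzy_c_means(data, c=2, m=2.0, iters=100, seed=0):
    r = np.random.default_rng(seed)
    u = r.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # standard FCM update
    return u, centers

u, centers = fuzzy_c_means(X)

# fuse each cluster's distance statistic through the membership values
d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
index = np.sum(u * d, axis=1)           # small inside either mode

x_new = np.array([10.0, 10.0])          # sample far from both modes
d_new = np.linalg.norm(x_new - centers, axis=1)
u_new = d_new ** -2 / np.sum(d_new ** -2)
index_new = float(np.sum(u_new * d_new))
```

Because memberships overlap smoothly, a sample near a mode boundary is judged by both modes at once instead of being forced into a single hard cluster.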
In industrial processes, there exist faults that have a complex effect on process variables. Complex and simple faults are defined according to their effect dimensions. Conventional approaches based on structured residuals cannot isolate complex faults. This paper presents a multi-level strategy for complex fault isolation. An extraction procedure is employed to reduce the complex faults to simple ones and assign them to several levels. On each level, faults are isolated by their different responses in the structured residuals. Each residual is made insensitive to one fault but more sensitive to the others. The faults on different levels are verified to have different residual responses and will not be confused. An entire incidence matrix containing the residual response characteristics of all faults is obtained, based on which the faults can be isolated. The proposed method is applied to the Tennessee Eastman process example, and its effectiveness and advantages are demonstrated.
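The core idea of structured residuals, each designed insensitive to one fault and sensitive to the others, can be illustrated with a noise-free linear toy example. The two fault directions and magnitudes are hypothetical, not from the paper.

```python
import numpy as np

# three sensors, two simple fault directions (unit faults on sensors 0 and 1)
f1 = np.array([1.0, 0.0, 0.0])
f2 = np.array([0.0, 1.0, 0.0])

def residual_generator(insensitive_to):
    # projector onto the orthogonal complement of the given fault direction
    f = insensitive_to / np.linalg.norm(insensitive_to)
    return np.eye(len(f)) - np.outer(f, f)

W1 = residual_generator(f1)     # blind to fault 1, sensitive to fault 2
W2 = residual_generator(f2)     # blind to fault 2, sensitive to fault 1

x_fault1 = 5.0 * f1             # noise-free fault signatures
x_fault2 = 5.0 * f2

# incidence matrix: which residual responds to which fault
incidence = np.array([
    [np.linalg.norm(W1 @ x_fault1), np.linalg.norm(W1 @ x_fault2)],
    [np.linalg.norm(W2 @ x_fault1), np.linalg.norm(W2 @ x_fault2)],
]) > 1e-9
```

Each fault produces a distinct column pattern in the incidence matrix, which is exactly what makes the faults isolable; the paper's multi-level extraction extends this logic to faults whose effects span several dimensions.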
For plant-wide processes with multiple operating conditions, the multimode feature imposes challenges on conventional monitoring techniques. To solve this problem, this paper provides a novel local component based principal component analysis (LCPCA) approach for monitoring the status of a multimode process. LCPCA requires no prior knowledge of mode division; it is based purely on the process data. First, LCPCA divides the process data into multiple local components using a finite Gaussian mixture model (FGMM). Then, the posterior probability is calculated to determine which local component each sample belongs to. After that, the local component information (such as mean and standard deviation) is used to standardize each sample of the corresponding local component. Finally, the standardized samples of all local components are combined to train a PCA monitoring model. Based on the PCA monitoring model, two monitoring statistics, T^(2) and SPE, are used for monitoring multimode processes. Through a numerical example and the Tennessee Eastman (TE) process, the monitoring results demonstrate that LCPCA outperforms conventional PCA and LNS-PCA in fault detection rate.
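The pipeline above can be sketched with scikit-learn's GaussianMixture playing the FGMM role. The two synthetic modes, component count, and retained PCA dimension are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# two operating modes with different means and spreads
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),
               rng.normal(8.0, 2.0, (200, 4))])

# FGMM step: assign each sample to a local component via posterior probability
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
comp = gmm.predict(X)

# standardize each sample with its own local component's mean and spread
Xs = np.empty_like(X)
for k in range(2):
    mask = comp == k
    Xs[mask] = (X[mask] - X[mask].mean(axis=0)) / X[mask].std(axis=0)

# one global PCA model on the pooled standardized data
pca = PCA(n_components=2).fit(Xs)
T = pca.transform(Xs)
t2 = np.sum(T ** 2 / pca.explained_variance_, axis=1)           # T^(2)
spe = np.sum((Xs - pca.inverse_transform(T)) ** 2, axis=1)      # SPE
```

Local standardization is what collapses the modes onto a common scale, so a single PCA model can monitor all of them without knowing the mode labels in advance.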
For accelerating supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods are able to speed up convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications balance each other in a beneficial way to accomplish rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs best: the delta-bar-delta method with momentum has the fastest convergence rate, the greatest training stability, and the highest network learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
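The delta-bar-delta rule keeps one learning rate per weight: it grows additively while the current gradient agrees with an exponential trace of past gradients and decays when they disagree. A sketch on a toy quadratic follows; the gain, decay, and trace constants are common textbook choices (a multiplicative decay variant), not the paper's settings.

```python
import numpy as np

# delta-bar-delta step: per-weight learning rates adapted from the sign
# agreement between the gradient and its smoothed trace
def delta_bar_delta_step(w, grad, lr, bar, kappa=0.01, phi=0.5, theta=0.7):
    lr = np.where(grad * bar > 0, lr + kappa, lr)   # consistent direction: grow
    lr = np.where(grad * bar < 0, lr * phi, lr)     # oscillation: shrink
    bar = (1 - theta) * grad + theta * bar          # smoothed gradient trace
    return w - lr * grad, lr, bar

# toy problem: minimize 0.5 * w^T A w with very different curvatures per weight
A = np.diag([1.0, 100.0])
w = np.array([1.0, 1.0])
lr = np.full(2, 0.005)
bar = np.zeros(2)
for _ in range(200):
    w, lr, bar = delta_bar_delta_step(w, A @ w, lr, bar)
loss = 0.5 * w @ A @ w
```

The per-weight rates end up very different: the flat direction earns a large step size while the steep direction is repeatedly throttled, which is why the rule copes well with badly scaled error surfaces like SpikeProp's.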
Time-series prediction is one of the major methodologies used for fault prediction. Methods based on recurrent neural networks are widely used in time-series prediction for their remarkable nonlinear mapping ability. As a new type of recurrent neural network, the reservoir neural network can effectively handle time-series prediction. However, the ill-posedness problem of reservoir neural networks seriously restricts their generalization performance. In this paper, a time-series-based fault prediction algorithm is proposed using improved reservoir neural networks. The basic idea is to take structural risk into consideration: the cost function involves not only the empirical risk factor but also the structural risk factor. A regularization coefficient is thus introduced in calculating the output weights of the reservoir neural network. As a result, the amplitude of the output weights is effectively controlled and the ill-posedness problem is solved. Because the training of ordinary reservoir networks is naturally fast, the improved reservoir networks for time-series prediction are good in both speed and generalization ability. Experiments on the Mackey–Glass and sunspot time series prove the effectiveness of the algorithm. The proposed algorithm is applied to TE process fault prediction: we first forecast several time series obtained from TE and then predict the fault type using static reservoirs with the predicted data. The final correct prediction rate reaches 81%.
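The regularized readout can be sketched with a small echo state network: the regularization coefficient added to the normal equations bounds the output-weight amplitude, which is the structure-risk fix for the ill-posed least-squares problem. The reservoir size, spectral radius, and toy signal below are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(5)
# one-step-ahead prediction of a noisy sine with an echo state network
t = np.arange(400)
u = np.sin(0.2 * t) + 0.05 * rng.normal(size=400)

n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

x = np.zeros(n_res)
states = np.zeros((400, n_res))
for k in range(400):
    x = np.tanh(W_in * u[k] + W @ x)              # leakless reservoir update
    states[k] = x

# ridge readout: the regularization coefficient lam keeps the output
# weights small instead of letting them blow up on near-singular H^T H
H, y = states[100:-1], u[101:]                    # drop washout, predict next step
lam = 1e-6
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_res), H.T @ y)
pred = H @ W_out
mse = float(np.mean((pred - y) ** 2))
```

Only the linear readout is trained, which is why reservoir training stays fast even after the regularization term is added.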
In chemical processes, fault diagnosis is relatively difficult due to incomplete prior knowledge and unpredictable production changes. To solve this problem, a case-based extension fault diagnosis (CEFD) method is proposed in combination with extension theory, in which the basic-element model is used for a unified and deep fault description, the distance concept is applied to quantify the correlation degree between a new fault and the original fault cases, and the extension transformation is used to expand and obtain solutions for unknown faults. Applied to the Tennessee Eastman process, the results indicate that the CEFD method has flexible fault representation, objective fault retrieval performance and a good ability to learn faults, providing a new way to diagnose production faults accurately.
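The extension-distance idea — a distance that is negative inside a case's characteristic interval and positive outside — can be sketched for case retrieval. The fault names and feature intervals below are entirely hypothetical, and this simple summed score is only a stand-in for the paper's correlation degree.

```python
# extension distance of a value x to an interval [a, b]:
# negative inside the interval, positive outside, linear in the deviation
def extension_distance(x, a, b):
    return abs(x - (a + b) / 2) - (b - a) / 2

# toy case base: each known fault case described by two feature intervals
cases = {
    "cooling-water fault": [(0.8, 1.2), (3.0, 4.0)],
    "feed-ratio fault":    [(2.0, 2.5), (0.0, 1.0)],
}

def best_match(sample):
    # smaller summed extension distance = stronger correlation with the case
    scores = {name: sum(extension_distance(x, a, b)
                        for x, (a, b) in zip(sample, ivs))
              for name, ivs in cases.items()}
    return min(scores, key=scores.get)

diagnosis = best_match([1.0, 3.5])
```

Unlike a plain Euclidean match to a case prototype, the interval form also says how far a new fault sits outside every known case, which is the hook for extension transformation on genuinely unknown faults.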
Alarm floods are one of the main problems in the alarm systems of industrial processes. Alarm root-cause analysis and alarm prioritization help reduce alarm floods. This paper proposes a systematic rationalization method for multivariate correlated alarms to realize root-cause analysis and alarm prioritization. An information-fusion-based interpretive structural model is constructed according to data-driven partial correlation coefficient calculation and process knowledge modification. This hierarchical multi-layer model is helpful in abnormality propagation path identification and root-cause analysis. A revised Likert scale method is adopted to determine the alarm priority and reduce the blindness of alarm handling. As a case study, the Tennessee Eastman process is utilized to show the effectiveness and validity of the proposed approach. An alarm system performance comparison shows that the rationalization methodology can reduce alarm floods to some extent and improve performance.
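Partial correlation, which the interpretive structural model is built from, separates direct from indirect alarm couplings: in a propagation chain a → b → c, the ordinary correlation between a and c is large, but their partial correlation given b is near zero. A minimal sketch, computed from the precision matrix on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)
# propagation chain: a disturbs b, b disturbs c (a-c coupling is indirect)
a = rng.normal(size=2000)
b = 0.9 * a + 0.3 * rng.normal(size=2000)
c = 0.9 * b + 0.3 * rng.normal(size=2000)
X = np.column_stack([a, b, c])

pearson = np.corrcoef(X.T)          # plain correlation: a-c looks coupled

# partial correlations from the inverse covariance (precision) matrix
P = np.linalg.inv(np.cov(X.T))
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 1.0)
```

Pruning the near-zero partial correlations leaves only the direct links a-b and b-c, which is the skeleton the hierarchical propagation model is assembled from.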
Complex processes often operate in multiple operation regions, so it is critical to develop effective monitoring approaches to ensure the safety of chemical processes. In this work, a discriminant local consistency Gaussian mixture model (DLCGMM) is proposed for multimode process monitoring by integrating LCGMM with modified local Fisher discriminant analysis (MLFDA). Different from Fisher discriminant analysis (FDA), which aims to discover the globally optimal discriminant directions, MLFDA is capable of uncovering the multimodality and local structure of the data by exploiting the posterior probabilities of observations within clusters calculated from the results of LCGMM. This may enable MLFDA to capture more meaningful discriminant information hidden in high-dimensional multimode observations compared with FDA. Contrary to most existing multimode process monitoring approaches, DLCGMM performs LCGMM and MLFDA iteratively, and the optimal subspaces with multi-Gaussianity and the optimal discriminant projection vectors are achieved simultaneously in a framework of supervised and unsupervised learning. Furthermore, monitoring statistics are established on each cluster, which represents a specific operating condition, and two global Bayesian inference-based fault monitoring indexes are established by combining the monitoring results of all clusters. The efficiency and effectiveness of the proposed method are evaluated through UCI datasets, a simulated multimode model and the Tennessee Eastman benchmark process.
To improve the detection and identification performance of a Statistical Quality Monitoring (SQM) system, a novel quality-based Prioritized Sensor-Fault Detection (PSFD) methodology is proposed. Weighted by the Vp (variable importance in projection) index, which indicates the importance of the sensor variables to the quality variables, a new monitoring statistic, Qv, is developed to ensure that the most vital sensor faults are detected successfully. Consequently, the ratio between the Detectable Minimum Faulty Magnitude (DMFM) of the most important sensor and that of the least important sensor is only gpmin/gpmax ≪ 1. Structured residuals are designed according to the Vp index to identify and then isolate the sensor faults. The theoretical findings are fully supported by simulation studies performed on the Tennessee Eastman process.
In this paper, a novel criterion is proposed to determine the retained principal components (PCs) that capture the dominant variability of online monitored data. The variations of the PCs are calculated according to their mean and covariance changes between the modeling sample and the online monitored data. The retained PCs containing the dominant variations are selected and defined as correlative PCs (CPCs). A new Hotelling's T2 statistic based on the CPCs is then employed to monitor the process. Case studies on a simulated continuous stirred tank reactor and the well-known Tennessee Eastman process demonstrate the feasibility and effectiveness of the CPCs-based fault detection methods.
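The CPC selection criterion can be sketched as follows: score each PC by its mean and variance change between the modeling data and the online window, retain the top scorers, and build T2 on them. The synthetic mean shift, the scoring formula, and the number of retained CPCs are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
X_train = rng.normal(size=(500, 6))            # modeling sample
X_online = X_train[:100].copy()
X_online[:, 2] += 2.0                          # online window with a mean shift

pca = PCA().fit(X_train)
T_train = pca.transform(X_train)
T_online = pca.transform(X_online)

# variation of each PC: normalized mean change plus variance-ratio change
mean_shift = np.abs(T_online.mean(axis=0) - T_train.mean(axis=0)) / T_train.std(axis=0)
var_change = np.abs(T_online.var(axis=0) / T_train.var(axis=0) - 1.0)
variation = mean_shift + var_change

cpcs = np.argsort(variation)[::-1][:2]         # correlative PCs: dominant variation
t2 = np.sum(T_online[:, cpcs] ** 2 / pca.explained_variance_[cpcs], axis=1)
```

Selecting PCs by how much they actually change online, rather than by modeled variance alone, keeps the T2 statistic focused on the directions where the abnormality lives.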
The generative adversarial network (GAN) is one of the most exciting machine learning breakthroughs in recent years; it trains the learning model by finding the Nash equilibrium of a two-player zero-sum game. A GAN is composed of a generator and a discriminator, both trained with the adversarial learning mechanism. In this paper, we introduce and investigate the use of GANs for novelty detection. In training, the GAN learns from ordinary data. Then, on previously unknown data, the generator and the discriminator with the designed decision boundaries can both be used to separate novel patterns from ordinary patterns. The proposed GAN-based novelty detection method demonstrates competitive performance on the MNIST digit database and the Tennessee Eastman (TE) benchmark process compared with PCA-based novelty detection methods using Hotelling's T^2 and squared prediction error statistics.
A local discriminant regularized soft k-means (LDRSKM) method with Bayesian inference is proposed for multimode process monitoring. LDRSKM extends the regularized soft k-means algorithm by exploiting the local and non-local geometric information of the data and generalized linear discriminant analysis to provide a better and more meaningful data partition. LDRSKM can perform clustering and subspace selection simultaneously, enhancing the separability of data residing in different clusters. With the data partition obtained, kernel support vector data description (KSVDD) is used to establish the monitoring statistics and control limits. Two Bayesian inference based global fault detection indicators are then developed using the local monitoring results associated with the principal and residual subspaces. Based on clustering analysis, Bayesian inference and manifold learning methods, the within-mode and cross-mode correlations and local geometric information can be exploited to enhance monitoring performance for nonlinear and non-Gaussian processes. The effectiveness and efficiency of the proposed method are evaluated using the Tennessee Eastman benchmark process.
Funding: Supported by the 973 Project of China (2013CB733600), the National Natural Science Foundation (21176073), the Doctoral Fund of the Ministry of Education (20090074110005), the New Century Excellent Talents in University program (NCET-09-0346), the "Shu Guang" Project (09SG29) and the Fundamental Research Funds for the Central Universities.
Funding: Supported by the National Basic Research Program of China (2013CB733600), the National Natural Science Foundation of China (21176073), the Doctoral Fund of the Ministry of Education of China (20090074110005), the Program for New Century Excellent Talents in University (NCET-09-0346), the Shu Guang Project (09SG29) and the Fundamental Research Funds for the Central Universities.
Funding: Supported by the National Natural Science Foundation of China (61074079) and the Shanghai Leading Academic Discipline Project (B054).
Funding: Supported by the National Natural Science Foundation of China (60574047), the National High Technology Research and Development Program of China (2007AA04Z168, 2009AA04Z154) and the Research Fund for the Doctoral Program of Higher Education in China (20050335018).
Funding: National Natural Science Foundation of China (61673279).
Funding: Supported by the National Natural Science Foundation of China (60904018, 61203040), the Natural Science Foundation of Fujian Province of China (2009J05147, 2011J01352), the Foundation for Distinguished Young Scholars of Higher Education of Fujian Province of China (JA10004) and the Science Research Foundation of Huaqiao University (09BS617).
Funding: Supported by the National Natural Science Foundation of China (61074153).
Funding: Supported by the National Natural Science Foundation of China (61104131).
Funding: Supported by the National Natural Science Foundation of China (61473026, 61104131) and the Fundamental Research Funds for the Central Universities (JD1413).
Funding: Supported by the National Natural Science Foundation of China (61273167).
Funding: Supported by the National Natural Science Foundation of China (20776128) and the Natural Science Foundation of Zhejiang Province (Y107032).
Abstract: To improve the detection and identification performance of a Statistical Quality Monitoring (SQM) system, a novel quality-based Prioritized Sensor-Fault Detection (PSFD) methodology is proposed. Weighted by the Vp (variable importance in projection) index, which indicates the importance of each sensor variable to the quality variables, a new monitoring statistic, Qv, is developed to ensure that the most vital sensor faults are detected successfully. Consequently, the ratio between the Detectable Minimum Faulty Magnitude (DMFM) of the most important sensor and that of the least important sensor is only gpmin/gpmax ≪ 1. Structured residuals are designed according to the Vp index to identify faulty sensors and then isolate them. The theoretical findings are fully supported by simulation studies performed on the Tennessee Eastman process.
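The quality-weighted statistic can be illustrated with a toy weighted squared-residual form, Qv = Σ w_i r_i², where the weights are normalized Vp indices. The Vp values and residual vectors below are hypothetical, chosen only to show why a fault on a quality-relevant sensor produces a larger Qv than a same-magnitude fault on an unimportant one.

```python
import numpy as np

def qv(residual, vip):
    """VIP-weighted squared-residual statistic (illustrative form)."""
    w = vip / vip.sum()            # normalized Vp weights
    return float(w @ residual ** 2)

vip = np.array([4.0, 1.0, 0.5, 0.5])        # hypothetical Vp indices
fault_hi = np.array([1.0, 0.0, 0.0, 0.0])   # fault on the most quality-relevant sensor
fault_lo = np.array([0.0, 0.0, 0.0, 1.0])   # same magnitude on the least relevant one
```

With these weights, `qv(fault_hi, vip)` exceeds `qv(fault_lo, vip)` by a factor of vip_max/vip_min, which is the mechanism behind the DMFM ratio claimed in the abstract.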
基金supported by the National Basic Research Program of China (973 Program) (No. 2013CB733600)the National Natural Science Foundation of China (No. 21176073)+1 种基金the Program for New Century Excellent Talents in University (No. NCET-09-0346)the Fundamental Research Funds for the Central Universities, China
Abstract: In this paper, a novel criterion is proposed to determine the retained principal components (PCs) that capture the dominant variability of online monitored data. The variations of the PCs are calculated according to their mean and covariance changes between the modeling sample and the online monitored data. The retained PCs containing the dominant variations are selected and defined as correlative PCs (CPCs). A new Hotelling's T2 statistic based on the CPCs is then employed to monitor the process. Case studies on a simulated continuous stirred tank reactor and the well-known Tennessee Eastman process demonstrate the feasibility and effectiveness of the CPC-based fault detection methods.
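A sketch of the CPC idea under the stated criterion: PCs are ranked by the change of their score mean and variance between the reference data and the monitored window, and Hotelling's T2 is then computed on the retained CPCs only. The simulated data, the two-CPC choice, and the exact ranking score are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X_ref = rng.normal(size=(500, 5))           # reference (modeling) sample
X_mon = X_ref[:200].copy()                  # monitored window with extra
X_mon[:, 0] += rng.normal(scale=3.0, size=200)  # variation on variable 0

mu = X_ref.mean(axis=0)
_, _, Vt = np.linalg.svd(X_ref - mu, full_matrices=False)
T_ref = (X_ref - mu) @ Vt.T                 # reference PC scores
T_mon = (X_mon - mu) @ Vt.T                 # monitored PC scores

# Rank PCs by the change of score mean and variance between windows,
# then retain the most changed PCs as correlative PCs (CPCs).
shift = np.abs(T_mon.mean(0) - T_ref.mean(0)) + np.abs(T_mon.var(0) - T_ref.var(0))
cpcs = np.argsort(shift)[::-1][:2]

lam = T_ref[:, cpcs].var(axis=0)            # reference score variances
t2_ref = ((T_ref[:, cpcs] ** 2) / lam).sum(axis=1)  # baseline T2
t2 = ((T_mon[:, cpcs] ** 2) / lam).sum(axis=1)      # monitored T2 on CPCs
```

The baseline T2 averages to the number of retained CPCs, while the monitored T2 is inflated because the fault's variability is concentrated in the selected CPCs rather than diluted across all PCs.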
Abstract: Generative adversarial network (GAN) is the most exciting machine learning breakthrough in recent years, and it trains the learning model by finding the Nash equilibrium of a two-player zero-sum game. GAN is composed of a generator and a discriminator, both trained with the adversarial learning mechanism. In this paper, we introduce and investigate the use of GAN for novelty detection. In training, GAN learns from ordinary data. Then, using previously unknown data, the generator and the discriminator with the designed decision boundaries can both be used to separate novel patterns from ordinary patterns. The proposed GAN-based novelty detection method demonstrates a competitive performance on the MNIST digit database and the Tennessee Eastman (TE) benchmark process compared with the PCA-based novelty detection methods using Hotelling's T^2 and squared prediction error statistics.
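The decision rule on the discriminator side can be sketched as follows. A Gaussian log-score stands in for the trained discriminator D(x), the 1% quantile threshold is an assumed design choice, and none of this reflects the paper's actual network; only the calibrate-on-ordinary-data, flag-below-threshold pattern is the point.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a trained discriminator D(x): a Gaussian log-score of the
# ordinary-data distribution. In the paper this score would come from
# the GAN's discriminator network.
mu, sigma = 0.0, 1.0
def d_score(x):
    return -0.5 * ((x - mu) / sigma) ** 2

# Calibrate the decision boundary on ordinary data only:
# threshold at the 1% quantile of ordinary scores (~1% false-alarm rate).
ordinary = rng.normal(mu, sigma, size=5000)
tau = np.quantile(d_score(ordinary), 0.01)

def is_novel(x):
    return d_score(x) < tau
```

Samples far from the ordinary distribution score below `tau` and are flagged as novel, while typical ordinary samples are not.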
基金supported by the National Natural Science Foundation of China(No.61272297)
Abstract: A local discriminant regularized soft k-means (LDRSKM) method with Bayesian inference is proposed for multimode process monitoring. LDRSKM extends the regularized soft k-means algorithm by exploiting the local and non-local geometric information of the data and generalized linear discriminant analysis to provide a better and more meaningful data partition. LDRSKM can perform clustering and subspace selection simultaneously, enhancing the separability of data residing in different clusters. With the data partition obtained, kernel support vector data description (KSVDD) is used to establish monitoring statistics and control limits. Two Bayesian inference-based global fault detection indicators are then developed using the local monitoring results associated with the principal and residual subspaces. Based on clustering analysis, Bayesian inference, and manifold learning, the within-mode and cross-mode correlations and the local geometric information can be exploited to enhance monitoring performance for nonlinear and non-Gaussian processes. The effectiveness and efficiency of the proposed method are evaluated using the Tennessee Eastman benchmark process.
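The soft assignment step that soft k-means variants build on can be sketched as a responsibility computation: each point is assigned fractionally to every cluster via a softmax over negative squared distances. The stiffness parameter beta and the toy centers are illustrative; LDRSKM adds discriminant and geometric regularization on top of this basic step.

```python
import numpy as np

def soft_assign(X, centers, beta=2.0):
    """Soft k-means responsibilities r_ik proportional to exp(-beta * ||x_i - c_k||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, k) squared distances
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    r = np.exp(logits)
    return r / r.sum(axis=1, keepdims=True)      # rows sum to 1

centers = np.array([[0.0, 0.0], [4.0, 4.0]])
X = np.array([[0.1, -0.1],    # clearly in cluster 0
              [3.9, 4.2],     # clearly in cluster 1
              [2.0, 2.0]])    # equidistant: split responsibility
R = soft_assign(X, centers)
```

Points near a center receive nearly all of its responsibility, while the equidistant point is split 50/50; it is these fractional memberships that a mode-wise monitoring scheme can later combine via Bayesian inference.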