Funding: Supported by the joint fund of the National Natural Science Foundation of China and the Civil Aviation Administration of China (No. U1233201).
Abstract: A fault diagnosis model is proposed based on a fuzzy support vector machine (FSVM) combined with fuzzy clustering (FC). Considering the relationship between a sample point and the non-self class, the FC algorithm is applied to generate fuzzy memberships. In the algorithm, sample weights based on a distribution density function of the data points and a genetic algorithm (GA) are introduced to enhance the performance of FC. Then a multi-class FSVM with a radial basis function kernel is established according to the directed acyclic graph algorithm, whose penalty factor and kernel parameter are optimized by the GA. Finally, the model is applied to multi-class fault diagnosis of rolling element bearings. The results show that the presented model achieves high performance in identifying both fault types and fault degrees. Performance comparisons of the presented model with an SVM and a distance-based FSVM for the noisy case demonstrate its capacity for dealing with noise and its generalization ability.
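A minimal sketch of the membership-weighted training idea, not the authors' exact algorithm: memberships are computed from each sample's distance to the non-self class centroid and passed to an RBF-kernel SVM through sample_weight as a stand-in for the FSVM formulation; the GA-based tuning of the penalty factor and kernel parameter is omitted.

```python
# Fuzzy memberships from distance to the non-self class, used as sample weights
# for an RBF-kernel SVM (an approximation of FSVM training).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=0)

def fuzzy_memberships(X, y):
    """Membership grows with distance to the non-self class centroid."""
    m = np.zeros(len(y))
    for c in np.unique(y):
        other_centroid = X[y != c].mean(axis=0)
        d = np.linalg.norm(X[y == c] - other_centroid, axis=1)
        m[y == c] = 0.1 + 0.9 * (d - d.min()) / (d.max() - d.min() + 1e-12)
    return m

weights = fuzzy_memberships(X, y)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # C and gamma would be GA-tuned in the paper
clf.fit(X, y, sample_weight=weights)
print("training accuracy:", clf.score(X, y))
```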
Funding: Supported by the National Natural Science Foundation of China (Grant No. 50675219) and the Hunan Provincial Science Committee Excellent Youth Foundation of China (Grant No. 08JJ1008).
Abstract: Turbopump condition monitoring is a significant approach to ensuring the safety of a liquid rocket engine (LRE). Because of the lack of fault samples, a monitoring system cannot be trained on all possible condition patterns. Thus it is important to differentiate abnormal or unknown patterns from the normal pattern with novelty detection methods. The one-class support vector machine (OCSVM), which has been commonly used for novelty detection, cannot deal well with large-scale sample sets. In order to model the normal pattern of the turbopump with an OCSVM and thereby monitor its condition, a monitoring method that integrates the OCSVM with incremental clustering is presented. In this method, incremental clustering is used for sample reduction by extracting representative vectors from a large training set. The representative vectors are supposed to distribute uniformly in the object region and fill that region, and training the OCSVM on these representative vectors yields a novelty detector. Applying this method to the turbopump's historical test data shows that the incremental clustering algorithm can extract 91 representative points from more than 36 000 training vectors, and the OCSVM detector trained on these 91 representative points can recognize spikes in vibration signals caused by different abnormal events such as vane shedding, rub-impact and sensor faults. Unlike classical recognition methods, this monitoring method does not need fault samples during training. The method resolves the learning problem of large sample sets and is an alternative method for condition monitoring of the LRE turbopump.
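A minimal sketch of the sample-reduction idea on synthetic data, with MiniBatchKMeans standing in for the paper's incremental clustering: cluster centres serve as representative vectors, and a one-class SVM trained on them acts as the novelty detector (the count of 91 representatives is taken from the paper purely for illustration).

```python
# Reduce a large normal-condition set to representative vectors, then train OCSVM.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(36_000, 6))      # large set of normal-condition vectors
test = np.vstack([rng.normal(0.0, 1.0, size=(50, 6)),      # normal samples
                  rng.normal(6.0, 1.0, size=(5, 6))])       # spike-like abnormal samples

# Sample reduction: keep only representative vectors (cluster count is illustrative).
reps = MiniBatchKMeans(n_clusters=91, random_state=0).fit(normal_train).cluster_centers_

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(reps)
print(detector.predict(test))   # +1 = normal pattern, -1 = novelty
```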
Funding: National Natural Science Foundation of China (No. 61070033); Fundamental Research Funds for the Central Universities, China (No. 2012ZM0061).
Abstract: It is a challenging topic to develop an efficient algorithm for large-scale classification problems in many applications of machine learning. In this paper, a hierarchical clustering and fixed-layer local learning (HCFLL) based support vector machine (SVM) algorithm is proposed to deal with this problem. Firstly, HCFLL hierarchically clusters a given dataset into a modified clustering feature tree based on the ideas of unsupervised clustering and supervised clustering. Then it locally trains an SVM on each labeled subtree at a fixed layer of the tree. The experimental results show that, compared with existing popular algorithms such as the core vector machine and the decision-tree support vector machine, HCFLL can significantly improve training and testing speeds with comparable testing accuracy.
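A minimal sketch of the cluster-then-train-locally idea, assuming plain KMeans in place of the paper's modified clustering feature tree: each mixed-label cluster gets its own local SVM, and a query is routed to the SVM of its nearest cluster centre.

```python
# Local SVMs trained per cluster; queries are routed by nearest cluster centre.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
km = KMeans(n_clusters=8, random_state=0, n_init=10).fit(X)

local_svms = {}
for c in range(8):
    idx = km.labels_ == c
    if len(np.unique(y[idx])) > 1:                      # mixed-label cluster: train a local SVM
        local_svms[c] = SVC(kernel="rbf").fit(X[idx], y[idx])

def predict(x):
    c = km.predict(x.reshape(1, -1))[0]
    if c in local_svms:
        return local_svms[c].predict(x.reshape(1, -1))[0]
    return y[km.labels_ == c][0]                         # pure cluster: return its single label

print(predict(X[0]), y[0])
```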
Abstract: A sustainable production of electricity is essential for low-carbon green growth in South Korea. The generation of wind power as renewable energy has been growing rapidly around the world. Undoubtedly, wind energy is unlimited in potential, but due to its intermittency and volatility there are difficulties in harvesting wind energy effectively and integrating wind power into the current electric power grid. To cope with this, much work has been done on wind speed and power forecasting. In this paper, an SVR (support vector regression) model using FCM (fuzzy C-means) is proposed for wind speed forecasting. This paper describes the design of an FCM-based SVR to increase prediction accuracy. The proposed model was compared with an ordinary SVR model using balanced and unbalanced test data, and multi-step-ahead forecasting results were also compared. Kernel parameters in the SVR are determined adaptively in order to improve forecasting accuracy. An illustrative example is given using a real-world wind farm dataset. The experimental results show that the proposed method provides better forecasts of wind power.
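A minimal sketch of clustering-based SVR forecasting on synthetic data, with crisp KMeans standing in for FCM: lagged wind-speed vectors are clustered, one SVR is trained per cluster, and a query is forecast by the model of its nearest cluster.

```python
# Cluster lag vectors, train one SVR per cluster, forecast with the matching model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
speed = 8 + 2 * np.sin(np.arange(1000) / 20) + rng.normal(0, 0.5, 1000)  # synthetic wind speed

lags = 6
X = np.array([speed[i:i + lags] for i in range(len(speed) - lags)])
y = speed[lags:]

km = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)
models = {c: SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

x_new = X[-1].reshape(1, -1)
print("one-step-ahead forecast:", models[km.predict(x_new)[0]].predict(x_new)[0])
```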
Funding: National Natural Science Foundation of China (No. 60975083); Key Grant Project, Ministry of Education, China (No. 104145).
Abstract: A new algorithm named kernel bisecting k-means and sample removal (KBK-SR) is proposed as a sampling preprocessing step for support vector machine (SVM) training to improve efficiency. The proposed algorithm tends to quickly produce balanced clusters of similar sizes in the kernel feature space, which makes it efficient and effective for reducing training samples. Theoretical analysis and experimental results on three UCI real-data benchmarks both show that, with very short sampling time, the proposed algorithm dramatically accelerates SVM sampling and training while maintaining high test accuracy.
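A minimal input-space sketch of clustering-based sample removal before SVM training, standing in for the kernel-space bisecting k-means of the paper: each class is clustered, and only the points of each cluster closest to the opposite class are kept as boundary candidates.

```python
# Keep only cluster members nearest the opposite class, then train the SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
keep_idx = []
for c in (0, 1):
    Xc, idx_c = X[y == c], np.where(y == c)[0]
    other_centroid = X[y != c].mean(axis=0)
    labels = KMeans(n_clusters=10, random_state=0, n_init=10).fit_predict(Xc)
    for k in range(10):
        members = idx_c[labels == k]
        d = np.linalg.norm(X[members] - other_centroid, axis=1)
        keep_idx.extend(members[np.argsort(d)[:len(members) // 5]])  # keep nearest 20%

keep_idx = np.array(keep_idx)
clf = SVC(kernel="rbf").fit(X[keep_idx], y[keep_idx])
print(f"trained on {len(keep_idx)} of {len(X)} samples, accuracy on full set: {clf.score(X, y):.3f}")
```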
Funding: The National High Technology Research and Development Program of China (No. 863511930009).
Abstract: Support vector clustering (SVC) is a kernel-based unsupervised clustering method. The main drawback of SVC is the high computational complexity of obtaining the adjacency matrix that describes the connectivity of each pair of points. Based on the proximity graph model [3], the Euclidean distance in Hilbert space is calculated using a Gaussian kernel, which is the right criterion for generating a minimum spanning tree using Kruskal's algorithm. The cost of connectivity estimation is then lowered by only checking the linkages between the edges that construct the main stem of the MST (minimum spanning tree), for which a non-compatibility degree is originally defined to support edge selection during linkage estimation. This new approach is analyzed experimentally. The results show that the revised algorithm performs better than the proximity graph model, with faster speed, optimized clustering quality and strong noise suppression, which makes SVC scalable to large data sets.
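A minimal sketch of the MST construction step on toy data: feature-space distances are computed from a Gaussian kernel and a minimum spanning tree is extracted, so connectivity only needs to be checked along its edges (the non-compatibility degree and edge selection from the paper are not reproduced).

```python
# Gaussian-kernel distances in feature space, then an MST over those distances.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])

K = rbf_kernel(X, gamma=0.5)                   # K(x, x) = 1 for the Gaussian kernel
D = np.sqrt(np.maximum(2.0 - 2.0 * K, 0.0))    # Euclidean distance in Hilbert space

mst = minimum_spanning_tree(D)                 # Kruskal-style MST over kernel distances
edges = np.array(mst.nonzero()).T
print(f"{len(edges)} MST edges; connectivity is then checked only along these edges")
```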
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61871129 and No. 61301179, and by the Science and Technology Plan Projects of Guangdong Province under Grant No. 2014A010101284.
Abstract: The existing support vector machine suffers from high computational complexity and a low recognition rate at low SNR as the number of recognition tasks grows. In this paper, the characteristic parameters of the signal are extracted and optimized using a clustering algorithm, and the support vector machine is trained with a grading algorithm so as to speed up convergence, improve recognition performance at low SNR, and realize modulation recognition of the signal based on the constellation diagram of the modulation scheme. Simulation results show that the average recognition rate of this algorithm is improved by over 30% at low SNR compared with methods that adopt only a clustering algorithm or only a support vector machine. The average recognition rate reaches 90% when the SNR is 5 dB, and the method is easy to implement, so it has broad application prospects in modulation recognition.
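A minimal sketch of constellation-based modulation recognition on synthetic symbols, illustrative only (the paper's specific clustering step and grading training algorithm are simplified away): received QPSK and 16QAM symbols are clustered, simple cluster statistics serve as features, and an SVM classifies the modulation type.

```python
# Cluster noisy constellation points and classify modulation type with an SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
qpsk  = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
qam16 = np.array([a + 1j*b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)]) / np.sqrt(10)

def features(constellation, snr_db, n_sym=500):
    """Cluster received symbols and return simple constellation statistics."""
    s = rng.choice(constellation, n_sym)
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
    r = s + noise_std * (rng.normal(0, 1, n_sym) + 1j * rng.normal(0, 1, n_sym))
    pts = np.c_[r.real, r.imag]
    km = KMeans(n_clusters=16, random_state=0, n_init=10).fit(pts)
    radii = np.sort(np.linalg.norm(km.cluster_centers_, axis=1))
    return np.r_[radii, km.inertia_ / n_sym]

X, y = [], []
for label, const in enumerate((qpsk, qam16)):
    for snr in range(0, 21, 4):
        for _ in range(5):
            X.append(features(const, snr))
            y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```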
Abstract: The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter gamma and the multi-step prediction capability of the LS-SVM network are discussed. Then a clustering method is employed in the model to prune the number of support values. The learning rate and the noise-filtering capability of the LS-SVM are both greatly improved.
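A minimal sketch of LS-SVM regression for one-step time-series prediction: the dual variables are obtained from a single linear system rather than a quadratic program (gamma and the kernel width are illustrative values, and the support-value pruning step is omitted).

```python
# LS-SVM regression on a synthetic series: solve the dual linear system directly.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
series = np.sin(np.arange(300) / 10) + rng.normal(0, 0.05, 300)
lags = 5
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

gamma_reg, kernel_gamma = 100.0, 0.5
K = rbf_kernel(X, gamma=kernel_gamma)
n = len(y)

# LS-SVM dual system:  [0   1^T        ] [b    ]   [0]
#                      [1   K + I/gamma] [alpha] = [y]
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma_reg
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

x_new = X[-1:]
pred = rbf_kernel(x_new, X, gamma=kernel_gamma) @ alpha + b
print("next-step prediction:", pred[0])
```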
Funding: The National Natural Science Foundation of China (No. 60472072) and the Specialized Research Foundation for the Doctoral Program of Higher Education of China (No. 20040699034).
Abstract: A novel support vector machine (SVM) ensemble approach using clustering analysis is proposed. Firstly, the positive and negative training examples are each clustered with the subtractive clustering algorithm. Then some representative examples are chosen from each of them to construct SVM components. Finally, the outputs of the individual classifiers are fused through majority voting to obtain the final decision. Comparisons of performance between the proposed method and other popular ensemble approaches, such as Bagging, AdaBoost and k-fold cross-validation, are carried out on synthetic and UCI datasets. The experimental results show that our method has higher classification accuracy since the example distribution information is considered during ensembling through clustering analysis. They further indicate that our method needs a much smaller size of training subsets than Bagging and AdaBoost to obtain satisfactory classification accuracy.
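A minimal sketch of the cluster-then-ensemble idea, with KMeans-based representatives standing in for subtractive clustering: each SVM component is trained on a small per-class representative subset and the component predictions are fused by majority vote.

```python
# Train several SVMs on clustered representative subsets and fuse by majority vote.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

def representatives(X, y, n_per_class, seed):
    """Pick, for each class, the sample closest to each KMeans centroid."""
    idx = []
    for c in np.unique(y):
        Xc, pos = X[y == c], np.where(y == c)[0]
        km = KMeans(n_clusters=n_per_class, random_state=seed, n_init=10).fit(Xc)
        for centre in km.cluster_centers_:
            idx.append(pos[np.argmin(np.linalg.norm(Xc - centre, axis=1))])
    return np.array(idx)

components = []
for seed in range(5):                               # 5 SVM components on small subsets
    idx = representatives(X, y, n_per_class=30, seed=seed)
    components.append(SVC(kernel="rbf").fit(X[idx], y[idx]))

votes = np.array([m.predict(X) for m in components])
ensemble_pred = (votes.sum(axis=0) > len(components) / 2).astype(int)   # majority vote (0/1 labels)
print("ensemble training accuracy:", (ensemble_pred == y).mean())
```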
Funding: Supported by the Chiang Mai University Research Fund under contract number T-M5744.
Abstract: A method that applies a clustering technique to reduce the number of samples of large data sets using input-output clustering is proposed. The proposed method clusters the output data into groups and clusters the input data in accordance with the groups of output data. Then a set of prototypes is selected from the clustered input data, and the inessential data can ultimately be discarded from the data set. The proposed method can reduce the effect of outliers because only the prototypes are used. This method is applied to reduce the data set in regression problems. Two standard synthetic data sets and three standard real-world data sets are used for evaluation. The root-mean-square errors of support vector regression models trained with the original data sets and the corresponding instance-reduced data sets are compared. In the experiments, the proposed method provides good results on the reduction and the reconstruction of the standard synthetic and real-world data sets. The numbers of instances of the synthetic data sets are decreased by 25%-69%. The reduction rates for the real-world data sets of the automobile miles per gallon and the 1990 census in CA are 46% and 57%, respectively. The reduction rate of 96% is very good for the electrocardiogram (ECG) data set because of the redundant and periodic nature of ECG signals. For all of the data sets, the regression results are similar to those from the corresponding original data sets. Therefore, the regression performance of the proposed method is good while only a fraction of the data is needed in the training process.
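A minimal sketch of input-output clustering for instance reduction on synthetic regression data, using KMeans for both stages (cluster counts are illustrative): outputs are grouped first, the inputs of each group are clustered, and only the prototypes nearest each input centroid are kept for SVR training.

```python
# Cluster outputs, then inputs within each output group; keep prototypes for SVR.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (3000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 3000)

out_groups = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(y.reshape(-1, 1))
proto_idx = []
for g in range(5):
    pos = np.where(out_groups == g)[0]
    km = KMeans(n_clusters=40, random_state=0, n_init=10).fit(X[pos])
    for centre in km.cluster_centers_:
        proto_idx.append(pos[np.argmin(np.linalg.norm(X[pos] - centre, axis=1))])

proto_idx = np.array(proto_idx)
full = SVR(kernel="rbf").fit(X, y)
reduced = SVR(kernel="rbf").fit(X[proto_idx], y[proto_idx])
print(f"kept {len(proto_idx)} of {len(X)} instances")
print("RMSE full   :", mean_squared_error(y, full.predict(X)) ** 0.5)
print("RMSE reduced:", mean_squared_error(y, reduced.predict(X)) ** 0.5)
```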
Funding: This research was supported by the National Natural Science Foundation of China (30370758) and the Program for New Century Excellent Talents in Universities (NCET) of the Ministry of Education to Dr. Xu Chenwu (NCET-05-0502).
Abstract: Several typical supervised clustering methods, such as Gaussian mixture model-based supervised clustering (GMM), k-nearest-neighbor (KNN), binary support vector machines (SVMs) and multiclass support vector machines (MC-SVMs), were employed to classify computer simulation data and two real microarray expression datasets. False positives, false negatives, true positives, true negatives, clustering accuracy and Matthews' correlation coefficient (MCC) were compared among these methods. The results are as follows: (1) In classifying thousands of gene expression data, the two GMM methods achieve the highest clustering accuracy and the fewest overall FP+FN errors, on the basis of the assumption that the whole set of microarray data is a finite mixture of multivariate Gaussian distributions. Furthermore, when the number of training samples is very small, the clustering accuracy of the GMM-II method is superior to that of the GMM-I method. (2) In general, the classification performance of the MC-SVMs is more robust and more practical: they are less sensitive to the curse of dimensionality, second only to the GMM methods in clustering accuracy on thousands of gene expression data, and more robust than other techniques for a small number of high-dimensional gene expression samples. (3) Among the MC-SVMs, OVO and DAGSVM perform better on large sample sizes, whereas the five MC-SVM methods have very similar performance on moderate sample sizes; OVR, WW and CS yield better results when sample sizes are small. It is therefore recommended that at least two candidate methods, chosen on the basis of the real data features and experimental conditions, be performed and compared to obtain a better clustering result.
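A minimal sketch of the evaluation protocol on synthetic high-dimensional data standing in for microarray profiles: KNN and multi-class SVMs (one-vs-one and one-vs-rest) are compared by accuracy and Matthews' correlation coefficient.

```python
# Compare KNN and multi-class SVM strategies by accuracy and MCC.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef

X, y = make_classification(n_samples=200, n_features=2000, n_informative=40,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "KNN":       KNeighborsClassifier(n_neighbors=5),
    "SVM (OVO)": SVC(kernel="linear", decision_function_shape="ovo"),
    "SVM (OVR)": OneVsRestClassifier(SVC(kernel="linear")),
}
for name, m in models.items():
    pred = m.fit(Xtr, ytr).predict(Xte)
    print(f"{name:10s} accuracy={accuracy_score(yte, pred):.3f} MCC={matthews_corrcoef(yte, pred):.3f}")
```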
Funding: Supported by the National Natural Science Foundation of China (70831001, 70821061).
Abstract: A new fuzzy support vector machine algorithm with dual membership values based on a spectral clustering method is proposed to overcome a shortcoming of the normal support vector machine algorithm, which divides the training datasets into two absolutely exclusive classes in binary classification and ignores the possibility of an "overlapping" region between the two training classes. The proposed method handles sample "overlap" efficiently with spectral clustering, overcomes the disadvantage of over-fitting, and greatly improves data mining efficiency. Simulations provide clear evidence for the new method.
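A minimal sketch of one plausible reading of the idea, not the authors' exact formulation: spectral clustering flags samples whose cluster assignment disagrees with their class label as lying in the overlap region, and those samples receive reduced membership weights when the SVM is trained.

```python
# Use spectral clustering to detect overlap samples and down-weight them for the SVM.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=400, centers=[[0, 0], [2.0, 0]], cluster_std=1.0, random_state=0)

clusters = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0).fit_predict(X)
if (clusters == y).mean() < 0.5:        # align cluster ids with class labels
    clusters = 1 - clusters

weights = np.where(clusters == y, 1.0, 0.2)      # overlap samples get low membership
clf = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
print("overlap samples:", int((weights < 1).sum()), "of", len(y))
```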