With the popularisation of intelligent power, power devices vary widely in shape, number, and specification. As a result, power data exhibits distributional variability, and the model learning process cannot extract data features sufficiently, which seriously degrades the accuracy and performance of anomaly detection. This paper therefore proposes a deep-learning-based anomaly detection model for power data that integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. To address the distributional variability of power data, a sliding-window-based data adjustment method is developed for this model, which mitigates high-dimensional feature noise and low-dimensional missing data. To address insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the model's anomaly detection accuracy. To verify the effectiveness of the proposed method, we conducted comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only offers an advantage in model accuracy but also reduces the parameter computation required during feature matching and improves detection speed.
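The abstract gives no implementation details for the sliding-window adjustment, so the following is only a minimal sketch of the general idea, assuming a moving-average window; the function name, window size, and imputation rule are our own choices, not the paper's:

```python
import numpy as np

def sliding_window_adjust(x, window=5):
    """Hypothetical sliding-window adjustment for a 1-D power signal.

    Smooths feature noise with a moving average and fills missing
    values (NaN) from the surrounding window. Illustrative sketch
    only, not the paper's actual algorithm.
    """
    x = np.asarray(x, dtype=float)
    half = window // 2
    out = x.copy()
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        neighbourhood = x[lo:hi]
        valid = neighbourhood[~np.isnan(neighbourhood)]
        if valid.size:
            out[i] = valid.mean()  # smooth noise / impute a missing point
    return out

signal = np.array([1.0, 1.2, np.nan, 1.1, 5.0, 1.3, 1.2])
print(sliding_window_adjust(signal, window=3))
```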
This paper proposes a feature selection method based on Bayes' theorem. The aim is to reduce computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two (binary) attributes is determined from the probabilities of their joint values that contribute to positive and negative classification decisions. If opposing sets of attribute values do not lead to opposing classification decisions (zero probability), the two attributes are considered independent of each other; otherwise they are dependent, and one of them can be removed, reducing the number of attributes. The process is repeated over all combinations of attributes. The paper also evaluates the approach against existing feature selection algorithms on 8 datasets from the University of California, Irvine (UCI) machine learning repository. The proposed method outperforms most existing algorithms in terms of number of selected features, classification accuracy, and running time.
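The abstract's dependence test admits more than one reading; the sketch below codes one plausible interpretation with empirical probability estimates. Both helper functions are hypothetical, not the paper's procedure:

```python
import numpy as np
from itertools import combinations

def dependent(a, b, y):
    """One possible reading of the dependence test sketched above:
    a and b are dependent if some pair of opposing joint values
    (va, vb) vs. (1-va, 1-vb) supports opposing class decisions
    with non-zero estimated probability."""
    for va, vb in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        p_pos = np.mean((a == va) & (b == vb) & (y == 1))
        p_neg = np.mean((a == 1 - va) & (b == 1 - vb) & (y == 0))
        if p_pos > 0 and p_neg > 0:
            return True
    return False

def select_features(X, y):
    """Drop one attribute from every dependent pair, sweeping all
    attribute combinations as the abstract specifies."""
    keep = list(range(X.shape[1]))
    for i, j in combinations(range(X.shape[1]), 2):
        if i in keep and j in keep and dependent(X[:, i], X[:, j], y):
            keep.remove(j)  # arbitrarily keep the first of the pair
    return keep

rng = np.random.default_rng(4)
a = rng.integers(0, 2, 100)
X = np.column_stack([a, a, rng.integers(0, 2, 100)])  # column 1 duplicates column 0
y = a.copy()
print(select_features(X, y))  # one attribute of each dependent pair is dropped
```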
Big data refers to the vast amounts of structured and unstructured data that must be handled on a regular basis. Dimensionality reduction converts a large dataset into one with far fewer dimensions while expressing the same information, and such techniques are frequently used to improve classification or regression performance in machine learning. To achieve dimensionality reduction for large datasets, this paper offers a hybrid particle swarm optimization-rough set method (PSO-RS) and a Mayfly algorithm-rough set method (MA-RS). In particular, a novel hybrid strategy based on the Mayfly algorithm (MA) and rough sets (RS) is proposed. The performance of the hybrid MA-RS algorithm is evaluated on six datasets from the literature. The simulation results and comparisons with common reduction methods demonstrate the proposed MA-RS algorithm's capacity to handle a wide range of datasets. Finally, the rough set approach and the hybrid optimization techniques PSO-RS and MA-RS were applied to the massive-data problem. According to the experimental results and statistical tests, the hybrid MA-RS method beats other classic dimensionality reduction techniques.
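As a minimal sketch of the general approach, the code below uses the standard rough-set dependency degree as the fitness and a toy swarm update in place of the paper's PSO/Mayfly operators; the size weight `alpha` and the flip rate are illustrative assumptions:

```python
import numpy as np

def dependency_degree(X, y, subset):
    """Rough-set dependency of decision y on an attribute subset:
    the fraction of objects whose equivalence class (identical values
    on the subset) is consistent in its decision."""
    if not subset:
        return 0.0
    keys = [tuple(row) for row in X[:, subset]]
    consistent = 0
    for k in set(keys):
        idx = [i for i, kk in enumerate(keys) if kk == k]
        if len(set(y[i] for i in idx)) == 1:
            consistent += len(idx)
    return consistent / len(y)

def swarm_reduct(X, y, n_agents=20, iters=50, alpha=0.9, rng=None):
    """Toy swarm search for a small, high-dependency attribute subset.
    Stands in for the paper's PSO/Mayfly machinery; the fitness trades
    dependency against subset size."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[1]
    best, best_fit = [], -1.0
    pop = rng.random((n_agents, n)) < 0.5
    for _ in range(iters):
        for a in range(n_agents):
            subset = list(np.flatnonzero(pop[a]))
            fit = alpha * dependency_degree(X, y, subset) \
                + (1 - alpha) * (1 - len(subset) / n)
            if fit > best_fit:
                best, best_fit = subset, fit
        # drift agents toward the incumbent best (crude velocity update)
        target = np.zeros(n, bool)
        target[best] = True
        flip = rng.random((n_agents, n)) < 0.2
        pop = np.where(flip, target, pop)
    return best, best_fit

rng = np.random.default_rng(2)
X = rng.integers(0, 3, (40, 6))
y = (X[:, 0] + X[:, 2] > 2).astype(int)  # decision depends on attributes 0 and 2
print(swarm_reduct(X, y))
```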
Feature selection (FS), also called feature dimensionality reduction or feature optimization, is an essential process in pattern recognition and machine learning because it improves classification speed and accuracy and reduces system complexity. FS reduces the number of features produced in the feature extraction phase by pruning highly correlated features, retaining features with high information gain, and removing features that carry no weight in classification. In this work, a filter-type statistical FS method is designed and implemented, using a t-test to decrease the convergence between feature subsets by computing a quality of performance value (QoPV). The approach uses a purpose-built fitness function to compute a strength of recognition value (SoRV). The two values are used to rank all features according to a final weight (FW) calculated for each feature subset by a function that prioritizes subsets with high SoRV values. An FW is assigned to each feature subset, and subsets with FWs below a predefined threshold are removed from the feature subset domain. Experiments are conducted on three datasets: the Ryerson Audio-Visual Database of Emotional Speech and Song, Berlin, and Surrey Audio-Visual Expressed Emotion. The performance of the F-test and F-score FS methods is compared with that of the proposed method, and tests are also run on the system before and after deploying the FS methods. The results demonstrate the comparative efficiency of the proposed method. System complexity is measured as the time overhead before and after FS, and the results show that the proposed method can reduce it.
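The QoPV/SoRV formulas are not reproduced in the abstract, so the sketch below only illustrates the t-test core of such a filter: each feature is scored with a two-sample t statistic and features below a threshold are dropped. The names and the threshold value are assumptions:

```python
import numpy as np
from scipy import stats

def t_test_weights(X, y):
    """Per-feature absolute Welch t statistic between the two classes,
    standing in for the paper's QoPV; the exact weighting function is
    not given in the abstract."""
    return np.array([
        abs(stats.ttest_ind(X[y == 0, j], X[y == 1, j],
                            equal_var=False).statistic)
        for j in range(X.shape[1])
    ])

def filter_features(X, y, threshold=2.0):
    """Keep features whose weight clears a predefined threshold."""
    fw = t_test_weights(X, y)
    return np.flatnonzero(fw >= threshold), fw

rng = np.random.default_rng(3)
y = np.repeat([0, 1], 30)
X = rng.normal(0, 1, (60, 5))
X[:, 0] += y * 2.0  # make feature 0 genuinely discriminative
keep, fw = filter_features(X, y)
print(keep, np.round(fw, 2))
```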
The precision of the kernel independent component analysis (KICA) algorithm depends on the type and parameter values of the kernel function. It is therefore important to study how to choose KICA's kernel parameters in order to improve its feature dimension reduction results. In this paper, a fitness function is first established using the idea of the Fisher discriminant function. The global optimum of the fitness function is then searched by the particle swarm optimization (PSO) algorithm, yielding a multi-state information dimension reduction algorithm based on PSO-KICA. Finally, the ability of this algorithm to improve the precision of feature dimension reduction is demonstrated.
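KICA itself is not available in common libraries such as scikit-learn, so the sketch below substitutes `KernelPCA` as the kernel dimension-reduction stage and shows how a minimal PSO loop could tune its RBF width against a Fisher-style fitness, as the abstract describes. All constants are conventional PSO choices, not the paper's:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fisher_fitness(Z, y):
    """Fisher-style criterion: between-class over within-class scatter
    of the reduced features Z."""
    classes = np.unique(y)
    mu = Z.mean(axis=0)
    sb = sum((y == c).sum() * np.sum((Z[y == c].mean(axis=0) - mu) ** 2)
             for c in classes)
    sw = sum(np.sum((Z[y == c] - Z[y == c].mean(axis=0)) ** 2)
             for c in classes)
    return sb / (sw + 1e-12)

def pso_kernel_width(X, y, n_particles=10, iters=30, rng=None):
    """Minimal PSO over the RBF gamma of KernelPCA (a stand-in for
    KICA). Inertia and attraction constants are conventional values."""
    rng = rng or np.random.default_rng(0)
    pos = rng.uniform(1e-3, 10.0, n_particles)
    vel = np.zeros(n_particles)
    pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_fit = pos[0], -np.inf
    for _ in range(iters):
        for i, gamma in enumerate(pos):
            Z = KernelPCA(n_components=2, kernel="rbf",
                          gamma=float(gamma)).fit_transform(X)
            fit = fisher_fitness(Z, y)
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = gamma, fit
            if fit > gbest_fit:
                gbest, gbest_fit = gamma, fit
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1e-3, 10.0)
    return gbest, gbest_fit

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.repeat([0, 1], 30)
print(pso_kernel_width(X, y, n_particles=5, iters=10))
```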
Embedding the original high-dimensional data in a low-dimensional space helps overcome the curse of dimensionality and removes noise. The aim of this work is to evaluate the performance of three linear dimensionality reduction (DR) techniques, namely principal component analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA), for the classification of cardiac arrhythmias using a probabilistic neural network (PNN) classifier. The design of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through the Daubechies wavelet transform, dimensionality reduction through the linear DR techniques listed above, and arrhythmia classification using the PNN. Linear dimensionality reduction techniques have simple geometric representations and simple computational properties. The entire MIT-BIH arrhythmia database is used for experimentation. The experimental results demonstrate that the combination of the PNN classifier (spread parameter σ = 0.08) and the PCA DR technique exhibits the highest sensitivity and F-score, 78.84% and 78.82% respectively, with a minimum of 8 dimensions.
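Scikit-learn has no PNN class, so the sketch below implements a minimal Parzen-window PNN and pairs it with PCA reduced to the 8 dimensions reported above. The synthetic data and its cluster scales are our assumptions; only σ = 0.08 comes from the abstract:

```python
import numpy as np
from sklearn.decomposition import PCA

class PNN:
    """Minimal Parzen-window PNN: one Gaussian kernel per training
    pattern; the class with the largest average kernel response wins.
    sigma plays the role of the spread parameter quoted above."""
    def __init__(self, sigma=0.08):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y, self.classes = X, y, np.unique(y)
        return self

    def predict(self, Xq):
        preds = []
        for q in Xq:
            d2 = np.sum((self.X - q) ** 2, axis=1)
            k = np.exp(-d2 / (2 * self.sigma ** 2))
            scores = [k[self.y == c].mean() for c in self.classes]
            preds.append(self.classes[int(np.argmax(scores))])
        return np.array(preds)

# Synthetic stand-in for the wavelet features (two tight clusters).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.05, (50, 32)), rng.normal(1, 0.05, (50, 32))])
y = np.repeat([0, 1], 50)
Z = PCA(n_components=8).fit_transform(X)  # reduce to the 8 dimensions reported
print(PNN(sigma=0.08).fit(Z[:80], y[:80]).predict(Z[80:]))
```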
Globally, depression is perceived as the most recurrent and risky disorder among young people and adults under the age of 60. Depression strongly influences word usage, which can be observed in written texts or stories posted on social media. With the help of Natural Language Processing (NLP) and Machine Learning (ML) techniques, the depressive signs expressed by people can be identified at an early stage from their social media posts. The proposed work introduces an effective depression detection model unifying an exemplary feature extraction scheme and a hybrid Long Short-Term Memory (LSTM) model. The feature extraction process combines a novel feature selection method called Elite Term Score (ETS) with Word2Vec to extract syntactic and semantic information respectively. First, the ETS method leverages document-level, class-level, and corpus-level probabilities to compute a score for each term. The pertinent set of terms with high ETS scores is then selected, and the Word2Vec model is trained to generate dense feature vector representations for the selected terms. The resulting word vectors, called EliteVec, are fed to a hybrid LSTM model tuned by a Honey Badger optimizer with a population reduction technique (PHB), which predicts whether the input text is depressive. The PHB algorithm explores and exploits the optimal hyperparameters to strengthen the performance of the LSTM network. Comprehensive experiments are carried out on two Twitter depression corpora using accuracy and Root Mean Square Error (RMSE) metrics. The results show that the proposed EliteVec+LSTM+PHB model outperforms state-of-the-art models with 98.1% accuracy and 0.0559 RMSE.
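The exact ETS formula is not given in the abstract; purely for illustration, the sketch below combines document-, class-, and corpus-level probabilities in one plausible way. The weighting is an assumption, as are the toy documents:

```python
from collections import Counter

def elite_term_scores(docs, labels):
    """Illustrative term scoring in the spirit of ETS: reward terms
    common in documents of the positive (depressive) class, discount
    terms common across the whole corpus. Not the paper's formula."""
    n_docs = len(docs)
    vocab = sorted({w for d in docs for w in d.split()})
    pos_docs = [d for d, l in zip(docs, labels) if l == 1]
    total_tokens = sum(len(d.split()) for d in docs)
    counts = Counter(w for d in docs for w in d.split())
    scores = {}
    for w in vocab:
        p_doc = sum(w in d.split() for d in docs) / n_docs                # document level
        p_class = sum(w in d.split() for d in pos_docs) / max(len(pos_docs), 1)  # class level
        p_corpus = counts[w] / total_tokens                               # corpus level
        scores[w] = p_doc * p_class / (p_corpus + 1e-9)
    return scores

docs = ["i feel hopeless and tired", "great day with friends",
        "so tired of everything", "friends make life great"]
labels = [1, 0, 1, 0]
top = sorted(elite_term_scores(docs, labels).items(), key=lambda kv: -kv[1])[:3]
print(top)  # terms scored highest for the depressive class
```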
In aerodynamic optimization, global optimization methods such as genetic algorithms are often preferred because they can reach the global optimum. However, for complex problems requiring a large number of design variables, the computational cost becomes prohibitive, so more efficient global optimization strategies are required. To address this need, a data dimensionality reduction method is combined with global optimization methods to form a new global optimization system that improves the efficiency of conventional global optimization. The new system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. In addition, an acceleration approach for sample calculation in surrogate modeling reduces the computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 demonstrate the effectiveness of the proposed system: the number of design variables is reduced from 20 to 10 and from 42 to 20 respectively. The new design optimization system converges faster, taking one third of the time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of conventional global design optimization.
Funding: supported by the National Natural Science Foundation of China (No. 11502211).
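The abstract above gives no formulas, but POD of a design-space snapshot matrix is conventionally computed via the SVD; the sketch below mirrors the RAE2822 case by compressing 20 design variables to 10 modes. The snapshot data here is synthetic:

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD of a snapshot matrix (one design-variable vector per column)
    via the SVD; the leading left singular vectors span the reduced
    design space."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return mean, U[:, :n_modes]

# Toy numbers mirroring the RAE2822 case: 20 design variables -> 10 modes.
rng = np.random.default_rng(1)
snapshots = rng.normal(size=(20, 60))  # 60 sampled designs (synthetic)
mean, basis = pod_basis(snapshots, n_modes=10)

x = snapshots[:, [0]]          # a full 20-D design
coeffs = basis.T @ (x - mean)  # its 10 reduced coordinates
x_rec = mean + basis @ coeffs  # mapped back to the full space
print(np.linalg.norm(x - x_rec))  # reconstruction error
```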