Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local optima. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
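The filter stage of such a hybrid approach can be sketched briefly. The snippet below is a minimal illustration, not the MBEO implementation: it ranks synthetic features by mutual information (as an information-gain proxy) and by Fisher score, then averages the two ranks. The dataset, the rank-aggregation rule, and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter / within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)

ig = mutual_info_classif(X, y, random_state=0)   # information-gain proxy
fs = fisher_score(X, y)

# Combine the two filter criteria by averaging their ranks (an assumed aggregation rule).
rank_ig = np.argsort(np.argsort(-ig))
rank_fs = np.argsort(np.argsort(-fs))
combined = (rank_ig + rank_fs) / 2.0
top_features = np.argsort(combined)[:10]
print("Top-ranked feature indices:", top_features)
```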
The selection of important factors in machine learning-based susceptibility assessments is crucial to obtain reliable susceptibility results. In this study, metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province, China, by using machine learning algorithms. In total, 133 historical debris flow records and 16 related factors were selected. The support vector machine (SVM) was first used as the base classifier, and then a hybrid model was introduced by a two-step process. First, the particle swarm optimization (PSO) algorithm was employed to select the SVM model hyperparameters. Second, two feature selection algorithms, namely principal component analysis (PCA) and PSO, were integrated into the PSO-based SVM model, which generated the PCA-PSO-SVM and FS-PSO-SVM models, respectively. Three statistical metrics (accuracy, recall, and specificity) and the area under the receiver operating characteristic curve (AUC) were employed to evaluate and validate the performance of the models. The results indicated that the feature selection-based models exhibited the best performance, followed by the PSO-based SVM and SVM models. Moreover, the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model, showing the highest AUC, accuracy, recall, and specificity values in both the training and testing processes. It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results. Moreover, the PSO algorithm was found to be not only an effective tool for hyperparameter optimization, but also a useful feature selection algorithm for improving the prediction accuracy of debris flow susceptibility with machine learning algorithms. The high and very high debris flow susceptibility zones cover approximately 38.01% of the study area, where debris flows may occur under intensive human activities and heavy rainfall events.
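A minimal sketch of PSO-driven SVM hyperparameter selection, the first step of the hybrid model described above, is shown below; the swarm size, inertia and acceleration coefficients, search bounds, and synthetic data are assumptions for illustration and do not reproduce the paper's PSO-SVM configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=16, random_state=0)

def fitness(params):
    """Cross-validated accuracy of an RBF-SVM; params = (log10 C, log10 gamma)."""
    c, g = 10.0 ** params[0], 10.0 ** params[1]
    return cross_val_score(SVC(C=c, gamma=g), X, y, cv=3).mean()

# Particle swarm over log10(C) in [-2, 3] and log10(gamma) in [-4, 1].
n_particles, n_iter = 10, 20
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("Best log10(C), log10(gamma):", gbest, "CV accuracy:", pbest_fit.max())
```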
As a crucial data preprocessing method in data mining, feature selection (FS) can be regarded as a bi-objective optimization problem that aims to maximize classification accuracy and minimize the number of selected features. Evolutionary computing (EC) is promising for FS owing to its powerful search capability. However, in traditional EC-based methods, feature subsets are represented via a length-fixed individual encoding. This is ineffective for high-dimensional data, because it results in a huge search space and prohibitive training time. This work proposes a length-adaptive non-dominated sorting genetic algorithm (LA-NSGA) with a length-variable individual encoding and a length-adaptive evolution mechanism for bi-objective high-dimensional FS. In LA-NSGA, an initialization method based on correlation and redundancy is devised to initialize individuals of diverse lengths, and a Pareto dominance-based length change operator is introduced to guide individuals to explore promising search space adaptively. Moreover, a dominance-based local search method is employed for further improvement. The experimental results based on 12 high-dimensional gene datasets show that the Pareto front of feature subsets produced by LA-NSGA is superior to those of existing algorithms.
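The bi-objective formulation can be illustrated without reproducing LA-NSGA itself: each candidate feature subset is scored by (subset size, cross-validated error), and the non-dominated subsets form a Pareto front. The random variable-length candidates below are an assumed stand-in for the evolved population.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=40, n_informative=6, random_state=1)

def evaluate(mask):
    """Bi-objective score of a feature subset: (subset size, CV error)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return (X.shape[1], 1.0)
    err = 1.0 - cross_val_score(KNeighborsClassifier(), X[:, idx], y, cv=3).mean()
    return (idx.size, err)

# Random variable-length candidate subsets (a stand-in for an evolved population).
candidates = [rng.random(X.shape[1]) < rng.uniform(0.05, 0.5) for _ in range(60)]
scores = [evaluate(m) for m in candidates]

def dominates(a, b):
    """True if solution a is at least as good as b in both objectives and not equal."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

pareto = [s for s in scores if not any(dominates(t, s) for t in scores)]
for size, err in sorted(set(pareto)):
    print(f"{size:2d} features -> CV error {err:.3f}")
```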
The world produces vast quantities of high-dimensional multi-semantic data. However, extracting valuable information from such a large amount of high-dimensional and multi-label data is undoubtedly arduous and challenging. Feature selection aims to mitigate the adverse impacts of high dimensionality in multi-label data by eliminating redundant and irrelevant features. The ant colony optimization algorithm has demonstrated encouraging outcomes in multi-label feature selection because of its simplicity, efficiency, and similarity to reinforcement learning. Nevertheless, existing methods do not consider crucial correlation information, such as dynamic redundancy and label correlation. To tackle these concerns, this paper proposes a multi-label feature selection technique based on the ant colony optimization algorithm (MFACO), focusing on dynamic redundancy and label correlation. Initially, the dynamic redundancy is assessed between the selected feature subset and potential features. Meanwhile, the ant colony optimization algorithm extracts label correlation from the label set, which is then combined into the heuristic factor as label weights. Experimental results demonstrate that the proposed strategies can effectively enhance the search ability of the ant colony, outperforming the other algorithms involved in the paper.
Bladder urothelial carcinoma is the most common malignant tumor of the urinary system, and its incidence ranks ninth in the world. In recent years, the continuous development of hyperspectral imaging technology has provided a new tool for the auxiliary diagnosis of bladder cancer. In this study, based on microscopic hyperspectral data, an automatic detection algorithm for bladder tumor cells combining color features and shape features is proposed. A support vector machine (SVM) is used to build classification models and to compare the classification performance of the spectral feature, the spectral-shape fusion feature, and the fusion feature proposed in this paper on the same classifier. The results show that the sensitivity, specificity, and accuracy of our classification algorithm based on shape and color fusion features are 0.952, 0.897, and 0.920, respectively, which are better than those of the classification algorithm using only spectral features. Therefore, this study can effectively extract the cell features of bladder urothelial carcinoma smears, thus achieving automatic, real-time, and noninvasive detection of bladder tumor cells, helping doctors improve the efficiency of pathological diagnosis of bladder urothelial cancer, and providing a reliable basis for choosing treatment plans and judging the prognosis of the disease.
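A hedged sketch of the fusion-and-classify step follows: two feature blocks are concatenated and passed to an SVM, and sensitivity, specificity, and accuracy are computed from the confusion matrix. The random arrays stand in for the paper's microscopic hyperspectral color and shape descriptors.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)                               # 1 = tumor cell, 0 = normal (synthetic)
color_feats = rng.normal(labels[:, None], 1.0, (n, 8))       # stand-in color features
shape_feats = rng.normal(labels[:, None], 1.5, (n, 5))       # stand-in shape features

fused = np.hstack([color_feats, shape_feats])                # simple feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy   :", (tp + tn) / (tp + tn + fp + fn))
```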
Conventional machine learning (CML) methods have been successfully applied for gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to directly extract features from sensitive multicomponent seismic attributes, considerably simplifying the feature optimization. A CNN was used for feature optimization to highlight sensitive gas reservoir information. APSO-LSSVM was used to fully learn the relationship between the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of DL and CML methods. The prediction results obtained are better than those of a single CNN model or APSO-LSSVM model. In the feature optimization of multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction capabilities than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model can learn the gas reservoir characteristics better than the LSSVM model and has a higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This method proves the effectiveness of DL technology for the feature extraction of gas reservoirs and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
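The division of labor between the CNN feature extractor and the kernel predictor can be sketched as follows, assuming PyTorch and scikit-learn are available. An untrained toy 1-D CNN and a standard RBF SVM stand in for the trained CNN and the APSO-LSSVM, and the synthetic arrays replace the multicomponent seismic attributes.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_attr = 500, 32
X = rng.normal(size=(n_samples, 1, n_attr)).astype("float32")   # stand-in seismic attributes
y = (X[:, 0, :8].mean(axis=1) > 0).astype(int)                  # synthetic gas-bearing label

class FeatureCNN(nn.Module):
    """Small 1-D CNN used only as a feature extractor."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool1d(4),
        )
    def forward(self, x):
        return self.body(x).flatten(1)          # 16 * 4 = 64 features per sample

cnn = FeatureCNN().eval()                       # untrained here; CNN training is omitted for brevity
with torch.no_grad():
    feats = cnn(torch.from_numpy(X)).numpy()

X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)         # stands in for the APSO-LSSVM predictor
print("test accuracy:", svm.score(X_te, y_te))
```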
Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. To address the shortcomings of existing image matching navigation technologies, the concept of a high-dimensional combined feature is presented based on sequence image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a method based on Delaunay triangulation to improve the feature, adding the regional characteristics of the features to their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the searching process. Simulation results show that matching can be realized at rotation angles of −8° to 8° and scale factors of 0.9 to 1.1, and that when the image size is 160 pixels × 160 pixels, the matching time is less than 0.5 s. Therefore, the proposed algorithm can substantially reduce computational complexity, improve the matching speed, and exhibit robustness to rotation and scale changes.
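A small sketch of the two ingredients named above, Delaunay triangulation over feature points and KNN-accelerated correspondence search, is given below; the simulated rotation, scale, and noise values are illustrative assumptions, and the paper's regional descriptors are not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
ref_points = rng.uniform(0, 160, size=(40, 2))        # reference-image feature points

# Delaunay triangulation encodes neighbourhood structure among the feature points.
tri = Delaunay(ref_points)
print("triangles in reference set:", len(tri.simplices))

# Simulate the sensed image: small rotation and scale about the image centre, plus noise.
center = np.array([80.0, 80.0])
theta, scale = np.deg2rad(1.0), 1.01
R = scale * np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
sensed_points = (ref_points - center) @ R.T + center + rng.normal(0, 0.5, ref_points.shape)

# KNN accelerates the correspondence search between the two point sets.
nn = NearestNeighbors(n_neighbors=1).fit(ref_points)
dist, idx = nn.kneighbors(sensed_points)
correct = np.mean(idx.ravel() == np.arange(len(ref_points)))
print("fraction of correct nearest-neighbour matches:", correct)
```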
[Objective] The aim was to study the feature extraction of stored-grain insects based on ant colony optimization and the support vector machine algorithm, and to explore the feasibility of the feature extraction of stored-grain insects. [Method] Through the analysis of feature extraction in the image recognition of stored-grain insects, the recognition accuracy of the cross-validation training model in the support vector machine (SVM) algorithm was taken as an important factor in the evaluation principle of feature extraction of stored-grain insects. The ant colony optimization (ACO) algorithm was applied to the automatic feature extraction of stored-grain insects. [Result] The algorithm extracted the optimal feature subspace of seven features from the 17 morphological features, including area and perimeter. Ninety image samples of stored-grain insects were automatically recognized by the optimized SVM classifier, and the recognition accuracy was over 95%. [Conclusion] The experiment shows that the application of ant colony optimization to the feature extraction of grain insects is practical and feasible.
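A minimal ACO-style feature selection loop with cross-validated SVM accuracy as the fitness, in the spirit of the [Method] step above, is sketched below; the pheromone update rule, colony size, and synthetic 17-feature data are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=17, n_informative=7, random_state=0)

n_ants, n_iter, subset_size = 12, 15, 7
pheromone = np.ones(X.shape[1])

def fitness(features):
    """Cross-validated SVM accuracy on the chosen feature subset."""
    return cross_val_score(SVC(), X[:, features], y, cv=3).mean()

best_subset, best_fit = None, -np.inf
for _ in range(n_iter):
    subsets, fits = [], []
    for _ in range(n_ants):
        p = pheromone / pheromone.sum()
        chosen = rng.choice(X.shape[1], size=subset_size, replace=False, p=p)
        f = fitness(chosen)
        subsets.append(chosen)
        fits.append(f)
        if f > best_fit:
            best_subset, best_fit = chosen, f
    # Evaporate pheromone, then reinforce the trail of this iteration's best ant.
    pheromone *= 0.9
    pheromone[subsets[int(np.argmax(fits))]] += 1.0

print("selected features:", sorted(best_subset), "CV accuracy:", round(best_fit, 3))
```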
In this paper, an Observation Points Classifier Ensemble (OPCE) algorithm is proposed to deal with High-Dimensional Imbalanced Classification (HDIC) problems based on data processed using the Multi-Dimensional Scaling (MDS) feature extraction technique. First, dimensionality of the original imbalanced data is reduced using MDS so that distances between any two different samples are preserved as well as possible. Second, a novel OPCE algorithm is applied to classify imbalanced samples by placing optimised observation points in a low-dimensional data space. Third, optimization of the observation point mappings is carried out to obtain a reliable assessment of the unknown samples. Exhaustive experiments have been conducted to evaluate the feasibility, rationality, and effectiveness of the proposed OPCE algorithm using seven benchmark HDIC data sets. Experimental results show that (1) the OPCE algorithm can be trained faster on low-dimensional imbalanced data than on high-dimensional data; (2) the OPCE algorithm can correctly identify samples as the number of optimised observation points is increased; and (3) statistical analysis reveals that OPCE yields better HDIC performances on the selected data sets in comparison with eight other HDIC algorithms. This demonstrates that OPCE is a viable algorithm to deal with HDIC problems.
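The MDS-then-classify pipeline can be sketched as below; a plain KNN classifier stands in for the observation-points ensemble, and the imbalance ratio and dimensions are assumed values.

```python
from sklearn.datasets import make_classification
from sklearn.manifold import MDS
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import balanced_accuracy_score

# High-dimensional, imbalanced synthetic data (roughly 95% vs 5%).
X, y = make_classification(n_samples=400, n_features=100, n_informative=10,
                           weights=[0.95, 0.05], random_state=0)

# MDS reduces dimensionality while preserving pairwise distances as well as possible.
X_low = MDS(n_components=5, random_state=0).fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, test_size=0.3, stratify=y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)   # stand-in for the OPCE ensemble
print("balanced accuracy in MDS space:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```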
With the intensifying aging of the population, the phenomenon of the elderly living alone is also increasing. Therefore, using modern internet of things technology to monitor the daily behavior of the elderly indoors is a meaningful study. Video-based action recognition tasks are easily affected by object occlusion and weak ambient light, resulting in poor recognition performance. Therefore, this paper proposes an indoor human behavior recognition method based on wireless fidelity (Wi-Fi) perception and video feature fusion, utilizing the ability of Wi-Fi signals to carry environmental information during the propagation process. This paper uses the public WiFi-based activity recognition dataset (WIAR), which contains Wi-Fi channel state information and essential action videos, and then extracts video feature vectors and Wi-Fi signal feature vectors from the datasets through a two-stream convolutional neural network and standard statistical algorithms, respectively. The two sets of feature vectors are then fused, and finally, action classification and recognition are performed by a support vector machine (SVM). The experiments in this paper compare the two-stream network model with the proposed method under three different environments, and the accuracy of action recognition after adding Wi-Fi signal feature fusion is improved by 10% on average.
Key variable identification for classification is related to many troubleshooting problems in process industries. Recursive feature elimination based on the support vector machine (SVM-RFE) has been proposed recently for feature selection in cancer diagnosis. In this paper, SVM-RFE is used for key variable selection in fault diagnosis, and an accelerated SVM-RFE procedure based on a heuristic criterion is proposed. Data from the Tennessee Eastman process (TEP) simulator are used to evaluate the effectiveness of key variable selection using accelerated SVM-RFE (A-SVM-RFE). A-SVM-RFE integrates computational rate and algorithm effectiveness into a consistent framework. It not only correctly identifies the key variables, but also has a very good computational rate. In comparison with contribution charts combined with principal component analysis (PCA) and two other SVM-RFE algorithms, A-SVM-RFE performs better. It is better suited to industrial application.
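SVM-RFE itself is available off the shelf; the sketch below uses scikit-learn's RFE with a linear SVM on synthetic data as a stand-in for TEP process variables. The acceleration heuristic of A-SVM-RFE is not reproduced, although the step parameter (eliminating several variables per iteration) plays a loosely similar role.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Synthetic stand-in for process monitoring data (e.g., 52 TEP-like variables).
X, y = make_classification(n_samples=500, n_features=52, n_informative=6, random_state=0)

# SVM-RFE: recursively drop the variables with the smallest |weight| in a linear SVM.
selector = RFE(LinearSVC(C=1.0, dual=False, max_iter=5000), n_features_to_select=6, step=2)
selector.fit(X, y)
print("key variables:", np.flatnonzero(selector.support_))
print("elimination ranking (1 = kept):", selector.ranking_)
```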
As an indispensable part of identity authentication, offline writer identification plays a notable role in biology, forensics, and historical document analysis. However, identifying handwriting efficiently, stably, and quickly is still challenging due to the methods used to extract and process handwriting features. In this paper, we propose an efficient system to identify writers from handwritten images, which integrates local and global features from similar handwritten images. The local features are modeled by effective aggregate processing, and the global features are extracted through transfer learning. Specifically, the proposed system employs a pre-trained Residual Network to mine the relationship between large image sets and specific handwritten images, while the vector of locally aggregated descriptors with double power normalization is employed to aggregate local and global features. Moreover, handwritten image segmentation, preprocessing, enhancement, optimization of the neural network architecture, and normalization of local and global features are exploited, significantly improving system performance. The proposed system is evaluated on the Computer Vision Lab (CVL) datasets and the International Conference on Document Analysis and Recognition (ICDAR) 2013 datasets. The results show that it exhibits good generalizability and achieves state-of-the-art performance. Furthermore, the system performs better when training complete handwriting patches with the normalization method. The experimental results indicate that it is important to segment handwriting reasonably when dealing with handwriting overlap, which reduces visual burstiness.
Feature extraction is the most critical step in the classification of multispectral images. The classification accuracy is mainly influenced by the feature sets that are selected to classify the image. In the past, handcrafted feature sets were used, which are not adaptive for different image domains. To overcome this, an evolutionary learning method is developed to automatically learn the spatial-spectral features for classification. A modified Firefly Algorithm (FA), which achieves maximum classification accuracy with a reduced feature set size, is proposed for feature selection. For extracting the most efficient features from the data set, we use the 3-D discrete wavelet transform, which decomposes the multispectral image in all three dimensions. For selecting spatial and spectral features, we study three different window-based approaches, namely overlapping window (OW-3DFS), non-overlapping window (NW-3DFS), and adaptive window cube (AW-3DFS), together with a pixel-based technique. A fivefold Multiclass Support Vector Machine (MSVM) is used for classification. Experiments conducted on the Madurai LISS IV multispectral image show that the adaptive window approach increases the classification accuracy.
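A short sketch of the 3-D discrete wavelet decomposition step, assuming the PyWavelets package, is shown below; the cube size, wavelet, and the way a per-pixel feature vector is read off the approximation sub-band are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 16))            # stand-in multispectral cube (rows, cols, bands)

# One-level 3-D discrete wavelet transform over all three dimensions.
coeffs = pywt.wavedecn(cube, wavelet="haar", level=1)
approx = coeffs[0]                              # low-pass approximation sub-band
details = coeffs[1]                             # dict of 7 high-pass sub-bands ('aad', 'ada', ...)

print("approximation sub-band shape:", approx.shape)
print("detail sub-bands:", sorted(details.keys()))

# Example pixel-wise feature vector: approximation values along the spectral axis.
row, col = 10, 20
feature_vector = approx[row // 2, col // 2, :]  # spatial axes are halved by the transform
print("feature vector length for one pixel:", feature_vector.size)
```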
Support vector machine (SVM) is a popular pattern classification method with many application areas. SVM shows outstanding performance in high-dimensional data classification. In the classification process, the SVM kernel parameter setting during the SVM training procedure, along with the feature selection, significantly influences the classification accuracy. This paper proposes two novel intelligent optimization methods, which simultaneously determine the parameter values while discovering a subset of features to increase SVM classification accuracy. The study focuses on two evolutionary computing approaches to optimize the parameters of SVM: particle swarm optimization (PSO) and genetic algorithm (GA). We combine the above two intelligent optimization methods with SVM to choose appropriate subset features and SVM parameters; the resulting models are termed GA-FSSVM (Genetic Algorithm-Feature Selection Support Vector Machines) and PSO-FSSVM (Particle Swarm Optimization-Feature Selection Support Vector Machines). Experimental results demonstrate that the classification accuracy of our proposed methods outperforms the traditional grid search approach and many other approaches. Moreover, the results indicate that PSO-FSSVM can obtain higher classification accuracy than GA-FSSVM for hyperspectral data.
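A compact genetic-algorithm sketch of the joint encoding idea follows: each chromosome carries a feature mask plus log-scaled C and gamma, and cross-validated SVM accuracy is the fitness. The selection and mutation scheme is a simplification, not the GA-FSSVM or PSO-FSSVM procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)
n_feat = X.shape[1]

def decode(chrom):
    """Chromosome = [feature mask bits | log10 C | log10 gamma]."""
    mask = chrom[:n_feat] > 0.5
    C, gamma = 10.0 ** chrom[n_feat], 10.0 ** chrom[n_feat + 1]
    return mask, C, gamma

def fitness(chrom):
    mask, C, gamma = decode(chrom)
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()

pop = np.hstack([rng.random((20, n_feat)),          # feature bits
                 rng.uniform(-1, 3, (20, 1)),       # log10 C
                 rng.uniform(-4, 0, (20, 1))])      # log10 gamma
for _ in range(15):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[-10:]]            # truncation selection
    children = parents[rng.integers(0, 10, 10)].copy()
    children += rng.normal(0, 0.1, children.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
mask, C, gamma = decode(best)
print("selected features:", np.flatnonzero(mask), "C=%.3g gamma=%.3g" % (C, gamma))
```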
The image shape feature can be described by the image Zernike moments. In this paper, we point out the problem that a high-dimensional image Zernike moments shape feature vector can describe more detail of the original image but has too many elements, complicating the subsequent image analysis phases. The low-dimensional image Zernike moments shape feature vector should therefore be improved and optimized to describe more detail of the original image. An optimization algorithm based on evolutionary computation is designed and implemented in this paper to solve this problem. The experimental results demonstrate the feasibility of the optimization algorithm.
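Assuming the mahotas package for the Zernike-moment computation, the sketch below shows how the descriptor dimension grows with the polynomial degree, which is the dimensionality trade-off the evolutionary optimization targets; the shape image and radius are arbitrary choices, not values from the paper.

```python
import numpy as np
import mahotas  # assumed available; provides a Zernike-moment implementation

# Synthetic binary shape image (an ellipse).
im = np.zeros((128, 128), dtype=np.uint8)
yy, xx = np.mgrid[:128, :128]
im[((yy - 64) ** 2 + ((xx - 64) / 1.5) ** 2) < 40 ** 2] = 1

# Higher polynomial degree -> longer (higher-dimensional) shape descriptor.
for degree in (8, 12, 20):
    z = mahotas.features.zernike_moments(im, radius=60, degree=degree)
    print(f"degree {degree:2d}: {z.size} Zernike moments")
```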
Alzheimer’s disease is a non-reversible, non-curable, and progressive neurological disorder that induces the shrinkage and death of a specific neuronal population associated with memory formation and retention. It is a frequently occurring mental illness that accounts for about 60%–80% of cases of dementia. It is usually observed in people in the age group of 60 years and above. Depending upon the severity of symptoms, patients can be categorized as Cognitive Normal (CN), Mild Cognitive Impairment (MCI), and Alzheimer’s Disease (AD). Alzheimer’s disease is the last phase of the disease, where the brain is severely damaged and the patients are not able to live on their own. Radiomics is an approach to extracting a huge number of features from medical images with the help of data characterization algorithms. Here, 105 radiomic features are extracted and used to predict Alzheimer’s disease. This paper uses Support Vector Machine, K-Nearest Neighbour, Gaussian Naïve Bayes, eXtreme Gradient Boosting (XGBoost), and Random Forest to predict Alzheimer’s disease. The proposed random forest-based approach with the radiomic features achieved an accuracy of 85%. This approach also achieved 88% accuracy, 88% recall, 88% precision, and 87% F1-score for AD vs. CN; 72% accuracy, 73% recall, 72% precision, and 71% F1-score for AD vs. MCI; and 69% accuracy, 69% recall, 68% precision, and 69% F1-score for MCI vs. CN. The comparative analysis shows that the proposed approach performs better than other approaches.
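A hedged sketch of the radiomics-plus-random-forest step with the four reported metrics is shown below; the synthetic 105-dimensional feature matrix stands in for real radiomic features extracted from imaging data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, n)                        # 0 = CN, 1 = AD (synthetic labels)
X = rng.normal(y[:, None] * 0.8, 1.0, (n, 105))  # stand-in for 105 radiomic features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F1-score :", f1_score(y_te, pred))
```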
For more accurate fault detection and diagnosis, there is an increasing trend to use a large number of sensors and to collect data at high frequency. This inevitably produces large-scale data and causes difficulties in fault classification. In fact, classification methods become intractable when applied to high-dimensional condition monitoring data. In order to solve the problem, engineers have to resort to complicated feature extraction methods to reduce the dimensionality of the data. However, the features transformed by these methods cannot be understood by engineers due to the loss of the original engineering meaning. In this paper, another form of dimensionality reduction technique (feature selection methods) is employed to identify machinery condition, based only on frequency spectrum data. Feature selection methods are usually divided into three main types: filter, wrapper, and embedded methods. Most studies have mainly focused on the first two types, whilst the development and application of embedded feature selection methods are very limited. This paper attempts to explore a novel embedded method. The method is formed by merging a sequential bidirectional search algorithm into the tuning of the scale parameters within a kernel function in the relevance vector machine. To demonstrate the potential for applying the method to machinery fault diagnosis, the method is applied to rolling bearing experimental data. The results obtained by using the method are consistent with the theoretical interpretation, proving that this algorithm has important engineering significance in revealing the correlation between the faults and the relevant frequency features. The proposed method is a theoretical extension of the relevance vector machine and provides an effective solution for detecting fault-related frequency components with high efficiency.
Depth estimation of subsurface faults is one of the problems in gravity interpretation. We tried using the support vector classifier (SVC) method for this estimation. Using forward and nonlinear inverse techniques, detecting the depth of subsurface faults with the related error is possible, but an initial guess for the depth is necessary, and this initial guess usually comes from non-gravity data. In this paper, we introduce SVC as a tool for estimating the depth of subsurface faults using gravity data. We can treat each subsurface fault depth as a class, with SVC acting as a classification algorithm. To better use the SVC algorithm, we select proper depth estimation features using a feature selection (FS) algorithm. In this research, we produce a training set consisting of synthetic gravity profiles created by subsurface faults at different depths to train the SVC code to estimate the depth of real subsurface faults. We then test our trained SVC code on a testing set consisting of other synthetic gravity profiles created by subsurface faults at different depths. We also tested our trained SVC code using real data.
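The synthetic-training idea can be sketched with a thin semi-infinite slab approximation of a fault edge (an assumed forward model, not necessarily the one used in the paper): each fault depth defines a class, profiles are simulated with noise, and an SVC is trained to recover the depth class. The feature selection step is omitted here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

G = 6.674e-11                                   # gravitational constant (SI units)
x = np.linspace(-2000.0, 2000.0, 64)            # profile coordinates (m)

def fault_profile(depth, rho=400.0, t=100.0):
    """Thin semi-infinite slab approximation of a fault edge at the given depth."""
    return 2.0 * G * rho * t * (np.pi / 2.0 + np.arctan(x / depth))

depth_classes = [250.0, 500.0, 1000.0, 2000.0]  # each depth is one SVC class
rng = np.random.default_rng(0)
profiles, labels = [], []
for k, d in enumerate(depth_classes):
    for _ in range(60):
        g = fault_profile(d)
        g = g / g.max() + rng.normal(0, 0.01, x.size)   # normalise and add noise
        profiles.append(g)
        labels.append(k)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(profiles), np.array(labels),
                                          test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("depth-class accuracy on held-out synthetic profiles:", clf.score(X_te, y_te))
```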
In recent times, images and videos have emerged as some of the most important information sources depicting real-time scenarios. Digital images nowadays serve as input for many applications, replacing manual methods because of their ability to represent a 3D scene in a 2D plane. The capabilities of digital images, together with machine learning methodologies, are showing promising accuracy in many applications of prediction and pattern recognition. One of these application fields is the detection of diseases occurring in plants, which destroy widespread fields. Traditionally, the disease detection process was done by a domain expert using manual examination and laboratory tests. This is a tedious and time-consuming process and does not achieve sufficient accuracy. This creates room for research into developing automation-based methods in which images captured through sensors and cameras are used to detect disease and control its spreading. The digital images captured from the fields form the dataset that trains the machine learning models to predict the nature of the disease. The accuracy of these models is greatly affected by the amount of noise and ailments present in the input images, the segmentation methodology, the feature vector development, and the choice of machine learning algorithm. To ensure high performance of the designed system, research is moving toward fine-tuning each stage separately while considering its dependencies on subsequent stages. Therefore, the most optimal solution can be obtained by applying image processing methodologies to improve image quality and then applying statistical methods for feature extraction and selection. The training vector thus developed is capable of representing the relationship between the feature values and the target class. In this article, a highly accurate system model for detecting diseases occurring in citrus fruits using a hybrid feature development approach is proposed. The overall improvement in terms of accuracy is measured and reported.