Radiomics is a non-invasive method for extracting quantitative and higher-dimensional features from medical images for diagnosis.It has received great attention due to its huge application prospects in recent years.We...Radiomics is a non-invasive method for extracting quantitative and higher-dimensional features from medical images for diagnosis.It has received great attention due to its huge application prospects in recent years.We can know that the number of features selected by the existing radiomics feature selectionmethods is basically about ten.In this paper,a heuristic feature selection method based on frequency iteration and multiple supervised training mode is proposed.Based on the combination between features,it decomposes all features layer by layer to select the optimal features for each layer,then fuses the optimal features to form a local optimal group layer by layer and iterates to the global optimal combination finally.Compared with the currentmethod with the best prediction performance in the three data sets,thismethod proposed in this paper can reduce the number of features fromabout ten to about three without losing classification accuracy and even significantly improving classification accuracy.The proposed method has better interpretability and generalization ability,which gives it great potential in the feature selection of radiomics.展开更多
Feature Selection(FS)is a key pre-processing step in pattern recognition and data mining tasks,which can effectively avoid the impact of irrelevant and redundant features on the performance of classification models.In...Feature Selection(FS)is a key pre-processing step in pattern recognition and data mining tasks,which can effectively avoid the impact of irrelevant and redundant features on the performance of classification models.In recent years,meta-heuristic algorithms have been widely used in FS problems,so a Hybrid Binary Chaotic Salp Swarm Dung Beetle Optimization(HBCSSDBO)algorithm is proposed in this paper to improve the effect of FS.In this hybrid algorithm,the original continuous optimization algorithm is converted into binary form by the S-type transfer function and applied to the FS problem.By combining the K nearest neighbor(KNN)classifier,the comparative experiments for FS are carried out between the proposed method and four advanced meta-heuristic algorithms on 16 UCI(University of California,Irvine)datasets.Seven evaluation metrics such as average adaptation,average prediction accuracy,and average running time are chosen to judge and compare the algorithms.The selected dataset is also discussed by categorizing it into three dimensions:high,medium,and low dimensions.Experimental results show that the HBCSSDBO feature selection method has the ability to obtain a good subset of features while maintaining high classification accuracy,shows better optimization performance.In addition,the results of statistical tests confirm the significant validity of the method.展开更多
In classification problems,datasets often contain a large amount of features,but not all of them are relevant for accurate classification.In fact,irrelevant features may even hinder classification accuracy.Feature sel...In classification problems,datasets often contain a large amount of features,but not all of them are relevant for accurate classification.In fact,irrelevant features may even hinder classification accuracy.Feature selection aims to alleviate this issue by minimizing the number of features in the subset while simultaneously minimizing the classification error rate.Single-objective optimization approaches employ an evaluation function designed as an aggregate function with a parameter,but the results obtained depend on the value of the parameter.To eliminate this parameter’s influence,the problem can be reformulated as a multi-objective optimization problem.The Whale Optimization Algorithm(WOA)is widely used in optimization problems because of its simplicity and easy implementation.In this paper,we propose a multi-strategy assisted multi-objective WOA(MSMOWOA)to address feature selection.To enhance the algorithm’s search ability,we integrate multiple strategies such as Levy flight,Grey Wolf Optimizer,and adaptive mutation into it.Additionally,we utilize an external repository to store non-dominant solution sets and grid technology is used to maintain diversity.Results on fourteen University of California Irvine(UCI)datasets demonstrate that our proposed method effectively removes redundant features and improves classification performance.The source code can be accessed from the website:https://github.com/zc0315/MSMOWOA.展开更多
Deception detection plays a crucial role in criminal investigation.Videos contain a wealth of information regarding apparent and physiological changes in individuals,and thus can serve as an effective means of decepti...Deception detection plays a crucial role in criminal investigation.Videos contain a wealth of information regarding apparent and physiological changes in individuals,and thus can serve as an effective means of deception detection.In this paper,we investigate video-based deception detection considering both apparent visual features such as eye gaze,head pose and facial action unit(AU),and non-contact heart rate detected by remote photoplethysmography(rPPG)technique.Multiple wrapper-based feature selection methods combined with the K-nearest neighbor(KNN)and support vector machine(SVM)classifiers are employed to screen the most effective features for deception detection.We evaluate the performance of the proposed method on both a self-collected physiological-assisted visual deception detection(PV3D)dataset and a public bag-oflies(BOL)dataset.Experimental results demonstrate that the SVM classifier with symbiotic organisms search(SOS)feature selection yields the best overall performance,with an area under the curve(AUC)of 83.27%and accuracy(ACC)of 83.33%for PV3D,and an AUC of 71.18%and ACC of 70.33%for BOL.This demonstrates the stability and effectiveness of the proposed method in video-based deception detection tasks.展开更多
Speech emotion recognition(SER)uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions.The number of features acquired with acoustic analysis is ext...Speech emotion recognition(SER)uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions.The number of features acquired with acoustic analysis is extremely high,so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system.The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy.First,we use the information gain and Fisher Score to sort the features extracted from signals.Then,we employ a multi-objective ranking method to evaluate these features and assign different importance to them.Features with high rankings have a large probability of being selected.Finally,we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection,which can improve the diversity of solutions and avoid falling into local traps.Using random forest and K-nearest neighbor classifiers,four English speech emotion datasets are employed to test the proposed algorithm(MBEO)as well as other multi-objective emotion identification techniques.The results illustrate that it performs well in inverted generational distance,hypervolume,Pareto solutions,and execution time,and MBEO is appropriate for high-dimensional English SER.展开更多
With the advancement of wireless network technology,vast amounts of traffic have been generated,and malicious traffic attacks that threaten the network environment are becoming increasingly sophisticated.While signatu...With the advancement of wireless network technology,vast amounts of traffic have been generated,and malicious traffic attacks that threaten the network environment are becoming increasingly sophisticated.While signature-based detection methods,static analysis,and dynamic analysis techniques have been previously explored for malicious traffic detection,they have limitations in identifying diversified malware traffic patterns.Recent research has been focused on the application of machine learning to detect these patterns.However,applying machine learning to lightweight devices like IoT devices is challenging because of the high computational demands and complexity involved in the learning process.In this study,we examined methods for effectively utilizing machine learning-based malicious traffic detection approaches for lightweight devices.We introduced the suboptimal feature selection model(SFSM),a feature selection technique designed to reduce complexity while maintaining the effectiveness of malicious traffic detection.Detection performance was evaluated on various malicious traffic,benign,exploits,and generic,using the UNSW-NB15 dataset and SFSM sub-optimized hyperparameters for feature selection and narrowed the search scope to encompass all features.SFSM improved learning performance while minimizing complexity by considering feature selection and exhaustive search as two steps,a problem not considered in conventional models.Our experimental results showed that the detection accuracy was improved by approximately 20%compared to the random model,and the reduction in accuracy compared to the greedy model,which performs an exhaustive search on all features,was kept within 6%.Additionally,latency and complexity were reduced by approximately 96%and 99.78%,respectively,compared to the greedy model.This study demonstrates that malicious traffic can be effectively detected even in lightweight device environments.SFSM verified the possibility of detecting various attack traffic on lightweight devices.展开更多
The diversity of data sources resulted in seeking effective manipulation and dissemination.The challenge that arises from the increasing dimensionality has a negative effect on the computation performance,efficiency,a...The diversity of data sources resulted in seeking effective manipulation and dissemination.The challenge that arises from the increasing dimensionality has a negative effect on the computation performance,efficiency,and stability of computing.One of the most successful optimization algorithms is Particle Swarm Optimization(PSO)which has proved its effectiveness in exploring the highest influencing features in the search space based on its fast convergence and the ability to utilize a small set of parameters in the search task.This research proposes an effective enhancement of PSO that tackles the challenge of randomness search which directly enhances PSO performance.On the other hand,this research proposes a generic intelligent framework for early prediction of orders delay and eliminate orders backlogs which could be considered as an efficient potential solution for raising the supply chain performance.The proposed adapted algorithm has been applied to a supply chain dataset which minimized the features set from twenty-one features to ten significant features.To confirm the proposed algorithm results,the updated data has been examined by eight of the well-known classification algorithms which reached a minimum accuracy percentage equal to 94.3%for random forest and a maximum of 99.0 for Naïve Bayes.Moreover,the proposed algorithm adaptation has been compared with other proposed adaptations of PSO from the literature over different datasets.The proposed PSO adaptation reached a higher accuracy compared with the literature ranging from 97.8 to 99.36 which also proved the advancement of the current research.展开更多
Machine Learning(ML)algorithms play a pivotal role in Speech Emotion Recognition(SER),although they encounter a formidable obstacle in accurately discerning a speaker’s emotional state.The examination of the emotiona...Machine Learning(ML)algorithms play a pivotal role in Speech Emotion Recognition(SER),although they encounter a formidable obstacle in accurately discerning a speaker’s emotional state.The examination of the emotional states of speakers holds significant importance in a range of real-time applications,including but not limited to virtual reality,human-robot interaction,emergency centers,and human behavior assessment.Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs.Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients(MFCCs)due to their ability to capture the periodic nature of audio signals effectively.Although these traits may improve their ability to perceive and interpret emotional depictions appropriately,MFCCS has some limitations.So this study aims to tackle the aforementioned issue by systematically picking multiple audio cues,enhancing the classifier model’s efficacy in accurately discerning human emotions.The utilized dataset is taken from the EMO-DB database,preprocessing input speech is done using a 2D Convolution Neural Network(CNN)involves applying convolutional operations to spectrograms as they afford a visual representation of the way the audio signal frequency content changes over time.The next step is the spectrogram data normalization which is crucial for Neural Network(NN)training as it aids in faster convergence.Then the five auditory features MFCCs,Chroma,Mel-Spectrogram,Contrast,and Tonnetz are extracted from the spectrogram sequentially.The attitude of feature selection is to retain only dominant features by excluding the irrelevant ones.In this paper,the Sequential Forward Selection(SFS)and Sequential Backward Selection(SBS)techniques were employed for multiple audio cues features selection.Finally,the feature sets composed from the hybrid feature extraction methods are fed into the deep Bidirectional Long Short Term Memory(Bi-LSTM)network to discern emotions.Since the deep Bi-LSTM can hierarchically learn complex features and increases model capacity by achieving more robust temporal modeling,it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content existent in speech signals.The effectiveness and resilience of the proposed SER model were evaluated by experiments,comparing it to state-of-the-art SER techniques.The results indicated that the model achieved accuracy rates of 90.92%,93%,and 92%over the Ryerson Audio-Visual Database of Emotional Speech and Song(RAVDESS),Berlin Database of Emotional Speech(EMO-DB),and The Interactive Emotional Dyadic Motion Capture(IEMOCAP)datasets,respectively.These findings signify a prominent enhancement in the ability to emotional depictions identification in speech,showcasing the potential of the proposed model in advancing the SER field.展开更多
The selection of important factors in machine learning-based susceptibility assessments is crucial to obtain reliable susceptibility results.In this study,metaheuristic optimization and feature selection techniques we...The selection of important factors in machine learning-based susceptibility assessments is crucial to obtain reliable susceptibility results.In this study,metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province,China,by using machine learning algorithms.In total,133 historical debris flow records and 16 related factors were selected.The support vector machine(SVM)was first used as the base classifier,and then a hybrid model was introduced by a two-step process.First,the particle swarm optimization(PSO)algorithm was employed to select the SVM model hyperparameters.Second,two feature selection algorithms,namely principal component analysis(PCA)and PSO,were integrated into the PSO-based SVM model,which generated the PCA-PSO-SVM and FS-PSO-SVM models,respectively.Three statistical metrics(accuracy,recall,and specificity)and the area under the receiver operating characteristic curve(AUC)were employed to evaluate and validate the performance of the models.The results indicated that the feature selection-based models exhibited the best performance,followed by the PSO-based SVM and SVM models.Moreover,the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model,showing the highest AUC,accuracy,recall,and specificity values in both the training and testing processes.It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results.Moreover,the PSO algorithm was found to be not only an effective tool for hyperparameter optimization,but also a useful feature selection algorithm to improve prediction accuracies of debris flow susceptibility by using machine learning algorithms.The high and very high debris flow susceptibility zone appropriately covers 38.01%of the study area,where debris flow may occur under intensive human activities and heavy rainfall events.展开更多
Medical Internet of Things(IoT)devices are becoming more and more common in healthcare.This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of mu...Medical Internet of Things(IoT)devices are becoming more and more common in healthcare.This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way.Existing methods,while useful,have limitations in predictive accuracy,delay,personalization,and user interpretability,requiring a more comprehensive and efficient approach to harness modern medical IoT devices.MAIPFE is a multimodal approach integrating pre-emptive analysis,personalized feature selection,and explainable AI for real-time health monitoring and disease detection.By using AI for early disease detection,personalized health recommendations,and transparency,healthcare will be transformed.The Multimodal Approach Integrating Pre-emptive Analysis,Personalized Feature Selection,and Explainable AI(MAIPFE)framework,which combines Firefly Optimizer,Recurrent Neural Network(RNN),Fuzzy C Means(FCM),and Explainable AI,improves disease detection precision over existing methods.Comprehensive metrics show the model’s superiority in real-time health analysis.The proposed framework outperformed existing models by 8.3%in disease detection classification precision,8.5%in accuracy,5.5%in recall,2.9%in specificity,4.5%in AUC(Area Under the Curve),and 4.9%in delay reduction.Disease prediction precision increased by 4.5%,accuracy by 3.9%,recall by 2.5%,specificity by 3.5%,AUC by 1.9%,and delay levels decreased by 9.4%.MAIPFE can revolutionize healthcare with preemptive analysis,personalized health insights,and actionable recommendations.The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.展开更多
A dandelion algorithm(DA) is a recently developed intelligent optimization algorithm for function optimization problems. Many of its parameters need to be set by experience in DA,which might not be appropriate for all...A dandelion algorithm(DA) is a recently developed intelligent optimization algorithm for function optimization problems. Many of its parameters need to be set by experience in DA,which might not be appropriate for all optimization problems. A self-adapting and efficient dandelion algorithm is proposed in this work to lower the number of DA's parameters and simplify DA's structure. Only the normal sowing operator is retained;while the other operators are discarded. An adaptive seeding radius strategy is designed for the core dandelion. The results show that the proposed algorithm achieves better performance on the standard test functions with less time consumption than its competitive peers. In addition, the proposed algorithm is applied to feature selection for credit card fraud detection(CCFD), and the results indicate that it can obtain higher classification and detection performance than the-state-of-the-art methods.展开更多
High-dimensional datasets present significant challenges for classification tasks.Dimensionality reduction,a crucial aspect of data preprocessing,has gained substantial attention due to its ability to improve classifi...High-dimensional datasets present significant challenges for classification tasks.Dimensionality reduction,a crucial aspect of data preprocessing,has gained substantial attention due to its ability to improve classification per-formance.However,identifying the optimal features within high-dimensional datasets remains a computationally demanding task,necessitating the use of efficient algorithms.This paper introduces the Arithmetic Optimization Algorithm(AOA),a novel approach for finding the optimal feature subset.AOA is specifically modified to address feature selection problems based on a transfer function.Additionally,two enhancements are incorporated into the AOA algorithm to overcome limitations such as limited precision,slow convergence,and susceptibility to local optima.The first enhancement proposes a new method for selecting solutions to be improved during the search process.This method effectively improves the original algorithm’s accuracy and convergence speed.The second enhancement introduces a local search with neighborhood strategies(AOA_NBH)during the AOA exploitation phase.AOA_NBH explores the vast search space,aiding the algorithm in escaping local optima.Our results demonstrate that incorporating neighborhood methods enhances the output and achieves significant improvement over state-of-the-art methods.展开更多
In agriculture sector, machine learning has been widely used by researchers for crop yield prediction. However, it is quite difficult to identify the most critical features from a dataset. Feature selection techniques...In agriculture sector, machine learning has been widely used by researchers for crop yield prediction. However, it is quite difficult to identify the most critical features from a dataset. Feature selection techniques allow us to remove the extraneous and noisy features from the original feature set. The feature selection techniques help the model to focus only on the important features of the data, thus reducing execution time and improving efficiency of the model. The aim of this study is to determine relevant subset features for achieving high predictive performance by using different feature selection techniques like Filter methods, Wrapper methods and embedded methods. In this work, different feature selection techniques like Rank-based feature selection technique, weighted feature selection technique and Hybrid Feature Selection Technique have been applied to the agricultural data. The optimal feature set returned by different feature selection techniques is used for yield prediction using Linear regression, Random Forest, and Decision Tree Regressor. The accuracy of prediction obtained using the above three methods has been analyzed by using different evaluation parameters. This study helps in increasing predictive accuracy with the minimum number of features.展开更多
In stock market forecasting,the identification of critical features that affect the performance of machine learning(ML)models is crucial to achieve accurate stock price predictions.Several review papers in the literat...In stock market forecasting,the identification of critical features that affect the performance of machine learning(ML)models is crucial to achieve accurate stock price predictions.Several review papers in the literature have focused on various ML,statistical,and deep learning-based methods used in stock market forecasting.However,no survey study has explored feature selection and extraction techniques for stock market forecasting.This survey presents a detailed analysis of 32 research works that use a combination of feature study and ML approaches in various stock market applications.We conduct a systematic search for articles in the Scopus and Web of Science databases for the years 2011–2022.We review a variety of feature selection and feature extraction approaches that have been successfully applied in the stock market analyses presented in the articles.We also describe the combination of feature analysis techniques and ML methods and evaluate their performance.Moreover,we present other survey articles,stock market input and output data,and analyses based on various factors.We find that correlation criteria,random forest,principal component analysis,and autoencoder are the most widely used feature selection and extraction techniques with the best prediction accuracy for various stock market applications.展开更多
Selecting the most relevant subset of features from a dataset is a vital step in data mining and machine learning.Each feature in a dataset has 2n possible subsets,making it challenging to select the optimum collectio...Selecting the most relevant subset of features from a dataset is a vital step in data mining and machine learning.Each feature in a dataset has 2n possible subsets,making it challenging to select the optimum collection of features using typical methods.As a result,a new metaheuristicsbased feature selection method based on the dipper-throated and grey-wolf optimization(DTO-GW)algorithms has been developed in this research.Instability can result when the selection of features is subject to metaheuristics,which can lead to a wide range of results.Thus,we adopted hybrid optimization in our method of optimizing,which allowed us to better balance exploration and harvesting chores more equitably.We propose utilizing the binary DTO-GW search approach we previously devised for selecting the optimal subset of attributes.In the proposed method,the number of features selected is minimized,while classification accuracy is increased.To test the proposed method’s performance against eleven other state-of-theart approaches,eight datasets from the UCI repository were used,such as binary grey wolf search(bGWO),binary hybrid grey wolf,and particle swarm optimization(bGWO-PSO),bPSO,binary stochastic fractal search(bSFS),binary whale optimization algorithm(bWOA),binary modified grey wolf optimization(bMGWO),binary multiverse optimization(bMVO),binary bowerbird optimization(bSBO),binary hysteresis optimization(bHy),and binary hysteresis optimization(bHWO).The suggested method is superior 4532 CMC,2023,vol.74,no.2 and successful in handling the problem of feature selection,according to the results of the experiments.展开更多
This study focuses on meeting the challenges of big data visualization by using of data reduction methods based the feature selection methods.To reduce the volume of big data and minimize model training time(Tt)while ...This study focuses on meeting the challenges of big data visualization by using of data reduction methods based the feature selection methods.To reduce the volume of big data and minimize model training time(Tt)while maintaining data quality.We contributed to meeting the challenges of big data visualization using the embedded method based“Select from model(SFM)”method by using“Random forest Importance algorithm(RFI)”and comparing it with the filter method by using“Select percentile(SP)”method based chi square“Chi2”tool for selecting the most important features,which are then fed into a classification process using the logistic regression(LR)algorithm and the k-nearest neighbor(KNN)algorithm.Thus,the classification accuracy(AC)performance of LRis also compared to theKNN approach in python on eight data sets to see which method produces the best rating when feature selection methods are applied.Consequently,the study concluded that the feature selection methods have a significant impact on the analysis and visualization of the data after removing the repetitive data and the data that do not affect the goal.After making several comparisons,the study suggests(SFMLR)using SFM based on RFI algorithm for feature selection,with LR algorithm for data classify.The proposal proved its efficacy by comparing its results with recent literature.展开更多
Machine learning(ML)practices such as classification have played a very important role in classifying diseases in medical science.Since medical science is a sensitive field,the pre-processing of medical data requires ...Machine learning(ML)practices such as classification have played a very important role in classifying diseases in medical science.Since medical science is a sensitive field,the pre-processing of medical data requires careful handling to make quality clinical decisions.Generally,medical data is considered high-dimensional and complex data that contains many irrelevant and redundant features.These factors indirectly upset the disease prediction and classification accuracy of any ML model.To address this issue,various data pre-processing methods called Feature Selection(FS)techniques have been presented in the literature.However,the majority of such techniques frequently suffer from local minima issues due to large solution space.Thus,this study has proposed a novel wrapper-based Sand Cat SwarmOptimization(SCSO)technique as an FS approach to find optimum features from ten benchmark medical datasets.The SCSO algorithm replicates the hunting and searching strategies of the sand cat while having the advantage of avoiding local optima and finding the ideal solution with minimal control variables.Moreover,K-Nearest Neighbor(KNN)classifier was used to evaluate the effectiveness of the features identified by the proposed SCSO algorithm.The performance of the proposed SCSO algorithm was compared with six state-of-the-art and recent wrapper-based optimization algorithms using the validation metrics of classification accuracy,optimum feature size,and computational cost in seconds.The simulation results on the benchmark medical datasets revealed that the proposed SCSO-KNN approach has outperformed comparative algorithms with an average classification accuracy of 93.96%by selecting 14.2 features within 1.91 s.Additionally,the Wilcoxon rank test was used to perform the significance analysis between the proposed SCSOKNN method and six other algorithms for a p-value less than 5.00E-02.The findings revealed that the proposed algorithm produces better outcomes with an average p-value of 1.82E-02.Moreover,potential future directions are also suggested as a result of the study’s promising findings.展开更多
Graphs help to define the relationships between entities in the data.These relationships,represented by edges,often provide additional context information which can be utilised to discover patterns in the data.Graph N...Graphs help to define the relationships between entities in the data.These relationships,represented by edges,often provide additional context information which can be utilised to discover patterns in the data.Graph Neural Networks(GNNs)employ the inductive bias of the graph structure to learn and predict on various tasks.The primary operation of graph neural networks is the feature aggregation step performed over neighbours of the node based on the structure of the graph.In addition to its own features,for each hop,the node gets additional combined features from its neighbours.These aggregated features help define the similarity or dissimilarity of the nodes with respect to the labels and are useful for tasks like node classification.However,in real-world data,features of neighbours at different hops may not correlate with the node's features.Thus,any indiscriminate feature aggregation by GNN might cause the addition of noisy features leading to degradation in model's performance.In this work,we show that selective aggregation of node features from various hops leads to better performance than default aggregation on the node classification task.Furthermore,we propose a Dual-Net GNN architecture with a classifier model and a selector model.The classifier model trains over a subset of input node features to predict node labels while the selector model learns to provide optimal input subset to the classifier for the best performance.These two models are trained jointly to learn the best subset of features that give higher accuracy in node label predictions.With extensive experiments,we show that our proposed model outperforms both feature selection methods and state-of-the-art GNN models with remarkable improvements up to 27.8%.展开更多
Federated learning has been used extensively in business inno-vation scenarios in various industries.This research adopts the federated learning approach for the first time to address the issue of bank-enterprise info...Federated learning has been used extensively in business inno-vation scenarios in various industries.This research adopts the federated learning approach for the first time to address the issue of bank-enterprise information asymmetry in the credit assessment scenario.First,this research designs a credit risk assessment model based on federated learning and feature selection for micro and small enterprises(MSEs)using multi-dimensional enterprise data and multi-perspective enterprise information.The proposed model includes four main processes:namely encrypted entity alignment,hybrid feature selection,secure multi-party computation,and global model updating.Secondly,a two-step feature selection algorithm based on wrapper and filter is designed to construct the optimal feature set in multi-source heterogeneous data,which can provide excellent accuracy and interpretability.In addition,a local update screening strategy is proposed to select trustworthy model parameters for aggregation each time to ensure the quality of the global model.The results of the study show that the model error rate is reduced by 6.22%and the recall rate is improved by 11.03%compared to the algorithms commonly used in credit risk research,significantly improving the ability to identify defaulters.Finally,the business operations of commercial banks are used to confirm the potential of the proposed model for real-world implementation.展开更多
In real life,a large amount of data describing the same learning task may be stored in different institutions(called participants),and these data cannot be shared among par-ticipants due to privacy protection.The case...In real life,a large amount of data describing the same learning task may be stored in different institutions(called participants),and these data cannot be shared among par-ticipants due to privacy protection.The case that different attributes/features of the same instance are stored in different institutions is called vertically distributed data.The pur-pose of vertical‐federated feature selection(FS)is to reduce the feature dimension of vertical distributed data jointly without sharing local original data so that the feature subset obtained has the same or better performance as the original feature set.To solve this problem,in the paper,an embedded vertical‐federated FS algorithm based on particle swarm optimisation(PSO‐EVFFS)is proposed by incorporating evolutionary FS into the SecureBoost framework for the first time.By optimising both hyper‐parameters of the XGBoost model and feature subsets,PSO‐EVFFS can obtain a feature subset,which makes the XGBoost model more accurate.At the same time,since different participants only share insensitive parameters such as model loss function,PSO‐EVFFS can effec-tively ensure the privacy of participants'data.Moreover,an ensemble ranking strategy of feature importance based on the XGBoost tree model is developed to effectively remove irrelevant features on each participant.Finally,the proposed algorithm is applied to 10 test datasets and compared with three typical vertical‐federated learning frameworks and two variants of the proposed algorithm with different initialisation strategies.Experi-mental results show that the proposed algorithm can significantly improve the classifi-cation performance of selected feature subsets while fully protecting the data privacy of all participants.展开更多
基金Major Project for New Generation of AI Grant No.2018AAA0100400)the Scientific Research Fund of Hunan Provincial Education Department,China(Grant Nos.21A0350,21C0439,22A0408,22A0414,2022JJ30231,22B0559)the National Natural Science Foundation of Hunan Province,China(Grant No.2022JJ50051).
文摘Radiomics is a non-invasive method for extracting quantitative and higher-dimensional features from medical images for diagnosis.It has received great attention due to its huge application prospects in recent years.We can know that the number of features selected by the existing radiomics feature selectionmethods is basically about ten.In this paper,a heuristic feature selection method based on frequency iteration and multiple supervised training mode is proposed.Based on the combination between features,it decomposes all features layer by layer to select the optimal features for each layer,then fuses the optimal features to form a local optimal group layer by layer and iterates to the global optimal combination finally.Compared with the currentmethod with the best prediction performance in the three data sets,thismethod proposed in this paper can reduce the number of features fromabout ten to about three without losing classification accuracy and even significantly improving classification accuracy.The proposed method has better interpretability and generalization ability,which gives it great potential in the feature selection of radiomics.
基金This research was funded by the Short-Term Electrical Load Forecasting Based on Feature Selection and optimized LSTM with DBO which is the Fundamental Scientific Research Project of Liaoning Provincial Department of Education(JYTMS20230189)the Application of Hybrid Grey Wolf Algorithm in Job Shop Scheduling Problem of the Research Support Plan for Introducing High-Level Talents to Shenyang Ligong University(No.1010147001131).
文摘Feature Selection(FS)is a key pre-processing step in pattern recognition and data mining tasks,which can effectively avoid the impact of irrelevant and redundant features on the performance of classification models.In recent years,meta-heuristic algorithms have been widely used in FS problems,so a Hybrid Binary Chaotic Salp Swarm Dung Beetle Optimization(HBCSSDBO)algorithm is proposed in this paper to improve the effect of FS.In this hybrid algorithm,the original continuous optimization algorithm is converted into binary form by the S-type transfer function and applied to the FS problem.By combining the K nearest neighbor(KNN)classifier,the comparative experiments for FS are carried out between the proposed method and four advanced meta-heuristic algorithms on 16 UCI(University of California,Irvine)datasets.Seven evaluation metrics such as average adaptation,average prediction accuracy,and average running time are chosen to judge and compare the algorithms.The selected dataset is also discussed by categorizing it into three dimensions:high,medium,and low dimensions.Experimental results show that the HBCSSDBO feature selection method has the ability to obtain a good subset of features while maintaining high classification accuracy,shows better optimization performance.In addition,the results of statistical tests confirm the significant validity of the method.
基金supported in part by the Natural Science Youth Foundation of Hebei Province under Grant F2019403207in part by the PhD Research Startup Foundation of Hebei GEO University under Grant BQ2019055+3 种基金in part by the Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing under Grant KLIGIP-2021A06in part by the Fundamental Research Funds for the Universities in Hebei Province under Grant QN202220in part by the Science and Technology Research Project for Universities of Hebei under Grant ZD2020344in part by the Guangxi Natural Science Fund General Project under Grant 2021GXNSFAA075029.
文摘In classification problems,datasets often contain a large amount of features,but not all of them are relevant for accurate classification.In fact,irrelevant features may even hinder classification accuracy.Feature selection aims to alleviate this issue by minimizing the number of features in the subset while simultaneously minimizing the classification error rate.Single-objective optimization approaches employ an evaluation function designed as an aggregate function with a parameter,but the results obtained depend on the value of the parameter.To eliminate this parameter’s influence,the problem can be reformulated as a multi-objective optimization problem.The Whale Optimization Algorithm(WOA)is widely used in optimization problems because of its simplicity and easy implementation.In this paper,we propose a multi-strategy assisted multi-objective WOA(MSMOWOA)to address feature selection.To enhance the algorithm’s search ability,we integrate multiple strategies such as Levy flight,Grey Wolf Optimizer,and adaptive mutation into it.Additionally,we utilize an external repository to store non-dominant solution sets and grid technology is used to maintain diversity.Results on fourteen University of California Irvine(UCI)datasets demonstrate that our proposed method effectively removes redundant features and improves classification performance.The source code can be accessed from the website:https://github.com/zc0315/MSMOWOA.
基金National Natural Science Foundation of China(No.62271186)Anhui Key Project of Research and Development Plan(No.202104d07020005)。
文摘Deception detection plays a crucial role in criminal investigation.Videos contain a wealth of information regarding apparent and physiological changes in individuals,and thus can serve as an effective means of deception detection.In this paper,we investigate video-based deception detection considering both apparent visual features such as eye gaze,head pose and facial action unit(AU),and non-contact heart rate detected by remote photoplethysmography(rPPG)technique.Multiple wrapper-based feature selection methods combined with the K-nearest neighbor(KNN)and support vector machine(SVM)classifiers are employed to screen the most effective features for deception detection.We evaluate the performance of the proposed method on both a self-collected physiological-assisted visual deception detection(PV3D)dataset and a public bag-oflies(BOL)dataset.Experimental results demonstrate that the SVM classifier with symbiotic organisms search(SOS)feature selection yields the best overall performance,with an area under the curve(AUC)of 83.27%and accuracy(ACC)of 83.33%for PV3D,and an AUC of 71.18%and ACC of 70.33%for BOL.This demonstrates the stability and effectiveness of the proposed method in video-based deception detection tasks.
文摘Speech emotion recognition(SER)uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions.The number of features acquired with acoustic analysis is extremely high,so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system.The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy.First,we use the information gain and Fisher Score to sort the features extracted from signals.Then,we employ a multi-objective ranking method to evaluate these features and assign different importance to them.Features with high rankings have a large probability of being selected.Finally,we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection,which can improve the diversity of solutions and avoid falling into local traps.Using random forest and K-nearest neighbor classifiers,four English speech emotion datasets are employed to test the proposed algorithm(MBEO)as well as other multi-objective emotion identification techniques.The results illustrate that it performs well in inverted generational distance,hypervolume,Pareto solutions,and execution time,and MBEO is appropriate for high-dimensional English SER.
基金supported by the Korea Institute for Advancement of Technology(KIAT)Grant funded by the Korean Government(MOTIE)(P0008703,The Competency Development Program for Industry Specialists)MSIT under the ICAN(ICT Challenge and Advanced Network of HRD)Program(No.IITP-2022-RS-2022-00156310)supervised by the Institute of Information&Communication Technology Planning and Evaluation(IITP).
文摘With the advancement of wireless network technology,vast amounts of traffic have been generated,and malicious traffic attacks that threaten the network environment are becoming increasingly sophisticated.While signature-based detection methods,static analysis,and dynamic analysis techniques have been previously explored for malicious traffic detection,they have limitations in identifying diversified malware traffic patterns.Recent research has been focused on the application of machine learning to detect these patterns.However,applying machine learning to lightweight devices like IoT devices is challenging because of the high computational demands and complexity involved in the learning process.In this study,we examined methods for effectively utilizing machine learning-based malicious traffic detection approaches for lightweight devices.We introduced the suboptimal feature selection model(SFSM),a feature selection technique designed to reduce complexity while maintaining the effectiveness of malicious traffic detection.Detection performance was evaluated on various malicious traffic,benign,exploits,and generic,using the UNSW-NB15 dataset and SFSM sub-optimized hyperparameters for feature selection and narrowed the search scope to encompass all features.SFSM improved learning performance while minimizing complexity by considering feature selection and exhaustive search as two steps,a problem not considered in conventional models.Our experimental results showed that the detection accuracy was improved by approximately 20%compared to the random model,and the reduction in accuracy compared to the greedy model,which performs an exhaustive search on all features,was kept within 6%.Additionally,latency and complexity were reduced by approximately 96%and 99.78%,respectively,compared to the greedy model.This study demonstrates that malicious traffic can be effectively detected even in lightweight device environments.SFSM verified the possibility of detecting various attack traffic on lightweight devices.
基金funded by the University of Jeddah,Jeddah,Saudi Arabia,under Grant No.(UJ-23-DR-26)。
文摘The diversity of data sources resulted in seeking effective manipulation and dissemination.The challenge that arises from the increasing dimensionality has a negative effect on the computation performance,efficiency,and stability of computing.One of the most successful optimization algorithms is Particle Swarm Optimization(PSO)which has proved its effectiveness in exploring the highest influencing features in the search space based on its fast convergence and the ability to utilize a small set of parameters in the search task.This research proposes an effective enhancement of PSO that tackles the challenge of randomness search which directly enhances PSO performance.On the other hand,this research proposes a generic intelligent framework for early prediction of orders delay and eliminate orders backlogs which could be considered as an efficient potential solution for raising the supply chain performance.The proposed adapted algorithm has been applied to a supply chain dataset which minimized the features set from twenty-one features to ten significant features.To confirm the proposed algorithm results,the updated data has been examined by eight of the well-known classification algorithms which reached a minimum accuracy percentage equal to 94.3%for random forest and a maximum of 99.0 for Naïve Bayes.Moreover,the proposed algorithm adaptation has been compared with other proposed adaptations of PSO from the literature over different datasets.The proposed PSO adaptation reached a higher accuracy compared with the literature ranging from 97.8 to 99.36 which also proved the advancement of the current research.
文摘Machine Learning(ML)algorithms play a pivotal role in Speech Emotion Recognition(SER),although they encounter a formidable obstacle in accurately discerning a speaker’s emotional state.The examination of the emotional states of speakers holds significant importance in a range of real-time applications,including but not limited to virtual reality,human-robot interaction,emergency centers,and human behavior assessment.Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs.Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients(MFCCs)due to their ability to capture the periodic nature of audio signals effectively.Although these traits may improve their ability to perceive and interpret emotional depictions appropriately,MFCCS has some limitations.So this study aims to tackle the aforementioned issue by systematically picking multiple audio cues,enhancing the classifier model’s efficacy in accurately discerning human emotions.The utilized dataset is taken from the EMO-DB database,preprocessing input speech is done using a 2D Convolution Neural Network(CNN)involves applying convolutional operations to spectrograms as they afford a visual representation of the way the audio signal frequency content changes over time.The next step is the spectrogram data normalization which is crucial for Neural Network(NN)training as it aids in faster convergence.Then the five auditory features MFCCs,Chroma,Mel-Spectrogram,Contrast,and Tonnetz are extracted from the spectrogram sequentially.The attitude of feature selection is to retain only dominant features by excluding the irrelevant ones.In this paper,the Sequential Forward Selection(SFS)and Sequential Backward Selection(SBS)techniques were employed for multiple audio cues features selection.Finally,the feature sets composed from the hybrid feature extraction methods are fed into the deep Bidirectional Long Short Term Memory(Bi-LSTM)network to discern emotions.Since the deep Bi-LSTM can hierarchically learn complex features and increases model capacity by achieving more robust temporal modeling,it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content existent in speech signals.The effectiveness and resilience of the proposed SER model were evaluated by experiments,comparing it to state-of-the-art SER techniques.The results indicated that the model achieved accuracy rates of 90.92%,93%,and 92%over the Ryerson Audio-Visual Database of Emotional Speech and Song(RAVDESS),Berlin Database of Emotional Speech(EMO-DB),and The Interactive Emotional Dyadic Motion Capture(IEMOCAP)datasets,respectively.These findings signify a prominent enhancement in the ability to emotional depictions identification in speech,showcasing the potential of the proposed model in advancing the SER field.
基金supported by the Second Tibetan Plateau Scientific Expedition and Research Program(Grant no.2019QZKK0904)Natural Science Foundation of Hebei Province(Grant no.D2022403032)S&T Program of Hebei(Grant no.E2021403001).
文摘The selection of important factors in machine learning-based susceptibility assessments is crucial to obtain reliable susceptibility results.In this study,metaheuristic optimization and feature selection techniques were applied to identify the most important input parameters for mapping debris flow susceptibility in the southern mountain area of Chengde City in Hebei Province,China,by using machine learning algorithms.In total,133 historical debris flow records and 16 related factors were selected.The support vector machine(SVM)was first used as the base classifier,and then a hybrid model was introduced by a two-step process.First,the particle swarm optimization(PSO)algorithm was employed to select the SVM model hyperparameters.Second,two feature selection algorithms,namely principal component analysis(PCA)and PSO,were integrated into the PSO-based SVM model,which generated the PCA-PSO-SVM and FS-PSO-SVM models,respectively.Three statistical metrics(accuracy,recall,and specificity)and the area under the receiver operating characteristic curve(AUC)were employed to evaluate and validate the performance of the models.The results indicated that the feature selection-based models exhibited the best performance,followed by the PSO-based SVM and SVM models.Moreover,the performance of the FS-PSO-SVM model was better than that of the PCA-PSO-SVM model,showing the highest AUC,accuracy,recall,and specificity values in both the training and testing processes.It was found that the selection of optimal features is crucial to improving the reliability of debris flow susceptibility assessment results.Moreover,the PSO algorithm was found to be not only an effective tool for hyperparameter optimization,but also a useful feature selection algorithm to improve prediction accuracies of debris flow susceptibility by using machine learning algorithms.The high and very high debris flow susceptibility zone appropriately covers 38.01%of the study area,where debris flow may occur under intensive human activities and heavy rainfall events.
文摘Medical Internet of Things(IoT)devices are becoming more and more common in healthcare.This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way.Existing methods,while useful,have limitations in predictive accuracy,delay,personalization,and user interpretability,requiring a more comprehensive and efficient approach to harness modern medical IoT devices.MAIPFE is a multimodal approach integrating pre-emptive analysis,personalized feature selection,and explainable AI for real-time health monitoring and disease detection.By using AI for early disease detection,personalized health recommendations,and transparency,healthcare will be transformed.The Multimodal Approach Integrating Pre-emptive Analysis,Personalized Feature Selection,and Explainable AI(MAIPFE)framework,which combines Firefly Optimizer,Recurrent Neural Network(RNN),Fuzzy C Means(FCM),and Explainable AI,improves disease detection precision over existing methods.Comprehensive metrics show the model’s superiority in real-time health analysis.The proposed framework outperformed existing models by 8.3%in disease detection classification precision,8.5%in accuracy,5.5%in recall,2.9%in specificity,4.5%in AUC(Area Under the Curve),and 4.9%in delay reduction.Disease prediction precision increased by 4.5%,accuracy by 3.9%,recall by 2.5%,specificity by 3.5%,AUC by 1.9%,and delay levels decreased by 9.4%.MAIPFE can revolutionize healthcare with preemptive analysis,personalized health insights,and actionable recommendations.The research shows that this innovative approach improves patient outcomes and healthcare efficiency in the real world.
基金supported by the Institutional Fund Projects(IFPIP-1481-611-1443)the Key Projects of Natural Science Research in Anhui Higher Education Institutions(2022AH051909)+1 种基金the Provincial Quality Project of Colleges and Universities in Anhui Province(2022sdxx020,2022xqhz044)Bengbu University 2021 High-Level Scientific Research and Cultivation Project(2021pyxm04)。
文摘A dandelion algorithm(DA) is a recently developed intelligent optimization algorithm for function optimization problems. Many of its parameters need to be set by experience in DA,which might not be appropriate for all optimization problems. A self-adapting and efficient dandelion algorithm is proposed in this work to lower the number of DA's parameters and simplify DA's structure. Only the normal sowing operator is retained;while the other operators are discarded. An adaptive seeding radius strategy is designed for the core dandelion. The results show that the proposed algorithm achieves better performance on the standard test functions with less time consumption than its competitive peers. In addition, the proposed algorithm is applied to feature selection for credit card fraud detection(CCFD), and the results indicate that it can obtain higher classification and detection performance than the-state-of-the-art methods.
文摘High-dimensional datasets present significant challenges for classification tasks.Dimensionality reduction,a crucial aspect of data preprocessing,has gained substantial attention due to its ability to improve classification per-formance.However,identifying the optimal features within high-dimensional datasets remains a computationally demanding task,necessitating the use of efficient algorithms.This paper introduces the Arithmetic Optimization Algorithm(AOA),a novel approach for finding the optimal feature subset.AOA is specifically modified to address feature selection problems based on a transfer function.Additionally,two enhancements are incorporated into the AOA algorithm to overcome limitations such as limited precision,slow convergence,and susceptibility to local optima.The first enhancement proposes a new method for selecting solutions to be improved during the search process.This method effectively improves the original algorithm’s accuracy and convergence speed.The second enhancement introduces a local search with neighborhood strategies(AOA_NBH)during the AOA exploitation phase.AOA_NBH explores the vast search space,aiding the algorithm in escaping local optima.Our results demonstrate that incorporating neighborhood methods enhances the output and achieves significant improvement over state-of-the-art methods.
文摘In agriculture sector, machine learning has been widely used by researchers for crop yield prediction. However, it is quite difficult to identify the most critical features from a dataset. Feature selection techniques allow us to remove the extraneous and noisy features from the original feature set. The feature selection techniques help the model to focus only on the important features of the data, thus reducing execution time and improving efficiency of the model. The aim of this study is to determine relevant subset features for achieving high predictive performance by using different feature selection techniques like Filter methods, Wrapper methods and embedded methods. In this work, different feature selection techniques like Rank-based feature selection technique, weighted feature selection technique and Hybrid Feature Selection Technique have been applied to the agricultural data. The optimal feature set returned by different feature selection techniques is used for yield prediction using Linear regression, Random Forest, and Decision Tree Regressor. The accuracy of prediction obtained using the above three methods has been analyzed by using different evaluation parameters. This study helps in increasing predictive accuracy with the minimum number of features.
基金funded by The University of Groningen and Prospect Burma organization.
文摘In stock market forecasting,the identification of critical features that affect the performance of machine learning(ML)models is crucial to achieve accurate stock price predictions.Several review papers in the literature have focused on various ML,statistical,and deep learning-based methods used in stock market forecasting.However,no survey study has explored feature selection and extraction techniques for stock market forecasting.This survey presents a detailed analysis of 32 research works that use a combination of feature study and ML approaches in various stock market applications.We conduct a systematic search for articles in the Scopus and Web of Science databases for the years 2011–2022.We review a variety of feature selection and feature extraction approaches that have been successfully applied in the stock market analyses presented in the articles.We also describe the combination of feature analysis techniques and ML methods and evaluate their performance.Moreover,we present other survey articles,stock market input and output data,and analyses based on various factors.We find that correlation criteria,random forest,principal component analysis,and autoencoder are the most widely used feature selection and extraction techniques with the best prediction accuracy for various stock market applications.
文摘Selecting the most relevant subset of features from a dataset is a vital step in data mining and machine learning.Each feature in a dataset has 2n possible subsets,making it challenging to select the optimum collection of features using typical methods.As a result,a new metaheuristicsbased feature selection method based on the dipper-throated and grey-wolf optimization(DTO-GW)algorithms has been developed in this research.Instability can result when the selection of features is subject to metaheuristics,which can lead to a wide range of results.Thus,we adopted hybrid optimization in our method of optimizing,which allowed us to better balance exploration and harvesting chores more equitably.We propose utilizing the binary DTO-GW search approach we previously devised for selecting the optimal subset of attributes.In the proposed method,the number of features selected is minimized,while classification accuracy is increased.To test the proposed method’s performance against eleven other state-of-theart approaches,eight datasets from the UCI repository were used,such as binary grey wolf search(bGWO),binary hybrid grey wolf,and particle swarm optimization(bGWO-PSO),bPSO,binary stochastic fractal search(bSFS),binary whale optimization algorithm(bWOA),binary modified grey wolf optimization(bMGWO),binary multiverse optimization(bMVO),binary bowerbird optimization(bSBO),binary hysteresis optimization(bHy),and binary hysteresis optimization(bHWO).The suggested method is superior 4532 CMC,2023,vol.74,no.2 and successful in handling the problem of feature selection,according to the results of the experiments.
文摘This study focuses on meeting the challenges of big data visualization by using of data reduction methods based the feature selection methods.To reduce the volume of big data and minimize model training time(Tt)while maintaining data quality.We contributed to meeting the challenges of big data visualization using the embedded method based“Select from model(SFM)”method by using“Random forest Importance algorithm(RFI)”and comparing it with the filter method by using“Select percentile(SP)”method based chi square“Chi2”tool for selecting the most important features,which are then fed into a classification process using the logistic regression(LR)algorithm and the k-nearest neighbor(KNN)algorithm.Thus,the classification accuracy(AC)performance of LRis also compared to theKNN approach in python on eight data sets to see which method produces the best rating when feature selection methods are applied.Consequently,the study concluded that the feature selection methods have a significant impact on the analysis and visualization of the data after removing the repetitive data and the data that do not affect the goal.After making several comparisons,the study suggests(SFMLR)using SFM based on RFI algorithm for feature selection,with LR algorithm for data classify.The proposal proved its efficacy by comparing its results with recent literature.
基金This research was supported by a Researchers Supporting Project Number(RSP2021/309)King Saud University,Riyadh,Saudi Arabia.The authors wish to acknowledge Yayasan Universiti Teknologi Petronas for supporting this work through the research grant(015LC0-308).
文摘Machine learning(ML)practices such as classification have played a very important role in classifying diseases in medical science.Since medical science is a sensitive field,the pre-processing of medical data requires careful handling to make quality clinical decisions.Generally,medical data is considered high-dimensional and complex data that contains many irrelevant and redundant features.These factors indirectly upset the disease prediction and classification accuracy of any ML model.To address this issue,various data pre-processing methods called Feature Selection(FS)techniques have been presented in the literature.However,the majority of such techniques frequently suffer from local minima issues due to large solution space.Thus,this study has proposed a novel wrapper-based Sand Cat SwarmOptimization(SCSO)technique as an FS approach to find optimum features from ten benchmark medical datasets.The SCSO algorithm replicates the hunting and searching strategies of the sand cat while having the advantage of avoiding local optima and finding the ideal solution with minimal control variables.Moreover,K-Nearest Neighbor(KNN)classifier was used to evaluate the effectiveness of the features identified by the proposed SCSO algorithm.The performance of the proposed SCSO algorithm was compared with six state-of-the-art and recent wrapper-based optimization algorithms using the validation metrics of classification accuracy,optimum feature size,and computational cost in seconds.The simulation results on the benchmark medical datasets revealed that the proposed SCSO-KNN approach has outperformed comparative algorithms with an average classification accuracy of 93.96%by selecting 14.2 features within 1.91 s.Additionally,the Wilcoxon rank test was used to perform the significance analysis between the proposed SCSOKNN method and six other algorithms for a p-value less than 5.00E-02.The findings revealed that the proposed algorithm produces better outcomes with an average p-value of 1.82E-02.Moreover,potential future directions are also suggested as a result of the study’s promising findings.
基金New Energy and Industrial Technology Development Organization,Grant/Award Number:JPNP20006JSPS Grant-in-Aid for Scientific Research,Grant/Award Numbers:21K12042,17H01785。
文摘Graphs help to define the relationships between entities in the data.These relationships,represented by edges,often provide additional context information which can be utilised to discover patterns in the data.Graph Neural Networks(GNNs)employ the inductive bias of the graph structure to learn and predict on various tasks.The primary operation of graph neural networks is the feature aggregation step performed over neighbours of the node based on the structure of the graph.In addition to its own features,for each hop,the node gets additional combined features from its neighbours.These aggregated features help define the similarity or dissimilarity of the nodes with respect to the labels and are useful for tasks like node classification.However,in real-world data,features of neighbours at different hops may not correlate with the node's features.Thus,any indiscriminate feature aggregation by GNN might cause the addition of noisy features leading to degradation in model's performance.In this work,we show that selective aggregation of node features from various hops leads to better performance than default aggregation on the node classification task.Furthermore,we propose a Dual-Net GNN architecture with a classifier model and a selector model.The classifier model trains over a subset of input node features to predict node labels while the selector model learns to provide optimal input subset to the classifier for the best performance.These two models are trained jointly to learn the best subset of features that give higher accuracy in node label predictions.With extensive experiments,we show that our proposed model outperforms both feature selection methods and state-of-the-art GNN models with remarkable improvements up to 27.8%.
基金funded by the State Grid Jiangsu Electric Power Company(Grant No.JS2020112)the National Natural Science Foundation of China(Grant No.62272236).
文摘Federated learning has been used extensively in business inno-vation scenarios in various industries.This research adopts the federated learning approach for the first time to address the issue of bank-enterprise information asymmetry in the credit assessment scenario.First,this research designs a credit risk assessment model based on federated learning and feature selection for micro and small enterprises(MSEs)using multi-dimensional enterprise data and multi-perspective enterprise information.The proposed model includes four main processes:namely encrypted entity alignment,hybrid feature selection,secure multi-party computation,and global model updating.Secondly,a two-step feature selection algorithm based on wrapper and filter is designed to construct the optimal feature set in multi-source heterogeneous data,which can provide excellent accuracy and interpretability.In addition,a local update screening strategy is proposed to select trustworthy model parameters for aggregation each time to ensure the quality of the global model.The results of the study show that the model error rate is reduced by 6.22%and the recall rate is improved by 11.03%compared to the algorithms commonly used in credit risk research,significantly improving the ability to identify defaulters.Finally,the business operations of commercial banks are used to confirm the potential of the proposed model for real-world implementation.
基金supported by the two funding sources:Scientific Innovation 2030 Major Project for New Generation of AI,Ministry of Science and Technology of the Peoples Republic of China(2020AAA0107300)National Natural Science Foundation of China(62133015).
文摘In real life,a large amount of data describing the same learning task may be stored in different institutions(called participants),and these data cannot be shared among par-ticipants due to privacy protection.The case that different attributes/features of the same instance are stored in different institutions is called vertically distributed data.The pur-pose of vertical‐federated feature selection(FS)is to reduce the feature dimension of vertical distributed data jointly without sharing local original data so that the feature subset obtained has the same or better performance as the original feature set.To solve this problem,in the paper,an embedded vertical‐federated FS algorithm based on particle swarm optimisation(PSO‐EVFFS)is proposed by incorporating evolutionary FS into the SecureBoost framework for the first time.By optimising both hyper‐parameters of the XGBoost model and feature subsets,PSO‐EVFFS can obtain a feature subset,which makes the XGBoost model more accurate.At the same time,since different participants only share insensitive parameters such as model loss function,PSO‐EVFFS can effec-tively ensure the privacy of participants'data.Moreover,an ensemble ranking strategy of feature importance based on the XGBoost tree model is developed to effectively remove irrelevant features on each participant.Finally,the proposed algorithm is applied to 10 test datasets and compared with three typical vertical‐federated learning frameworks and two variants of the proposed algorithm with different initialisation strategies.Experi-mental results show that the proposed algorithm can significantly improve the classifi-cation performance of selected feature subsets while fully protecting the data privacy of all participants.