Journal Articles
69 articles found
1. Particle Swarm Optimization-Based Hyperparameters Tuning of Machine Learning Models for Big COVID-19 Data Analysis
Authors: Hend S. Salem, Mohamed A. Mead, Ghada S. El-Taweel. Journal of Computer and Communications, 2024, Issue 3, pp. 160-183 (24 pages)
Analyzing big data, especially medical data, helps to provide good health care to patients and face the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by changing their parameters. Hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to effectively search the hyperparameter space and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that can provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective. When compared to baseline models, the optimized machine learning models performed better and produced better results.
Keywords: Big COVID-19 Data, Machine Learning, Hyperparameter Optimization, Particle Swarm Optimization, Computational Intelligence
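The abstract above describes letting PSO search a hyperparameter space for the configuration with the highest accuracy. Below is a minimal, self-contained sketch of that idea; the two-dimensional search space (number of trees and maximum depth of a random forest), the swarm size, and the inertia/acceleration constants are illustrative assumptions, not the paper's settings.

```python
# Minimal particle swarm search over two random-forest hyperparameters.
# Search space, swarm size, and PSO constants are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def fitness(position):
    n_estimators = int(round(position[0]))
    max_depth = int(round(position[1]))
    model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()  # mean cross-validated accuracy

rng = np.random.default_rng(0)
low, high = np.array([10, 2]), np.array([200, 20])     # bounds for (n_estimators, max_depth)
pos = rng.uniform(low, high, size=(8, 2))               # 8 particles
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(10):                                     # 10 PSO iterations
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print("best (n_estimators, max_depth):", gbest, "accuracy:", pbest_fit.max())
```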
2. The posterior selection method for hyperparameters in regularized least squares method
Authors: Yanxin Zhang, Jing Chen, Yawen Mao, Quanmin Zhu. Control Theory and Technology (EI, CSCD), 2024, Issue 2, pp. 184-194 (11 pages)
The selection of hyperparameters in regularized least squares plays an important role in large-scale system identification. The traditional methods for selecting hyperparameters are based on experience or the marginal likelihood maximization method, which are inaccurate or computationally expensive. In this paper, two posterior methods are proposed to select hyperparameters based on different prior knowledge (constraints), which can obtain the optimal hyperparameters using optimization theory. Moreover, we also give the theoretically optimal constraints and verify their effectiveness. Numerical simulation shows that the hyperparameters and parameter vector estimates obtained by the proposed methods are the optimal ones.
Keywords: Regularization method, Hyperparameter, System identification, Least squares algorithm
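For context, the object being tuned in this line of work is the regularization hyperparameter of the regularized least-squares estimate. The sketch below only illustrates that estimate and a naive hold-out selection of the hyperparameter; it is not the posterior selection method the paper proposes, and the ridge-style penalty and candidate grid are assumptions.

```python
# Regularized (ridge-style) least squares: theta(lmb) = (Phi^T Phi + lmb*I)^(-1) Phi^T y.
# Naive hold-out selection of lmb, for illustration only; not the paper's posterior method.
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(200, 10))                  # regressor matrix
theta_true = rng.normal(size=10)
y = Phi @ theta_true + 0.5 * rng.normal(size=200)

Phi_tr, y_tr = Phi[:150], y[:150]                 # training split
Phi_va, y_va = Phi[150:], y[150:]                 # validation split

def ridge(Phi, y, lmb):
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lmb * np.eye(n), Phi.T @ y)

candidates = [0.01, 0.1, 1.0, 10.0, 100.0]        # assumed hyperparameter grid
errors = [np.mean((y_va - Phi_va @ ridge(Phi_tr, y_tr, lmb)) ** 2) for lmb in candidates]
best = candidates[int(np.argmin(errors))]
print("selected lambda:", best, "validation MSE:", min(errors))
```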
3. Tuning hyperparameters of doublet-detection methods for single-cell RNA sequencing data
Authors: Nan Miles Xi, Angelos Vasilopoulos. Quantitative Biology (CSCD), 2023, Issue 3, pp. 297-305 (9 pages)
Background: The existence of doublets in single-cell RNA sequencing (scRNA-seq) data poses a great challenge in downstream data analysis. Computational doublet-detection methods have been developed to remove doublets from scRNA-seq data. Yet, the default hyperparameter settings of those methods may not provide optimal performance. Methods: We propose a strategy to tune hyperparameters for a cutting-edge doublet-detection method. We utilize a full factorial design to explore the relationship between hyperparameters and detection accuracy on 16 real scRNA-seq datasets. The optimal hyperparameters are obtained by a response surface model and convex optimization. Results: We show that the optimal hyperparameters provide top performance across scRNA-seq datasets under various biological conditions. Our tuning strategy can be applied to other computational doublet-detection methods. It also offers insights into hyperparameter tuning for broader computational methods in scRNA-seq data analysis. Conclusions: The hyperparameter configuration significantly impacts the performance of computational doublet-detection methods. Our study is the first attempt to systematically explore the optimal hyperparameters under various biological conditions and optimization objectives. Our study provides much-needed guidance for hyperparameter tuning in computational doublet-detection methods.
Keywords: scRNA-seq, doublet detection, hyperparameter tuning, experimental design, response surface model
4. An Enhanced Ensemble-Based Long Short-Term Memory Approach for Traffic Volume Prediction
Authors: Duy Quang Tran, Huy Q. Tran, Minh Van Nguyen. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3585-3602 (18 pages)
With the advancement of artificial intelligence, traffic forecasting is gaining more and more interest in optimizing route planning and enhancing service quality. Traffic volume is an influential parameter for planning and operating traffic structures. This study proposed an improved ensemble-based deep learning method to solve traffic volume prediction problems. A set of optimal hyperparameters is also applied in the suggested approach to improve the performance of the learning process. The fusion of these methodologies aims to harness ensemble empirical mode decomposition's capacity to discern complex traffic patterns and long short-term memory's proficiency in learning temporal relationships. Firstly, a dataset for automatic vehicle identification is obtained and utilized in the preprocessing stage of the ensemble empirical mode decomposition model. The second aspect involves predicting traffic volume using the long short-term memory algorithm. Next, the study employs a trial-and-error approach to select a set of optimal hyperparameters, including the lookback window, the number of neurons in the hidden layers, and the gradient descent optimization. Finally, the fusion of the obtained results leads to a final traffic volume prediction. The experimental results show that the proposed method outperforms other benchmarks regarding various evaluation measures, including mean absolute error, root mean squared error, mean absolute percentage error, and R-squared. The achieved R-squared value reaches an impressive 98%, while the other evaluation indices surpass those of the competing methods. These findings highlight the accuracy of traffic pattern prediction. Consequently, this offers promising prospects for enhancing transportation management systems and urban infrastructure planning.
Keywords: Ensemble empirical mode decomposition, traffic volume prediction, long short-term memory, optimal hyperparameters, deep learning
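The pipeline described above decomposes the traffic series with ensemble empirical mode decomposition and then learns temporal structure with an LSTM. A compact sketch of that two-step flow follows; it assumes the PyEMD ("EMD-signal") package and TensorFlow/Keras, and the synthetic series, lookback window, and layer size are placeholders rather than the tuned values reported in the paper.

```python
# EEMD decomposition of a series followed by an LSTM on lookback windows.
# Assumes the PyEMD ("EMD-signal") package and Keras; data and sizes are placeholders.
import numpy as np
from PyEMD import EEMD
from tensorflow import keras

t = np.linspace(0, 20, 600)
volume = np.sin(t) + 0.3 * np.sin(7 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

imfs = EEMD().eemd(volume)               # intrinsic mode functions, shape (n_imfs, len)
denoised = imfs[1:].sum(axis=0)          # e.g. drop the highest-frequency IMF

lookback = 12                            # assumed lookback window
Xw = np.array([denoised[i:i + lookback] for i in range(len(denoised) - lookback)])
yw = denoised[lookback:]
Xw = Xw[..., None]                       # (samples, timesteps, features)

model = keras.Sequential([
    keras.Input(shape=(lookback, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(Xw, yw, epochs=5, batch_size=32, verbose=0)
print("one-step forecast:", float(model.predict(Xw[-1:], verbose=0)[0, 0]))
```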
5. Optimizing the neural network hyperparameters utilizing genetic algorithm (Cited by 6)
Authors: Saeid NIKBAKHT, Cosmin ANITESCU, Timon RABCZUK. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2021, Issue 6, pp. 407-426 (20 pages)
Neural networks (NNs), as one of the most robust and efficient machine learning methods, have been commonly used in solving several problems. However, choosing proper hyperparameters (e.g., the numbers of layers and neurons in each layer) has a significant influence on the accuracy of these methods. Therefore, a considerable number of studies have been carried out to optimize the NN hyperparameters. In this study, the genetic algorithm is applied to the NN to find the optimal hyperparameters. Thus, the deep energy method, which contains a deep neural network, is applied first on a Timoshenko beam and a plate with a hole. Subsequently, the numbers of hidden layers, integration points, and neurons in each layer are optimized to reach the highest accuracy in predicting the stress distribution through these structures. Applying the proper optimization method to the NN thus leads to a significant increase in prediction accuracy across the various examples.
Keywords: Machine learning, Neural network (NN), hyperparameters, Genetic algorithm
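The abstract describes encoding the numbers of hidden layers and neurons as genes and letting a genetic algorithm maximize accuracy. A toy version of that loop is sketched below, using scikit-learn's MLPRegressor as the network; the population size, mutation scheme, and search ranges are assumptions for illustration, and the deep energy method itself is not reproduced.

```python
# Toy genetic algorithm over (number of hidden layers, neurons per layer) for an MLP.
# Population size, mutation scheme, and ranges are illustrative assumptions.
import random
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=0)
random.seed(0)

def fitness(genome):
    layers, neurons = genome
    model = MLPRegressor(hidden_layer_sizes=(neurons,) * layers, max_iter=500, random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

def mutate(genome):
    layers, neurons = genome
    return (max(1, layers + random.choice([-1, 0, 1])),
            max(4, neurons + random.choice([-8, 0, 8])))

population = [(random.randint(1, 4), random.choice([8, 16, 32, 64])) for _ in range(6)]
for _ in range(5):                                   # 5 generations
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                             # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(3)]

best = max(population, key=fitness)
print("best (hidden layers, neurons per layer):", best)
```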
6. An Efficient Modelling of Oversampling with Optimal Deep Learning Enabled Anomaly Detection in Streaming Data
Authors: R. Rajakumar, S. Sathiya Devi. China Communications (SCIE, CSCD), 2024, Issue 5, pp. 249-260 (12 pages)
Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proven helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data is unbalanced, the support vector machine (SVM)-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs bidirectional long short-term memory (BiLSTM) for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. For ensuring the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
Keywords: anomaly detection, deep learning, hyperparameter optimization, oversampling, SMOTE, streaming data
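Two concrete steps in the pipeline above are SVM-SMOTE oversampling of the minority class and a BiLSTM classifier trained with the RMSProp optimizer. A minimal sketch of those two steps follows; it assumes the imbalanced-learn and Keras libraries, and the synthetic data, window shape, and layer width stand in for the benchmark datasets and tuned hyperparameters used in the paper.

```python
# SVM-SMOTE oversampling followed by a bidirectional LSTM trained with RMSProp.
# Assumes imbalanced-learn and Keras; the data and layer sizes are placeholders.
from imblearn.over_sampling import SVMSMOTE
from sklearn.datasets import make_classification
from tensorflow import keras

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0)

X_res, y_res = SVMSMOTE(random_state=0).fit_resample(X, y)   # balance the classes
X_seq = X_res.reshape((X_res.shape[0], 10, 2))               # treat 20 features as 10 timesteps x 2

model = keras.Sequential([
    keras.Input(shape=(10, 2)),
    keras.layers.Bidirectional(keras.layers.LSTM(16)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_res, epochs=3, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(X_seq, y_res, verbose=0)[1])
```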
7. Credit Card Fraud Detection Using Improved Deep Learning Models
Authors: Sumaya S. Sulaiman, Ibraheem Nadher, Sarab M. Hameed. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1049-1069 (21 pages)
Fraud of credit cards is a major issue for financial organizations and individuals. As fraudulent actions become more complex, a demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including detecting credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning models with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models: AutoEncoder (AE), Convolution Neural Network (CNN), and Long Short-Term Memory (LSTM) are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. The experiments conducted on a European credit card fraud dataset using different hyperparameters and three deep learning models demonstrate that the proposed models achieve a tradeoff between detection rate and precision, leading these models to be effective in accurately predicting credit card fraud. The results demonstrate that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These proposed models have surpassed those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
Keywords: Card fraud detection, hyperparameter tuning, deep learning, autoencoder, convolution neural network, long short-term memory, resampling
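The study's central point is that the three networks are only as good as their hyperparameters. The fragment below sketches that kind of search for the LSTM variant using the KerasTuner library (an assumption; the paper does not name its tuning tool), with synthetic stand-in data and a deliberately small, illustrative search space.

```python
# Random hyperparameter search over an LSTM classifier with KerasTuner.
# KerasTuner is assumed as the tuning tool; data and search ranges are placeholders.
import numpy as np
import keras_tuner as kt
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10, 3))          # stand-in for windowed transaction features
y = rng.integers(0, 2, size=2000)           # stand-in fraud labels

def build_model(hp):
    model = keras.Sequential([
        keras.Input(shape=(10, 3)),
        keras.layers.LSTM(hp.Int("units", min_value=16, max_value=64, step=16)),
        keras.layers.Dropout(hp.Float("dropout", 0.0, 0.5, step=0.1)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    lr = hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=5, overwrite=True, directory="lstm_tuning")
tuner.search(X, y, epochs=2, validation_split=0.2, verbose=0)
print(tuner.get_best_hyperparameters(1)[0].values)
```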
8. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on the context of the sentence. Sentence categorization requires more semantic highlights than other tasks, such as dependence parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model is used here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder for Representation of Transformer (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional encoder for representation of transformer, conversation, ensemble model, fine-tuning, generalized autoregressive pretraining for language understanding, generative pre-trained transformer, hyperparameter tuning, natural language processing, robustly optimized BERT pretraining approach, sentence classification, transformer models
9. Parallel Inference for Real-Time Machine Learning Applications
Authors: Sultan Al Bayyat, Ammar Alomran, Mohsen Alshatti, Ahmed Almousa, Rayyan Almousa, Yasir Alguwaifli. Journal of Computer and Communications, 2024, Issue 1, pp. 139-146 (8 pages)
Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, this project tuned a Random Forest classifier for fake news detection via randomized grid search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy slightly dropped from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
Keywords: Machine Learning Models, Computational Efficiency, Parallel Computing Systems, Random Forest Inference, Hyperparameter Tuning, Python Frameworks (TensorFlow, PyTorch, Scikit-Learn), High-Performance Computing
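The abstract is explicit about the mechanism: scikit-learn's RandomizedSearchCV with n_jobs=-1 spreads candidate evaluations across all CPU cores. The snippet below reproduces that setup on a synthetic stand-in for the fake-news features and times the sequential and parallel runs; the parameter distributions and iteration count are illustrative, not the authors' configuration.

```python
# Parallel vs. sequential randomized hyperparameter search with scikit-learn.
# The dataset and parameter distributions are illustrative stand-ins.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
}

for n_jobs in (1, -1):                      # sequential, then all CPU cores
    search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                                param_distributions, n_iter=10, cv=3,
                                n_jobs=n_jobs, random_state=0)
    start = time.perf_counter()
    search.fit(X, y)
    print(f"n_jobs={n_jobs}: {time.perf_counter() - start:.1f}s,",
          "best CV accuracy:", round(search.best_score_, 4))
```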
10. PSTCNN: Explainable COVID-19 diagnosis using PSO-guided self-tuning CNN
Authors: WEI WANG, YANRONG PEI, SHUI-HUA WANG, JUAN MANUEL GORRZ, YU-DONG ZHANG. BIOCELL (SCIE), 2023, Issue 2, pp. 373-384 (12 pages)
Since 2019, the coronavirus disease-19 (COVID-19) has been spreading rapidly worldwide, posing an unignorable threat to the global economy and human health. It is a disease caused by severe acute respiratory syndrome coronavirus 2, a single-stranded RNA virus of the genus Betacoronavirus. This virus is highly infectious and relies on its angiotensin-converting enzyme 2 receptor to enter cells. With the increase in the number of confirmed COVID-19 diagnoses, the difficulty of diagnosis due to the lack of global healthcare resources becomes increasingly apparent. Deep learning-based computer-aided diagnosis models with high generalisability can effectively alleviate this pressure. Hyperparameter tuning is essential in training such models and significantly impacts their final performance and training speed. However, traditional hyperparameter tuning methods are usually time-consuming and unstable. To solve this issue, we introduce Particle Swarm Optimisation to build a PSO-guided Self-Tuning Convolution Neural Network (PSTCNN), allowing the model to tune hyperparameters automatically. Therefore, the proposed approach can reduce human involvement. Also, the optimisation algorithm can select the combination of hyperparameters in a targeted manner, thus stably achieving a solution closer to the global optimum. Experimentally, the PSTCNN can obtain quite excellent results, with a sensitivity of 93.65% ± 1.86%, a specificity of 94.32% ± 2.07%, a precision of 94.30% ± 2.04%, an accuracy of 93.99% ± 1.78%, an F1-score of 93.97% ± 1.78%, a Matthews Correlation Coefficient of 87.99% ± 3.56%, and a Fowlkes-Mallows Index of 93.97% ± 1.78%. Our experiments demonstrate that compared to traditional methods, hyperparameter tuning of the model using an optimisation algorithm is faster and more effective.
Keywords: COVID-19, SARS-CoV-2, Particle swarm optimisation, Convolutional neural network, hyperparameters tuning
11. Intelligent Deep Learning Enabled Human Activity Recognition for Improved Medical Services (Cited by 2)
Authors: E. Dhiravidachelvi, M. Suresh Kumar, L. D. Vijay Anand, D. Pritima, Seifedine Kadry, Byeong-Gwon Kang, Yunyoung Nam. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 2, pp. 961-977 (17 pages)
Human Activity Recognition (HAR) has been made simple in recent years, thanks to recent advancements made in Artificial Intelligence (AI) techniques. These techniques are applied in several areas like security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. Further, it can also be employed in other applications such as gait diagnosis, observation of children's/adults' cognitive nature, stroke-patient hospital direction, Epilepsy and Parkinson's disease examination, etc. Recently-developed Artificial Intelligence (AI) techniques, especially Deep Learning (DL) models, can be deployed to accomplish effective outcomes in the HAR process. With this motivation, the current research paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique for the healthcare environment. The proposed IHPTDL-HAR technique aims at recognizing the human actions in a healthcare environment and helps the patients in managing their healthcare service. In addition, the presented model makes use of a Hierarchical Clustering (HC)-based outlier detection technique to remove the outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize the activities of users. Moreover, the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted upon a benchmark dataset and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique is a superior performer compared to other recent techniques under different measures.
Keywords: Artificial intelligence, human activity recognition, deep learning, deep belief network, hyperparameter tuning, healthcare
12. Modeling of Optimal Deep Learning Based Flood Forecasting Model Using Twitter Data (Cited by 1)
Authors: G. Indra, N. Duraipandian. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 2, pp. 1455-1470 (16 pages)
A flood is a significant damaging natural calamity that causes loss of life and property. Earlier work on the construction of flood prediction models intended to reduce risks, suggest policies, reduce mortality, and limit property damage caused by floods. The massive amount of data generated by social media platforms such as Twitter opens the door to flood analysis. Because of the real-time nature of Twitter data, some government agencies and authorities have used it to track natural catastrophe events in order to build a more rapid rescue strategy. However, due to the shorter duration of Tweets, it is difficult to construct a perfect prediction model for determining floods. Machine learning (ML) and deep learning (DL) approaches can be used to statistically develop flood prediction models. At the same time, the vast amount of Tweets necessitates the use of a big data analytics (BDA) tool for flood prediction. In this regard, this work provides an optimal deep learning-based flood forecasting model with big data analytics (ODLFF-BDA) based on Twitter data. The suggested ODLFF-BDA technique intends to anticipate the existence of floods using tweets in a big data setting. The ODLFF-BDA technique comprises data pre-processing to convert the input tweets into a usable format. In addition, a Bidirectional Encoder Representations from Transformers (BERT) model is used to generate emotive contextual embedding from tweets. Furthermore, a gated recurrent unit (GRU) with a Multilayer Convolutional Neural Network (MLCNN) is used to extract local data and predict the flood. Finally, an Equilibrium Optimizer (EO) is used to fine-tune the hyperparameters of the GRU and MLCNN models in order to increase prediction performance. The memory usage is kept below 3.5 MB, which is lower than that of the other algorithms. The ODLFF-BDA technique's performance was validated using a benchmark Kaggle dataset, and the findings showed that it outperformed other recent approaches significantly.
Keywords: Big data analytics, predictive models, deep learning, flood prediction, twitter data, hyperparameter tuning
13. Electroencephalography (EEG) Based Neonatal Sleep Staging and Detection Using Various Classification Algorithms
Authors: Hafza Ayesha Siddiqa, Muhammad Irfan, Saadullah Farooq Abbasi, Wei Chen. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 1759-1778 (20 pages)
Automatic sleep staging of neonates is essential for monitoring their brain development and the maturity of the nervous system. EEG-based neonatal sleep staging provides valuable information about an infant's growth and health, but is challenging due to the unique characteristics of EEG and the lack of standardized protocols. This study aims to develop and compare 18 machine learning models using the Automated Machine Learning (autoML) technique for accurate and reliable multi-channel EEG-based neonatal sleep-wake classification. The study investigates autoML feasibility without extensive manual selection of features or hyperparameter tuning. The data is obtained from neonates at a post-menstrual age of 37±05 weeks. 352530-s EEG segments from 19 infants are used to train and test the proposed models. There are twelve time and frequency domain features extracted from each channel. Each model receives the common features of nine channels as an input vector of size 108. Each model's performance was evaluated based on a variety of evaluation metrics. The maximum mean accuracy of 84.78% and kappa of 69.63% have been obtained by the AutoML-based Random Forest estimator. This is the highest accuracy reported to date for EEG-based sleep-wake classification. For the AutoML-based AdaBoost Random Forest model, accuracy and kappa were 84.59% and 69.24%, respectively. The high performance achieved by the proposed autoML-based approach can facilitate early identification and treatment of sleep-related issues in neonates.
Keywords: AutoML, Random Forest, adaboost, EEG, neonates, PSG, hyperparameter tuning, sleep-wake classification
14. Optimal Deep Learning Driven Intrusion Detection in SDN-Enabled IoT Environment
Authors: Mohammed Maray, Haya Mesfer Alshahrani, Khalid A. Alissa, Najm Alotaibi, Abdulbaset Gaddah, Ali Meree, Mahmoud Othman, Manar Ahmed Hamza. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 6587-6604 (18 pages)
In recent years, wireless networks have been widely used in different domains. This phenomenon has increased the number of Internet of Things (IoT) devices and their applications. Though IoT has numerous advantages, the commonly-used IoT devices are exposed to cyber-attacks periodically. This scenario necessitates real-time automated detection and mitigation of different types of attacks in high-traffic networks. The Software-Defined Networking (SDN) technique and the Machine Learning (ML)-based intrusion detection technique are effective tools that can quickly respond to different types of attacks in IoT networks. Intrusion Detection System (IDS) models can be employed to secure the SDN-enabled IoT environment in this scenario. The current study devises a Harmony Search algorithm-based Feature Selection with Optimal Convolutional Autoencoder (HSAFS-OCAE) for intrusion detection in the SDN-enabled IoT environment. The presented HSAFS-OCAE method follows a three-stage process in which the Harmony Search Algorithm-based FS (HSAFS) technique is exploited at first for feature selection. Next, the CAE method is leveraged to recognize and classify intrusions in the SDN-enabled IoT environment. Finally, the Artificial Fish Swarm Algorithm (AFSA) is used to fine-tune the hyperparameters. This process improves the outcomes of the intrusion detection process executed by the CAE algorithm and shows the work's novelty. The proposed HSAFS-OCAE technique was experimentally validated under different aspects, and the comparative analysis results established the supremacy of the proposed model.
Keywords: Internet of things, SDN controller, feature selection, hyperparameter tuning, autoencoder
15. Hybrid Metaheuristics with Deep Learning Enabled Automated Deception Detection and Classification of Facial Expressions
Authors: Haya Alaskar. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5433-5449 (17 pages)
Automatic deception recognition has received considerable attention from the machine learning community due to recent research on its vast application to social media, interviews, law enforcement, and the military. Video analysis-based techniques for automated deception detection have received increasing interest. This study develops a new self-adaptive population-based firefly algorithm with a deep learning-enabled automated deception detection (SAPFF-DLADD) model for analyzing facial cues. Initially, the input video is separated into a set of video frames. Then, the SAPFF-DLADD model applies the MobileNet-based feature extractor to produce a useful set of features. The long short-term memory (LSTM) model is exploited for deception detection and classification. In the final stage, the SAPFF technique is applied to optimally alter the hyperparameter values of the LSTM model, showing the novelty of the work. The experimental validation of the SAPFF-DLADD model is tested using the Miami University Deception Detection Database (MU3D), a database comprised of two classes, namely, truth and deception. An extensive comparative analysis reported a better performance of the SAPFF-DLADD model compared to recent approaches, with a higher accuracy of 99%.
Keywords: Deception detection, facial cues, deep learning, computer vision, hyperparameter tuning
16. Performance Evaluation of Deep Dense Layer Neural Network for Diabetes Prediction
Authors: Niharika Gupta, Baijnath Kaushik, Mohammad Khalid Imam Rahmani, Saima Anwar Lashari. Computers, Materials & Continua (SCIE, EI), 2023, Issue 7, pp. 347-366 (20 pages)
Diabetes is one of the fastest-growing human diseases worldwide and poses a significant threat to the population's longer lives. Early prediction of diabetes is crucial to taking precautionary steps to avoid or delay its onset. In this study, we proposed a Deep Dense Layer Neural Network (DDLNN) for diabetes prediction using a dataset with 768 instances and nine variables. We also applied a combination of classical machine learning (ML) algorithms and ensemble learning algorithms for the effective prediction of the disease. The classical ML algorithms used were Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), and Naïve Bayes (NB). We also constructed ensemble models such as bagging (Random Forest) and boosting like AdaBoost and Extreme Gradient Boosting (XGBoost) to evaluate the performance of prediction models. The proposed DDLNN model and ensemble learning models were trained and tested using hyperparameter tuning and K-Fold cross-validation to determine the best parameters for predicting the disease. The combined ML models used majority voting to select the best outcomes among the models. The efficacy of the proposed and other models was evaluated for effective diabetes prediction. The investigation concluded that the proposed model, after hyperparameter tuning, outperformed other learning models with an accuracy of 84.42%, a precision of 85.12%, a recall rate of 65.40%, and a specificity of 94.11%.
Keywords: Diabetes prediction, hyperparameter tuning, k-fold validation, machine learning, neural network
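Besides the proposed dense network, the study combines the classical models through majority voting and evaluates everything with K-fold cross-validation. The scikit-learn sketch below shows that voting-plus-K-fold scaffolding; a synthetic dataset stands in for the 768-instance diabetes data, and the classifiers are left at default settings rather than the tuned values reported in the paper.

```python
# Hard (majority) voting over SVM, logistic regression, decision tree, KNN and naive Bayes,
# evaluated with K-fold cross-validation. A synthetic dataset stands in for the diabetes data.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=768, n_features=8, random_state=0)

voter = VotingClassifier(
    estimators=[("svm", SVC()), ("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier()), ("knn", KNeighborsClassifier()),
                ("nb", GaussianNB())],
    voting="hard",                        # majority voting across the five classifiers
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(voter, X, y, cv=cv)
print("mean K-fold accuracy:", scores.mean().round(4))
```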
17. Data Mining with Comprehensive Oppositional Based Learning for Rainfall Prediction
Authors: Mohammad Alamgeer, Amal Al-Rasheed, Ahmad Alhindi, Manar Ahmed Hamza, Abdelwahed Motwakel, Mohamed I. Eldesouki. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 2725-2738 (14 pages)
The data mining process involves a number of steps, from data collection to visualization, to identify useful data from a massive data set. At the same time, the recent advances of machine learning (ML) and deep learning (DL) models can be utilized for effectual rainfall prediction. With this motivation, this article develops a novel comprehensive oppositional moth flame optimization with deep learning for rainfall prediction (COMFO-DLRP) technique. The proposed COMFO-DLRP model mainly intends to predict the rainfall and thereby determine the environmental changes. Primarily, data pre-processing and correlation matrix (CM) based feature selection processes are carried out. In addition, the deep belief network (DBN) model is applied for the effective prediction of rainfall data. Moreover, the COMFO algorithm was derived by integrating the concepts of comprehensive oppositional based learning (COBL) with the traditional MFO algorithm. Finally, the COMFO algorithm is employed for the optimal hyperparameter selection of the DBN model. For demonstrating the improved outcomes of the COMFO-DLRP approach, a sequence of simulations was carried out and the outcomes were assessed under distinct measures. The simulation outcomes highlighted the enhanced performance of the COMFO-DLRP method over the other techniques.
Keywords: Data mining, rainfall prediction, deep learning, correlation matrix, hyperparameter tuning, metaheuristics
18. Optimal Hybrid Deep Learning Enabled Attack Detection and Classification in IoT Environment
Authors: Fahad F. Alruwaili. Computers, Materials & Continua (SCIE, EI), 2023, Issue 4, pp. 99-115 (17 pages)
The Internet of Things (IoT) paradigm enables end users to access networking services amongst diverse kinds of electronic devices. IoT security mechanism is a technology that concentrates on safeguarding the devices and networks connected in the IoT environment. In recent years, False Data Injection Attacks (FDIAs) have gained considerable interest in the IoT environment. Cybercriminals compromise the devices connected to the network and inject the data. Such attacks on the IoT environment can result in a considerable loss and interrupt normal activities among the IoT network devices. The FDI attacks have been effectively overcome so far by conventional threat detection techniques. The current research article develops a Hybrid Deep Learning to Combat Sophisticated False Data Injection Attacks detection (HDL-FDIAD) for the IoT environment. The presented HDL-FDIAD model majorly recognizes the presence of FDI attacks in the IoT environment. The HDL-FDIAD model exploits the Equilibrium Optimizer-based Feature Selection (EO-FS) technique to select the optimal subset of the features. Moreover, the Long Short Term Memory with Recurrent Neural Network (LSTM-RNN) model is also utilized for the purpose of classification. At last, the Bayesian Optimization (BO) algorithm is employed as a hyperparameter optimizer in this study. To validate the enhanced performance of the HDL-FDIAD model, a wide range of simulations was conducted, and the results were investigated in detail. A comparative study was conducted between the proposed model and the existing models. The outcomes revealed that the proposed HDL-FDIAD model is superior to other models.
Keywords: False data injection attacks, hyperparameter optimizer, deep learning, feature selection, IoT, security
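The final stage of the pipeline above tunes the LSTM-RNN classifier with Bayesian Optimization. The sketch below uses Optuna as a stand-in optimizer (its default TPE sampler is a related but different Bayesian-style method than the BO routine named in the abstract), with placeholder data and a deliberately small search space.

```python
# Hyperparameter tuning of a small LSTM classifier, with Optuna standing in
# for the Bayesian Optimization stage; data and search ranges are placeholders.
import numpy as np
import optuna
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20, 4))          # stand-in for windowed IoT traffic features
y = rng.integers(0, 2, size=1000)           # stand-in attack/normal labels

def objective(trial):
    units = trial.suggest_int("units", 16, 64, step=16)
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)
    model = keras.Sequential([
        keras.Input(shape=(20, 4)),
        keras.layers.LSTM(units),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    history = model.fit(X, y, epochs=2, validation_split=0.2, verbose=0)
    return history.history["val_accuracy"][-1]

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
print("best hyperparameters:", study.best_params)
```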
19. Jellyfish Search Optimization with Deep Learning Driven Autism Spectrum Disorder Classification
Authors: S. Rama Sree, Inderjeet Kaur, Alexey Tikhonov, E. Laxmi Lydia, Ahmed A. Thabit, Zahraa H. Kareem, Yousif Kerrar Yousif, Ahmed Alkhayyat. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 2195-2209 (15 pages)
Autism spectrum disorder (ASD) is regarded as a neurological disorder well-defined by a specific set of problems associated with social skills, recurrent conduct, and communication. Identifying ASD as soon as possible is favourable because prior identification of ASD permits prompt interventions in children with ASD. Recognition of ASD based on objective pathogenic mutation screening is the initial step toward prior intervention and efficient treatment of affected children. Nowadays, the healthcare and machine learning (ML) industries are combined for determining the existence of various diseases. This article devises a Jellyfish Search Optimization with Deep Learning Driven ASD Detection and Classification (JSODL-ASDDC) model. The goal of the JSODL-ASDDC algorithm is to identify the different stages of ASD with the help of biomedical data. The proposed JSODL-ASDDC model initially performs a min-max data normalization approach to scale the data into a uniform range. In addition, the JSODL-ASDDC model involves a JSO-based feature selection (JFSO-FS) process to choose optimal feature subsets. Moreover, a Gated Recurrent Unit (GRU) based classification model is utilized for the recognition and classification of ASD. Furthermore, the Bacterial Foraging Optimization (BFO) assisted parameter tuning process is executed to enhance the efficacy of the GRU system. The experimental assessment of the JSODL-ASDDC model is investigated against distinct datasets. The experimental outcomes highlighted the enhanced performance of the JSODL-ASDDC algorithm over recent approaches.
Keywords: Autism spectral disorder, biomedical data, deep learning, feature selection, hyperparameter optimization, data classification, machine learning
20. Sailfish Optimizer with Deep Transfer Learning-Enabled Arabic Handwriting Character Recognition
Authors: Mohammed Maray, Badriyya B. Al-onazi, Jaber S. Alzahrani, Saeed Masoud Alshahrani, Najm Alotaibi, Sana Alazwari, Mahmoud Othman, Manar Ahmed Hamza. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 5467-5482 (16 pages)
The recognition of Arabic characters is a crucial task in computer vision and Natural Language Processing fields. Some major complications in recognizing handwritten texts include distortion and pattern variabilities. So, the feature extraction process is a significant task in NLP models. If the features are automatically selected, it might result in the unavailability of adequate data for accurately forecasting the character classes. But, many features usually create difficulties due to high dimensionality issues. Against this background, the current study develops a Sailfish Optimizer with Deep Transfer Learning-Enabled Arabic Handwriting Character Recognition (SFODTL-AHCR) model. The projected SFODTL-AHCR model primarily focuses on identifying the handwritten Arabic characters in the input image. The proposed SFODTL-AHCR model pre-processes the input image by following the Histogram Equalization approach to attain this objective. The Inception with ResNet-v2 model examines the pre-processed image to produce the feature vectors. The Deep Wavelet Neural Network (DWNN) model is utilized to recognize the handwritten Arabic characters. At last, the SFO algorithm is utilized for fine-tuning the parameters involved in the DWNN model to attain better performance. The performance of the proposed SFODTL-AHCR model was validated using a series of images. Extensive comparative analyses were conducted. The proposed method achieved a maximum accuracy of 99.73%. The outcomes inferred the supremacy of the proposed SFODTL-AHCR model over other approaches.
Keywords: Arabic language, handwritten character recognition, deep learning, feature extraction, hyperparameter tuning