Journal Articles
35 articles found
1. Hyperparameter Tuning Based Machine Learning Classifier for Breast Cancer Prediction
Authors: Mohammed Mijanur Rahman, Asikur Rahman, Swarnali Akter, Sumiea Akter Pinky. Journal of Computer and Communications, 2023, No. 4, pp. 149-165.
Currently, the second most devastating form of cancer in people, particularly in women, is Breast Cancer (BC). In the healthcare industry, Machine Learning (ML) is commonly employed in fatal disease prediction. Because breast cancer has a favourable prognosis when detected at an early stage, a model is built on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The aim of this model is to compare the effectiveness of five well-known ML classifiers, Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB), against the conventional method. The main strategy was hyperparameter tuning with the grid search method, which improved accuracy, precision, recall, F1 score, and the AUC/ROC curve. With hyperparameter tuning, accuracy increased from 94.15% to 98.83%, whereas the accuracy of the conventional method increased from 93.56% to 97.08%. In this investigation, KNN outperformed all other classifiers, achieving an accuracy of 98.83%. In conclusion, the study shows that KNN works well with hyperparameter tuning, and that the proposed prediction approach yields more accurate findings for women with breast cancer than the conventional approach.
Keywords: Machine Learning; Breast Cancer Prediction; Grid Search; hyperparameter tuning
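A minimal sketch of the grid-search tuning this entry describes, using scikit-learn's bundled copy of the WDBC data; the KNN parameter grid and the train/test split are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch: grid-search tuning of a KNN classifier on the WDBC data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # Wisconsin Diagnostic Breast Cancer
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

pipe = Pipeline([("scale", StandardScaler()), ("knn", KNeighborsClassifier())])
param_grid = {
    "knn__n_neighbors": [3, 5, 7, 9, 11],
    "knn__weights": ["uniform", "distance"],
    "knn__p": [1, 2],                         # Manhattan vs. Euclidean distance
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_te, y_te))
```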
2. Hybrid XGBoost model with hyperparameter tuning for prediction of liver disease with better accuracy (Cited: 1)
Authors: Surjeet Dalal, Edeh Michael Onyema, Amit Malik. World Journal of Gastroenterology (SCIE, CAS), 2022, No. 46, pp. 6551-6563.
BACKGROUND: Liver disease indicates any pathology that can harm or destroy the liver or prevent it from functioning normally. The global community has recently witnessed an increase in the mortality rate due to liver disease. This could be attributed to many factors, among which are human habits, awareness issues, poor healthcare, and late detection. To curb the growing threats from liver disease, early detection is critical to help reduce the risks and improve treatment outcomes. Emerging technologies such as machine learning, as shown in this study, could be deployed to assist in enhancing its prediction and treatment. AIM: To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning, with a view to assisting in early detection, diagnosis, and reduction of risks and mortality associated with the disease. METHODS: The dataset used in this study consisted of 416 people with liver problems and 167 with no such history. The data were collected from the state of Andhra Pradesh, India, through https://www.kaggle.com/datasets/uciml/indian-liver-patientrecords. The population was divided into two sets depending on the disease state of the patient. This binary information was recorded in the attribute "is_patient". RESULTS: The results indicated that the chi-square automated interaction detection and classification and regression trees models achieved accuracy levels of 71.36% and 73.24%, respectively, which was much better than the conventional method. The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent it from developing into cirrhosis (scarring) and to enhance the survival of patients. The study showed the potential of machine learning in health care, especially as it concerns disease prediction and monitoring. CONCLUSION: This study contributed to the knowledge of machine learning application to health and to the efforts toward combating the problem of liver disease. However, relevant authorities have to invest more into machine learning research and other health technologies to maximize their potential.
Keywords: Liver infection; Machine learning; Chi-square automated interaction detection; Classification and regression trees; Decision tree; XGBoost; hyperparameter tuning
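A hedged sketch of grid-search tuning for an XGBoost classifier on the Indian Liver Patient records. The CSV file name and the "is_patient"/"gender" column names follow the abstract and may differ in the downloaded file; the parameter grid is illustrative only, not the paper's hybrid configuration.

```python
# Sketch: hyperparameter-tuned XGBoost on the Indian Liver Patient records.
# File name and column names follow the abstract and may need adjusting.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("indian_liver_patient.csv").dropna()
df["gender"] = (df["gender"] == "Male").astype(int)     # simple binary encoding
y = (df.pop("is_patient") == 1).astype(int)             # 1 = liver patient
X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.25, random_state=0, stratify=y)

grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1, 0.3],
    "subsample": [0.8, 1.0],
}
search = GridSearchCV(XGBClassifier(eval_metric="logloss"), grid,
                      cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```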
3. Energy Efficient Hyperparameter Tuned Deep Neural Network to Improve Accuracy of Near-Threshold Processor
Authors: K. Chanthirasekaran, Raghu Gundaala. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 471-489.
When it comes to decreasing margins and increasing energy efficiency in near-threshold and sub-threshold processors, timing error resilience may be viewed as a potentially lucrative alternative to examine. On the other hand, the currently employed approaches have certain restrictions, including high levels of design complexity, severe time constraints on error consolidation and propagation, and uncontaminated architectural registers (ARs). The design of near-threshold circuits, often known as NT circuits, is becoming the approach of choice for the construction of energy-efficient digital circuits. One of the downsides is the reduction in performance that results from the exponentially decreased driving current. Numerous studies have advised applying NT techniques to chip multiprocessors as a means to preserve outstanding energy efficiency while minimising performance loss. Over the past several years, there has been a clear growth in interest in the development of artificial intelligence (AI) hardware with low energy consumption. This has resulted in both large corporations and start-ups producing items that compete on the basis of varying degrees of performance and energy use. The ultimate goal of this technology was to provide levels of efficiency and performance that could not be achieved with graphics processing units or general-purpose CPUs. To achieve this objective, several processing units were integrated into a single chip, and the hardware was designed with a number of unique properties. In this study, an Energy Efficient Hyperparameter Tuned Deep Neural Network (EEHPT-DNN) model for a Variation-Tolerant Near-Threshold Processor was developed. In order to improve the energy efficiency of artificial intelligence (AI), the EEHPT-DNN model employs several AI techniques. The notion focuses mostly on the repercussions of embedded technologies positioned at the network's edge. The presented model employs a deep stacked sparse autoencoder (DSSAE) model with the objective of creating a variation-tolerant NT processor. The time-consuming method of modifying hyperparameters through trial and error is substituted with the marine predators optimization algorithm (MPO), which is utilised to modify the hyperparameters associated with the DSSAE model. To validate that the proposed EEHPT-DNN model has a higher degree of functionality, a full simulation study was conducted and the results were analysed from a variety of perspectives so that the enhanced performance could be evaluated. According to the results of the study comparing numerous DL models, the EEHPT-DNN model performed significantly better than the other models.
Keywords: Deep learning; hyperparameter tuning; artificial intelligence; near-threshold processor; embedded system
4. Abstractive Arabic Text Summarization Using Hyperparameter Tuned Denoising Deep Neural Network
Authors: Ibrahim M. Alwayle, Hala J. Alshahrani, Saud S. Alotaibi, Khaled M. Alalayah, Amira Sayed A. Aziz, Khadija M. Alaidarous, Ibrahim Abdulrab Ahmed, Manar Ahmed Hamza. Intelligent Automation & Soft Computing, 2023, No. 11, pp. 153-168.
This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique. The presented AATS-HTDDNN technique aims to generate summaries of Arabic text. In the presented AATS-HTDDNN technique, the DDNN model is utilized to generate the summary. This study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters relevant to the DDNN model, since they considerably affect the summarization efficiency. This phase shows the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison study outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Keywords: Text summarization; deep learning; denoising deep neural networks; hyperparameter tuning; Arabic language
5. PSTCNN: Explainable COVID-19 diagnosis using PSO-guided self-tuning CNN (Cited: 3)
Authors: Wei Wang, Yanrong Pei, Shui-Hua Wang, Juan Manuel Gorrz, Yu-Dong Zhang. BIOCELL (SCIE), 2023, No. 2, pp. 373-384.
Since 2019, the coronavirus disease-19 (COVID-19) has been spreading rapidly worldwide, posing an unignorable threat to the global economy and human health. It is a disease caused by severe acute respiratory syndrome coronavirus 2, a single-stranded RNA virus of the genus Betacoronavirus. This virus is highly infectious and relies on its angiotensin-converting enzyme 2 receptor to enter cells. With the increase in the number of confirmed COVID-19 diagnoses, the difficulty of diagnosis due to the lack of global healthcare resources becomes increasingly apparent. Deep learning-based computer-aided diagnosis models with high generalisability can effectively alleviate this pressure. Hyperparameter tuning is essential in training such models and significantly impacts their final performance and training speed. However, traditional hyperparameter tuning methods are usually time-consuming and unstable. To solve this issue, we introduce Particle Swarm Optimisation to build a PSO-guided Self-Tuning Convolution Neural Network (PSTCNN), allowing the model to tune hyperparameters automatically and thereby reducing human involvement. Also, the optimisation algorithm can select the combination of hyperparameters in a targeted manner, stably achieving a solution closer to the global optimum. Experimentally, the PSTCNN obtains excellent results, with a sensitivity of 93.65% ± 1.86%, a specificity of 94.32% ± 2.07%, a precision of 94.30% ± 2.04%, an accuracy of 93.99% ± 1.78%, an F1-score of 93.97% ± 1.78%, a Matthews Correlation Coefficient of 87.99% ± 3.56%, and a Fowlkes-Mallows Index of 93.97% ± 1.78%. Our experiments demonstrate that, compared to traditional methods, hyperparameter tuning of the model using an optimisation algorithm is faster and more effective.
Keywords: COVID-19; SARS-CoV-2; Particle swarm optimisation; Convolutional neural network; hyperparameter tuning
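A simplified, hand-rolled particle swarm optimisation loop in the spirit of the PSO-guided self-tuning described above. A small scikit-learn MLP on the digits data stands in for the paper's CNN so the sketch runs quickly; the swarm size, search bounds, and inertia/acceleration constants are assumptions, not the PSTCNN settings.

```python
# Simplified PSO over two hyperparameters (log10 learning rate, log10 L2 alpha).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def val_error(params):
    log_lr, log_alpha = params                      # search in log10 space
    clf = MLPClassifier(hidden_layer_sizes=(32,), learning_rate_init=10**log_lr,
                        alpha=10**log_alpha, max_iter=40, random_state=0)
    clf.fit(X_tr, y_tr)
    return 1.0 - clf.score(X_va, y_va)              # objective to minimise

rng = np.random.default_rng(0)
low, high = np.array([-4.0, -6.0]), np.array([-1.0, -1.0])   # bounds per dimension
n_particles, n_iters, w, c1, c2 = 6, 4, 0.7, 1.5, 1.5

pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([val_error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    err = np.array([val_error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("best (log10 lr, log10 alpha):", gbest, "val error:", pbest_err.min())
```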
6. Credit Card Fraud Detection Using Improved Deep Learning Models
Authors: Sumaya S. Sulaiman, Ibraheem Nadher, Sarab M. Hameed. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1049-1069.
Fraud of credit cards is a major issue for financial organizations and individuals. As fraudulent actions become more complex, demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including detecting credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning models with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, AutoEncoder (AE), Convolution Neural Network (CNN), and Long Short-Term Memory (LSTM), are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. The experiments conducted on a European credit card fraud dataset using different hyperparameters and three deep learning models demonstrate that the proposed models achieve a tradeoff between detection rate and precision, leading these models to be effective in accurately predicting credit card fraud. The results demonstrate that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These proposed models surpass those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
Keywords: Card fraud detection; hyperparameter tuning; deep learning; autoencoder; convolution neural network; long short-term memory; resampling
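A small manual hyperparameter sweep for an LSTM classifier in Keras, illustrating how units, learning rate, and batch size can be varied and compared as the abstract describes. Synthetic sequences stand in for the European credit-card data, and the grid itself is an assumption.

```python
# Sketch: manual hyperparameter sweep for a small LSTM binary classifier.
import itertools
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10, 3)).astype("float32")   # (samples, timesteps, features)
y = (X[:, :, 0].sum(axis=1) > 0).astype("float32")     # toy label rule
X_tr, X_va, y_tr, y_va = X[:1600], X[1600:], y[:1600], y[1600:]

def build(units, lr):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10, 3)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

results = {}
for units, lr, batch in itertools.product([16, 32], [1e-3, 1e-2], [32, 64]):
    model = build(units, lr)
    model.fit(X_tr, y_tr, epochs=3, batch_size=batch, verbose=0)
    _, acc = model.evaluate(X_va, y_va, verbose=0)
    results[(units, lr, batch)] = acc

best = max(results, key=results.get)
print("best (units, lr, batch):", best, "val accuracy:", results[best])
```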
7. An Optimized Approach to Deep Learning for Botnet Detection and Classification for Cybersecurity in Internet of Things Environment
Authors: Abdulrahman Alzahrani. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2331-2349.
The recent development of the Internet of Things (IoT) resulted in the growth of IoT-based DDoS attacks. The detection of botnets in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent botnets in interconnected devices. Anomaly detection models evaluate transmission patterns, network traffic, and device behaviour to detect deviations from usual activities. Machine learning (ML) techniques detect patterns signalling botnet activity, namely sudden traffic increases, unusual command and control patterns, or irregular device behaviour. In addition, intrusion detection systems (IDSs) and signature-based techniques are applied to recognize known malware signatures related to botnets. Various ML and deep learning (DL) techniques have been developed to detect botnet attacks in IoT systems. To overcome security issues in an IoT environment, this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification (GTODL-BADC) technique. The GTODL-BADC technique follows feature selection (FS) with optimal DL-based classification for accomplishing security in an IoT environment. For data preprocessing, the min-max data normalization approach is primarily used. The GTODL-BADC technique uses the GTO algorithm to select features and elect optimal feature subsets. Moreover, the multi-head attention-based long short-term memory (MHA-LSTM) technique was applied for botnet detection. Finally, the tree seed algorithm (TSA) was used to select the optimum hyperparameters for the MHA-LSTM method. The experimental validation of the GTODL-BADC technique was conducted on a benchmark dataset. The simulation results highlighted that the GTODL-BADC technique demonstrates promising performance in the botnet detection process.
Keywords: Botnet detection; internet of things; gorilla troops optimizer; hyperparameter tuning; intrusion detection system
8. Multiscale and Auto-Tuned Semi-Supervised Deep Subspace Clustering and Its Application in Brain Tumor Clustering
Authors: Zhenyu Qian, Yizhang Jiang, Zhou Hong, Lijun Huang, Fengda Li, Khin Wee Lai, Kaijian Xia. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4741-4762.
In this paper, we introduce a novel Multi-scale and Auto-tuned Semi-supervised Deep Subspace Clustering (MAS-DSC) algorithm, aimed at addressing the challenges of deep subspace clustering in high-dimensional real-world data, particularly in the field of medical imaging. Traditional deep subspace clustering algorithms, which are mostly unsupervised, are limited in their ability to effectively utilize the inherent prior knowledge in medical images. Our MAS-DSC algorithm incorporates a semi-supervised learning framework that uses a small amount of labeled data to guide the clustering process, thereby enhancing the discriminative power of the feature representations. Additionally, the multi-scale feature extraction mechanism is designed to adapt to the complexity of medical imaging data, resulting in more accurate clustering performance. To address the difficulty of hyperparameter selection in deep subspace clustering, this paper employs a Bayesian optimization algorithm for adaptive tuning of hyperparameters related to subspace clustering, prior knowledge constraints, and model loss weights. Extensive experiments on standard clustering datasets, including ORL, Coil20, and Coil100, validate the effectiveness of the MAS-DSC algorithm. The results show that with its multi-scale network structure and Bayesian hyperparameter optimization, MAS-DSC achieves excellent clustering results on these datasets. Furthermore, tests on a brain tumor dataset demonstrate the robustness of the algorithm and its ability to leverage prior knowledge for efficient feature extraction and enhanced clustering performance within a semi-supervised learning framework.
Keywords: Deep subspace clustering; multiscale network structure; automatic hyperparameter tuning; semi-supervised; medical image clustering
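A sketch of Bayesian hyperparameter optimisation of the kind MAS-DSC applies to its clustering and loss-weight hyperparameters, here using scikit-optimize's Gaussian-process search on a toy clustering task; the DBSCAN stand-in, the search space, and the data are assumptions for illustration only.

```python
# Sketch: Gaussian-process Bayesian optimisation of two clustering hyperparameters.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from skopt import gp_minimize
from skopt.space import Integer, Real

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=0)

def objective(params):
    eps, min_samples = params
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    if n_clusters < 2:                     # degenerate clustering: penalise
        return 1.0
    return -silhouette_score(X, labels)    # minimise negative silhouette

space = [Real(0.1, 3.0, name="eps"), Integer(3, 20, name="min_samples")]
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best eps, min_samples:", result.x, "silhouette:", -result.fun)
```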
9. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1669-1686.
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversational sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional encoder for representation of transformer; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
10. Parallel Inference for Real-Time Machine Learning Applications
Authors: Sultan Al Bayyat, Ammar Alomran, Mohsen Alshatti, Ahmed Almousa, Rayyan Almousa, Yasir Alguwaifli. Journal of Computer and Communications, 2024, No. 1, pp. 139-146.
Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, this project tuned a Random Forest classifier for fake news detection via randomized grid search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy slightly dropped from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
Keywords: Machine Learning Models; Computational Efficiency; Parallel Computing Systems; Random Forest Inference; hyperparameter tuning; Python Frameworks (TensorFlow, PyTorch, Scikit-Learn); High-Performance Computing
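A sketch of the sequential-versus-parallel comparison this entry describes: scikit-learn's RandomizedSearchCV over a Random Forest, run first with a single worker and then with n_jobs=-1 to use all CPU cores. Synthetic data stands in for the fake-news corpus, and the search space is illustrative.

```python
# Sketch: timing sequential vs. parallel randomized hyperparameter search.
import time
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)
param_dist = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(3, 20),
    "min_samples_split": randint(2, 10),
}

for n_jobs in (1, -1):                     # 1 = sequential, -1 = all cores
    search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                                param_dist, n_iter=20, cv=3,
                                n_jobs=n_jobs, random_state=0)
    start = time.perf_counter()
    search.fit(X, y)
    print(f"n_jobs={n_jobs}: {time.perf_counter() - start:.1f}s, "
          f"best CV accuracy={search.best_score_:.4f}")
```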
11. Grid Search for Predicting Coronary Heart Disease by Tuning Hyper-Parameters (Cited: 2)
Authors: S. Prabu, B. Thiyaneswaran, M. Sujatha, C. Nalini, Sujatha Rajkumar. Computer Systems Science & Engineering (SCIE, EI), 2022, No. 11, pp. 737-749.
Diagnosing cardiovascular disease has been one of the biggest medical difficulties in recent years. Coronary heart disease (CHD) is a kind of heart and blood vessel disease. Predicting this sort of cardiac illness leads to more precise decisions for cardiac disorders. Implementing Grid Search Optimization (GSO) machine training models is therefore a useful way to forecast the sickness as soon as possible. The state-of-the-art work is the tuning of the hyperparameters together with the selection of features by utilizing the model search to minimize the false-negative rate. Three models with a cross-validation approach perform the required task. Feature selection is based on the use of statistical and correlation matrices for multivariate analysis. For the Random Search and Grid Search models, extensive comparison findings are produced utilizing recall, F1 score, and precision measurements. The models are evaluated using these metrics and kappa statistics, which illustrate the comparability of the three models. The study focuses on optimizing feature selection and tweaking hyperparameters to improve model accuracy and the prediction of heart disease by examining the Framingham dataset using random forest classification. Tuning the hyperparameters in the grid search model thus decreases the error rate and achieves global optimization.
Keywords: Grid search; coronary heart disease (CHD); machine learning; feature selection; hyperparameter tuning
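A short sketch of grid search scored with Cohen's kappa, echoing the kappa statistics this entry reports; synthetic, imbalanced data stands in for the Framingham dataset and the parameter grid is illustrative.

```python
# Sketch: kappa-scored grid search over a Random Forest classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=3000, n_features=15, weights=[0.85], random_state=1)
grid = {"n_estimators": [100, 200], "max_depth": [4, 8, None], "max_features": ["sqrt", 0.5]}

search = GridSearchCV(RandomForestClassifier(random_state=1), grid,
                      scoring=make_scorer(cohen_kappa_score), cv=5, n_jobs=-1)
search.fit(X, y)
print("best params:", search.best_params_, "CV kappa:", round(search.best_score_, 3))
```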
12. Intelligent Deep Learning Enabled Human Activity Recognition for Improved Medical Services (Cited: 2)
Authors: E. Dhiravidachelvi, M. Suresh Kumar, L. D. Vijay Anand, D. Pritima, Seifedine Kadry, Byeong-Gwon Kang, Yunyoung Nam. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 2, pp. 961-977.
Human Activity Recognition (HAR) has been made simple in recent years, thanks to advancements in Artificial Intelligence (AI) techniques. These techniques are applied in several areas like security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. Further, it can also be employed in other applications such as gait diagnosis, observation of children's or adults' cognitive nature, stroke-patient hospital direction, and Epilepsy and Parkinson's disease examination. Recently developed Artificial Intelligence (AI) techniques, especially Deep Learning (DL) models, can be deployed to accomplish effective outcomes in the HAR process. With this motivation, the current research paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique for the healthcare environment. The proposed IHPTDL-HAR technique aims at recognizing human actions in the healthcare environment and helps patients in managing their healthcare service. In addition, the presented model makes use of a Hierarchical Clustering (HC)-based outlier detection technique to remove outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize the activities of users. Moreover, the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted on a benchmark dataset and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique is a superior performer compared to other recent techniques under different measures.
Keywords: Artificial intelligence; human activity recognition; deep learning; deep belief network; hyperparameter tuning; healthcare
13. Deep Learning with Natural Language Processing Enabled Sentimental Analysis on Sarcasm Classification (Cited: 1)
Authors: Abdul Rahaman Wahab Sait, Mohamad Khairi Ishak. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 3, pp. 2553-2567.
Sentiment analysis (SA) is the procedure of recognizing the emotions related to the data that exist in social networking. The existence of sarcasm in textual data is a major challenge to the efficiency of SA. Earlier works on sarcasm detection in text utilize lexical as well as pragmatic cues, namely interjections, punctuation, and sentiment shift, that are vital indicators of sarcasm. With the advent of deep learning, recent works leverage neural networks to learn lexical and contextual features, removing the need for handcrafted features. In this aspect, this study designs a deep learning with natural language processing enabled SA (DLNLP-SA) technique for sarcasm classification. The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data. Besides, the DLNLP-SA technique holds various sub-processes, namely preprocessing, feature vector conversion, and classification. Initially, the preprocessing is performed in diverse ways such as single character removal, multi-space removal, URL removal, stopword removal, and tokenization. Secondly, the transformation into feature vectors takes place using the N-gram feature vector technique. Finally, a mayfly optimization (MFO) with multi-head self-attention based gated recurrent unit (MHSA-GRU) model is employed for the detection and classification of sarcasm. To verify the enhanced outcomes of the DLNLP-SA model, a comprehensive experimental investigation was performed on the News Headlines Dataset from the Kaggle Repository, and the results signified its supremacy over the existing approaches.
Keywords: Sentiment analysis; sarcasm detection; deep learning; natural language processing; N-grams; hyperparameter tuning
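A sketch of the preprocessing and N-gram feature-vector steps listed in this abstract (URL removal, single-character removal, multi-space removal, stop-word removal, tokenization). The tiny corpus is a placeholder, and the vectorizer is only a stand-in for whatever feature pipeline feeds the paper's MHSA-GRU model.

```python
# Sketch: text cleaning followed by uni-/bi-gram feature extraction.
import re
from sklearn.feature_extraction.text import CountVectorizer

def preprocess(text):
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # URL removal
    text = re.sub(r"\b\w\b", " ", text)                  # single-character removal
    text = re.sub(r"\s+", " ", text).strip().lower()     # multi-space removal
    return text

corpus = [
    "Oh great, another Monday... I just LOVE waking up at 5 am http://example.com",
    "The team delivered the release on time and the demo went smoothly",
]
cleaned = [preprocess(t) for t in corpus]

# CountVectorizer handles tokenization and English stop-word removal,
# and builds the N-gram (here unigram + bigram) feature vectors.
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(cleaned)
print(X.shape, vectorizer.get_feature_names_out()[:10])
```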
14. Modeling of Optimal Deep Learning Based Flood Forecasting Model Using Twitter Data (Cited: 1)
Authors: G. Indra, N. Duraipandian. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 1455-1470.
A flood is a significant damaging natural calamity that causes loss of life and property. Earlier work on the construction of flood prediction models intended to reduce risks, suggest policies, reduce mortality, and limit property damage caused by floods. The massive amount of data generated by social media platforms such as Twitter opens the door to flood analysis. Because of the real-time nature of Twitter data, some government agencies and authorities have used it to track natural catastrophe events in order to build a more rapid rescue strategy. However, due to the shorter duration of Tweets, it is difficult to construct a perfect prediction model for determining floods. Machine learning (ML) and deep learning (DL) approaches can be used to statistically develop flood prediction models. At the same time, the vast amount of Tweets necessitates the use of a big data analytics (BDA) tool for flood prediction. In this regard, this work provides an optimal deep learning-based flood forecasting model with big data analytics (ODLFF-BDA) based on Twitter data. The suggested ODLFF-BDA technique intends to anticipate the existence of floods using tweets in a big data setting. The ODLFF-BDA technique comprises data pre-processing to convert the input tweets into a usable format. In addition, a Bidirectional Encoder Representations from Transformers (BERT) model is used to generate emotive contextual embedding from tweets. Furthermore, a gated recurrent unit (GRU) with a Multilayer Convolutional Neural Network (MLCNN) is used to extract local data and predict the flood. Finally, an Equilibrium Optimizer (EO) is used to fine-tune the hyperparameters of the GRU and MLCNN models in order to increase prediction performance. Memory usage is kept below 3.5 MB, which is lower than that of the other algorithms compared. The ODLFF-BDA technique's performance was validated using a benchmark Kaggle dataset, and the findings showed that it outperformed other recent approaches significantly.
Keywords: Big data analytics; predictive models; deep learning; flood prediction; twitter data; hyperparameter tuning
15. Electroencephalography (EEG) Based Neonatal Sleep Staging and Detection Using Various Classification Algorithms
Authors: Hafza Ayesha Siddiqa, Muhammad Irfan, Saadullah Farooq Abbasi, Wei Chen. Computers, Materials & Continua (SCIE, EI), 2023, No. 11, pp. 1759-1778.
Automatic sleep staging of neonates is essential for monitoring their brain development and the maturity of the nervous system. EEG-based neonatal sleep staging provides valuable information about an infant's growth and health, but is challenging due to the unique characteristics of EEG and the lack of standardized protocols. This study aims to develop and compare 18 machine learning models using the Automated Machine Learning (autoML) technique for accurate and reliable multi-channel EEG-based neonatal sleep-wake classification. The study investigates autoML feasibility without extensive manual selection of features or hyperparameter tuning. The data were obtained from neonates at a post-menstrual age of 37 ± 05 weeks. A total of 3525 30-s EEG segments from 19 infants are used to train and test the proposed models. Twelve time- and frequency-domain features are extracted from each channel. Each model receives the common features of nine channels as an input vector of size 108. Each model's performance was evaluated based on a variety of evaluation metrics. The maximum mean accuracy of 84.78% and kappa of 69.63% were obtained by the autoML-based Random Forest estimator, the highest accuracy for EEG-based sleep-wake classification to date. For the autoML-based AdaBoost Random Forest model, accuracy and kappa were 84.59% and 69.24%, respectively. The high performance achieved by the proposed autoML-based approach can facilitate early identification and treatment of sleep-related issues in neonates.
Keywords: AutoML; Random Forest; adaboost; EEG; neonates; PSG; hyperparameter tuning; sleep-wake classification
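A sketch of per-channel time- and frequency-domain feature extraction of the kind that yields the study's 108-value input vector (12 features x 9 channels). The specific feature list here is an assumption, and random noise stands in for real neonatal EEG.

```python
# Sketch: 12 features per channel x 9 channels -> one 108-value feature vector.
import numpy as np
from scipy import signal, stats
from scipy.integrate import trapezoid

fs, seconds, n_channels = 256, 30, 9
segment = np.random.default_rng(0).normal(size=(n_channels, fs * seconds))

def channel_features(x, fs):
    f, pxx = signal.welch(x, fs=fs, nperseg=fs * 2)      # power spectral density
    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return trapezoid(pxx[mask], f[mask])
    return [
        x.mean(), x.std(), stats.skew(x), stats.kurtosis(x),   # time domain
        np.sqrt(np.mean(x ** 2)), np.ptp(x),                   # RMS, peak-to-peak
        band_power(0.5, 4), band_power(4, 8),                  # delta, theta
        band_power(8, 13), band_power(13, 30),                 # alpha, beta
        band_power(30, 48), f[np.argmax(pxx)],                 # gamma, peak frequency
    ]

features = np.concatenate([channel_features(ch, fs) for ch in segment])
print(features.shape)   # (108,) -> one input vector for the sleep-wake classifier
```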
16. Data Mining with Comprehensive Oppositional Based Learning for Rainfall Prediction
Authors: Mohammad Alamgeer, Amal Al-Rasheed, Ahmad Alhindi, Manar Ahmed Hamza, Abdelwahed Motwakel, Mohamed I. Eldesouki. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 2725-2738.
The data mining process involves a number of steps, from data collection to visualization, to identify useful data from a massive data set. At the same time, the recent advances of machine learning (ML) and deep learning (DL) models can be utilized for effectual rainfall prediction. With this motivation, this article develops a novel comprehensive oppositional moth flame optimization with deep learning for rainfall prediction (COMFO-DLRP) technique. The proposed COMFO-DLRP model mainly intends to predict the rainfall and thereby determine the environmental changes. Primarily, data pre-processing and correlation matrix (CM) based feature selection processes are carried out. In addition, a deep belief network (DBN) model is applied for the effective prediction of rainfall data. Moreover, the COMFO algorithm was derived by integrating the concepts of comprehensive oppositional based learning (COBL) with the traditional MFO algorithm. Finally, the COMFO algorithm is employed for the optimal hyperparameter selection of the DBN model. For demonstrating the improved outcomes of the COMFO-DLRP approach, a sequence of simulations was carried out and the outcomes were assessed under distinct measures. The simulation outcomes highlighted the enhanced performance of the COMFO-DLRP method over the other techniques.
Keywords: Data mining; rainfall prediction; deep learning; correlation matrix; hyperparameter tuning; metaheuristics
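A minimal sketch of correlation-matrix (CM) based feature selection as used ahead of the DBN in this entry: features highly correlated with another feature are dropped. The 0.9 threshold and the synthetic weather-style frame are assumptions, not the paper's settings.

```python
# Sketch: drop one feature from every highly correlated pair.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "humidity": rng.normal(70, 10, 500),
    "pressure": rng.normal(1010, 5, 500),
    "temp_c": rng.normal(25, 4, 500),
})
df["temp_f"] = df["temp_c"] * 9 / 5 + 32 + rng.normal(0, 0.1, 500)   # near-duplicate feature

corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))    # upper triangle only
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
print("dropping:", to_drop)          # temp_f is redundant with temp_c
selected = df.drop(columns=to_drop)
```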
17. Tuning hyperparameters of doublet-detection methods for single-cell RNA sequencing data
Authors: Nan Miles Xi, Angelos Vasilopoulos. Quantitative Biology (CSCD), 2023, No. 3, pp. 297-305.
Background: The existence of doublets in single-cell RNA sequencing (scRNA-seq) data poses a great challenge in downstream data analysis. Computational doublet-detection methods have been developed to remove doublets from scRNA-seq data. Yet, the default hyperparameter settings of those methods may not provide optimal performance. Methods: We propose a strategy to tune hyperparameters for a cutting-edge doublet-detection method. We utilize a full factorial design to explore the relationship between hyperparameters and detection accuracy on 16 real scRNA-seq datasets. The optimal hyperparameters are obtained by a response surface model and convex optimization. Results: We show that the optimal hyperparameters provide top performance across scRNA-seq datasets under various biological conditions. Our tuning strategy can be applied to other computational doublet-detection methods. It also offers insights into hyperparameter tuning for broader computational methods in scRNA-seq data analysis. Conclusions: The hyperparameter configuration significantly impacts the performance of computational doublet-detection methods. Our study is the first attempt to systematically explore the optimal hyperparameters under various biological conditions and optimization objectives. Our study provides much-needed guidance for hyperparameter tuning in computational doublet-detection methods.
Keywords: scRNA-seq; doublet detection; hyperparameter tuning; experimental design; response surface model
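A sketch of the tuning strategy this entry outlines: evaluate a full factorial design over hyperparameters, fit a quadratic response surface, and optimise the fitted surface within the design bounds. The response function here is synthetic; in the paper it would be doublet-detection accuracy measured on real scRNA-seq datasets.

```python
# Sketch: full factorial design -> quadratic response surface -> bounded optimisation.
import itertools
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def detection_accuracy(h1, h2):            # synthetic stand-in response
    return 0.9 - 0.3 * (h1 - 0.4) ** 2 - 0.2 * (h2 - 0.6) ** 2

levels = [0.1, 0.3, 0.5, 0.7, 0.9]
design = np.array(list(itertools.product(levels, levels)))       # full factorial design
response = np.array([detection_accuracy(h1, h2) for h1, h2 in design])

surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(design, response)                                    # quadratic response surface

# Maximise the fitted surface (minimise its negative) inside the design box.
res = minimize(lambda x: -surface.predict(x.reshape(1, -1))[0],
               x0=[0.5, 0.5], bounds=[(0.1, 0.9), (0.1, 0.9)])
print("optimal hyperparameters:", res.x, "predicted accuracy:", -res.fun)
```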
18. Optimal Deep Learning Driven Intrusion Detection in SDN-Enabled IoT Environment
Authors: Mohammed Maray, Haya Mesfer Alshahrani, Khalid A. Alissa, Najm Alotaibi, Abdulbaset Gaddah, Ali Meree, Mahmoud Othman, Manar Ahmed Hamza. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 6587-6604.
In recent years, wireless networks have been widely used in different domains. This phenomenon has increased the number of Internet of Things (IoT) devices and their applications. Though IoT has numerous advantages, the commonly-used IoT devices are exposed to cyber-attacks periodically. This scenario necessitates real-time automated detection and mitigation of different types of attacks in high-traffic networks. The Software-Defined Networking (SDN) technique and the Machine Learning (ML)-based intrusion detection technique are effective tools that can quickly respond to different types of attacks in IoT networks. Intrusion Detection System (IDS) models can be employed to secure the SDN-enabled IoT environment in this scenario. The current study devises a Harmony Search algorithm-based Feature Selection with Optimal Convolutional Autoencoder (HSAFS-OCAE) for intrusion detection in the SDN-enabled IoT environment. The presented HSAFS-OCAE method follows a three-stage process in which the Harmony Search Algorithm-based FS (HSAFS) technique is exploited first for feature selection. Next, the CAE method is leveraged to recognize and classify intrusions in the SDN-enabled IoT environment. Finally, the Artificial Fish Swarm Algorithm (AFSA) is used to fine-tune the hyperparameters. This process improves the outcomes of the intrusion detection process executed by the CAE algorithm and shows the work's novelty. The proposed HSAFS-OCAE technique was experimentally validated under different aspects, and the comparative analysis results established the supremacy of the proposed model.
Keywords: Internet of things; SDN controller; feature selection; hyperparameter tuning; autoencoder
19. Hybrid Metaheuristics with Deep Learning Enabled Automated Deception Detection and Classification of Facial Expressions
Authors: Haya Alaskar. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5433-5449.
Automatic deception recognition has received considerable attention from the machine learning community due to recent research on its vast application to social media, interviews, law enforcement, and the military. Video analysis-based techniques for automated deception detection have received increasing interest. This study develops a new self-adaptive population-based firefly algorithm with a deep learning-enabled automated deception detection (SAPFF-DLADD) model for analyzing facial cues. Initially, the input video is separated into a set of video frames. Then, the SAPFF-DLADD model applies a MobileNet-based feature extractor to produce a useful set of features. The long short-term memory (LSTM) model is exploited for deception detection and classification. In the final stage, the SAPFF technique is applied to optimally alter the hyperparameter values of the LSTM model, showing the novelty of the work. The experimental validation of the SAPFF-DLADD model is performed using the Miami University Deception Detection Database (MU3D), a database comprised of two classes, namely, truth and deception. An extensive comparative analysis reported a better performance of the SAPFF-DLADD model compared to recent approaches, with a higher accuracy of 99%.
Keywords: Deception detection; facial cues; deep learning; computer vision; hyperparameter tuning
20. Performance Evaluation of Deep Dense Layer Neural Network for Diabetes Prediction
Authors: Niharika Gupta, Baijnath Kaushik, Mohammad Khalid Imam Rahmani, Saima Anwar Lashari. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 347-366.
Diabetes is one of the fastest-growing human diseases worldwide and poses a significant threat to longevity. Early prediction of diabetes is crucial to taking precautionary steps to avoid or delay its onset. In this study, we proposed a Deep Dense Layer Neural Network (DDLNN) for diabetes prediction using a dataset with 768 instances and nine variables. We also applied a combination of classical machine learning (ML) algorithms and ensemble learning algorithms for the effective prediction of the disease. The classical ML algorithms used were Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), and Naïve Bayes (NB). We also constructed ensemble models such as bagging (Random Forest) and boosting like AdaBoost and Extreme Gradient Boosting (XGBoost) to evaluate the performance of prediction models. The proposed DDLNN model and ensemble learning models were trained and tested using hyperparameter tuning and K-Fold cross-validation to determine the best parameters for predicting the disease. The combined ML models used majority voting to select the best outcomes among the models. The efficacy of the proposed and other models was evaluated for effective diabetes prediction. The investigation concluded that the proposed model, after hyperparameter tuning, outperformed other learning models with an accuracy of 84.42%, a precision of 85.12%, a recall rate of 65.40%, and a specificity of 94.11%.
Keywords: Diabetes prediction; hyperparameter tuning; k-fold validation; machine learning; neural network
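A sketch of the majority-voting combination of the classical classifiers named in this entry, evaluated with k-fold cross-validation; synthetic data with the same shape (768 instances, 8 predictors) stands in for the diabetes dataset, and the individual model settings are assumptions.

```python
# Sketch: hard-voting ensemble of SVM, LR, DT, KNN, NB with 10-fold CV.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                           weights=[0.65], random_state=7)

voter = VotingClassifier(estimators=[
    ("svm", make_pipeline(StandardScaler(), SVC())),
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("dt", DecisionTreeClassifier(max_depth=5)),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("nb", GaussianNB()),
], voting="hard")                      # hard voting = majority vote

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
scores = cross_val_score(voter, X, y, cv=cv, scoring="accuracy")
print("10-fold accuracy: %.4f +/- %.4f" % (scores.mean(), scores.std()))
```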