Journal Articles
77 articles found
1. Landslide susceptibility mapping (LSM) based on different boosting and hyperparameter optimization algorithms: A case of Wanzhou District, China
Authors: Deliang Sun, Jing Wang, Haijia Wen, YueKai Ding, Changlin Mi. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 8, pp. 3221-3232 (12 pages).
Boosting algorithms have been widely utilized in the development of landslide susceptibility mapping (LSM) studies. However, these algorithms possess distinct computational strategies and hyperparameters, making it challenging to propose an ideal LSM model. To investigate the impact of different boosting algorithms and hyperparameter optimization algorithms on LSM, this study constructed a geospatial database comprising 12 conditioning factors, such as elevation, stratum, and annual average rainfall. The XGBoost (XGB), LightGBM (LGBM), and CatBoost (CB) algorithms were employed to construct the LSM model. Furthermore, the Bayesian optimization (BO), particle swarm optimization (PSO), and Hyperband optimization (HO) algorithms were applied to optimize the LSM model. The boosting algorithms exhibited varying performances, with CB demonstrating the highest precision, followed by LGBM, and XGB showing poorer precision. Additionally, the hyperparameter optimization algorithms displayed different performances, with HO outperforming PSO and BO showing poorer performance. The HO-CB model achieved the highest precision, boasting an accuracy of 0.764, an F1-score of 0.777, an area under the curve (AUC) value of 0.837 for the training set, and an AUC value of 0.863 for the test set. The model was interpreted using SHapley Additive exPlanations (SHAP), revealing that slope, curvature, topographic wetness index (TWI), degree of relief, and elevation significantly influenced landslides in the study area. This study offers a scientific reference for LSM and disaster prevention research. It examines the utilization of various boosting algorithms and hyperparameter optimization algorithms in Wanzhou District and proposes the HO-CB-SHAP framework as an effective approach to accurately forecast landslide disasters and interpret LSM models. However, limitations exist concerning the generalizability of the model and the data processing, which require further exploration in subsequent studies.
Keywords: Landslide susceptibility; hyperparameter optimization; boosting algorithms; SHapley Additive exPlanations (SHAP)
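The workflow this entry describes combines a gradient-boosting classifier, a hyperparameter optimizer, and SHAP interpretation. Below is a minimal sketch of that combination, assuming synthetic placeholder data for the 12 conditioning factors and using scikit-learn's RandomizedSearchCV as a simple stand-in for the Hyperband, Bayesian, and PSO optimizers compared in the paper.

```python
# Hedged sketch: CatBoost on a synthetic stand-in for the 12 conditioning factors,
# hyperparameters tuned with RandomizedSearchCV (a stand-in for the HO/BO/PSO optimizers
# compared in the paper), and SHAP used to rank factor importance.
import numpy as np
import shap
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Placeholder data: 12 numeric conditioning factors, binary landslide / non-landslide label
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

param_distributions = {
    "depth": [4, 6, 8],
    "learning_rate": [0.01, 0.05, 0.1],
    "iterations": [200, 400],
}
search = RandomizedSearchCV(
    CatBoostClassifier(verbose=0, random_seed=0),
    param_distributions,
    n_iter=8,
    scoring="roc_auc",
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_, "test AUC:", search.score(X_test, y_test))

# SHAP interpretation of the tuned model: mean absolute SHAP value per conditioning factor
explainer = shap.TreeExplainer(search.best_estimator_)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per factor:", np.abs(shap_values).mean(axis=0))
```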
2. Improving Prediction Efficiency of Machine Learning Models for Cardiovascular Disease in IoST-Based Systems through Hyperparameter Optimization
Authors: Tajim Md. Niamat Ullah Akhund, Waleed M. Al-Nuwaiser. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 3485-3506 (22 pages).
This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-Score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision Trees and Random Forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities in cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
Keywords: Internet of Sensing Things (IoST); machine learning; hyperparameter optimization; cardiovascular disease prediction; execution time analysis; performance analysis; Wilcoxon signed-rank test
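A minimal sketch of the evaluation pattern described here: cross-validated scores for a default and a grid-tuned SVM are compared with the Wilcoxon signed-rank test named in the keywords. The dataset and parameter grid are illustrative assumptions, not the IoST data used in the paper.

```python
# Hedged sketch: compare a default SVM with a grid-tuned SVM using cross-validated
# F1 scores, then check the paired difference with the Wilcoxon signed-rank test.
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=42)

default_scores = cross_val_score(SVC(), X, y, cv=10, scoring="f1")

grid = GridSearchCV(
    SVC(),
    {"C": [0.5, 5, 50], "gamma": [0.1, 0.01, 0.001]},
    scoring="f1",
    cv=5,
)
grid.fit(X, y)
tuned_scores = cross_val_score(grid.best_estimator_, X, y, cv=10, scoring="f1")

stat, p_value = wilcoxon(tuned_scores, default_scores)
print("default mean F1:", default_scores.mean())
print("tuned mean F1:  ", tuned_scores.mean(), "with", grid.best_params_)
print("Wilcoxon signed-rank p-value:", p_value)
```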
3. Particle Swarm Optimization-Based Hyperparameters Tuning of Machine Learning Models for Big COVID-19 Data Analysis
Authors: Hend S. Salem, Mohamed A. Mead, Ghada S. El-Taweel. Journal of Computer and Communications, 2024, No. 3, pp. 160-183 (24 pages).
Analyzing big data, especially medical data, helps to provide good health care to patients and face the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by changing their parameters. Hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to effectively search the hyperparameter space and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that can provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective. When compared to baseline models, the optimized machine learning models performed better and produced better results.
Keywords: Big COVID-19 data; machine learning; hyperparameter optimization; particle swarm optimization; computational intelligence
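To illustrate the core mechanism, the sketch below is a minimal hand-written PSO that searches two random-forest hyperparameters against cross-validated accuracy. The swarm size, inertia and acceleration coefficients, and synthetic data are illustrative assumptions rather than values from the study.

```python
# Hedged sketch: a minimal particle swarm searching (n_estimators, max_depth) of a
# random forest to maximize cross-validated accuracy on placeholder data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=15, random_state=0)

bounds = np.array([[10, 300], [2, 20]])  # [n_estimators, max_depth]

def fitness(position):
    n_estimators, max_depth = int(position[0]), int(position[1])
    model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

n_particles, n_iters, w, c1, c2 = 8, 10, 0.7, 1.5, 1.5
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best hyperparameters (n_estimators, max_depth):", gbest.astype(int))
print("best CV accuracy:", pbest_val.max())
```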
4. Scale adaptive fitness evaluation-based particle swarm optimisation for hyperparameter and architecture optimisation in neural networks and deep learning
Authors: Ye-Qun Wang, Jian-Yu Li, Chun-Hua Chen, Jun Zhang, Zhi-Hui Zhan. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 3, pp. 849-862 (14 pages).
Research into automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: both the hyperparameter and architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focusses on solving the hyperparameter and architecture optimisation problem for the NN and proposes a novel lightweight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and therefore can find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch different evaluation methods to better balance the search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category (i.e., CIFAR10) and the 100-category (i.e., CIFAR100). The experimental results show that SAFE-PSO is very effective and efficient, as it can not only find a promising NN automatically but also find a better NN than compared algorithms at the same computational cost.
Keywords: deep learning; evolutionary computation; hyperparameter and architecture optimisation; neural networks; particle swarm optimisation; scale-adaptive fitness evaluation
5. Energy Efficient Hyperparameter Tuned Deep Neural Network to Improve Accuracy of Near-Threshold Processor
Authors: K. Chanthirasekaran, Raghu Gundaala. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 471-489 (19 pages).
When it comes to decreasing margins and increasing energy efficiency in near-threshold and sub-threshold processors, timing error resilience may be viewed as a potentially lucrative alternative to examine. On the other hand, the currently employed approaches have certain restrictions, including high levels of design complexity, severe time constraints on error consolidation and propagation, and uncontaminated architectural registers (ARs). The design of near-threshold circuits, often known as NT circuits, is becoming the approach of choice for the construction of energy-efficient digital circuits. As a result of the exponentially decreased driving current, there was a reduction in performance, which was one of the downsides. Numerous studies have advised the use of NT techniques in chip multiprocessors as a means to preserve outstanding energy efficiency while minimising performance loss. Over the past several years, there has been a clear growth in interest in the development of artificial intelligence (AI) hardware with low energy consumption. This has resulted in both large corporations and start-ups producing items that compete on the basis of varying degrees of performance and energy use. This technology's ultimate goal was to provide levels of efficiency and performance that could not be achieved with graphics processing units or general-purpose CPUs. To achieve this objective, the technology was created to integrate several processing units into a single chip. To accomplish this purpose, the hardware was designed with a number of unique properties. In this study, an Energy Efficient Hyperparameter Tuned Deep Neural Network (EEHPT-DNN) model for a Variation-Tolerant Near-Threshold Processor was developed. In order to improve the energy efficiency of artificial intelligence (AI), the EEHPT-DNN model employs several AI techniques. The notion focuses mostly on the repercussions of embedded technologies positioned at the network's edge. The presented model employs a deep stacked sparse autoencoder (DSSAE) model with the objective of creating a variation-tolerant NT processor. The time-consuming method of modifying hyperparameters through trial and error is substituted with the marine predators optimization algorithm (MPO). This method is utilised to modify the hyperparameters associated with the DSSAE model. To validate that the proposed EEHPT-DNN model has a higher degree of functionality, a full simulation study is conducted, and the results are analysed from a variety of perspectives. This was completed so that the enhanced performance could be evaluated and analysed. According to the results of the study that compared numerous DL models, the EEHPT-DNN model performed significantly better than the other models.
Keywords: Deep learning; hyperparameter tuning; artificial intelligence; near-threshold processor; embedded system
6. Hyperparameter Optimization for Capsule Network Based Modified Hybrid Rice Optimization Algorithm
Authors: Zhiwei Ye, Ziqian Fang, Zhina Song, Haigang Sui, Chunyan Yan, Wen Zhou, Mingwei Wang. Intelligent Automation & Soft Computing (SCIE), 2023, No. 8, pp. 2019-2035 (17 pages).
Hyperparameters have a vital impact on the performance of most machine learning algorithms, and it is a challenge for traditional methods to manually configure the hyperparameters of the capsule network to obtain high performance. Some swarm intelligence or evolutionary computation algorithms have been effectively employed to seek optimal hyperparameters as a combinatorial optimization problem. However, these algorithms are prone to getting trapped in local optimal solutions as random search strategies are adopted. The inspiration for the hybrid rice optimization (HRO) algorithm comes from the breeding technology of three-line hybrid rice in China, which has the advantages of easy implementation, few parameters, and fast convergence. In this paper, genetic search is combined with the hybrid rice optimization algorithm (GHRO) and employed to obtain the optimal hyperparameters of the capsule network automatically; that is, a probability search technique and a hybridization strategy are incorporated into the primary HRO. Thirteen benchmark functions are used to evaluate the performance of GHRO. Furthermore, the MNIST, Chest X-Ray (pneumonia), and Chest X-Ray (COVID-19 & pneumonia) datasets are also utilized to evaluate the capsule network learnt by GHRO. The experimental results show that GHRO is an effective method for optimizing the hyperparameters of the capsule network, which is able to boost the performance of the capsule network on image classification.
Keywords: hyperparameter optimization; hybrid rice optimization algorithm; genetic algorithm; capsule network; image classification
7. Neural network hyperparameter optimization based on improved particle swarm optimization
Authors: XIE Xiaoyan, HE Wanqi, ZHU Yun, YU Jinhao. High Technology Letters (EI, CAS), 2023, No. 4, pp. 427-433 (7 pages).
Hyperparameter optimization is considered one of the greatest challenges in deep learning and largely determines the precision of a model. Recent proposals tried to solve this issue through particle swarm optimization (PSO), but its inherent defects may result in being trapped in local optima and in convergence difficulty. In this paper, genetic operations are introduced into the PSO, which makes it easier to locate the best hyperparameter combination scheme for a specific network architecture. Specifically, to prevent the troubles caused by different data types and value scopes, a mixed coding method is used to ensure the effectiveness of particles. Moreover, crossover and mutation operations are added to the process of particle updating to increase the diversity of particles and avoid local optima in searching. Verified with three benchmark datasets, MNIST, Fashion-MNIST, and CIFAR10, it is demonstrated that the proposed scheme can achieve accuracies of 99.58%, 93.39%, and 78.96%, respectively, improving the accuracy by about 0.1%, 0.5%, and 2%, respectively, compared with that of the PSO.
Keywords: hyperparameter optimization; particle swarm optimization (PSO) algorithm; neural network
8. Abstractive Arabic Text Summarization Using Hyperparameter Tuned Denoising Deep Neural Network
Authors: Ibrahim M. Alwayle, Hala J. Alshahrani, Saud S. Alotaibi, Khaled M. Alalayah, Amira Sayed A. Aziz, Khadija M. Alaidarous, Ibrahim Abdulrab Ahmed, Manar Ahmed Hamza. Intelligent Automation & Soft Computing, 2023, No. 11, pp. 153-168 (16 pages).
This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique. The presented AATS-HTDDNN technique aims to generate summaries of Arabic text. In the presented AATS-HTDDNN technique, the DDNN model is utilized to generate the summary. This study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters relevant to the DDNN model since it considerably affects the summarization efficiency. This phase shows the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison study outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Keywords: Text summarization; deep learning; denoising deep neural networks; hyperparameter tuning; Arabic language
9. Hyperparameter Tuning Based Machine Learning Classifier for Breast Cancer Prediction
Authors: Mohammed Mijanur Rahman, Asikur Rahman, Swarnali Akter, Sumiea Akter Pinky. Journal of Computer and Communications, 2023, No. 4, pp. 149-165 (17 pages).
Currently, the second most devastating form of cancer in people, particularly in women, is Breast Cancer (BC). In the healthcare industry, Machine Learning (ML) is commonly employed in fatal disease prediction. Because breast cancer has a favourable prognosis at an early stage, a model is created using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The overarching aim of this model is to compare the effectiveness of five well-known ML classifiers, including Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB), with the conventional method. To counterbalance the effect against conventional methods, the main tactic we utilized was hyperparameter tuning using the grid search method, which improved accuracy, precision, recall, F1 score, and finally the AUC and ROC curve. With hyperparameter tuning, the accuracy increased from 94.15% to 98.83%, whereas the accuracy of the conventional method increased from 93.56% to 97.08%. According to this investigation, KNN outperformed all other classifiers in terms of accuracy, achieving a score of 98.83%. In conclusion, our study shows that KNN works well with the hyperparameter tuning method. These analyses show that this prediction approach is useful in prognosticating women with breast cancer, with viable performance and more accurate findings compared to the conventional approach.
Keywords: Machine learning; breast cancer prediction; grid search; hyperparameter tuning
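A minimal sketch of the grid-search setup described in this entry, using scikit-learn's bundled copy of the WDBC data and a KNN pipeline; the grid values and train/test split are illustrative choices, not the paper's exact configuration.

```python
# Hedged sketch: grid search over KNN hyperparameters on the Wisconsin Diagnostic
# Breast Cancer data shipped with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1, stratify=y)

pipeline = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {
    "kneighborsclassifier__n_neighbors": list(range(1, 16)),
    "kneighborsclassifier__weights": ["uniform", "distance"],
    "kneighborsclassifier__p": [1, 2],  # Manhattan vs. Euclidean distance
}
grid = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
grid.fit(X_train, y_train)

print("best hyperparameters:", grid.best_params_)
print("cross-validated accuracy:", grid.best_score_)
print("held-out test accuracy:", grid.score(X_test, y_test))
```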
10. Hyperparameter Optimization for Machine Learning Models Based on Bayesian Optimization (cited 24 times)
Authors: Jia Wu, Xiu-Yun Chen, Hao Zhang, Li-Dong Xiong, Hang Lei, Si-Hao Deng. Journal of Electronic Science and Technology (CAS, CSCD), 2019, No. 1, pp. 26-40 (15 pages).
Hyperparameters are important for machine learning algorithms since they directly control the behaviors of training algorithms and have a significant effect on the performance of machine learning models. Several techniques have been developed and successfully applied for certain application domains. However, this work demands professional knowledge and expert experience, and sometimes it has to resort to brute-force search. Therefore, if an efficient hyperparameter optimization algorithm can be developed to optimize any given machine learning method, it will greatly improve the efficiency of machine learning. In this paper, we consider building the relationship between the performance of the machine learning models and their hyperparameters by Gaussian processes. In this way, the hyperparameter tuning problem can be abstracted as an optimization problem, and Bayesian optimization is used to solve the problem. Bayesian optimization is based on the Bayesian theorem. It sets a prior over the optimization function and gathers the information from the previous samples to update the posterior of the optimization function. A utility function selects the next sample point to maximize the optimization function. Several experiments were conducted on standard test datasets. Experiment results show that the proposed method can find the best hyperparameters for the widely used machine learning models, such as the random forest algorithm and the neural networks, and even multi-grained cascade forest, under the consideration of time cost.
Keywords: Bayesian optimization; Gaussian process; hyperparameter optimization; machine learning
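The abstract outlines the GP-based Bayesian optimization loop: a Gaussian-process prior over the objective, a posterior updated from previous samples, and a utility (acquisition) function that picks the next point. A compact sketch of that loop is shown below using the scikit-optimize package, which is an assumed tool rather than one named by the paper, with a random forest as the model being tuned.

```python
# Hedged sketch: GP-based Bayesian optimization of random-forest hyperparameters.
# gp_minimize builds a Gaussian-process surrogate of the objective and uses an
# acquisition function to choose the next hyperparameter configuration to evaluate.
from skopt import gp_minimize
from skopt.space import Integer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search_space = [Integer(10, 300, name="n_estimators"), Integer(2, 20, name="max_depth")]

def objective(params):
    n_estimators, max_depth = params
    model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    # gp_minimize minimizes, so return the negative accuracy
    return -cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

result = gp_minimize(objective, search_space, n_calls=20, random_state=0)
print("best (n_estimators, max_depth):", result.x)
print("best CV accuracy:", -result.fun)
```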
11. Hyperparameter Tuned Deep Learning Enabled Cyberbullying Classification in Social Media (cited 1 time)
Authors: Mesfer Al Duhayyim, Heba G. Mohamed, Saud S. Alotaibi, Hany Mahgoub, Abdullah Mohamed, Abdelwahed Motwakel, Abu Sarwar Zamani, Mohamed I. Eldesouki. Computers, Materials & Continua (SCIE, EI), 2022, No. 12, pp. 5011-5024 (14 pages).
Cyberbullying (CB) is a challenging issue in social media, and it becomes important to effectively identify the occurrence of CB. The recently developed deep learning (DL) models pave the way to design CB classifier models with maximum performance. At the same time, an optimal hyperparameter tuning process plays a vital role in enhancing the overall results. This study introduces a Teacher Learning Genetic Optimization with Deep Learning Enabled Cyberbullying Classification (TLGODL-CBC) model in social media. The proposed TLGODL-CBC model intends to identify the existence and non-existence of CB in the social media context. Initially, the input data is cleaned and pre-processed to make it compatible for further processing. Then, an independent recurrent autoencoder (IRAE) model is utilized for the recognition and classification of CB. Finally, the TLGO algorithm is used to optimally adjust the parameters related to the IRAE model, which shows the novelty of the work. To assure the improved outcomes of the TLGODL-CBC approach, a wide range of simulations is executed and the outcomes are investigated under several aspects. The simulation outcomes confirm the improvements of the TLGODL-CBC model over recent approaches.
Keywords: Social media; deep learning; cyberbullying; cybersecurity; hyperparameter optimization
12. Hybrid XGBoost model with hyperparameter tuning for prediction of liver disease with better accuracy
Authors: Surjeet Dalal, Edeh Michael Onyema, Amit Malik. World Journal of Gastroenterology (SCIE, CAS), 2022, No. 46, pp. 6551-6563 (13 pages).
BACKGROUND: Liver disease indicates any pathology that can harm or destroy the liver or prevent it from normal functioning. The global community has recently witnessed an increase in the mortality rate due to liver disease. This could be attributed to many factors, among which are human habits, awareness issues, poor healthcare, and late detection. To curb the growing threats from liver disease, early detection is critical to help reduce the risks and improve treatment outcome. Emerging technologies such as machine learning, as shown in this study, could be deployed to assist in enhancing its prediction and treatment.
AIM: To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning, with a view to assisting in early detection, diagnosis, and reduction of risks and mortality associated with the disease.
METHODS: The dataset used in this study consisted of 416 people with liver problems and 167 with no such history. The data were collected from the state of Andhra Pradesh, India, through https://www.kaggle.com/datasets/uciml/indian-liver-patientrecords. The population was divided into two sets depending on the disease state of the patient. This binary information was recorded in the attribute "is_patient".
RESULTS: The results indicated that the chi-square automated interaction detection and classification and regression trees models achieved accuracy levels of 71.36% and 73.24%, respectively, which was much better than the conventional method. The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent it from developing into cirrhosis (scarring) and to enhance the survival of patients. The study showed the potential of machine learning in health care, especially as it concerns disease prediction and monitoring.
CONCLUSION: This study contributed to the knowledge of machine learning application to health and to the efforts toward combating the problem of liver disease. However, relevant authorities have to invest more into machine learning research and other health technologies to maximize their potential.
Keywords: Liver infection; machine learning; chi-square automated interaction detection; classification and regression trees; decision tree; XGBoost; hyperparameter tuning
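As a hedged illustration of an XGBoost classifier with hyperparameter tuning for a binary liver-disease-style label, the sketch below uses synthetic data sized like the 416/167 split mentioned in the abstract and a randomized search; the parameter ranges are illustrative choices, not the hybrid scheme proposed in the paper.

```python
# Hedged sketch: XGBoost with randomized hyperparameter search on placeholder data
# shaped like the Indian liver patient records (583 samples, ~71% positive class).
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=583, n_features=10, weights=[0.71], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth": [3, 4, 6, 8],
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
}
search = RandomizedSearchCV(
    XGBClassifier(random_state=0),
    param_distributions,
    n_iter=20,
    scoring="accuracy",
    cv=5,
    random_state=0,
)
search.fit(X_train, y_train)
print("best hyperparameters:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```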
13. Hyperparameter on-line learning of stochastic resonance based threshold networks
Authors: Weijin Li, Yuhao Ren, Fabing Duan. Chinese Physics B (SCIE, EI, CAS, CSCD), 2022, No. 8, pp. 289-295 (7 pages).
Aiming at training the feed-forward threshold neural network consisting of nondifferentiable activation functions, the approach of noise injection forms a stochastic resonance based threshold network that can be optimized by various gradient-based optimizers. The introduction of injected noise extends the noise level into the parameter space of the designed threshold network, but leads to a highly non-convex optimization landscape of the loss function. Thus, the hyperparameter on-line learning procedure with respect to network weights and noise levels becomes challenging. It is shown that the Adam optimizer, as an adaptive variant of stochastic gradient descent, manifests its superior learning ability in training the stochastic resonance based threshold network effectively. Experimental results demonstrate the significant improvement of performance of the designed threshold network trained by the Adam optimizer for function approximation and image classification.
Keywords: noise injection; adaptive stochastic resonance; threshold neural network; hyperparameter learning
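One way to see why injected noise makes a hard-threshold network trainable: for Gaussian noise with level sigma, the ensemble-averaged output of a Heaviside threshold equals the Gaussian CDF Phi(z/sigma), which is differentiable in both the weights and sigma. The sketch below uses that analytic smoothing so Adam can learn the noise level on-line alongside the weights; the toy task and layer sizes are illustrative assumptions, not the paper's experiments.

```python
# Hedged sketch (not the paper's exact model): a layer of hard-threshold units whose
# injected Gaussian noise is averaged analytically, E[H(z + sigma*eps)] = Phi(z/sigma),
# so both the weights and the noise level sigma are trainable with Adam.
import math
import torch
import torch.nn as nn

class NoisyThresholdLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # log-parameterize the noise level so sigma stays positive during training
        self.log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        z = self.linear(x)
        sigma = torch.exp(self.log_sigma)
        # Gaussian CDF of z / sigma, i.e. the noise-averaged threshold response
        return 0.5 * (1.0 + torch.erf(z / (sigma * math.sqrt(2.0))))

model = nn.Sequential(NoisyThresholdLayer(2, 16), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Toy XOR-like data just to exercise the joint weight / noise-level training loop
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print("final loss:", loss.item(), "learned sigma:", torch.exp(model[0].log_sigma).item())
```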
14. A benchmark-based method for evaluating hyperparameter optimization techniques of neural networks for surface water quality prediction
Authors: Xuan Wang, Yan Dong, Jing Yang, Zhipeng Liu, Jinsuo Lu. Frontiers of Environmental Science & Engineering (SCIE, EI, CSCD), 2024, No. 5, pp. 13-27 (15 pages).
Neural networks (NNs) have been used extensively in surface water prediction tasks due to computing algorithm improvements and data accumulation. An essential step in developing an NN is the hyperparameter selection. In practice, it is common to manually determine hyperparameters in the studies of NNs in water resources tasks. This may result in considerable randomness and require significant computation time; therefore, hyperparameter optimization (HPO) is essential. This study adopted five representatives of the HPO techniques in the surface water quality prediction tasks, including the grid sampling (GS), random search (RS), genetic algorithm (GA), Bayesian optimization (BO) based on the Gaussian process (GP), and the tree Parzen estimator (TPE). For the evaluation of these techniques, this study proposed a method: first, the optimal hyperparameter value sets achieved by GS were regarded as the benchmark; then, the other HPO techniques were evaluated and compared with the benchmark in convergence, optimization orientation, and consistency of the optimized values. The results indicated that the TPE-based BO algorithm was recommended because it yielded stable convergence, reasonable optimization orientation, and the highest consistency rates with the benchmark values. The optimization consistency rates via TPE for the hyperparameters hidden layers, hidden dimension, learning rate, and batch size were 86.7%, 73.3%, 73.3%, and 80.0%, respectively. Unlike the evaluation of HPO techniques directly based on the prediction performance of the optimized NN in a single HPO test, the proposed benchmark-based HPO evaluation approach is feasible and robust.
Keywords: Neural networks; hyperparameter optimization; surface water quality prediction; Bayesian optimization; genetic algorithm
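A minimal sketch of the TPE-based Bayesian optimization this study recommends, run here with the Optuna library (an assumed tool, not named in the abstract) over the same four hyperparameters it evaluates: hidden layers, hidden dimension, learning rate, and batch size; the regression data and trial budget are placeholders.

```python
# Hedged sketch: TPE-based hyperparameter optimization of a small scikit-learn MLP
# over the four hyperparameters discussed in the abstract, using Optuna's TPESampler.
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=0)

def objective(trial):
    n_layers = trial.suggest_int("hidden_layers", 1, 3)
    hidden_dim = trial.suggest_int("hidden_dim", 8, 128)
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    model = MLPRegressor(
        hidden_layer_sizes=(hidden_dim,) * n_layers,
        learning_rate_init=learning_rate,
        batch_size=batch_size,
        max_iter=300,
        random_state=0,
    )
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=20)
print("best hyperparameters:", study.best_params)
print("best CV R^2:", study.best_value)
```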
15. The posterior selection method for hyperparameters in regularized least squares method
Authors: Yanxin Zhang, Jing Chen, Yawen Mao, Quanmin Zhu. Control Theory and Technology (EI, CSCD), 2024, No. 2, pp. 184-194 (11 pages).
The selection of hyperparameters in regularized least squares plays an important role in large-scale system identification. The traditional methods for selecting hyperparameters are based on experience or the marginal likelihood maximization method, which are inaccurate or computationally expensive. In this paper, two posterior methods are proposed to select hyperparameters based on different prior knowledge (constraints), which can obtain the optimal hyperparameters using optimization theory. Moreover, we also give the theoretical optimal constraints and verify their effectiveness. Numerical simulation shows that the hyperparameters and parameter vector estimate obtained by the proposed methods are the optimal ones.
Keywords: Regularization method; hyperparameter; system identification; least squares algorithm
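For context, the sketch below shows the regularized least-squares estimate whose hyperparameter the paper studies, with lambda chosen by a plain hold-out grid as a generic baseline; it does not implement the posterior, constraint-based selection methods the paper proposes.

```python
# Hedged sketch of the setting only: the regularized least-squares estimate
# theta(lambda) = (Phi^T Phi + lambda * I)^{-1} Phi^T y, with lambda picked here by a
# simple hold-out grid search rather than the paper's posterior selection method.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
Phi = rng.normal(size=(n, d))            # regressor matrix
theta_true = rng.normal(size=d)
y = Phi @ theta_true + 0.5 * rng.normal(size=n)

Phi_train, Phi_val = Phi[:150], Phi[150:]
y_train, y_val = y[:150], y[150:]

def ridge_estimate(Phi, y, lam):
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

lambdas = np.logspace(-4, 2, 25)
val_errors = [np.mean((Phi_val @ ridge_estimate(Phi_train, y_train, lam) - y_val) ** 2)
              for lam in lambdas]

best_lam = lambdas[int(np.argmin(val_errors))]
print("selected lambda:", best_lam)
print("parameter estimate:", ridge_estimate(Phi, y, best_lam))
```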
16. Hyperparameter optimization for cardiovascular disease data-driven prognostic system
Authors: Jayson Saputra, Cindy Lawrencya, Jecky Mitra Saini, Suharjito Suharjito. Visual Computing for Industry, Biomedicine, and Art (EI), 2023, No. 1, pp. 218-244 (27 pages).
Prediction and diagnosis of cardiovascular diseases (CVDs), based, among other things, on medical examinations and patient symptoms, are the biggest challenges in medicine. About 17.9 million people die from CVDs annually, accounting for 31% of all deaths worldwide. With a timely prognosis and thorough consideration of the patient's medical history and lifestyle, it is possible to predict CVDs and take preventive measures to eliminate or control this life-threatening disease. In this study, we used various patient datasets from a major hospital in the United States as prognostic factors for CVD. The data were obtained by monitoring a total of 918 adult patients aged 28-77 years. We present a data mining modeling approach to analyze the performance, classification accuracy, and number of clusters on cardiovascular disease prognostic datasets in unsupervised machine learning (ML) using the Orange data mining software. Various techniques are then used to classify the model parameters, such as k-nearest neighbors, support vector machine, random forest, artificial neural network (ANN), naïve Bayes, logistic regression, stochastic gradient descent (SGD), and AdaBoost. To determine the number of clusters, various unsupervised ML clustering methods were used, such as k-means, hierarchical, and density-based spatial clustering of applications with noise clustering. The results showed that the best model performance and classification accuracy were obtained with SGD and ANN, both of which had a high score of 0.900 on the cardiovascular disease prognostic datasets. Based on the results of most clustering methods, such as k-means and hierarchical clustering, the cardiovascular disease prognostic datasets can be divided into two clusters. The prognostic accuracy of CVD depends on the accuracy of the proposed model in determining the diagnostic model. The more accurate the model, the better it can predict which patients are at risk for CVD.
Keywords: Cardiovascular disease; data-driven analytics; data mining; hyperparameter optimization; Orange data mining software; prognostic system; unsupervised machine learning
17. A hybrid-model optimization algorithm based on the Gaussian process and particle swarm optimization for mixed-variable CNN hyperparameter automatic search
Authors: Han YAN, Chongquan ZHONG, Yuhu WU, Liyong ZHANG, Wei LU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, No. 11, pp. 1557-1573 (17 pages).
Convolutional neural networks (CNNs) have been developed quickly in many real-world fields. However, CNN performance depends heavily on its hyperparameters, while finding suitable hyperparameters for CNNs working in application fields is challenging for three reasons: (1) the problem of mixed-variable encoding for different types of hyperparameters in CNNs, (2) expensive computational costs in evaluating candidate hyperparameter configurations, and (3) the problem of ensuring convergence rates and model performance during hyperparameter search. To overcome these problems and challenges, a hybrid-model optimization algorithm is proposed in this paper to search suitable hyperparameter configurations automatically based on the Gaussian process and particle swarm optimization (GPPSO) algorithm. First, a new encoding method is designed to efficiently deal with the CNN hyperparameter mixed-variable problem. Second, a hybrid-surrogate-assisted model is proposed to reduce the high cost of evaluating candidate hyperparameter configurations. Third, a novel activation function is suggested to improve the model performance and ensure the convergence rate. Intensive experiments are performed on image-classification benchmark datasets to demonstrate the superior performance of GPPSO over state-of-the-art methods. Moreover, a case study on metal fracture diagnosis is carried out to evaluate the GPPSO algorithm performance in practical applications. Experimental results demonstrate the effectiveness and efficiency of GPPSO, achieving accuracy of 95.26% and 76.36% through only 0.04 and 1.70 GPU days on the CIFAR-10 and CIFAR-100 datasets, respectively.
Keywords: Convolutional neural network; Gaussian process; hybrid model; hyperparameter optimization; mixed-variable; particle swarm optimization
18. Tuning hyperparameters of doublet-detection methods for single-cell RNA sequencing data
Authors: Nan Miles Xi, Angelos Vasilopoulos. Quantitative Biology (CSCD), 2023, No. 3, pp. 297-305 (9 pages).
Background: The existence of doublets in single-cell RNA sequencing (scRNA-seq) data poses a great challenge in downstream data analysis. Computational doublet-detection methods have been developed to remove doublets from scRNA-seq data. Yet, the default hyperparameter settings of those methods may not provide optimal performance.
Methods: We propose a strategy to tune hyperparameters for a cutting-edge doublet-detection method. We utilize a full factorial design to explore the relationship between hyperparameters and detection accuracy on 16 real scRNA-seq datasets. The optimal hyperparameters are obtained by a response surface model and convex optimization.
Results: We show that the optimal hyperparameters provide top performance across scRNA-seq datasets under various biological conditions. Our tuning strategy can be applied to other computational doublet-detection methods. It also offers insights into hyperparameter tuning for broader computational methods in scRNA-seq data analysis.
Conclusions: The hyperparameter configuration significantly impacts the performance of computational doublet-detection methods. Our study is the first attempt to systematically explore the optimal hyperparameters under various biological conditions and optimization objectives. Our study provides much-needed guidance for hyperparameter tuning in computational doublet-detection methods.
Keywords: scRNA-seq; doublet detection; hyperparameter tuning; experimental design; response surface model
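The tuning workflow described here (full factorial design, response surface model, optimization of the fitted surface) can be sketched generically as below; the two hyperparameters and the synthetic "accuracy" function are placeholders, not a real doublet-detection method.

```python
# Hedged sketch: evaluate a placeholder accuracy function over a full factorial grid of
# two hyperparameters, fit a quadratic response surface, and take the surface's best
# point as the tuned configuration (a simple stand-in for the paper's convex optimization).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Full factorial design over two hyperparameters (illustrative levels)
h1_levels = np.linspace(0.1, 0.9, 5)
h2_levels = np.linspace(5, 50, 5)
H1, H2 = np.meshgrid(h1_levels, h2_levels)
design = np.column_stack([H1.ravel(), H2.ravel()])

# Placeholder "detection accuracy" with an interior optimum plus noise
rng = np.random.default_rng(0)
accuracy = 0.9 - 0.5 * (design[:, 0] - 0.4) ** 2 - 0.0002 * (design[:, 1] - 20) ** 2
accuracy += rng.normal(scale=0.01, size=len(design))

# Second-order response surface model fitted to the factorial results
poly = PolynomialFeatures(degree=2)
surface = LinearRegression().fit(poly.fit_transform(design), accuracy)

# Pick the configuration that maximizes the fitted surface over a fine grid
fine = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(0.1, 0.9, 100), np.linspace(5, 50, 100))])
pred = surface.predict(poly.transform(fine))
print("tuned hyperparameters from the response surface:", fine[int(np.argmax(pred))])
```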
19. An Efficient Modelling of Oversampling with Optimal Deep Learning Enabled Anomaly Detection in Streaming Data
Authors: R. Rajakumar, S. Sathiya Devi. China Communications (SCIE, CSCD), 2024, No. 5, pp. 249-260 (12 pages).
Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. The recent developments of deep learning (DL) models are found helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data is unbalanced, the support vector machine (SVM)-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs bidirectional long short-term memory (BiLSTM) for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. For ensuring the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
Keywords: anomaly detection; deep learning; hyperparameter optimization; oversampling; SMOTE; streaming data
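A minimal sketch of the pipeline shape this abstract describes: SVM-SMOTE oversampling, records windowed into short sequences, a BiLSTM classifier, and the RMSProp optimizer. The synthetic data, window length, and layer sizes are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch: SVM-SMOTE oversampling of an imbalanced "stream", records reshaped into
# short windows, and a bidirectional LSTM classifier trained with RMSProp.
import numpy as np
from imblearn.over_sampling import SVMSMOTE
from sklearn.datasets import make_classification
from tensorflow import keras
from tensorflow.keras import layers

# Imbalanced tabular stand-in: 20 features per record, ~5% anomalies
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95], random_state=0)

# Oversample the minority (anomaly) class with SVM-SMOTE
X_res, y_res = SVMSMOTE(random_state=0).fit_resample(X, y)

# Reshape each record into a short sequence (timesteps x features) for the BiLSTM
timesteps, n_features = 4, 5
X_seq = X_res.reshape(-1, timesteps, n_features)

model = keras.Sequential([
    keras.Input(shape=(timesteps, n_features)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_res, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X_seq, y_res, verbose=0))
```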
20. Credit Card Fraud Detection Using Improved Deep Learning Models
Authors: Sumaya S. Sulaiman, Ibraheem Nadher, Sarab M. Hameed. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1049-1069 (21 pages).
Fraud of credit cards is a major issue for financial organizations and individuals. As fraudulent actions become more complex, a demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including detecting credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning models with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, AutoEncoder (AE), Convolution Neural Network (CNN), and Long Short-Term Memory (LSTM), are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. The experiments conducted on a European credit card fraud dataset using different hyperparameters and three deep learning models demonstrate that the proposed models achieve a tradeoff between detection rate and precision, leading these models to be effective in accurately predicting credit card fraud. The results demonstrate that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These proposed models have surpassed those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
Keywords: Card fraud detection; hyperparameter tuning; deep learning; autoencoder; convolution neural network; long short-term memory; resampling