Funding: supported by the National Science Foundation of China (42107183).
Abstract: Driven piles are used in many geological environments as a practical and convenient structural component. Hence, determining the drivability of piles is of great importance in complex geotechnical applications. Conventional methods of predicting pile drivability often rely on simplified physical models or empirical formulas, which may lack accuracy or applicability in complex geological conditions. Therefore, this study presents a practical machine learning approach, namely a Random Forest (RF) optimized by Bayesian Optimization (BO) and Particle Swarm Optimization (PSO), which not only enhances prediction accuracy but also adapts better to varying geological environments when predicting the drivability parameters of piles (i.e., maximum compressive stress, maximum tensile stress, and blows per foot). In addition, support vector regression, extreme gradient boosting, k-nearest neighbors, and decision tree models were applied for comparison. To train and test these models, 3258 of the 4072 collected datasets with 17 model inputs were randomly selected for training, and the remaining 814 were used for testing. The results of these models were compared and evaluated using two performance indices: the root mean square error (RMSE) and the coefficient of determination (R²). The results indicate that the optimized RF model achieved lower RMSE values than the other prediction models for the three parameters (0.044, 0.438, and 0.146, respectively) and higher R² values (0.966, 0.884, and 0.977, respectively). In addition, the sensitivity and uncertainty of the optimized RF model were analyzed using Sobol sensitivity analysis and Monte Carlo (MC) simulation. It can be concluded that the optimized RF model can be used to predict pile performance and may provide a useful reference for similar engineering conditions.
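The PSO component of such a tuning pipeline can be illustrated with a minimal sketch. The code below is a generic particle swarm minimizer applied to a toy objective (the sphere function) rather than to actual RF hyperparameters; the swarm size, inertia, and acceleration coefficients are illustrative assumptions, not values from the study.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box using a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T   # bounds: (low, high) per dimension
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                      # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, float(pbest_f.min())

# Toy objective standing in for the RF cross-validation error.
best_x, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)), [(-5, 5), (-5, 5)])
```

In a real pipeline, the lambda would instead train an RF with the candidate hyperparameters and return its cross-validation RMSE.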
Funding: supported by the National Natural Science Foundation of China (51971042, 51901028), the Chongqing Academician Special Fund (cstc2020yszxjcyjX0001), and the China Scholarship Council (CSC); the authors thank the Norwegian University of Science and Technology (NTNU) for financial and technical support.
Abstract: Magnesium (Mg), being the lightest structural metal, holds immense potential for widespread applications in various fields. The development of high-performance and cost-effective Mg alloys is crucial to further advancing their commercial utilization. With the rapid advancement of machine learning (ML) technology in recent years, the "data-driven" approach to alloy design has provided new perspectives and opportunities for enhancing the performance of Mg alloys. This paper introduces a novel regression-based Bayesian optimization active learning model (RBOALM) for the development of high-performance Mg-Mn-based wrought alloys. RBOALM employs active learning to automatically explore optimal alloy compositions and process parameters within predefined ranges, facilitating the discovery of superior alloy combinations. The model further integrates pre-established regression models as surrogate functions in Bayesian optimization, significantly enhancing the precision of the design process. Leveraging RBOALM, several new high-performance alloys have been successfully designed and prepared. Notably, after mechanical property testing of the designed alloys, the Mg-2.1Zn-2.0Mn-0.5Sn-0.1Ca alloy demonstrates exceptional mechanical properties, including an ultimate tensile strength of 406 MPa, a yield strength of 287 MPa, and a 23% fracture elongation. Furthermore, the Mg-2.7Mn-0.5Al-0.1Ca alloy exhibits an ultimate tensile strength of 211 MPa coupled with a remarkable 41% fracture elongation.
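The Bayesian-optimization step at the heart of such an active-learning loop can be sketched as follows: fit a surrogate to the experiments run so far, score candidate designs with an expected-improvement acquisition, and run the most promising candidate next. This is a generic 1-D sketch; the Gaussian-process surrogate, the toy objective, and the search range are illustrative assumptions, not the paper's actual regression models or alloy composition space.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best_f):
    """EI for minimization; larger means a more promising candidate."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy objective standing in for a measured alloy property to be optimized.
f = lambda x: (x - 0.6) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 4).reshape(-1, 1)        # initial "experiments"
y = f(X).ravel()
grid = np.linspace(0, 1, 200).reshape(-1, 1)   # candidate designs

for _ in range(15):                            # active-learning iterations
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-4,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, [x_next]])               # run the suggested experiment
    y = np.append(y, f(x_next[0]))

best_x = float(X[np.argmin(y)])
```

Each loop iteration corresponds to one round of "design, prepare, test" in the active-learning workflow, with the surrogate replacing most physical trials.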
Funding: the authors express their appreciation to the National Key Research and Development Project "Key Scientific Issues of Revolutionary Technology" (2019YFA0708300), the Strategic Cooperation Technology Projects of CNPC and CUPB (ZLZX2020-03), the Distinguished Young Foundation of the National Natural Science Foundation of China (52125401), and the Science Foundation of China University of Petroleum, Beijing (2462022SZBH002).
Abstract: Many scholars have focused on applying machine learning models to bottom hole pressure (BHP) prediction. However, the complex and uncertain conditions in deep wells make it difficult for traditional intelligent models to capture the spatial and temporal correlations of measurement-while-drilling (MWD) data. In this work, we develop a novel hybrid neural network that integrates a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) to predict BHP fluctuations more accurately. The CNN structure is used to analyze spatial local dependency patterns, and the GRU structure is used to discover depth-variation trends in MWD data. To further improve prediction accuracy, we explore two GRU-based structures, skip-GRU and attention-GRU, which can capture longer-term periodic correlations in drilling data. The different model structures, tuned by the Bayesian optimization (BO) algorithm, are then compared and analyzed. Results indicate that the hybrid models extract spatial-temporal information effectively and predict more accurately than random forests, extreme gradient boosting, a back-propagation neural network, CNN, and GRU. The CNN-attention-GRU model with the BO algorithm shows great superiority in prediction accuracy and robustness owing to its hybrid network structure and attention mechanism, achieving the lowest mean absolute percentage error of 0.025%. This study provides a reference for extracting spatial and temporal characteristics and guidance for managed pressure drilling in complex formations.
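A single GRU cell update, the recurrent building block used above, can be written out in a few lines of NumPy: two sigmoid gates decide how much of the previous hidden state to reset and to keep, and the new state interpolates between the old state and a tanh candidate. The weight shapes and random initialization below are illustrative, not the paper's trained network.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, params):
    """One GRU update for input x and previous hidden state h."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # interpolate old/new state

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                                  # e.g., 4 MWD channels
params = [rng.standard_normal(s) * 0.1
          for s in [(n_hid, n_in), (n_hid, n_hid), (n_hid,)] * 3]
h = np.zeros(n_hid)
for x in rng.standard_normal((10, n_in)):           # a short depth sequence
    h = gru_step(x, h, params)
```

Because the new state is a convex combination of the old state and a tanh output, the hidden activations stay bounded, which is one reason GRUs train stably on long MWD sequences.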
Funding: financial support from the High-end Foreign Expert Introduction Program (No. G20190022002), the Chongqing Construction Science and Technology Plan Project (2019-0045), and the Chongqing Engineering Research Center of Disaster Prevention & Control for Banks and Structures in the Three Gorges Reservoir Area (Nos. SXAPGC18ZD01 and SXAPGC18YB03).
Abstract: Accurate assessment of undrained shear strength (USS) for soft sensitive clays is a great concern in geotechnical engineering practice. This study applies novel data-driven extreme gradient boosting (XGBoost) and random forest (RF) ensemble learning methods to capture the relationships between USS and various basic soil parameters. Based on soil data sets from the TC304 database, a general approach is developed to predict the USS of soft clays using the two machine learning methods above, where five feature variables are adopted: preconsolidation stress (PS), vertical effective stress (VES), liquid limit (LL), plastic limit (PL), and natural water content (W). To reduce dependence on rules of thumb and inefficient brute-force search, the Bayesian optimization method is applied to determine appropriate model hyperparameters for both XGBoost and RF. The developed models are comprehensively compared with three other machine learning methods and two transformation models with respect to predictive accuracy and robustness under 5-fold cross-validation (CV). It is shown that the XGBoost-based and RF-based methods outperform these approaches. In addition, the XGBoost-based model provides feature importance ranks, which makes it a promising tool for predicting geotechnical parameters and enhances the interpretability of the model.
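A minimal version of the RF branch of this workflow, scored under 5-fold CV and then queried for feature importance ranks, looks like the sketch below. The synthetic data and the five stand-in columns (for PS, VES, LL, PL, W) are illustrative only, and the hyperparameters are defaults rather than Bayesian-optimized values.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 5))                   # stand-ins for PS, VES, LL, PL, W
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)  # toy "USS"

rf = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="r2")    # 5-fold CV
rf.fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]       # importance ranks
```

The same `feature_importances_` mechanism is what makes the ensemble interpretable: here the first column dominates `y` by construction, so it should top the ranking.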
Funding: National Natural Science Foundation of China (Grant No. 61877045).
Abstract: The dendritic cell algorithm (DCA) is an excellent prototype for developing machine learning inspired by the function of the powerful natural immune system. Its many parameters increase complexity and have drawn plenty of criticism of the signal fusion procedure, and its loss function is ambiguous owing to this complexity. To reduce the uncertainty, several researchers have simplified the algorithm, introduced gradient descent to optimize parameters, or used search methods to find the optimal parameter combination. However, these approaches are either time-consuming or ill-suited to non-convex loss functions. To overcome these problems, this study casts parameter optimization as a black-box optimization problem that requires no information about the loss function. It hybridizes Bayesian optimization Hyperband (BOHB) with the DCA to propose a novel DCA version, BHDCA, for parameter optimization in the signal fusion process. BHDCA uses the Bayesian optimization (BO) component of BOHB to find promising parameter configurations and the Hyperband component to allocate a suitable budget to each potential configuration. Experimental results show that the proposed algorithm has significant advantages over other DCA expansion algorithms in terms of signal fusion.
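The Hyperband half of BOHB rests on successive halving: give many configurations a small budget, keep the best 1/η fraction, and repeat with an η-times larger budget. A self-contained sketch, with a toy scoring function standing in for real DCA parameter evaluations (η and the budgets are illustrative assumptions):

```python
import random

def successive_halving(configs, evaluate, min_budget=1, max_budget=27, eta=3):
    """Repeatedly evaluate, keep the top 1/eta, and multiply the budget by eta."""
    budget = min_budget
    while budget <= max_budget and len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget), reverse=True)
        configs = scored[: max(1, len(scored) // eta)]   # survivors
        budget *= eta
    return configs[0]

# Toy stand-in: score rises with budget and favors configurations near 0.7.
def evaluate(config, budget):
    return -(config - 0.7) ** 2 * (1 + 1.0 / budget)

random.seed(0)
candidates = [random.random() for _ in range(27)]   # e.g., BO-proposed configs
best = successive_halving(candidates, evaluate)
```

In BOHB the `candidates` list would come from the BO model rather than uniform sampling, so cheap early rounds prune configurations that BO proposed but that perform poorly.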
Funding: this research was fully supported by Universiti Teknologi PETRONAS under the Yayasan Universiti Teknologi PETRONAS (YUTP) Fundamental Research Grant Scheme (015LC0-311).
Abstract: Diabetes mellitus is a long-term condition characterized by hyperglycemia that can lead to many complications. Given rising morbidity in recent years, the world's diabetic patients will exceed 642 million by 2040, implying that one in ten people will be diabetic. This startling figure requires immediate attention from industry and academia to promote innovation in diabetes risk prediction and save lives. Owing to its rapid development, deep learning (DL) has been used to predict numerous diseases. However, DL methods still suffer from limited prediction performance due to hyperparameter selection and parameter optimization, so the selection of hyperparameters is critical for improving classification performance. This study presents a Convolutional Neural Network (CNN), an architecture that has achieved remarkable results in many medical domains, in which the Bayesian optimization algorithm (BOA) is employed for hyperparameter selection and parameter optimization. Two issues were investigated and solved during the experiments. The first is dataset class imbalance, addressed with the Synthetic Minority Oversampling Technique (SMOTE). The second is the model's poor performance, addressed with the Bayesian optimization algorithm. The findings indicate that the Bayesian-based CNN model surpasses the state-of-the-art models in the literature, with an accuracy of 89.36%, an F1-score of 0.886, and a Matthews Correlation Coefficient (MCC) of 0.886.
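The class-imbalance fix mentioned above can be illustrated with a SMOTE-style oversampler: each synthetic minority sample is an interpolation between a minority point and one of its nearest minority neighbors. This is a simplified sketch of the idea, not the `imbalanced-learn` implementation; the toy data and k = 3 neighbors are assumptions.

```python
import numpy as np

def smote_like(X_min, n_new, k=3, seed=0):
    """Create n_new synthetic points by interpolating minority neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1 : k + 1]       # k nearest, excluding self
        j = rng.choice(nbrs)
        lam = rng.random()                    # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(1)
X_majority = rng.normal(0.0, 1.0, size=(90, 2))   # e.g., non-diabetic cases
X_minority = rng.normal(3.0, 0.5, size=(10, 2))   # e.g., diabetic cases
X_synth = smote_like(X_minority, n_new=80)        # balance 10 vs. 90
```

Because every synthetic point lies on a segment between two real minority points, the oversampler enlarges the minority class without simply duplicating records.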
Funding: the authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project (grant number RGP.2/132/43).
Abstract: At present, Bayesian networks (BNs) are widely used for representing uncertain knowledge in many disciplines, including biology, computer science, risk analysis, service quality analysis, and business. However, as the nodes and edges increase, structure learning becomes more difficult and algorithms become inefficient. To solve this problem, heuristic optimization algorithms are used, which tend to find a near-optimal answer rather than an exact one; particle swarm optimization (PSO) is one of them. PSO is a swarm intelligence-based algorithm inspired by how flocks of birds search for food, and it is widely employed because it is easy to code, converges quickly, and can be parallelized easily. We use a recently proposed version of PSO called generalized particle swarm optimization (GEPSO) to learn Bayesian network structure. We construct an initial directed acyclic graph (DAG) using the max-min parents and children (MMPC) algorithm and cross relative average entropy. This DAG is used to create a population for the GEPSO optimization procedure. Moreover, we propose a velocity update procedure to increase the efficiency of the algorithmic search. Experimental results show that as dataset complexity increases, our algorithm, Bayesian network generalized particle swarm optimization (BN-GEPSO), outperforms the PSO algorithm in terms of the Bayesian information criterion (BIC) score.
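The BIC score used to rank candidate structures decomposes per node: the log-likelihood of the node's data given its parents, minus a penalty of (log N)/2 times the number of free parameters. A toy sketch for binary variables, comparing a structure with the edge x → y against one without it (the data-generating process is an assumption for illustration):

```python
import numpy as np

def bic_node(child, parents):
    """BIC contribution of one binary node given zero or more binary parents."""
    n = len(child)
    if parents:
        P = np.column_stack(parents)
        configs = [tuple(row) for row in P]
    else:
        configs = [()] * n
    ll = 0.0
    for cfg in set(configs):                       # one CPT row per config
        mask = np.array([c == cfg for c in configs])
        counts = np.bincount(child[mask], minlength=2)
        probs = counts / counts.sum()
        ll += sum(k * np.log(p) for k, p in zip(counts, probs) if k > 0)
    n_params = 2 ** len(parents) * (2 - 1)         # free params per CPT row
    return ll - 0.5 * np.log(n) * n_params

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 2000)
y = np.where(rng.random(2000) < 0.9, x, 1 - x)     # y copies x 90% of the time
bic_with_edge = bic_node(y, [x])                   # structure x -> y
bic_no_edge = bic_node(y, [])                      # y independent of x
```

A structure learner such as BN-GEPSO sums these per-node terms over the whole DAG and searches for the structure with the highest total; here the dependent structure should win despite its larger penalty.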
Abstract: Breast cancer seriously affects many women; if detected at an early stage, it may be cured. This paper proposes a novel classification model based on improved machine learning algorithms for diagnosing breast cancer at its initial stage, combining feature selection and Bayesian optimization to build improved machine learning models. Support Vector Machine, K-Nearest Neighbor, Naive Bayes, Ensemble Learning, and Decision Tree approaches were used as machine learning algorithms. All experiments were run on two datasets: the Wisconsin Breast Cancer Dataset (WBCD) and the Mammographic Breast Cancer Dataset (MBCD). Relief, the Least Absolute Shrinkage and Selection Operator (LASSO), and Sequential Forward Selection were used to determine the most relevant features, and the machine learning models were then optimized with Bayesian optimization to obtain optimal hyperparameter values. Experimental results showed that the unified feature selection-hyperparameter optimization method improved classification performance for all machine learning algorithms. Among the various experiments, LASSO-BO-SVM showed the highest accuracy, precision, recall, and F1-score on the two datasets (97.95%, 98.28%, 98.28%, and 98.28% for MBCD and 98.95%, 97.17%, 100%, and 98.56% for WBCD), outperforming recent studies.
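The feature-selection-plus-tuning pipeline can be sketched with scikit-learn, here using LASSO-based selection feeding an SVM. The synthetic data are illustrative, and a small grid search stands in for the paper's Bayesian optimization of the hyperparameters.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for a breast cancer dataset: 20 features, 5 informative.
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(Lasso(alpha=0.01))),   # LASSO feature selection
    ("svm", SVC()),
])
# Grid search as a stand-in for Bayesian hyperparameter optimization.
search = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                             "svm__gamma": ["scale", 0.01]}, cv=5)
search.fit(X_tr, y_tr)
test_acc = search.score(X_te, y_te)
```

Wrapping selection and tuning in one pipeline keeps both steps inside the cross-validation folds, which avoids leaking test information into the feature-selection stage.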
Funding: the authors extend their appreciation to the King Salman Centre for Disability Research for funding this work through Research Group no. KSRG-2022-017.
Abstract: Sign language recognition is one of the most effective solutions enabling disabled people to communicate with others, helping them convey information through sign language without difficulty. The latest developments in computer vision and image processing techniques can be accurately utilized for the sign recognition process. American Sign Language (ASL) detection is challenging because of high intra-class similarity and complexity. This article develops a new Bayesian Optimization with Deep Learning-Driven Hand Gesture Recognition Based Sign Language Communication (BODL-HGRSLC) technique for disabled people. The BODL-HGRSLC technique aims to recognize hand gestures for disabled people's communication and integrates concepts from computer vision (CV) and DL models. A deep convolutional neural network-based residual network (ResNet) model is applied for feature extraction, Bayesian optimization is used for hyperparameter tuning, and a bidirectional gated recurrent unit (BiGRU) model is exploited for the hand gesture recognition (HGR) procedure. A wide range of experiments was conducted to demonstrate the enhanced performance of the BODL-HGRSLC model, and a comprehensive comparison study reports its improvements over other DL models, with a maximum accuracy of 99.75%.
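The bidirectional part of the BiGRU stage can be illustrated with a minimal NumPy sketch: run a recurrent update over the frame sequence forward and backward, then concatenate the two time-aligned hidden states at each step. For brevity this uses a simple tanh recurrence rather than full GRU gating; all shapes and weights are illustrative assumptions.

```python
import numpy as np

def rnn_pass(xs, W, U, b):
    """Simple tanh recurrence over a sequence; returns all hidden states."""
    h = np.zeros(U.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
        states.append(h)
    return np.array(states)

def bidirectional(xs, fwd_params, bwd_params):
    """Concatenate forward states with time-aligned backward states."""
    h_fwd = rnn_pass(xs, *fwd_params)
    h_bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]   # re-align to forward time
    return np.concatenate([h_fwd, h_bwd], axis=1)

rng = np.random.default_rng(0)
n_in, n_hid, T = 16, 8, 12                 # e.g., 12 ResNet feature frames
make = lambda: (rng.standard_normal((n_hid, n_in)) * 0.1,
                rng.standard_normal((n_hid, n_hid)) * 0.1,
                np.zeros(n_hid))
features = rng.standard_normal((T, n_in))  # stand-in for extracted features
H = bidirectional(features, make(), make())   # shape (T, 2 * n_hid)
```

Each output row therefore summarizes both the frames before and after that time step, which is why bidirectional recurrence suits gesture sequences whose meaning depends on surrounding motion.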