Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a pre-trained CNN model on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
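As an illustration of the transfer-learning step described above, the following sketch fine-tunes a generic ImageNet-pre-trained CNN for a small set of chest X-ray classes. It is not the authors' EA-designed architecture; ResNet-18, the three-class label set, and the frozen-backbone strategy are assumptions made for the example.

```python
# Hedged sketch of the transfer-learning step: ResNet-18 stands in for the
# EA-designed CNN, and the three-class label set is an assumption.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed classes: normal, pneumonia, COVID-19

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():      # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new task-specific head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on the new head only (backbone stays frozen)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```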
With the rapid development of mobile Internet, spatial crowdsourcing has become more and more popular. Spatial crowdsourcing consists of many different types of applications, such as spatial crowd-sensing services. In terms of spatial crowd-sensing, it collects and analyzes traffic sensing data from clients like vehicles and traffic lights to construct intelligent traffic prediction models. Besides collecting sensing data, spatial crowdsourcing also includes spatial delivery services like DiDi and Uber. Appropriate task assignment and worker selection dominate the service quality for spatial crowdsourcing applications. Previous research conducted task assignments via traditional matching approaches or using simple network models. However, advanced mining methods are lacking to explore the relationship between workers, task publishers, and the spatio-temporal attributes in tasks. Therefore, in this paper, we propose a Deep Double Dueling Spatial-temporal Q Network (D3SQN) to adaptively learn the spatial-temporal relationship between tasks, task publishers, and workers in a dynamic environment to achieve optimal allocation. Specifically, D3SQN is revised through reinforcement learning by adding a spatial-temporal transformer that can estimate the expected state values and action advantages so as to improve the accuracy of task assignments. Extensive experiments are conducted over real data collected from DiDi and ELM, and the simulation results verify the effectiveness of our proposed models.
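The dueling decomposition mentioned above, with separate estimates of state values and action advantages, can be sketched as a small PyTorch module. The spatial-temporal transformer encoder used in D3SQN is replaced here by a plain MLP over an assumed flat state vector, so this is only a minimal illustration of how the two streams are recombined into Q-values.

```python
# Minimal dueling Q-network head; the paper's spatial-temporal transformer
# encoder is replaced by a plain MLP over an assumed feature vector.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.encoder(state)
        v, a = self.value(h), self.advantage(h)
        # Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)  (standard dueling combination)
        return v + a - a.mean(dim=-1, keepdim=True)
```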
This research paper presents a comprehensive investigation into the effectiveness of the DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multiobjective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by the DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm's capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology encompasses a detailed exploration of the algorithm's configuration, the experimental setup, and the criteria for performance evaluation, ensuring the reproducibility of results and facilitating future advancements in the field. The findings of this study not only confirm the practical applicability and theoretical soundness of the DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization. By bridging the gap between complex optimization challenges and achievable solutions, this research contributes valuable insights into the optimization domain, offering a promising direction for future inquiries and technological innovations.
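A rough sketch of the surrogate-assisted idea follows: a cheap learned regressor approximates the expensive objective evaluations so that only a shortlist of promising candidates is simulated exactly. The function name `expensive_simulation`, the dimensions, and the simple ranking step are placeholders, not the DeepSurNet-NSGA II implementation.

```python
# Conceptual sketch of surrogate-assisted evaluation: a regressor trained on a
# few expensive simulations scores new candidates, and only the most promising
# ones are re-evaluated exactly. All names and dimensions are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Placeholder for the real multi-objective evaluation (e.g., a linkage sim)."""
    return np.stack([np.sum(x**2, axis=1), np.sum((x - 1)**2, axis=1)], axis=1)

rng = np.random.default_rng(0)
X_init = rng.uniform(-2, 2, size=(100, 4))        # initial design samples
Y_init = expensive_simulation(X_init)             # costly ground-truth objectives

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(X_init, Y_init)                     # cheap approximate evaluator

candidates = rng.uniform(-2, 2, size=(1000, 4))   # e.g., offspring from the GA
approx_objs = surrogate.predict(candidates)       # rank with the surrogate
best = candidates[np.argsort(approx_objs.sum(axis=1))[:20]]
exact_objs = expensive_simulation(best)           # verify only the shortlist
```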
Surface wave inversion is a key step in the application of surface waves to soil velocity profiling. Currently, a common practice for the process of inversion is that the number of soil layers is assumed to be known before using heuristic search algorithms to compute the shear wave velocity profile, or the number of soil layers is considered as an optimization variable. However, an improper selection of the number of layers may lead to an incorrect shear wave velocity profile. In this study, a deep learning and genetic algorithm hybrid learning procedure is proposed to perform the surface wave inversion without the need to assume the number of soil layers. First, a deep neural network is adapted to learn from a large number of synthetic dispersion curves for inferring the layer number. Then, the shear-wave velocity profile is determined by a genetic algorithm with the known layer number. By applying this procedure to both simulated and real-world cases, the results indicate that the proposed method is reliable and efficient for surface wave inversion.
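The two-stage procedure can be illustrated with the hedged sketch below: a classifier, standing in for the trained deep network, infers the layer number from a dispersion curve, and a basic genetic algorithm then searches shear-wave velocities for that fixed layer count. The misfit function, velocity bounds, and GA operators are assumptions for illustration only.

```python
# Two-stage sketch under stated assumptions: a (pre-trained) classifier infers
# the number of soil layers, then a simple genetic algorithm searches
# shear-wave velocities for that fixed count. Misfit and bounds are placeholders.
import numpy as np

def predict_layer_count(dispersion_curve: np.ndarray) -> int:
    """Placeholder for the trained deep network; returns an assumed layer count."""
    return 3

def misfit(velocities: np.ndarray, dispersion_curve: np.ndarray) -> float:
    """Placeholder misfit between observed and forward-modeled dispersion curves."""
    return float(np.sum((velocities - velocities.mean()) ** 2))

def genetic_search(n_layers, dispersion_curve, pop=50, gens=100, seed=1):
    rng = np.random.default_rng(seed)
    population = rng.uniform(100, 800, size=(pop, n_layers))    # m/s bounds (assumed)
    for _ in range(gens):
        scores = np.array([misfit(ind, dispersion_curve) for ind in population])
        parents = population[np.argsort(scores)[: pop // 2]]    # truncation selection
        children = parents + rng.normal(0, 10, parents.shape)   # Gaussian mutation
        population = np.vstack([parents, children])
    return population[np.argmin([misfit(i, dispersion_curve) for i in population])]

curve = np.linspace(200, 400, 64)                 # stand-in dispersion curve
profile = genetic_search(predict_layer_count(curve), curve)
```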
Leukemia is a form of cancer of the blood or bone marrow. A person with leukemia has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia in the initial stage is vital to providing timely patient care. Medical image-analysis-related approaches grant safer, quicker, and less costly solutions while avoiding the difficulties of invasive processes. Computer vision (CV)-based and image-processing techniques are simple to generalize and can eradicate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, hoping to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The projected MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images. The MPADL-LCC system uses Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, the denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance leukemia cancer classification performance. Simulation results are compared with other recent approaches concerning various measurements, and the MPADL-LCC algorithm exhibits the best results over other recent approaches.
The Advanced Geosynchronous Radiation Imager (AGRI) is a mission-critical instrument for the Fengyun series of satellites. AGRI acquires full-disk images every 15 min and views East Asia every 5 min through 14 spectral bands, enabling the detection of highly variable aerosol optical depth (AOD). Quantitative retrieval of AOD has hitherto been challenging, especially over land. In this study, an AOD retrieval algorithm is proposed that combines deep learning and transfer learning. The algorithm uses core concepts from both the Dark Target (DT) and Deep Blue (DB) algorithms to select features for the machine learning (ML) algorithm, allowing for AOD retrieval at 550 nm over both dark and bright surfaces. The algorithm consists of two steps: (1) a baseline deep neural network (DNN) with skip connections is developed using 10 min Advanced Himawari Imager (AHI) AODs as the target variable, and (2) sunphotometer AODs from 89 ground-based stations are used to fine-tune the DNN parameters. Out-of-station validation shows that the retrieved AOD attains high accuracy, characterized by a coefficient of determination (R²) of 0.70, a mean bias error (MBE) of 0.03, and a percentage of data within the expected error (EE) of 70.7%. A sensitivity study reveals that the top-of-atmosphere reflectance at 650 and 470 nm, as well as the surface reflectance at 650 nm, are the two largest sources of uncertainty impacting the retrieval. In a case study of monitoring an extreme aerosol event, the AGRI AOD is found to be able to capture the detailed temporal evolution of the event. This work demonstrates the superiority of the transfer-learning technique in satellite AOD retrievals and the applicability of the retrieved AGRI AOD in monitoring extreme pollution events.
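A minimal sketch of the two-step training strategy is given below, assuming a flat feature vector as input: a small DNN with skip (residual) connections is first trained against AHI AODs, then fine-tuned on the sparse station AODs with a reduced learning rate. The layer sizes and feature count are assumptions, not the paper's configuration.

```python
# Sketch of the two-step transfer strategy (assumed shapes and sizes): a DNN
# with skip connections is pre-trained on abundant AHI AOD labels, then
# fine-tuned with a smaller learning rate on sparse sunphotometer AODs.
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.fc(x))         # residual (skip) connection

n_features = 16                                   # assumed number of input features
model = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                      SkipBlock(64), SkipBlock(64), nn.Linear(64, 1))

loss_fn = nn.MSELoss()
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # step 1: AHI AODs
finetune_opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # step 2: station AODs
```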
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamless operation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0. Specifically, various modernized industrial processes have been equipped with quite a few sensors to collect process-based data to find faults arising or prevailing in processes along with monitoring the status of processes. Fault diagnosis of rotating machines serves a main role in the engineering field and industrial production. Due to the disadvantages of existing fault diagnosis approaches, which greatly depend on professional experience and human knowledge, intellectual fault diagnosis based on deep learning (DL) has attracted the researchers' interest. DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) in the industrial environment. The presented GOAHDL-FDC technique initially applies continuous wavelet transform (CWT) for preprocessing the actual vibrational signals of the rotating machinery. Next, the residual network (ResNet18) model is exploited for the extraction of features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, the GOA-based hyperparameter tuning is performed to adjust the parameter values of the HDL model accurately. The experimental result analysis of the GOAHDL-FDC algorithm takes place using a series of simulations, and the experimentation outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
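To make the preprocessing step concrete, the sketch below converts a synthetic vibration signal into a scalogram with a continuous wavelet transform using PyWavelets; the sampling rate, wavelet choice, and scale range are assumptions, and the downstream ResNet18/HDL stages are not shown.

```python
# Illustrative preprocessing step: a continuous wavelet transform turns a 1-D
# vibration signal into a time-frequency image that a CNN can consume.
# The synthetic signal, sampling rate, and scale range are assumptions.
import numpy as np
import pywt

fs = 12_000                                       # assumed sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 60 * t) + 0.3 * np.random.randn(t.size)  # toy vibration

scales = np.arange(1, 129)                        # 128 scales -> 128-row scalogram
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                        # 2-D input for the CNN backbone
```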
The chirp sub-bottom profiler, for its high resolution, easy accessibility and cost-effectiveness, has been widely used in acoustic detection. In this paper, the acoustic impedance and grain size compositions were obtained based on the chirp sub-bottom profiler data collected in the Chukchi Plateau area during the 11th Arctic Expedition of China. The time-domain adaptive search matching algorithm was used and validated on our established theoretical model. The misfit between the inversion result and the theoretical model is less than 0.067%. The grain size was calculated according to the empirical relationship between the acoustic impedance and the grain size of the sediment. The average acoustic impedance of the sub-seafloor strata is 2.5026×10^6 kg/(s·m^2) and the average grain size (θ value) of the seafloor surface sediment is 7.1498, indicating the predominant occurrence of very fine silt sediment in the study area. Comparison of the inversion results and the laboratory measurements of nearby borehole samples shows that they are in general agreement.
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some range, some concerns still need enhancement, particularly accuracy, sensitivity, false positives and false negatives, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) to measure performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, and respective graphical illustrations were shown. Over ten iterations, the mean performance measures for the proposed EDLA algorithm were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%), whereas the CNN algorithm attained a mean accuracy of 94.287%, mean sensitivity of 95.612%, mean false positive rate of 5.328%, and mean false negative rate of 4.756%. These results show that the proposed EDLA method has outperformed existing algorithms, including CNN, and ensures symmetrically improved parameters. Thus, the EDLA algorithm introduces novelty concerning its performance and particular activation function. The proposed method can be utilized effectively for brain tumor detection in a precise and accurate manner, and could be applied to brain tumor diagnosis and, after modification, be involved in various other medical diagnoses. If the quantity of dataset records is enormous, then the method's computation power has to be updated.
High penetration of distributed renewable energy sources and electric vehicles (EVs) makes the future active distribution network (ADN) highly variable. These characteristics pose great challenges to traditional voltage control methods. Voltage control based on the deep Q-network (DQN) algorithm offers a potential solution to this problem because it possesses human-level control performance. However, the traditional DQN methods may produce overestimation of action reward values, resulting in degradation of the obtained solutions. In this paper, an intelligent voltage control method based on the averaged weighted double deep Q-network (AWDDQN) algorithm is proposed to overcome the shortcomings of overestimation of action reward values in the DQN algorithm and underestimation of action reward values in the double deep Q-network (DDQN) algorithm. Using the proposed method, the voltage control objective is incorporated into the designed action reward values and normalized to form a Markov decision process (MDP) model, which is solved by the AWDDQN algorithm. The designed AWDDQN-based intelligent voltage control agent is trained offline and used as an online intelligent dynamic voltage regulator for the ADN. The proposed voltage control method is validated using the IEEE 33-bus and 123-bus systems containing renewable energy sources and EVs, and compared with DQN- and DDQN-based methods as well as traditional mixed-integer nonlinear programming based methods. The simulation results show that the proposed method has better convergence and less voltage volatility than the other ones.
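The overestimation issue the abstract refers to can be seen in how the bootstrap target is formed. The sketch below contrasts the vanilla DQN target with the double-DQN target; AWDDQN additionally averages and weights several Q estimates, which is not reproduced here.

```python
# Sketch of the target-computation contrast behind the over/underestimation
# discussion: vanilla DQN maximizes over the target network, while double DQN
# selects the action with the online network and evaluates it with the target
# network. The network objects are assumed callables returning per-action Q-values.
import torch

def dqn_target(reward, next_state, gamma, target_net):
    q_next = target_net(next_state)                        # [batch, n_actions]
    return reward + gamma * q_next.max(dim=1).values       # prone to overestimation

def double_dqn_target(reward, next_state, gamma, online_net, target_net):
    best_action = online_net(next_state).argmax(dim=1, keepdim=True)   # select online
    q_eval = target_net(next_state).gather(1, best_action).squeeze(1)  # evaluate target
    return reward + gamma * q_eval                         # reduces the upward bias
```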
With the advent of Reinforcement Learning (RL) and its continuous progress, state-of-the-art RL systems have come up for many challenging and real-world tasks. Given the scope of this area, various techniques are found in the literature. One such notable technique, Multiple Deep Q-Network (DQN) based RL systems, uses multiple DQN-based entities which learn together and communicate with each other. The learning has to be distributed wisely among all entities in such a scheme, and the inter-entity communication protocol has to be carefully designed. As more complex DQNs come to the fore, the overall complexity of these multi-entity systems has increased many folds, leading to issues like difficulty in training, need for high resources, more training time, and difficulty in fine-tuning leading to performance issues. Taking a cue from the parallel processing found in nature and its efficacy, we propose a lightweight ensemble based approach for solving the core RL tasks. It uses multiple binary-action DQNs having shared state and reward. The benefits of the proposed approach are overall simplicity, faster convergence and better performance compared to conventional DQN based approaches. The approach can potentially be extended to any type of DQN by forming its ensemble. Conducting extensive experimentation, promising results are obtained using the proposed ensemble approach on OpenAI Gym tasks and Atari 2600 games as compared to recent techniques. The proposed approach gives a state-of-the-art score of 500 on the CartPole-v1 task, 259.2 on the LunarLander-v2 task, and state-of-the-art results on four out of five Atari 2600 games.
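One possible reading of the binary-action ensemble is sketched below under explicit assumptions: each member DQN outputs Q-values for "act" versus "abstain" on the shared state, and the executed environment action is the one whose member is most confident. This is an illustrative interpretation, not the authors' exact scheme.

```python
# Hedged interpretation of the ensemble idea: each member is a small DQN with
# two outputs ("abstain", "act"), all members see the same state and reward,
# and the executed action is the member most confident in acting.
import torch
import torch.nn as nn

class BinaryDQN(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))      # [abstain, act]
    def forward(self, s):
        return self.net(s)

def ensemble_act(members, state):
    """Pick the environment action whose member has the highest 'act' Q-value."""
    act_values = torch.stack([m(state)[..., 1] for m in members], dim=-1)
    return act_values.argmax(dim=-1)                        # index of chosen action

members = [BinaryDQN(state_dim=4) for _ in range(2)]        # e.g., CartPole: left/right
action = ensemble_act(members, torch.zeros(1, 4))
```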
The eXtreme deep factorization machine (xDeepFM) is a context-aware recommendation model that introduces a compressed interaction network to perform feature interactions of controllable order and combines this network with a deep neural network to improve recommendation quality. To further improve the performance of xDeepFM in recommendation scenarios, an improved xDeepFM model based on field-aware factorization is proposed. The model strengthens feature representation through field information and builds multiple compressed interaction networks to learn high-order feature combinations. Finally, the rationality of the user-field and item-field settings is analyzed, and performance is evaluated on three MovieLens datasets of different sizes using the area under the receiver operating characteristic curve and the log-loss metric, verifying the effectiveness of the improved model.
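For context, a single compressed interaction network (CIN) layer, as described in the xDeepFM literature, can be sketched as pairwise Hadamard products between the current layer's field embeddings and the original embeddings, compressed by a learned 1×1 convolution. The field counts and embedding size below are illustrative, not the improved model's configuration.

```python
# Rough sketch of one compressed interaction network (CIN) layer: pairwise
# Hadamard products between the current layer's field embeddings and the
# original embeddings, compressed by a learned 1x1 convolution.
import torch
import torch.nn as nn

class CINLayer(nn.Module):
    def __init__(self, n_fields_prev: int, n_fields_0: int, n_out: int):
        super().__init__()
        # one weight per (previous-field, original-field) pair for each output map
        self.compress = nn.Conv1d(n_fields_prev * n_fields_0, n_out, kernel_size=1)

    def forward(self, x_prev, x0):
        # x_prev: [B, H_prev, D], x0: [B, H_0, D]  (D = embedding dimension)
        inter = torch.einsum("bhd,bmd->bhmd", x_prev, x0)   # pairwise Hadamard products
        B, H, M, D = inter.shape
        return self.compress(inter.reshape(B, H * M, D))    # [B, n_out, D]

layer = CINLayer(n_fields_prev=5, n_fields_0=5, n_out=8)    # illustrative sizes
out = layer(torch.randn(2, 5, 16), torch.randn(2, 5, 16))
```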
In recent years, the demand for biometric-based human recognition methods has drastically increased to meet the privacy and security requirements. Palm prints, palm veins, finger veins, fingerprints, hand veins and other anatomic and behavioral features are utilized in the development of different biometric recognition techniques. Amongst the available biometric recognition techniques, Finger Vein Recognition (FVR) is a general technique that analyzes the patterns of finger veins to authenticate individuals. Deep Learning (DL)-based techniques have gained immense attention in recent years, since they accomplish excellent outcomes in various challenging domains such as computer vision, speech detection and Natural Language Processing (NLP). This technique is a natural fit to overcome the ever-increasing biometric detection problems and cell phone authentication issues in airport security techniques. The current study presents an Automated Biometric Finger Vein Recognition using Evolutionary Algorithm with Deep Learning (ABFVR-EADL) model. The presented ABFVR-EADL model aims to accomplish biometric recognition using the patterns of the finger veins. Initially, the presented ABFVR-EADL model employs the histogram equalization technique to pre-process the input images. For feature extraction, the Salp Swarm Algorithm (SSA) with Densely-connected Networks (DenseNet-201) model is exploited, showing the proposed method's novelty. Finally, the Deep-Stacked Denoising Autoencoder (DSAE) is utilized for biometric recognition. The proposed ABFVR-EADL method was experimentally validated using the benchmark databases, and the outcomes confirmed the productive performance of the proposed ABFVR-EADL model over other DL models.
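The histogram-equalization pre-processing step can be illustrated in a few lines with OpenCV; the image path is a placeholder, and the SSA/DenseNet-201 and DSAE stages are not shown.

```python
# Simple illustration of histogram-equalization pre-processing with OpenCV.
# The file path is a placeholder; downstream feature extraction is omitted.
import cv2

img = cv2.imread("finger_vein_sample.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
assert img is not None, "replace the placeholder path with a real vein image"
equalized = cv2.equalizeHist(img)       # spreads grey-level intensities over the full range
cv2.imwrite("finger_vein_equalized.png", equalized)
```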
Pavement crack detection plays a crucial role in ensuring road safety and reducing maintenance expenses. Recent advancements in deep learning (DL) techniques have shown promising results in detecting pavement cracks; however, the selection of relevant features for classification remains challenging. In this study, we propose a new approach for pavement crack detection that integrates deep learning for feature extraction, the whale optimization algorithm (WOA) for feature selection, and random forest (RF) for classification. The performance of the models was evaluated using accuracy, recall, precision, F1 score, and area under the receiver operating characteristic curve (AUC). Our findings reveal that Model 2, which incorporates RF into the ResNet-18 architecture, outperforms baseline Model 1 across all evaluation metrics. Nevertheless, our proposed model, which combines ResNet-18 with both WOA and RF, achieves significantly higher accuracy, recall, precision, and F1 score compared to the other two models. These results underscore the effectiveness of integrating RF and WOA into ResNet-18 for pavement crack detection applications. We applied the proposed approach to a dataset of pavement images, achieving an accuracy of 97.16% and an AUC of 0.984. Our results demonstrate that the proposed approach surpasses existing methods for pavement crack detection, offering a promising solution for the automatic identification of pavement cracks. By leveraging this approach, potential safety hazards can be identified more effectively, enabling timely repairs and maintenance measures. Lastly, the findings of this study also emphasize the potential of integrating RF and WOA with deep learning for pavement crack detection, providing road authorities with the necessary tools to make informed decisions regarding road infrastructure maintenance.
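The pipeline can be sketched as follows under assumed data shapes: deep features (for example, from the ResNet-18 penultimate layer) are reduced by a binary selection mask, which is the quantity a whale-optimization search would evolve, and the retained columns are classified with a random forest. The WOA loop itself is omitted; only the fitness evaluation it would call is shown.

```python
# Pipeline sketch under assumed shapes: deep features reduced by a binary
# feature mask (the quantity a whale optimization algorithm would search over)
# and classified with a random forest. The WOA search loop is omitted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
features = rng.normal(size=(300, 512))            # stand-in for ResNet-18 features
labels = rng.integers(0, 2, size=300)             # crack / no-crack (synthetic)

def fitness(mask: np.ndarray) -> float:
    """Score a candidate feature subset; WOA would maximize this value."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, features[:, mask.astype(bool)], labels, cv=3).mean()

candidate_mask = rng.integers(0, 2, size=512)     # one candidate "whale" position
print(fitness(candidate_mask))
```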
Nowadays, Remote Sensing (RS) techniques are used for earth observation and for detection of soil types with high accuracy and better reliability. This technique provides a perspective view of spatial resolution and aids in instantaneous measurement of soil minerals and their characteristics. A few challenges are present in soil classification using image enhancement, such as locating and plotting soil boundaries, slopes, hazardous areas, drainage condition, land use, vegetation, etc. Traditional approaches involve drawbacks such as manual involvement, which results in inaccuracy due to human interference, time consumption and inconsistent prediction. To overcome these drawbacks and to improve the predictive analysis of soil characteristics, we propose a Hybrid Deep Learning improved BAT optimization algorithm (HDIB) for soil classification using remote sensing hyperspectral features. In HDIB, we propose a spontaneous BAT optimization algorithm for extraction of both spectral and spatial features by choosing pure pixels from the Hyper Spectral (HS) image. The spectral-spatial vector used as training illustrations is attained by merging the spatial and spectral vectors by means of a priority stacking methodology. Then, a recurring Deep Learning (DL) Neural Network (NN) is used for classifying the HS images, considering the datasets of Pavia University, Salinas and Tamil Nadu Hill Scene, which in turn improves the reliability of classification. Finally, the performance of the proposed HDIB based soil classifier is compared and analyzed with existing methodologies like Single Layer Perceptron (SLP), Convolutional Neural Networks (CNN) and Deep Metric Learning (DML), and it shows an improved classification accuracy of 99.87%, 98.34% and 99.9% for the Tamil Nadu Hills, Pavia University and Salinas scene datasets respectively.
With new developments experienced in the Internet of Things (IoT), wearable, and sensing technology, the value of healthcare services has enhanced. This evolution has brought significant changes from conventional medicine-based healthcare to real-time observation-based healthcare. Biomedical Electrocardiogram (ECG) signals are generally utilized in the examination and diagnosis of Cardiovascular Diseases (CVDs) since the procedure is quick and non-invasive in nature. Due to the increasing number of patients in recent years, classifier efficiency gets reduced by the high variances observed in ECG signal patterns obtained from patients. In such a scenario, computer-assisted automated diagnostic tools are important for classification of ECG signals. The current study devises an Improved Bat Algorithm with Deep Learning Based Biomedical ECG Signal Classification (IBADL-BECGC) approach. To accomplish this, the proposed IBADL-BECGC model initially pre-processes the input signals. Besides, the IBADL-BECGC model applies the NasNet model to derive the features from test ECG signals. In addition, the Improved Bat Algorithm (IBA) is employed to optimally fine-tune the hyperparameters related to the NasNet approach. Finally, the Extreme Learning Machine (ELM) classification algorithm is executed to perform the ECG classification method. The presented IBADL-BECGC model was experimentally validated utilizing a benchmark dataset. The comparison study outcomes established the improved performance of the IBADL-BECGC model over other existing methodologies, since the former achieved a maximum accuracy of 97.49%.
In a rechargeable wireless sensor network, utilizing the unmanned aerial vehicle (UAV) as a mobile base station (BS) to charge sensors and collect data effectively prolongs the network's lifetime. In this paper, we jointly optimize the UAV's flight trajectory and the sensor selection and operation modes to maximize the average data traffic of all sensors within a wireless sensor network (WSN) during the UAV's finite flight time, while ensuring the energy required for each sensor by wireless power transfer (WPT). We consider a practical scenario, where the UAV has no prior knowledge of sensor locations. The UAV performs autonomous navigation based on the status information obtained within the coverage area, which is modeled as a Markov decision process (MDP). The deep Q-network (DQN) is employed to execute the navigation based on the UAV position, the battery level state, channel conditions and current data traffic of sensors within the UAV's coverage area. Our simulation results demonstrate that the DQN algorithm significantly improves the network performance in terms of the average data traffic and trajectory design.
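A minimal epsilon-greedy DQN action selection for such a navigation agent is sketched below, assuming the state vector packs the UAV position, battery level, channel conditions, and per-sensor traffic, and that the actions are discrete flight choices. The network size and action count are illustrative.

```python
# Minimal epsilon-greedy action selection for the navigation agent. The state
# dimension, network size, and action set are assumptions for illustration.
import random
import torch
import torch.nn as nn

N_ACTIONS = 5                                     # assumed: hover + 4 flight directions
q_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

def select_action(state: torch.Tensor, epsilon: float) -> int:
    if random.random() < epsilon:                 # explore
        return random.randrange(N_ACTIONS)
    with torch.no_grad():                         # exploit current Q estimates
        return int(q_net(state).argmax().item())

action = select_action(torch.zeros(10), epsilon=0.1)
```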
Hyperspectral (HS) image classification is a hot research area due to challenging issues such as the existence of high dimensionality, restricted training data, etc. Precise recognition of features from the HS images is important for effective classification outcomes. Additionally, the recent advancements of deep learning (DL) models make it possible in several application areas. In addition, the performance of DL models is mainly based on the hyperparameter setting, which can be resolved by the design of metaheuristics. In this view, this article develops an automated red deer algorithm with deep learning enabled hyperspectral image (HSI) classification (RDADL-HIC) technique. The proposed RDADL-HIC technique aims to effectively determine the HSI images. In addition, the RDADL-HIC technique comprises a NASNetLarge model with an Adagrad optimizer. Moreover, the RDA with gated recurrent unit (GRU) approach is used for the identification and classification of HSIs. The design of the Adagrad optimizer with RDA helps to optimally tune the hyperparameters of the NASNetLarge and GRU models respectively. The experimental results stated the supremacy of the RDADL-HIC model, and the results are inspected in terms of different measures. The comparison study of the RDADL-HIC model demonstrated the enhanced performance over recent state-of-the-art approaches.