Conventional machine learning (CML) methods have been successfully applied for gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to directly extract features from sensitive multicomponent seismic attributes, considerably simplifying feature optimization. A CNN was used for feature optimization to highlight sensitive gas reservoir information. APSO-LSSVM was used to fully learn the relationship between the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of DL and CML methods. The prediction results obtained are better than those of a single CNN model or APSO-LSSVM model. In the feature optimization process of multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction capabilities than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model can learn the gas reservoir characteristics better than the LSSVM model and has a higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This method proves the effectiveness of DL technology for the feature extraction of gas reservoirs and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
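The LSSVM component in the abstract above replaces the standard SVM quadratic program with a single linear system, which is what makes it cheap enough to retrain inside an APSO loop. Below is a minimal numpy sketch of an RBF-kernel LSSVM classifier on synthetic data, not the authors' code: the CNN feature front-end and APSO hyperparameter search are omitted, and the class/function names are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class LSSVM:
    """Least squares SVM: the QP of a classic SVM becomes one linear system."""
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):                          # y in {-1, +1}
        n = len(X)
        K = rbf_kernel(X, X, self.sigma)
        # KKT system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self

    def predict(self, X):
        K = rbf_kernel(X, self.X, self.sigma)
        return np.sign(K @ self.alpha + self.b)

# Demo on two separable Gaussian blobs (synthetic stand-in for seismic-attribute features)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
y = np.concatenate([-np.ones(40), np.ones(40)])
acc = (LSSVM().fit(X, y).predict(X) == y).mean()
print(acc)
```

The dense solve costs O(n^3), which is why LSSVM variants are usually paired with feature optimization that keeps the sample representation compact.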
Feature optimization is important to agricultural text mining. Usually, the vector space model is used to represent text documents. However, this basic approach still suffers from two drawbacks: the curse of dimensionality and the lack of semantic information. In this paper, a novel ontology-based feature optimization method for agricultural text was proposed. First, terms of the vector space model were mapped into concepts of an agricultural ontology, whose concept frequency weights are computed statistically from term frequency weights; second, weights of concept similarity were assigned to the concept features according to the structure of the agricultural ontology. By combining feature frequency weights and feature similarity weights based on the agricultural ontology, the dimensionality of the feature space can be reduced drastically. Moreover, semantic information can be incorporated into this method. The results showed that this method yields a significant improvement in agricultural text clustering through feature optimization.
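The two weighting steps described above (fold term frequencies into concept frequencies, then scale by a structure-derived similarity weight) can be sketched in a few lines. The term-to-concept map and similarity weights here are toy placeholders, not the paper's ontology:

```python
from collections import defaultdict

# Toy stand-ins for the agricultural ontology (hypothetical values)
TERM_TO_CONCEPT = {"wheat": "cereal", "barley": "cereal", "maize": "cereal",
                   "aphid": "pest", "locust": "pest"}
CONCEPT_SIMILARITY = {"cereal": 0.9, "pest": 0.7}  # weights derived from ontology structure

def concept_features(term_freqs):
    """Fold term frequencies into concept frequencies, then apply similarity weights."""
    concept_freq = defaultdict(float)
    for term, tf in term_freqs.items():
        concept = TERM_TO_CONCEPT.get(term)
        if concept is not None:
            concept_freq[concept] += tf                    # frequency weight
    return {c: f * CONCEPT_SIMILARITY[c] for c, f in concept_freq.items()}

doc = {"wheat": 3, "barley": 1, "aphid": 2}
print(concept_features(doc))  # three term features collapse to two concept features
```

The dimensionality reduction comes from the many-to-one term-to-concept mapping; the similarity weight is what carries the semantic information into the clustering step.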
Gesture recognition plays an increasingly important role as the requirements of intelligent systems for human-computer interaction methods increase. To improve the accuracy of a millimeter-wave radar gesture detection algorithm with limited computational resources, this study improves the detection performance in terms of optimized features and interference filtering. The accuracy of the algorithm is improved by refining the combination of gesture features using a self-constructed dataset, and biometric filtering is introduced to reduce the interference of inanimate object motion. Finally, experiments demonstrate the effectiveness of the proposed algorithm in both mitigating interference from inanimate objects and accurately recognizing gestures. Results show a notable 93.29% average reduction in false detections achieved through the integration of biometric filtering into the algorithm's interpretation of target movements. Additionally, the algorithm adeptly identifies the six gestures with an average accuracy of 96.84% on embedded systems.
Image feature optimization is an important means to deal with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping between high- and low-dimensional space via a five-tuple model. Nonlinear dimensionality reduction based on manifold learning provides a feasible way to solve such a problem. We propose a novel globular neighborhood based locally linear embedding (GNLLE) algorithm using neighborhood update and an incremental neighbor search scheme, which not only can handle sparse datasets but also has strong anti-noise capability and good topological stability. Given that the distance measure adopted in nonlinear dimensionality reduction is usually based on pairwise similarity calculation, we also present a globular neighborhood and path clustering based locally linear embedding (GNPCLLE) algorithm based on path-based clustering. Owing to its full consideration of correlations between image data, GNPCLLE can eliminate distortion of the overall topological structure of the dataset on the manifold. Experimental results on two image sets show the effectiveness and efficiency of the proposed algorithms.
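GNLLE and GNPCLLE build on standard locally linear embedding, which maps a nonlinearly embedded dataset to a low-dimensional space while preserving local neighborhood geometry. As a baseline reference (scikit-learn's stock LLE, not the globular-neighborhood variants from the abstract), the mapping can be demonstrated on a synthetic Swiss-roll-like set:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
t = rng.uniform(0, 3 * np.pi, 400)
# A 3-D "Swiss roll": intrinsically 2-D, embedded nonlinearly in 3-D
X = np.column_stack([t * np.cos(t), rng.uniform(0, 10, 400), t * np.sin(t)])

# Map 3-D points to a 2-D space that preserves local linear reconstructions
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)
print(Y.shape)  # (400, 2)
```

The fixed `n_neighbors` here is exactly the weakness the globular neighborhood update targets: on sparse or noisy data a static k-NN graph can short-circuit the manifold, distorting the embedded topology.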
Automated Facial Expression Recognition (FER) serves as the backbone of patient monitoring, security, and surveillance systems. Real-time FER is a challenging task due to the uncontrolled nature of the environment and the poor quality of input frames. In this paper, a novel FER framework is proposed for patient monitoring. Preprocessing is performed using contrast-limited adaptive enhancement, and the dataset is balanced using augmentation. Two lightweight, efficient Convolutional Neural Network (CNN) models, MobileNetV2 and Neural Search Architecture Network Mobile (NasNetMobile), are trained, and feature vectors are extracted. The Whale Optimization Algorithm (WOA) is utilized to remove irrelevant features from these vectors. Finally, the optimized features are serially fused and passed to the classifier. A comprehensive set of experiments was carried out on the real-time image datasets FER-2013, MMA, and CK+ to report performance based on various metrics. Accuracy results show that the proposed model achieved 82.5% accuracy and performed better than state-of-the-art classification techniques in terms of accuracy. We would like to highlight that the proposed technique achieved this accuracy while using 2.8 times fewer features.
In 2020, COVID-19 started spreading throughout the world. This deadly infection was identified as a virus that may affect the lungs and, in severe cases, can be the cause of death. The polymerase chain reaction (PCR) test is commonly used to detect this virus through the nasal passage or throat. However, the PCR test exposes health workers to this deadly virus. To limit human exposure while detecting COVID-19, image processing techniques using deep learning have been successfully applied. In this paper, a strategy based on deep learning is employed to classify the COVID-19 virus. To extract features, two deep learning models are used: DenseNet201 and SqueezeNet. Transfer learning is used in feature extraction, and the models are fine-tuned. A publicly available computed tomography (CT) scan dataset is used in this study. The features extracted by the deep learning models are optimized using the ant colony optimization algorithm. The proposed technique is validated through multiple evaluation parameters. Several classifiers were employed to classify the optimized features. The cubic support vector machine (cubic SVM) classifier shows superiority over other commonly used classifiers and attained an accuracy of 98.72%. The proposed technique achieves state-of-the-art accuracy, a sensitivity of 98.80%, and a specificity of 96.64%.
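Several abstracts in this listing rely on a "cubic SVM", which is simply an SVM with a degree-3 polynomial kernel (the naming follows the MATLAB classification toolbox). A minimal scikit-learn sketch on synthetic data, standing in for the ACO-selected deep features described above:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class features (stand-in for optimized DenseNet201/SqueezeNet vectors)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1, (60, 8)), rng.normal(1.5, 1, (60, 8))])
y = np.repeat([0, 1], 60)

# "Cubic SVM" = SVM with a degree-3 polynomial kernel
clf = SVC(kernel="poly", degree=3, C=1.0).fit(X, y)
print(clf.score(X, y))
```

On real pipelines the same `SVC` object is simply fit on the reduced feature matrix produced by the optimization stage; the degree-3 kernel adds curvature without the bandwidth tuning an RBF kernel needs.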
In the area of medical image processing, stomach cancer is one of the most important cancers, which needs to be diagnosed at an early stage. In this paper, an optimized deep learning method is presented for multiple stomach disease classification. The proposed method works in a few important steps: preprocessing using the fusion of filtered images along with ant colony optimization (ACO), deep transfer learning-based feature extraction, optimization of the deep extracted features using nature-inspired algorithms, and finally fusion of the optimal vectors and classification using a multi-layered perceptron neural network (MLNN). In the feature extraction step, a pretrained Inception V3 is utilized and retrained on selected stomach infection classes using deep transfer learning. Later on, an activation function is applied to the global average pool (GAP) layer for feature extraction. The extracted features are optimized through two different nature-inspired algorithms: particle swarm optimization (PSO) with a dynamic fitness function and the crow search algorithm (CSA). The outputs of both methods are fused by a maximal-value approach, and the fused feature vector is classified by the MLNN. Two datasets are used to evaluate the proposed method, CUI Wah Stomach Diseases and a combined dataset, achieving an average accuracy of 99.5%. Comparison with existing techniques shows that the proposed method delivers significant performance.
Considering that the surface defects of cold rolled strips are hard to recognize by human eyes under high-speed circumstances, an automatic recognition technique was discussed. Spectrum images of defects can be obtained by the fast Fourier transform (FFT) and sum of valid pixels (SVP), and their optimized center region, which concentrates nearly all of the energy, is extracted as an original feature set. Using a genetic algorithm to optimize the feature set, an optimized feature set with 51 features can be achieved. Using the optimized feature set as the input vector of neural networks, the recognition effects of LVQ neural networks have been studied. Experimental results show that the new method achieves a higher classification rate and can settle the automatic recognition problem of surface defects on cold rolled strips ideally.
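The spectrum-center extraction step described above (most of an image's FFT energy concentrates at low frequencies, so a small central crop of the shifted magnitude spectrum is a compact feature vector) can be sketched with numpy. The crop size and image here are illustrative, not the paper's settings:

```python
import numpy as np

def spectrum_center_features(image, size=8):
    """FFT magnitude spectrum, shifted so the low-frequency energy sits at the
    center; the central size x size block becomes a compact feature vector."""
    spec = np.fft.fftshift(np.abs(np.fft.fft2(image)))
    h, w = spec.shape
    r0, c0 = h // 2 - size // 2, w // 2 - size // 2
    return spec[r0:r0 + size, c0:c0 + size].ravel()

img = np.random.rand(64, 64)        # stand-in for a strip-surface defect image
feat = spectrum_center_features(img)
print(feat.shape)                   # (64,)
```

A genetic algorithm would then search over subsets of such features (the paper arrives at 51) before the LVQ network sees them.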
Manual diagnosis of crop diseases is not an easy process; thus, computerized methods are widely used. Over the past couple of years, advancements in the domain of machine learning, such as deep learning, have shown substantial success. However, they still face some challenges, such as similarity in disease symptoms and irrelevant feature extraction. In this article, we propose a new deep learning architecture with an optimization algorithm for cucumber and potato leaf disease recognition. The proposed architecture consists of five steps. In the first step, data augmentation is performed to increase the number of training samples. In the second step, the pre-trained DarkNet19 deep model is selected and fine-tuned through transfer learning. Deep features are extracted from the global pooling layer in the next step and refined using an improved Cuckoo search algorithm. The best selected features are finally classified using machine learning classifiers, such as SVM, for the final classification results. The proposed architecture is tested using publicly available datasets: the Cucumber National Dataset and Plant Village. The proposed architecture achieved accuracies of 100.0%, 92.9%, and 99.2%, respectively. A comparison with recent techniques is also performed, revealing that the proposed method achieved improved accuracy while consuming less computational time.
Identifying fruit disease manually is time-consuming, expert-required, and expensive; thus, a computer-based automated system is widely required. Fruit diseases affect not only the quality but also the quantity. As a result, it is possible to detect the disease early on and cure the fruits using computer-based techniques. However, computer-based methods face several challenges, including low contrast, a lack of datasets for training a model, and inappropriate feature extraction for final classification. In this paper, we propose an automated framework for detecting apple fruit leaf diseases using a CNN and a hybrid optimization algorithm. Data augmentation is performed initially to balance the selected apple dataset. After that, two pre-trained deep models are fine-tuned and trained using transfer learning. Then, a fusion technique named Parallel Correlation Threshold (PCT) is proposed. The fused feature vector is optimized in the next step using a hybrid optimization algorithm. The selected features are finally classified using machine learning algorithms. Four different experiments have been carried out on the augmented Plant Village dataset and yielded a best accuracy of 99.8%. The accuracy of the proposed framework is also compared to that of several neural nets, and it outperforms them all.
Manual diagnosis of brain tumors using magnetic resonance images (MRI) is a hectic and time-consuming process. It also always requires an expert for the diagnosis. Therefore, many computer-controlled methods for diagnosing and classifying brain tumors have been introduced in the literature. This paper proposes a novel multimodal brain tumor classification framework based on two-way deep learning feature extraction and a hybrid feature optimization algorithm. NasNet-Mobile, a pre-trained deep learning model, has been fine-tuned and two-way trained on original and enhanced MRI images. The haze-convolutional neural network (haze-CNN) approach is developed and employed on the original images for contrast enhancement. Next, transfer learning (TL) is utilized for training the two fine-tuned models and extracting feature vectors from the global average pooling layer. Then, using a multiset canonical correlation analysis (CCA) method, the features of both deep learning models are fused into a single feature matrix; this technique aims to enrich the feature information for better classification. Although the information was increased, computational time also jumped. This issue is resolved using a hybrid feature optimization algorithm that chooses the best classification features. The experiments were done on two publicly available datasets, BraTS2018 and BraTS2019, and yielded accuracy rates of 94.8% and 95.7%, respectively. The proposed method is compared with several recent studies and outperforms them in accuracy. In addition, we analyze the performance of each middle step of the proposed approach and find that the selection technique strengthens the proposed framework.
The increasing prevalence of Internet of Things (IoT) devices has introduced a new phase of connectivity in recent years and, concurrently, has opened the floodgates for growing cyber threats. Among the myriad of potential attacks, Denial of Service (DoS) attacks and Distributed Denial of Service (DDoS) attacks remain a dominant concern due to their capability to render services inoperable by overwhelming systems with an influx of traffic. As IoT devices often lack the inherent security measures found in more mature computing platforms, the need for robust DoS/DDoS detection systems tailored to IoT is paramount for the sustainable development of every domain that IoT serves. In this study, we investigate the effectiveness of three machine learning (ML) algorithms, extreme gradient boosting (XGB), multilayer perceptron (MLP), and random forest (RF), for the detection of IoT-targeted DoS/DDoS attacks, along with three feature engineering methods that have not been used in the existing state-of-the-art, and then employ the best-performing algorithm to design a prototype of a novel real-time system for detecting such DoS/DDoS attacks. The CICIoT2023 dataset, derived from the latest real-world IoT traffic, incorporates both benign and malicious network traffic patterns. After data preprocessing and feature engineering, the data was fed into our models for both training and validation. Findings suggest that while all three models exhibit commendable accuracy in detecting DoS/DDoS attacks, the use of particle swarm optimization (PSO) for feature selection yields great improvements in the performance of the ML models (accuracy, precision, recall, and F1-score of 99.93% for XGB) and their execution time (491.023 seconds for XGB) compared to the recursive feature elimination (RFE) and random forest feature importance (RFI) methods. The proposed real-time system for DoS/DDoS attack detection entails the implementation of a platform capable of effectively processing and analyzing network traffic in real time. This involves employing the best-performing ML algorithm for detection and the integration of warning mechanisms. We believe this approach will significantly enhance the field of security research, and we will continue to refine it based on future insights and developments.
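PSO-based feature selection, the key ingredient above, is usually run in its binary form: each particle is a 0/1 feature mask and velocities pass through a sigmoid to give per-bit flip probabilities. A generic numpy sketch follows; it is not the study's implementation, and the toy fitness function stands in for cross-validated model accuracy on the masked feature set:

```python
import numpy as np

def pso_feature_select(score_fn, n_features, n_particles=20, iters=30, seed=0):
    """Binary PSO: particles are feature masks; sigmoid(velocity) gives the
    probability of each bit being 1 (a common binary-PSO variant)."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, (n_particles, n_features))
    vel = rng.normal(0, 1, (n_particles, n_features))
    pbest = pos.copy()
    pbest_val = np.array([score_fn(p) for p in pos])
    g = pbest[pbest_val.argmax()].copy()          # global best mask
    for _ in range(iters):
        r1 = rng.random((n_particles, n_features))
        r2 = rng.random((n_particles, n_features))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = (rng.random((n_particles, n_features)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        vals = np.array([score_fn(p) for p in pos])
        better = vals > pbest_val
        pbest[better] = pos[better]
        pbest_val[better] = vals[better]
        g = pbest[pbest_val.argmax()].copy()
    return g

# Toy fitness: reward masks close to a known "useful" subset
# (in practice this would be a classifier's cross-validation score)
useful = np.array([1, 1, 0, 0, 1, 0, 0, 0])
mask = pso_feature_select(lambda m: -int(np.abs(m - useful).sum()), 8)
print(mask)
```

The expensive part in practice is `score_fn`, one model training per particle per iteration, which is why the study reports execution time alongside accuracy.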
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources like Pakistan. This study aims to conduct an extensive comparative analysis of various machine learning classifiers for ASD detection using facial images, to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and the tool's automated image processing capabilities are utilized. The findings unequivocally establish VGG16 as the most effective classifier with a 5-fold cross-validation approach. Specifically, VGG16, with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a remarkable validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducts further testing on a dataset sourced from autism centers in Pakistan, resulting in an accuracy rate of 85%. This reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
Feature selection (FS), also called feature dimensionality reduction or feature optimization, is an essential process in pattern recognition and machine learning because of its enhanced classification speed and accuracy and reduced system complexity. FS reduces the number of features produced in the feature extraction phase by removing highly correlated features, retaining features with high information gain, and discarding features with no weight in classification. In this work, an FS filter-type statistical method is designed and implemented, utilizing a t-test to decrease the convergence between feature subsets by calculating the quality of performance value (QoPV). The approach utilizes a well-designed fitness function to calculate the strength of recognition value (SoRV). The two values are used to rank all features according to the final weight (FW) calculated for each feature subset, using a function that prioritizes feature subsets with high SoRV values. An FW is assigned to each feature subset, and subsets with FWs less than a predefined threshold are removed from the feature subset domain. Experiments are implemented on three datasets: the Ryerson Audio-Visual Database of Emotional Speech and Song, Berlin, and Surrey Audio-Visual Expressed Emotion. The performance of the F-test and F-score FS methods is compared to that of the proposed method. Tests are also conducted on a system before and after deploying the FS methods. Results demonstrate the comparative efficiency of the proposed method. The complexity of the system is calculated based on the time overhead required before and after FS. Results show that the proposed method can reduce system complexity.
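The core of any t-test filter method is scoring each feature by how well its per-class means separate relative to within-class variance. A minimal numpy sketch of Welch's two-sample t statistic as a feature ranker (the QoPV/SoRV weighting of the paper is not reproduced here; this is only the underlying filter step):

```python
import numpy as np

def t_test_rank(X, y, keep=2):
    """Filter-type selection: score each feature by Welch's two-sample t
    statistic between the two classes and keep the largest-|t| features."""
    A, B = X[y == 0], X[y == 1]
    t = (A.mean(0) - B.mean(0)) / np.sqrt(
        A.var(0, ddof=1) / len(A) + B.var(0, ddof=1) / len(B))
    return np.argsort(-np.abs(t))[:keep]

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 5))
X[y == 1, 0] += 3.0   # feature 0 separates the classes strongly
X[y == 1, 3] += 1.5   # feature 3 separates them moderately
print(t_test_rank(X, y))
```

Being a filter, this runs once before any classifier training, which is where the complexity reduction reported in the abstract comes from.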
Since the beginning of time, humans have relied on plants for food, energy, and medicine. Plants are recognized by leaf, flower, or fruit and linked to their suitable cluster. Classification methods are used to extract and select traits that are helpful in identifying a plant. In plant leaf image categorization, each plant is assigned a label according to its classification. The purpose of classifying plant leaf images is to enable farmers to recognize plants, leading to the management of plants in several aspects. This study presents a modified whale optimization algorithm and categorizes plant leaf images into classes. The modified algorithm works on different sets of plant leaves and examines several benchmark functions with adequate performance. The classification method was validated on ten plant leaf image datasets. The proposed model calculates precision, recall, F-measure, and accuracy for the ten datasets and compares these parameters with those of other existing algorithms. Based on the experimental data, the accuracy of the proposed method outperforms that of the other algorithms under consideration, improving accuracy by 5%.
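The whale optimization algorithm (WOA) recurs across this listing as a feature optimizer and, here, is validated on benchmark functions. A minimal numpy sketch of the canonical WOA update rules (shrinking encirclement, random-whale exploration, and the logarithmic spiral) minimizing the sphere benchmark; this is a generic baseline, not the paper's modified variant, and the vector-form `|A|` test is one of several common implementation choices:

```python
import numpy as np

def woa(obj, dim, lb, ub, n_whales=30, iters=200, seed=0):
    """Minimal whale optimization algorithm (encircle / explore / spiral)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    fit = np.apply_along_axis(obj, 1, X)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(iters):
        a = 2 - 2 * t / iters                     # shrinks linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.abs(A).mean() < 1:          # exploit: encircle current best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                             # explore: move toward a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                 # spiral (b = 1) toward the best
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            f = obj(X[i])
            if f < best_fit:
                best, best_fit = X[i].copy(), f
    return best, best_fit

# Sphere benchmark: global minimum 0 at the origin
b, v = woa(lambda x: (x ** 2).sum(), dim=5, lb=-5, ub=5)
print(v)
```

For feature selection, as in the other abstracts, each position vector would instead be thresholded into a 0/1 mask and `obj` would evaluate a classifier on the masked features.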
Person re-identification (Re-ID) is integral to intelligent monitoring systems. However, due to variability in viewing angle and illumination, visual ambiguities arise easily, affecting the accuracy of person re-identification. An approach for person re-identification based on feature mapping space and sample determination is proposed. First, a weight fusion model, including the mean and maximum value of the horizontal occurrence in local features, is introduced into the mapping space to optimize local features. Then, a Gaussian distribution model with hierarchical mean and covariance of pixel features is introduced to enhance feature expression. Finally, considering the influence of sample size on metric learning performance, the appropriate metric learning method is selected by a sample determination method to further improve the performance of person re-identification. Experimental results on the VIPeR, PRID450S, and CUHK01 datasets demonstrate that the proposed method outperforms traditional methods.
Owing to technological developments, medical image analysis has received considerable attention for the rapid detection and classification of diseases. The brain is an essential organ in humans. Brain tumors cause loss of memory, vision, and name. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification. However, owing to many factors, this is still a challenging task. These challenges relate to the tumor size, the shape of the tumor, the location of the tumor, and the selection of important features, among others. In this study, we propose a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is first normalized into high-grade glioma (HGG) and low-grade glioma (LGG) patients, and then two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The deep learning models were modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed for the best feature selection from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. The experimental process was conducted on the BraTS2019 dataset and achieved accuracies of 87.8% and 84.6% for HGG and LGG, respectively. A comparison with several classification methods shows the significance of the proposed technique.
Human gait recognition (HGR) has received a lot of attention in the last decade as an alternative biometric technique. The main challenges in gait recognition are changes in the person's view angle and covariant factors. The major covariant factors are walking while carrying a bag and walking while wearing a coat. Deep learning is a machine learning technique that is gaining popularity, and many techniques for HGR based on deep learning are presented in the literature. An efficient framework is always required for correct and quick gait recognition. In this work, we propose a fully automated deep learning and improved ant colony optimization (IACO) framework for HGR using video sequences. The proposed framework consists of four primary steps. In the first step, the database is normalized into video frames. In the second step, two pre-trained models named ResNet101 and InceptionV3 are selected and modified according to the dataset's nature. After that, we trained both modified models using transfer learning and extracted the features. The IACO algorithm is used to select the best of the extracted features, which are then passed to a cubic SVM for final classification. The cubic SVM employs a multiclass method. The experiment was carried out on three angles (0°, 18°, and 180°) of the CASIA B dataset, and the accuracies were 95.2%, 93.9%, and 98.2%, respectively. A comparison with existing techniques is also performed, and the proposed method outperforms them in terms of accuracy and computational time.
Malaria is a critical health condition that affects both sultry and frigid regions worldwide, giving rise to millions of cases of disease and thousands of deaths over the years. Malaria is caused by parasites that enter the human red blood cells, grow there, and damage them over time. Therefore, it is diagnosed by a detailed examination of blood cells under the microscope. This is the most extensively used malaria diagnosis technique, but it yields limited and unreliable results due to the manual human involvement. In this work, an automated malaria blood smear classification model is proposed, which takes images of both infected and healthy cells and preprocesses them in the L*a*b* color space by employing several contrast enhancement methods. Feature extraction is performed using two pretrained deep convolutional neural networks, DarkNet-53 and DenseNet-201. The features are subsequently agglutinated and optimized through a nature-based feature reduction method, the whale optimization algorithm. Several classifiers are applied to the reduced features, and the achieved results excel in both accuracy and time compared to previously proposed methods.
Inverse lithography technology (ILT) is intended to achieve an optimal mask design to print a lithography target for a given lithography process. Full-chip implementation of rigorous inverse lithography remains a challenging task because of enormous computational resource requirements and long computational time. To achieve a full-chip ILT solution, attempts have been made using machine learning techniques based on deep convolutional neural networks (DCNN). The reported input for such a DCNN is the rasterized image of the lithography target; such purely geometrical input requires the DCNN to possess a considerable number of layers to learn the optical properties of the mask, the nonlinear imaging process, and the rigorous ILT algorithm as well. To alleviate these difficulties, we proposed a physics-based optimal feature vector design for machine learning ILT in our earlier report. Although a physics-based feature vector followed by a feedforward neural network can provide a solution to machine learning ILT, the feature vector is long, and it can consume a considerable amount of memory in practical implementation. To improve resource efficiency, we propose a hybrid approach in this study that combines the first few physics-based feature maps with a specially designed DCNN structure to learn the rigorous ILT algorithm. Our results show that this approach makes machine learning ILT easy, fast, and more accurate.
Funding: funded by the Natural Science Foundation of Shandong Province (ZR2021MD061, ZR2023QD025), the China Postdoctoral Science Foundation (2022M721972), the National Natural Science Foundation of China (41174098), the Young Talents Foundation of Inner Mongolia University (10000-23112101/055), and the Qingdao Postdoctoral Science Foundation (QDBSH20230102094).
Abstract: Conventional machine learning (CML) methods have been successfully applied to gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of the sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). The model adopts an end-to-end structure that extracts features directly from sensitive multicomponent seismic attributes, considerably simplifying feature optimization. A CNN was used for feature optimization to highlight sensitive gas reservoir information, and APSO-LSSVM was used to fully learn the relationships among the features extracted by the CNN and produce the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through two stages, feature optimization and intelligent prediction, exploiting the complementary strengths of DL and CML methods. The prediction results are better than those of a single CNN model or a single APSO-LSSVM model. In the feature optimization stage on multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction than commonly used attribute optimization methods. In the prediction stage, the APSO-LSSVM model learned the gas reservoir characteristics better than the plain LSSVM model and achieved higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the individual models. This method demonstrates the effectiveness of DL for gas reservoir feature extraction and provides a feasible way to combine DL and CML techniques for gas reservoir prediction.
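The CNN-plus-LSSVM pipeline described above can be illustrated on toy data. The sketch below is not the authors' model: random 1-D filters stand in for a trained CNN, fixed hyperparameters (`gamma`, `sigma`) replace the APSO tuning, and the synthetic "gas"/"non-gas" traces are invented for the example. Only the LSSVM step is faithful to the general technique: training reduces to a single linear solve (Suykens' formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(traces, n_filters=4, width=5):
    """Crude stand-in for the CNN stage: random 1-D filters, ReLU, global average pool."""
    filt = rng.standard_normal((n_filters, width))
    feats = []
    for f in filt:
        m = np.array([np.convolve(t, f, mode="valid") for t in traces])
        feats.append(np.maximum(m, 0).mean(axis=1))  # ReLU then global average pooling
    return np.stack(feats, axis=1)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM: training is one bordered linear system, no QP needed."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))                 # RBF kernel matrix
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:], X, sigma                 # bias, alphas, support data

def lssvm_predict(model, Xq):
    b, alpha, Xtr, sigma = model
    d2 = ((Xq[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    return np.sign(K @ alpha + b)

# Synthetic high-amplitude "gas" vs low-amplitude "non-gas" traces -- purely illustrative.
gas = rng.standard_normal((30, 50)) * 2.0
dry = rng.standard_normal((30, 50)) * 0.5
X = conv_features(np.vstack([gas, dry]))
y = np.concatenate([np.ones(30), -np.ones(30)])
model = lssvm_fit(X, y)
acc = (lssvm_predict(model, X) == y).mean()          # training fit of the hybrid sketch
```

In the study, the APSO step would search over `gamma` and `sigma` instead of fixing them.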
Funding: supported by the National Natural Science Foundation of China (60774096) and the National High-Tech R&D Program of China (2008BAK49B05).
Abstract: Feature optimization is important to agricultural text mining. Usually, the vector space model is used to represent text documents. However, this basic approach still suffers from two drawbacks: the curse of dimensionality and the lack of semantic information. In this paper, a novel ontology-based feature optimization method for agricultural text is proposed. First, terms of the vector space model are mapped into concepts of an agricultural ontology, whose concept frequency weights are computed statistically from the term frequency weights; second, concept similarity weights are assigned to the concept features according to the structure of the agricultural ontology. By combining feature frequency weights and feature similarity weights based on the agricultural ontology, the dimensionality of the feature space can be reduced drastically, and semantic information can be incorporated. The results showed that this feature optimization yields a significant improvement in agricultural text clustering.
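A minimal sketch of the first step above, mapping term-frequency weights onto ontology concepts by summation; the term-to-concept table and the weights are hypothetical, and the second step (concept-similarity weighting from the ontology structure) is not reproduced.

```python
from collections import defaultdict

def concept_frequencies(term_freqs, term_to_concept):
    """Aggregate term-frequency weights into concept-frequency weights by
    mapping each term onto its ontology concept and summing the weights."""
    cf = defaultdict(float)
    for term, tf in term_freqs.items():
        concept = term_to_concept.get(term, term)  # unmapped terms stay as themselves
        cf[concept] += tf
    return dict(cf)

# Hypothetical toy ontology fragment: three cereal terms collapse into one concept.
term_to_concept = {"wheat": "cereal_crop", "barley": "cereal_crop",
                   "winter wheat": "cereal_crop"}
tf = {"wheat": 3.0, "barley": 1.0, "winter wheat": 2.0, "tractor": 1.0}
cf = concept_frequencies(tf, term_to_concept)
# cf == {"cereal_crop": 6.0, "tractor": 1.0}
```

Collapsing synonymous terms this way is what shrinks the feature space while keeping the semantics.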
Funding: supported by the National Natural Science Foundation of China (No. 12172076).
Abstract: Gesture recognition plays an increasingly important role as the requirements of intelligent systems for human-computer interaction methods increase. To improve the accuracy of the millimeter-wave radar gesture detection algorithm with limited computational resources, this study improves the detection performance in terms of optimized features and interference filtering. The accuracy of the algorithm is improved by refining the combination of gesture features using a self-constructed dataset, and biometric filtering is introduced to reduce the interference of inanimate object motion. Finally, experiments demonstrate the effectiveness of the proposed algorithm in both mitigating interference from inanimate objects and accurately recognizing gestures. Results show a notable 93.29% average reduction in false detections achieved through the integration of biometric filtering into the algorithm's interpretation of target movements. Additionally, the algorithm adeptly identifies the six gestures with an average accuracy of 96.84% on embedded systems.
Funding: Project (No. 2008AA01Z132) supported by the National High-Tech Research and Development Program of China.
Abstract: Image feature optimization is an important means to deal with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping between high- and low-dimensional space via a five-tuple model. Nonlinear dimensionality reduction based on manifold learning provides a feasible way for solving such a problem. We propose a novel globular neighborhood based locally linear embedding (GNLLE) algorithm using neighborhood update and an incremental neighbor search scheme, which not only can handle sparse datasets but also has strong anti-noise capability and good topological stability. Given that the distance measure adopted in nonlinear dimensionality reduction is usually based on pairwise similarity calculation, we also present a globular neighborhood and path clustering based locally linear embedding (GNPCLLE) algorithm based on path-based clustering. Due to its full consideration of correlations between image data, GNPCLLE can eliminate the distortion of the overall topological structure within the dataset on the manifold. Experimental results on two image sets show the effectiveness and efficiency of the proposed algorithms.
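For orientation, here is a plain locally linear embedding (the Roweis-Saul LLE that GNLLE builds on) applied to a toy 3-D dataset; the globular-neighborhood update, incremental neighbor search, and path-clustering refinements of GNLLE/GNPCLLE are not reproduced, and the data are synthetic.

```python
import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    """Standard LLE: reconstruct each point from its k neighbors, then find the
    low-dimensional coordinates that preserve those reconstruction weights."""
    n = len(X)
    dist = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]        # k nearest neighbors, excluding self
        Z = X[nbrs] - X[i]
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)         # regularize the local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                   # reconstruction weights sum to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                        # skip the constant eigenvector

# A noisy circle embedded in 3-D, mapped down to 2-D.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
X3 = np.c_[np.cos(t), np.sin(t), 0.05 * rng.standard_normal(120)]
Y = lle(X3, k=8, d=2)
```

The pairwise-distance matrix computed in the first line of `lle` is exactly where the paper's path-based similarity would be substituted.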
Funding: Researchers Supporting Project Number (RSP2022R458), King Saud University, Riyadh, Saudi Arabia.
Abstract: Automated facial expression recognition (FER) serves as the backbone of patient monitoring, security, and surveillance systems. Real-time FER is a challenging task due to the uncontrolled nature of the environment and the poor quality of input frames. In this paper, a novel FER framework is proposed for patient monitoring. Preprocessing is performed using contrast-limited adaptive enhancement, and the dataset is balanced using augmentation. Two lightweight, efficient convolutional neural network (CNN) models, MobileNetV2 and Neural Architecture Search Network Mobile (NasNetMobile), are trained, and feature vectors are extracted. The whale optimization algorithm (WOA) is utilized to remove irrelevant features from these vectors. Finally, the optimized features are serially fused and passed to the classifier. A comprehensive set of experiments was carried out on the real-time image datasets FER-2013, MMA, and CK+ to report performance on various metrics. The results show that the proposed model achieved 82.5% accuracy and performed better than state-of-the-art classification techniques; notably, it achieved this accuracy using 2.8 times fewer features.
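As a rough illustration of the WOA feature selection stage, the sketch below runs a simplified binary whale optimization over toy data with a nearest-centroid wrapper fitness. The encircling and spiral updates follow the standard WOA recipe, but the fitness function, data, and sigmoid binarization rule are assumptions made for the example, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(mask, X, y):
    """Wrapper fitness: nearest-centroid training accuracy on the selected
    features, minus a small penalty on the fraction of features kept."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.01 * mask.mean()

def woa_select(X, y, n_whales=12, iters=30):
    """Simplified binary WOA: encircling/spiral updates on continuous positions,
    squashed through a sigmoid into 0/1 feature masks."""
    dim = X.shape[1]
    pos = rng.uniform(-1, 1, (n_whales, dim))
    to_mask = lambda p: (1 / (1 + np.exp(-p)) > 0.5).astype(int)
    best = pos[0].copy()
    best_fit = fitness(to_mask(best), X, y)
    for t in range(iters):
        a = 2 - 2 * t / iters                      # control parameter decays 2 -> 0
        for i in range(n_whales):
            r = rng.random(dim)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:                 # encircling-prey phase
                pos[i] = best - A * np.abs(C * best - pos[i])
            else:                                  # spiral bubble-net phase
                l = rng.uniform(-1, 1, dim)
                pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            f = fitness(to_mask(pos[i]), X, y)
            if f > best_fit:
                best, best_fit = pos[i].copy(), f
    return to_mask(best), best_fit

# Toy data: only the first 3 of 20 features carry class information.
X = rng.standard_normal((80, 20))
y = (X[:, :3].sum(1) > 0).astype(int)
X[:, :3] += y[:, None] * 2.0
mask, fit = woa_select(X, y)
```

In the paper, the fitness would instead score a classifier on the fused deep feature vectors.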
Abstract: In 2020, COVID-19 started spreading throughout the world. This deadly infection was identified as a virus that may affect the lungs and, in severe cases, cause death. The polymerase chain reaction (PCR) test is commonly used to detect the virus through the nasal passage or throat; however, the PCR test exposes health workers to it. To limit human exposure while detecting COVID-19, image processing techniques using deep learning have been successfully applied. In this paper, a deep learning strategy is employed to classify the COVID-19 virus. To extract features, two deep learning models are used, DenseNet201 and SqueezeNet. Transfer learning is used in feature extraction, and the models are fine-tuned. A publicly available computed tomography (CT) scan dataset is used in this study. The features extracted from the deep learning models are optimized using the ant colony optimization algorithm. The proposed technique is validated through multiple evaluation parameters. Several classifiers are employed to classify the optimized features. The cubic support vector machine (cubic SVM) classifier shows superiority over other commonly used classifiers and attained an accuracy of 98.72%. The proposed technique achieves state-of-the-art accuracy, a sensitivity of 98.80%, and a specificity of 96.64%.
Funding: supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: In the area of medical image processing, stomach cancer is one of the most important cancers, and it needs to be diagnosed at an early stage. In this paper, an optimized deep learning method is presented for multiple stomach disease classification. The proposed method works in a few important steps: preprocessing using a fusion of filtered images along with ant colony optimization (ACO), deep transfer-learning-based feature extraction, optimization of the deep extracted features using nature-inspired algorithms, and finally fusion of the optimal vectors and classification using a multilayered perceptron neural network (MLNN). In the feature extraction step, a pretrained Inception V3 is utilized and retrained on selected stomach infection classes using deep transfer learning. Later, the activation function is applied to the global average pooling (GAP) layer for feature extraction. The extracted features are optimized through two different nature-inspired algorithms: particle swarm optimization (PSO) with a dynamic fitness function and the crow search algorithm (CSA). The outputs of both methods are fused by a maximal-value approach, and the fused feature vector is classified by the MLNN. Two datasets are used to evaluate the proposed method, CUI Wah Stomach Diseases and a combined dataset, achieving an average accuracy of 99.5%. Comparison with existing techniques shows that the proposed method delivers significant performance.
Funding: This work was financially supported by the National High Technology Research and Development Program of China (Nos. 2003AA331080 and 2001AA339030) and the Talent Science Research Foundation of Henan University of Science & Technology (No. 09001121).
Abstract: Considering that the surface defects of cold-rolled strips are hard to recognize by the human eye under high-speed circumstances, an automatic recognition technique is discussed. Spectrum images of defects are obtained by the fast Fourier transform (FFT) and sum of valid pixels (SVP), and the optimized center region, which concentrates nearly all the energy, is extracted as the original feature set. Using a genetic algorithm to optimize this feature set, an optimized set of 51 features is achieved. Using the optimized feature set as the input vector of neural networks, the recognition performance of LVQ neural networks is studied. Experimental results show that the new method achieves a higher classification rate and can settle the automatic recognition problem of surface defects on cold-rolled strips ideally.
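A small sketch of the spectrum-feature step above: the FFT magnitude of a defect-like image is computed and the central (low-frequency) block is flattened as the raw feature set. The image, block size, and defect pattern are invented for the example; the SVP step and the genetic-algorithm feature optimization are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def spectrum_center_features(img, radius=8):
    """FFT magnitude spectrum, shifted so low frequencies sit at the center,
    then the central block is flattened as the raw feature vector."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(mag.shape) // 2
    return mag[cy - radius:cy + radius, cx - radius:cx + radius].ravel()

# A defect-like image: weak noise background plus a periodic scratch pattern.
img = rng.standard_normal((64, 64)) * 0.1
img += np.sin(np.arange(64) * 0.8)[None, :]        # horizontal periodic "defect"
feats = spectrum_center_features(img)              # 16 x 16 center block -> 256 features
```

A genetic algorithm would then search for the subset of these 256 coefficients that best separates the defect classes.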
Abstract: Manual diagnosis of crop diseases is not an easy process; thus, computerized methods are widely used. Over the past few years, advancements in machine learning, such as deep learning, have shown substantial success. However, these methods still face challenges such as similarity in disease symptoms and irrelevant feature extraction. In this article, we propose a new deep learning architecture with an optimization algorithm for cucumber and potato leaf disease recognition. The proposed architecture consists of five steps. In the first step, data augmentation is performed to increase the number of training samples. In the second step, a pre-trained DarkNet19 deep model is selected and fine-tuned, then trained through transfer learning. Deep features are extracted from the global pooling layer in the next step and refined using an improved cuckoo search algorithm. The best selected features are finally classified using machine learning classifiers such as SVM, among others. The proposed architecture is tested on publicly available datasets, the Cucumber National Dataset and Plant Village, and achieved accuracies of 100.0%, 92.9%, and 99.2%, respectively. A comparison with recent techniques is also performed, revealing that the proposed method achieves improved accuracy while consuming less computational time.
Funding: supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: Identifying fruit diseases manually is time-consuming, expert-dependent, and expensive; thus, a computer-based automated system is widely required. Fruit diseases affect not only the quality but also the quantity of the yield. As a result, computer-based techniques make it possible to detect disease early and treat the fruit. However, computer-based methods face several challenges, including low contrast, a lack of datasets for training a model, and inappropriate feature extraction for final classification. In this paper, we propose an automated framework for detecting apple fruit leaf diseases using a CNN and a hybrid optimization algorithm. Data augmentation is performed initially to balance the selected apple dataset. After that, two pre-trained deep models are fine-tuned and trained using transfer learning. Then, a fusion technique named parallel correlation threshold (PCT) is proposed. The fused feature vector is optimized in the next step using a hybrid optimization algorithm. The selected features are finally classified using machine learning algorithms. Four different experiments were carried out on the augmented Plant Village dataset and yielded a best accuracy of 99.8%. The accuracy of the proposed framework is also compared with that of several neural networks, and it outperforms them all.
Funding: supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: Manual diagnosis of brain tumors using magnetic resonance images (MRI) is a hectic and time-consuming process that always requires an expert. Therefore, many computer-controlled methods for diagnosing and classifying brain tumors have been introduced in the literature. This paper proposes a novel multimodal brain tumor classification framework based on two-way deep learning feature extraction and a hybrid feature optimization algorithm. NasNet-Mobile, a pre-trained deep learning model, is fine-tuned and trained two ways, on original and enhanced MRI images. The haze-convolutional neural network (haze-CNN) approach is developed and employed on the original images for contrast enhancement. Next, transfer learning (TL) is utilized to train the two fine-tuned models and extract feature vectors from the global average pooling layer. Then, using a multiset canonical correlation analysis (CCA) method, the features of both deep learning models are fused into a single feature matrix; this technique aims to enrich the feature information for better classification. Although the information increased, the computational time also jumped. This issue is resolved using a hybrid feature optimization algorithm that chooses the best classification features. The experiments were conducted on two publicly available datasets, BraTS2018 and BraTS2019, and yielded accuracy rates of 94.8% and 95.7%, respectively. The proposed method is compared with several recent studies and outperforms them in accuracy. In addition, we analyze the performance of each intermediate step of the proposed approach and find that the selection technique strengthens the framework.
Abstract: The increasing prevalence of Internet of Things (IoT) devices has introduced a new phase of connectivity in recent years and, concurrently, has opened the floodgates to growing cyber threats. Among the myriad potential attacks, denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks remain a dominant concern because of their capability to render services inoperable by overwhelming systems with an influx of traffic. As IoT devices often lack the inherent security measures found in more mature computing platforms, the need for robust DoS/DDoS detection systems tailored to IoT is paramount for the sustainable development of every domain that IoT serves. In this study, we investigate the effectiveness of three machine learning (ML) algorithms, extreme gradient boosting (XGB), multilayer perceptron (MLP), and random forest (RF), for the detection of IoT-targeted DoS/DDoS attacks, together with three feature engineering methods that have not been used in the existing state of the art, and then employ the best-performing algorithm to design a prototype of a novel real-time system for detecting such attacks. The CICIoT2023 dataset, derived from the latest real-world IoT traffic, incorporates both benign and malicious network traffic patterns; after data preprocessing and feature engineering, the data were fed into our models for training and validation. The findings suggest that while all three models exhibit commendable accuracy in detecting DoS/DDoS attacks, the use of particle swarm optimization (PSO) for feature selection greatly improves the performance of the ML models (accuracy, precision, recall, and F1-score of 99.93% for XGB) and their execution time (491.023 seconds for XGB) compared with recursive feature elimination (RFE) and random forest feature importance (RFI) methods. The proposed real-time system for DoS/DDoS attack detection entails a platform capable of effectively processing and analyzing network traffic in real time, employing the best-performing ML algorithm for detection together with integrated warning mechanisms. We believe this approach will significantly enhance security research and will continue to be refined based on future insights and developments.
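The PSO-based feature selection credited for the improvement above can be sketched as a standard binary PSO. The sigmoid transfer rule and the toy fitness below are illustrative assumptions; in the study the fitness would be validation performance of a model such as XGB on the selected features.

```python
import numpy as np

rng = np.random.default_rng(4)

def bpso_select(score, dim, n_particles=15, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO: velocities updated as usual; positions are resampled through
    a sigmoid so each particle is a 0/1 feature mask scored by `score(mask)`."""
    vel = rng.uniform(-1, 1, (n_particles, dim))
    pos = (rng.random((n_particles, dim)) > 0.5).astype(float)
    pbest, pbest_f = pos.copy(), np.array([score(p) for p in pos])
    g = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([score(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmax()].copy()       # global best mask so far
    return g.astype(int)

# Toy fitness: reward keeping the first 5 "informative" features, penalize the rest.
informative = np.zeros(30)
informative[:5] = 1
score = lambda m: (m * informative).sum() - 0.2 * (m * (1 - informative)).sum()
mask = bpso_select(score, dim=30)
```

Swapping `score` for a cross-validated classifier accuracy turns this sketch into the wrapper selection the paper describes.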
Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources like Pakistan. This study aims to conduct an extensive comparative analysis of various machine learning classifiers for ASD detection using facial images to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and the automated image processing capabilities within the tool are utilized. The findings unequivocally establish VGG16 as the most effective classifier with a 5-fold cross-validation approach. Specifically, VGG16, with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a remarkable validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducts further testing on a dataset sourced from autism centers in Pakistan, resulting in an accuracy rate of 85%. This reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
Abstract: Feature selection (FS), also called feature dimensionality reduction or feature optimization, is an essential process in pattern recognition and machine learning because it enhances classification speed and accuracy and reduces system complexity. FS reduces the number of features extracted in the feature extraction phase by discarding highly correlated features, retaining features with high information gain, and removing features that carry no weight in classification. In this work, a filter-type statistical FS method is designed and implemented that utilizes a t-test to decrease the convergence between feature subsets by calculating a quality-of-performance value (QoPV). The approach uses a well-designed fitness function to calculate a strength-of-recognition value (SoRV). The two values are used to rank all features according to a final weight (FW) calculated for each feature subset with a function that prioritizes feature subsets with high SoRV values. An FW is assigned to each feature subset, and those with FWs below a predefined threshold are removed from the feature subset domain. Experiments are implemented on three datasets: the Ryerson Audio-Visual Database of Emotional Speech and Song, Berlin, and Surrey Audio-Visual Expressed Emotion. The performance of the F-test and F-score FS methods is compared with that of the proposed method, and tests are also conducted on the system before and after deploying the FS methods. The results demonstrate the comparative efficiency of the proposed method. The complexity of the system is calculated from the time overhead required before and after FS, and the results show that the proposed method can reduce system complexity.
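A per-feature Welch t-statistic, the core statistic behind such filter-type FS methods, can be computed directly; the QoPV/SoRV weighting and the threshold rule of the proposed method are not reproduced here, and the data below are synthetic.

```python
import numpy as np

def t_scores(X, y):
    """Per-feature Welch t-statistic between two classes; features with a large
    |t| separate the class means well and are ranked first by the filter."""
    a, b = X[y == 0], X[y == 1]
    num = a.mean(0) - b.mean(0)
    den = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.abs(num / den)

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 12))
y = np.r_[np.zeros(50), np.ones(50)].astype(int)
X[y == 1, 0] += 3.0                     # feature 0 is strongly class-dependent
ranking = np.argsort(-t_scores(X, y))   # most discriminative features first
```

Thresholding this ranking, rather than wrapping a classifier, is what makes filter methods fast.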
Funding: This work was supported by the Deanship of Scientific Research, King Saud University, Saudi Arabia.
Abstract: Since the beginning of time, humans have relied on plants for food, energy, and medicine. Plants are recognized by leaf, flower, or fruit and linked to their suitable cluster. Classification methods are used to extract and select traits that are helpful in identifying a plant. In plant leaf image categorization, each plant is assigned a label according to its classification. The purpose of classifying plant leaf images is to enable farmers to recognize plants, leading to the management of plants in several aspects. This study aims to present a modified whale optimization algorithm that categorizes plant leaf images into classes. This modified algorithm works on different sets of plant leaves. The proposed algorithm examines several benchmark functions with adequate performance. This classification method was validated on ten plant leaf image datasets. The proposed model calculates precision, recall, F-measure, and accuracy for the ten datasets and compares these parameters with those of other existing algorithms. Based on the experimental data, the accuracy of the proposed method outperforms that of the algorithms under consideration, improving accuracy by 5%.
Funding: Supported by the National Natural Science Foundation of China (No. 61976080), the Science and Technology Key Project of the Science and Technology Department of Henan Province (No. 212102310298), the Innovation and Quality Improvement Project for Graduate Education of Henan University (No. SYL20010101), and the Academic Degrees & Graduate Education Reform Project of Henan Province (2021SJLX195Y).
Abstract: Person re-identification (Re-ID) is integral to intelligent monitoring systems. However, variability in viewing angle and illumination easily causes visual ambiguities that affect re-identification accuracy. An approach to person re-identification based on a feature mapping space and sample determination is proposed. First, a weight fusion model, including the mean and maximum value of the horizontal occurrence in local features, is introduced into the mapping space to optimize local features. Then, a Gaussian distribution model with hierarchical mean and covariance of pixel features is introduced to enhance feature expression. Finally, considering the influence of sample size on metric learning performance, the appropriate metric learning scheme is selected by the sample determination method to further improve re-identification performance. Experimental results on the VIPeR, PRID450S, and CUHK01 datasets demonstrate that the proposed method outperforms traditional methods.
Funding: This study was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), a grant from the National Research Foundation of Korea (NRF-2020R1I1A1A01074256), and the Soonchunhyang University Research Fund.
Abstract: Owing to technological developments, medical image analysis has received considerable attention for the rapid detection and classification of diseases. The brain is an essential organ in humans, and brain tumors cause loss of functions such as memory and vision. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification; however, owing to many factors, this remains a challenging task. These challenges relate to tumor size, tumor shape, tumor location, and the selection of important features, among others. In this study, we propose a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is first normalized into high-grade glioma (HGG) and low-grade glioma (LGG) patients, and two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The deep learning models are modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed for selecting the best features from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. The experimental process was conducted on the BraTS2019 dataset and achieved accuracies of 87.8% and 84.6% for HGG and LGG, respectively. A comparison with several classification methods shows the significance of the proposed technique.
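The serial-based fusion step mentioned above is plain column-wise concatenation of the two deep feature matrices. In the sketch below the matrices are random stand-ins for the ResNet50 and DenseNet201 embeddings, and the dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def serial_fuse(f1, f2):
    """Serial-based fusion: concatenate the two feature matrices
    (samples x features) along the feature axis."""
    return np.concatenate([f1, f2], axis=1)

# Stand-ins for per-scan embeddings of 40 MRI volumes from the two networks.
f_resnet = rng.standard_normal((40, 16))
f_dense = rng.standard_normal((40, 24))
fused = serial_fuse(f_resnet, f_dense)   # 40 samples x (16 + 24) features
```

The ant-colony selection stage would then prune columns of `fused` before the SVM sees them.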
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2018R1D1A1B07042967), and the Soonchunhyang University Research Fund.
Abstract: Human gait recognition (HGR) has received much attention in the last decade as an alternative biometric technique. The main challenges in gait recognition are changes in the person's viewing angle and covariant factors, the major ones being walking while carrying a bag and walking while wearing a coat. Deep learning is a machine learning technique that is gaining popularity, and many deep-learning-based HGR techniques are presented in the literature; an efficient framework is always required for correct and fast gait recognition. In this work, we propose a fully automated deep learning and improved ant colony optimization (IACO) framework for HGR using video sequences. The proposed framework consists of four primary steps. In the first step, the database is normalized into video frames. In the second step, two pre-trained models, ResNet101 and InceptionV3, are selected and modified according to the nature of the dataset. After that, both modified models are trained using transfer learning, and features are extracted. The IACO algorithm is used to select the best features, which are then passed to a cubic SVM, employing a multiclass method, for final classification. The experiment was carried out on three angles (0, 18, and 180 degrees) of the CASIA B dataset, yielding accuracies of 95.2%, 93.9%, and 98.2%, respectively. A comparison with existing techniques is also performed, and the proposed method outperforms them in terms of accuracy and computational time.
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) program (IITP-2021-2020-0-01832), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation), and the Soonchunhyang University Research Fund.
Abstract: Malaria is a critical health condition that affects both tropical and temperate regions worldwide, giving rise to millions of cases of disease and thousands of deaths over the years. Malaria is caused by parasites that enter human red blood cells, grow there, and damage them over time. It is therefore diagnosed by a detailed examination of blood cells under the microscope. This is the most extensively used malaria diagnosis technique, but it yields limited and unreliable results due to the manual human involvement. In this work, an automated malaria blood smear classification model is proposed that takes images of both infected and healthy cells and preprocesses them in the L*a*b* color space using several contrast enhancement methods. Feature extraction is performed with two pretrained deep convolutional neural networks, DarkNet-53 and DenseNet-201. The extracted features are then concatenated and optimized through a nature-inspired feature reduction method, the whale optimization algorithm. Several classifiers are applied to the reduced features, and the achieved results excel in both accuracy and time compared with previously proposed methods.
Abstract: Inverse lithography technology (ILT) aims to achieve an optimal mask design to print a lithography target for a given lithography process. Full-chip implementation of rigorous inverse lithography remains a challenging task because of enormous computational resource requirements and long computational times. To achieve a full-chip ILT solution, attempts have been made using machine learning techniques based on deep convolutional neural networks (DCNNs). The reported input for such DCNNs is rasterized images of the lithography target; such purely geometrical input requires the DCNN to possess a considerable number of layers to learn the optical properties of the mask, the nonlinear imaging process, and the rigorous ILT algorithm as well. To alleviate these difficulties, we proposed a physics-based optimal feature vector design for machine learning ILT in an earlier report. Although a physics-based feature vector followed by a feedforward neural network can provide a solution to machine learning ILT, the feature vector is long and can consume a considerable amount of memory in practical implementation. To improve resource efficiency, in this study we propose a hybrid approach that combines the first few physics-based feature maps with a specially designed DCNN structure to learn the rigorous ILT algorithm. Our results show that this approach makes machine learning ILT easier, faster, and more accurate.