We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The outputs of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) dataset. In the first binary classification, the accuracy for digits 0 and 4 exceeds 98%. We then compare the three-class performance with other algorithms; the results on two datasets show that the classification accuracy is higher than that of the quantum deep neural network and the general quantum convolutional neural network.
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
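As a rough illustration of the hybrid loop described above (parameterized quantum circuit, measurement, classical optimizer), here is a minimal PennyLane sketch. The 4-qubit circuit, angle encoding, and optimizer settings are illustrative assumptions, not the authors' exact HQDNN design.

```python
# Minimal hybrid quantum-classical sketch (assumed 4-qubit toy setup, not the authors' circuit).
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(x, weights):
    qml.AngleEmbedding(x, wires=range(n_qubits))        # encode classical features as rotations
    for layer in range(n_layers):                       # three parameterized layers
        for w in range(n_qubits):
            qml.RY(weights[layer, w], wires=w)
        for w in range(n_qubits):
            qml.CNOT(wires=[w, (w + 1) % n_qubits])
    return qml.expval(qml.PauliZ(0))                    # measurement -> classical value in [-1, 1]

def cost(weights, x, label):                            # label in {-1, +1}
    return (circuit(x, weights) - label) ** 2

weights = np.random.uniform(0, np.pi, (n_layers, n_qubits), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)
x, label = np.random.uniform(0, np.pi, n_qubits), 1.0
for _ in range(25):                                     # classical optimizer trains the circuit
    weights = opt.step(lambda w: cost(w, x, label), weights)
```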
Desertification has become a global threat and caused a crisis, especially in Middle Eastern countries such as Saudi Arabia. Makkah is one of the most important cities in Saudi Arabia that needs to be protected from desertification. The vegetation area in Makkah has been damaged by desertification through wind, floods, overgrazing, and global climate change. The damage caused by desertification can be recovered provided urgent action is taken to prevent further degradation of the vegetation area. In this paper, we propose an automatic desertification detection system based on deep learning techniques. Aerial images are classified using Convolutional Neural Networks (CNN) to detect land state variation in real time. CNNs have been widely used for computer vision applications, such as image classification, image segmentation, and quality enhancement. The proposed CNN model was trained and evaluated on the Aerial Image Dataset (AID). Compared to state-of-the-art methods, the proposed model has better performance while being suitable for embedded implementation. It has achieved high efficiency with 96.47% accuracy. In light of the current research, we assert the appropriateness of the proposed CNN model for detecting desertification from aerial images.
Funding: Supported by the Makkah Digital Gate Initiative under grant no. MDP-IRI-3-2020.
To study scene classification in Synthetic Aperture Radar (SAR) images, a novel method based on kernel estimation, the Markov context, and Dempster-Shafer evidence theory is proposed. Initially, a nonparametric Probability Density Function (PDF) estimation method is introduced to describe the scene of SAR images. Then, under the Markov context, both the determinate PDF and the kernel estimation method are adopted, respectively, to form a primary classification. Next, the primary classification results are fused using the evidence theory in an unsupervised way to obtain the scene classification. Finally, a regularization step is used, in which an iterated maximum-selection approach is introduced to control fragments and correct errors in the classification. The use of kernel estimation and evidence theory can describe complicated scenes with little prior knowledge and eliminate the ambiguities of the primary classification results. Experimental results on real SAR images illustrate a rather impressive performance.
Funding: Supported by the National Natural Science Foundation of China (60372057).
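The fusion step above relies on Dempster-Shafer evidence theory; the snippet below shows Dempster's rule of combination for two mass functions in its generic form, not the paper's exact formulation.

```python
# Dempster's rule of combination for two mass functions over the same set of scene classes.
def combine_masses(m1, m2):
    """m1, m2: dicts mapping frozenset of class labels -> belief mass (masses sum to 1)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}   # renormalize

# Example: fusing two primary classifiers' beliefs about {urban, water} scenes.
m_pdf    = {frozenset({"urban"}): 0.6, frozenset({"urban", "water"}): 0.4}
m_kernel = {frozenset({"urban"}): 0.5, frozenset({"water"}): 0.3, frozenset({"urban", "water"}): 0.2}
print(combine_masses(m_pdf, m_kernel))
```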
Leukemia is a form of cancer of the blood or bone marrow. A person with leukemia has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at the initial stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques are easy to generalize and can eliminate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, aiming to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The projected MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images. The MPADL-LCC system uses Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, the denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance leukemia cancer classification performance. Simulation results are compared with other recent approaches across various measurements, and the MPADL-LCC algorithm exhibits the best results over other recent approaches.
Funding: Funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
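A minimal sketch of the bilateral-filtering pre-processing step using OpenCV; the filter parameters and the synthetic input are assumptions for illustration.

```python
# Bilateral filtering smooths noise while preserving edges, which is why it is used
# before feature extraction. Parameter values below are assumed, not the paper's.
import cv2
import numpy as np

img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)   # stand-in for a blood-smear image
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
print(denoised.shape, denoised.dtype)
```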
In the domain of medical imaging, the accurate detection and classification of brain tumors is very important. This study introduces an advanced method for identifying camouflaged brain tumors within images. Our proposed model consists of three steps: feature extraction, feature fusion, and classification. The core of this model is a feature extraction framework that combines color-transformed images with deep learning techniques, using the ResNet50 Convolutional Neural Network (CNN) architecture. The focus is on extracting robust features from MRI images, particularly emphasizing weighted-average features extracted from the first convolutional layer, which are renowned for their discriminative power. To enhance model robustness, we introduce a novel feature fusion technique based on the Marine Predator Algorithm (MPA), which is inspired by the hunting behavior of marine predators and has shown promise in optimizing complex problems. By combining the power of color transformations, deep learning, and feature fusion via MPA, the proposed methodology can accurately classify and detect brain tumors in camouflage images, achieving an accuracy of 98.72% on a more complex dataset and surpassing existing state-of-the-art methods, which highlights the effectiveness of the proposed model. The importance of this research lies in its potential to advance the field of medical image analysis, particularly brain tumor diagnosis, where early diagnosis and accurate classification are critical for improved patient outcomes.
Funding: Supported by Prince Sattam bin Abdulaziz University through Project Number PSAU/2023/01/24607.
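As a sketch of how first-convolutional-layer feature maps can be pulled from ResNet50 and collapsed by a weighted average, the snippet below uses a forward hook; the uniform channel weights and the untrained backbone are placeholders, not the paper's weighting scheme.

```python
# Grab the first conv layer's feature maps with a forward hook and fuse them channel-wise.
import torch
import torchvision.models as models

resnet = models.resnet50(weights=None).eval()   # in practice, pretrained ImageNet weights are used
feats = {}
resnet.conv1.register_forward_hook(lambda m, i, o: feats.update(conv1=o))

x = torch.randn(1, 3, 224, 224)                 # stand-in for a colour-transformed MRI slice
with torch.no_grad():
    resnet(x)

maps = feats["conv1"]                           # (1, 64, 112, 112) early, edge-like feature maps
w = torch.ones(maps.shape[1]) / maps.shape[1]   # placeholder channel weights (uniform average)
weighted_avg = (maps * w.view(1, -1, 1, 1)).sum(dim=1)   # (1, 112, 112) fused feature map
print(weighted_avg.shape)
```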
AIM: To conduct a classification study of high myopic maculopathy (HMM) using limited datasets, including tessellated fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy, and macular atrophy, while minimizing annotation costs, and to optimize the ALFA-Mix active learning algorithm and apply it to HMM classification. METHODS: The optimized ALFA-Mix algorithm (ALFA-Mix+) was compared with five algorithms, including ALFA-Mix. Four models, including ResNet18, were established. Each algorithm was combined with the four models for experiments on the HMM dataset. Each experiment consisted of 20 active learning rounds, with 100 images selected per round. The algorithm was evaluated by comparing the number of rounds in which ALFA-Mix+ outperformed the other algorithms. Finally, this study employed six models, including EfficientFormer, to classify HMM. The best-performing model among them was selected as the baseline model and combined with the ALFA-Mix+ algorithm to achieve satisfactory classification results with a small dataset. RESULTS: ALFA-Mix+ outperforms the other algorithms with an average superiority of 16.6, 14.75, 16.8, and 16.7 rounds in terms of accuracy, sensitivity, specificity, and Kappa value, respectively. This study conducted experiments on classifying HMM using several advanced deep learning models with a complete training set of 4252 images. EfficientFormer achieved the best results, with an accuracy, sensitivity, specificity, and Kappa value of 0.8821, 0.8334, 0.9693, and 0.8339, respectively. Therefore, by combining ALFA-Mix+ with EfficientFormer, this study achieved an accuracy, sensitivity, specificity, and Kappa value of 0.8964, 0.8643, 0.9721, and 0.8537, respectively. CONCLUSION: The ALFA-Mix+ algorithm reduces the required samples without compromising accuracy. Compared to other algorithms, ALFA-Mix+ wins in more rounds of experiments and effectively selects valuable samples. In HMM classification, combining ALFA-Mix+ with EfficientFormer enhances model performance, further demonstrating the effectiveness of ALFA-Mix+.
Funding: Supported by the National Natural Science Foundation of China (No. 61906066), the Zhejiang Provincial Philosophy and Social Science Planning Project (No. 21NDJC021Z), the Shenzhen Fund for Guangdong Provincial High-level Clinical Key Specialties (No. SZGSP014), the Sanming Project of Medicine in Shenzhen (No. SZSM202011015), the Shenzhen Science and Technology Planning Project (No. KCXFZ20211020163813019), the Natural Science Foundation of Ningbo City (No. 202003N4072), and the Postgraduate Research and Innovation Project of Huzhou University (No. 2023KYCX52).
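The round structure described above (20 rounds, 100 images per round) can be sketched as a generic pool-based active-learning loop; the uncertainty score below is a placeholder and does not reproduce ALFA-Mix+'s selection rule.

```python
# Schematic active-learning loop: 20 rounds, 100 newly labelled images per round.
import numpy as np

def acquisition_scores(model, pool_features):
    probs = model.predict_proba(pool_features)
    return 1.0 - probs.max(axis=1)                       # simple uncertainty stand-in

def active_learning(model, X_pool, y_pool, X_init, y_init, rounds=20, batch=100):
    X_train, y_train = X_init.copy(), y_init.copy()
    pool_idx = np.arange(len(X_pool))
    for _ in range(rounds):
        model.fit(X_train, y_train)
        scores = acquisition_scores(model, X_pool[pool_idx])
        picked = pool_idx[np.argsort(scores)[-batch:]]    # the 100 most "informative" images
        X_train = np.vstack([X_train, X_pool[picked]])
        y_train = np.concatenate([y_train, y_pool[picked]])   # oracle (annotator) labels
        pool_idx = np.setdiff1d(pool_idx, picked)
    return model

# Usage: active_learning(LogisticRegression(), X_pool, y_pool, X_seed, y_seed)
```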
The results of the development of a new high-speed method for image classification using a structural approach are presented. The method is based on a system of hierarchical features built on the bitwise distribution of the data for the set of descriptors in an image description. The article also proposes the use of a spatial data processing apparatus, which simplifies and accelerates the classification process. Experiments have shown that the time needed to calculate the relevance of two descriptions from their distributions is about 1000 times less than for the traditional voting procedure, in which the sets of descriptors are compared directly. The introduction of the system of hierarchical features allows the calculation time to be reduced by a further factor of 2-3 while ensuring high classification efficiency. The noise immunity of the method to additive noise has been studied experimentally. According to the results, the limiting level of the feature hierarchy for reliable classification at a noise standard deviation below 30 is the 8-bit distribution. Computing costs increase proportionally as the bit depth of the distribution decreases. The method can be used for applied tasks where object identification time is critical.
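A rough sketch of the underlying idea, comparing two descriptor sets through their per-bit distributions instead of descriptor-by-descriptor voting; the descriptor size and distance measure are assumptions.

```python
# Summarize a set of binary descriptors by the frequency of 1-bits in each position,
# then compare two images by the distance between these fixed-length signatures.
import numpy as np

def bit_distribution(descriptors):            # descriptors: (n, 512) array of 0/1 bits
    return descriptors.mean(axis=0)           # per-bit frequency, independent of set size

def relevance(desc_a, desc_b):
    return -np.abs(bit_distribution(desc_a) - bit_distribution(desc_b)).sum()  # higher = closer

a = np.random.randint(0, 2, (400, 512))
b = np.random.randint(0, 2, (400, 512))
print(relevance(a, a) >= relevance(a, b))     # a set is most relevant to itself
```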
Recently, the convolutional neural network (CNN) has been dominant in studies on interpreting remote sensing images (RSI). However, training optimization strategies appear to have received less attention in the relevant research. To evaluate this problem, the author proposes a novel algorithm named the Fast Training CNN (FST-CNN). To verify the algorithm's effectiveness, twenty methods, including six classic models and thirty architectures from previous studies, are included in a performance comparison. The overall accuracy (OA) trained by the FST-CNN algorithm on the same model architecture and dataset is treated as an evaluation baseline. Results show that there is a maximal OA gap of 8.35% between the FST-CNN and those methods in the literature, which means a 10% margin in performance. Meanwhile, all those complex roadmaps, e.g., deep feature fusion, model combination, model ensembles, and human feature engineering, are not as effective as expected. This reveals systemic suboptimal performance in the previous studies. Most of the CNN-based methods proposed in previous studies share a consistent mistake, which has made the models' accuracy lower than its potential value. The most important reasons seem to be an inappropriate training strategy and the shift in data distribution introduced by data augmentation (DA). As a result, most of the performance evaluation was conducted on an inaccurate, suboptimal, and unfair basis, which makes many of the previous research findings questionable to some extent. However, these confusing results also demonstrate the effectiveness of FST-CNN. This novel algorithm is model-agnostic and can be employed on any image classification model to potentially boost performance. In addition, the results show that a standardized training strategy is indeed very meaningful for the research tasks of RSI-SC.
Funding: Hunan University of Arts and Science provided doctoral research funding for this study (grant number 16BSQD23); the Fund of the Geography Subject ([2022]351) also provided funding.
The problem of image recognition in computer vision systems is studied. We present the results of developing efficient classification methods, with regard to processing speed, based on the analysis of the segment representation of the structural description in the form of a set of descriptors. We propose three versions of the classifier according to the following principles: "object-etalon", "object descriptor-etalon", and "vector description of the object-etalon", which differ in the level of integration of the analyzed data. Options are implemented for constructing clusters over the whole set of descriptions in the etalon database and separately for each of the etalons, as well as an optimal method for comparing the sets of segment centers of the etalons and the object. An experimental evaluation of the efficiency of the created classifiers in terms of productivity, processing time, and classification quality has been carried out. The proposed methods classify the set of etalons without error. We conclude that classification approaches based on segment centers are efficient. The image processing time of the developed methods is hundreds of times less than that of the traditional approach, without reducing accuracy.
Funding: The authors received specific funding for this research (Project Number IF-PSAU-2021/01/18487).
Today, many eye diseases jeopardize our everyday lives, such as Diabetic Retinopathy (DR), Age-related Macular Degeneration (AMD), and Glaucoma. Glaucoma is an incurable and unavoidable eye disease that damages the vision of the optic nerves and quality of life. Classification of Glaucoma has been an active field of research for the past ten years. Several approaches for Glaucoma classification have been established, beginning with conventional segmentation and feature-extraction methods and moving to deep-learning techniques such as Convolutional Neural Networks (CNN). A CNN classifies the input images directly, extracting features using the tuned parameters of its convolution and pooling layers. However, the volume of the training dataset determines the performance of the CNN; when the model is trained with small datasets, overfitting issues arise. CNNs are therefore developed with transfer learning. The primary aim of this study is to explore the potential of EfficientNet with transfer learning for the classification of Glaucoma. The performance of the current work is compared with other models, namely VGG16, InceptionV3, and Xception, using public datasets such as RIM-ONE V2 & V3, ORIGA, DRISHTI-GS1, HRF, and ACRIMA. The dataset is split into training, validation, and testing sets with a ratio of 70:15:15. The assessment on the test dataset shows that the pre-trained EfficientNetB4 achieved the highest performance compared to the other models listed above. The proposed method achieved 99.38% accuracy and also better results for other metrics, such as sensitivity, specificity, precision, F1-score, Kappa score, and Area Under the Curve (AUC), compared to the other models.
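A minimal Keras sketch of the transfer-learning setup described above, with a frozen EfficientNetB4 backbone and a new classification head; the head size, learning rate, and binary output are illustrative assumptions, not the study's exact configuration.

```python
# Transfer learning with a pretrained EfficientNetB4 backbone (Keras).
import tensorflow as tf

base = tf.keras.applications.EfficientNetB4(include_top=False, weights="imagenet",
                                            input_shape=(380, 380, 3), pooling="avg")
base.trainable = False                       # freeze ImageNet features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary head: glaucoma vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)   # 70:15:15 train/val/test split
```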
A brain tumor is a mass of abnormal cells in the brain. Brain tumors can be benign (noncancerous) or malignant (cancerous). Conventional diagnosis of a brain tumor by the radiologist is done by examining a set of images produced by magnetic resonance imaging (MRI). Many computer-aided detection (CAD) systems have been developed in order to help the radiologists reach their goal of correctly classifying the MRI image. Convolutional neural networks (CNNs) have been widely used in the classification of medical images. This paper presents a novel CAD technique for the classification of brain tumors in MRI images. The proposed system extracts features from the brain MRI images by utilizing the strong energy compactness property exhibited by the Discrete Wavelet Transform (DWT). The wavelet features are then applied to a CNN to classify the input MRI image. Experimental results indicate that the proposed approach outperforms other commonly used methods and gives an overall accuracy of 99.3%.
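The wavelet feature-extraction step can be sketched with PyWavelets as below; the choice of wavelet and of which subbands to stack for the CNN input is an assumption for illustration.

```python
# Level-1 2-D DWT of an MRI slice; the approximation band cA carries most of the energy
# (energy compaction), while cH, cV, cD capture edge detail in three orientations.
import numpy as np
import pywt

def dwt_features(mri_slice):
    """mri_slice: 2-D grayscale array. Returns stacked level-1 DWT subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(mri_slice, "haar")
    return np.stack([cA, cH, cV, cD], axis=-1)   # shape (H/2, W/2, 4), fed to the CNN

features = dwt_features(np.random.rand(256, 256))
print(features.shape)   # (128, 128, 4)
```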
The use of Explainable Artificial Intelligence (XAI) models becomes increasingly important for making decisions in smart healthcare environments. The aim is to ensure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms. Such models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for classifying cancer images. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI framework reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The study also discusses the potential applications of the proposed XAI models in the smart healthcare environment, which will help ensure trust and accountability in AI-based decisions, essential for achieving a safe and reliable smart healthcare environment.
Funding: Supported by CONAHCYT (Consejo Nacional de Humanidades, Ciencias y Tecnologias).
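For reference, the less common metrics reported above (FDR, FOR, DOR) follow directly from binary confusion-matrix counts; the sketch below is a generic computation, not the authors' evaluation code.

```python
# Diagnostic metrics from binary confusion-matrix counts (tp, fp, tn, fn).
def diagnostic_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    fdr = fp / (fp + tp)            # false discovery rate = 1 - precision
    forate = fn / (fn + tn)         # false omission rate
    dor = (tp * tn) / (fp * fn)     # diagnostic odds ratio
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "F1": f1,
            "FDR": fdr, "FOR": forate, "DOR": dor}

print(diagnostic_metrics(tp=90, fp=10, tn=85, fn=7))
```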
The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information-content criterion. The informativeness of an etalon descriptor is estimated by the difference between the closest distances to its own and to other descriptions. The developed method determines the relevance of the full description of the recognized object to the reduced descriptions of the etalons. Several practical models of the classifier with different options for establishing the correspondence between object descriptors and etalons are considered. The results of experimental modeling of the proposed methods on a database of images of museum jewelry are presented. The test sample is formed as a set of images from the etalon database and outside the database, with geometric transformations of scale and rotation applied in the field of view. The practical problem of determining the threshold for the number of votes, on which the classification decision is based, has been researched. Modeling has revealed the practical possibility of reducing descriptions tenfold with full preservation of classification accuracy. Reducing the descriptions by a factor of twenty in the experiment leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. The use of reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification, which guarantees a decent level of accuracy.
Funding: This research was funded by Prince Sattam bin Abdulaziz University (Project Number PSAU/2023/01/25387).
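A small sketch of the informativeness criterion described above (the gap between the closest distance to other descriptions and the closest distance to the descriptor's own description); the selection ratio is an assumed parameter.

```python
# Keep only the most discriminative etalon descriptors, ranked by the informativeness gap.
import numpy as np

def informativeness(descriptor, own_descriptors, other_descriptors):
    d_own   = np.min(np.linalg.norm(own_descriptors - descriptor, axis=1))
    d_other = np.min(np.linalg.norm(other_descriptors - descriptor, axis=1))
    return d_other - d_own          # large gap => closer to its own class than to others

def reduce_etalon(etalon, own_rest, others, keep_ratio=0.1):
    scores = np.array([informativeness(d, own_rest, others) for d in etalon])
    keep = np.argsort(scores)[-max(1, int(len(etalon) * keep_ratio)):]
    return etalon[keep]             # e.g. a tenfold reduction at keep_ratio=0.1

etalon, own_rest, others = np.random.rand(200, 64), np.random.rand(300, 64), np.random.rand(500, 64)
print(reduce_etalon(etalon, own_rest, others).shape)
```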
This paper presents a novel multiclass system designed to detect pleural effusion and pulmonary edema on chest X-ray images, addressing the critical need for early detection in healthcare. A new comprehensive dataset was formed by combining 28,309 samples from the ChestX-ray14, PadChest, and CheXpert databases, with 10,287, 6022, and 12,000 samples representing Pleural Effusion, Pulmonary Edema, and Normal cases, respectively. The preprocessing step involves applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) method to boost the local contrast of the X-ray samples, then resizing the images to 380×380, followed by data augmentation. The classification task employs a deep learning model based on the EfficientNet-V1-B4 architecture, trained using the AdamW optimizer. The proposed multiclass system achieved an accuracy (ACC) of 98.3%, recall of 98.3%, precision of 98.7%, and F1-score of 98.7%. Moreover, the robustness of the model was revealed by Receiver Operating Characteristic (ROC) analysis, which demonstrated an Area Under the Curve (AUC) of 1.00 for edema and normal cases and 0.99 for effusion. The experimental results demonstrate the superiority of the proposed multi-class system, which has the potential to assist clinicians in timely and accurate diagnosis, leading to improved patient outcomes. Notably, Ablation-CAM visualization at the last convolutional layer provided further diagnostic capability, with heat maps on X-ray images that will aid clinicians in interpreting and localizing abnormalities more effectively.
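The pre-processing chain described above (CLAHE, then resizing to 380×380) can be sketched with OpenCV as follows; the clip limit, tile grid, and synthetic input are assumed values.

```python
# CLAHE boosts local contrast of the X-ray, then the image is resized for the EfficientNet input.
import cv2
import numpy as np

def preprocess_xray(gray_img, size=(380, 380)):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(gray_img)                       # local contrast enhancement
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)       # 3 channels for the CNN input
    return img.astype(np.float32) / 255.0

xray = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)   # stand-in chest X-ray
print(preprocess_xray(xray).shape)    # (380, 380, 3)
```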
The utilization of visual attention enhances the performance of image classification tasks. Previous attention-based models have demonstrated notable performance, but many of them exhibit reduced accuracy when confronted with inter-class and intra-class similarities and differences. Neural Controlled Differential Equations (N-CDEs) and Neural Ordinary Differential Equations (NODEs) are extensively utilized within this context. N-CDEs possess the capacity to illustrate both inter-class and intra-class similarities and differences with enhanced clarity. To this end, an attentive neural network is proposed to generate attention maps, using two different types of N-CDEs: one for adopting hidden layers and the other to generate attention values. Two distinct attention techniques are implemented: time-wise attention, also referred to as bottom N-CDEs, and element-wise attention, called top N-CDEs. Additionally, a training methodology is proposed to guarantee that the training problem is sufficiently presented. Two classification tasks, fine-grained visual classification and multi-label classification, are utilized to evaluate the proposed model. The proposed methodology is employed on five publicly available datasets, including CUB-200-2011, ImageNet-1K, PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. The obtained visualizations demonstrate that N-CDEs are better suited to attention-based activities than conventional NODEs.
Funding: Supported by Institutional Fund Projects under Grant No. IFPIP: 638-830-1443.
The ocean plays an important role in maintaining the equilibrium of Earth's ecology and providing humans access to a wealth of resources. To obtain a high-precision underwater image classification model, we propose a classification model that combines an EfficientnetB0 neural network and a two-hidden-layer random vector functional link network (EfficientnetB0-TRVFL). The features of underwater images were extracted using the EfficientnetB0 neural network pretrained on ImageNet, and a new fully connected layer was trained on the underwater image dataset using transfer learning. Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model. Subsequently, a TRVFL was proposed to improve the classification ability of the model. The two-hidden-layer network construction exhibited high accuracy when the same number of hidden-layer nodes was used. The parameters of the second hidden layer were obtained using a novel calculation method, which reduced the output error and mitigated the performance instability caused by the random generation of RVFL parameters. Finally, the TRVFL classifier was used to classify features and obtain classification results. The proposed EfficientnetB0-TRVFL classification model achieved 87.28%, 74.06%, and 99.59% accuracy on the MLC2008, MLC2009, and Fish-gres datasets, respectively. The best convolutional neural networks and existing methods were compared through box plots and Kolmogorov-Smirnov tests, respectively. The gains indicate improved systematization properties in underwater image classification tasks. The image classification model offers important performance advantages and better stability compared with existing methods.
Funding: Supported by the National Key R&D Program of China (No. 2022YFC2803903), the Key R&D Program of Zhejiang Province (No. 2021C03013), and the Zhejiang Provincial Natural Science Foundation of China (No. LZ20F020003).
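A sketch of an RVFL-style head with a closed-form, ridge-regularized solution for the output weights, which is the usual way such networks avoid iterative training; the TRVFL's specific second-hidden-layer calculation is not reproduced here.

```python
# RVFL-style classifier on top of extracted CNN features: random, fixed hidden weights
# plus a direct link from the inputs; output weights solved by regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, n_hidden=512, lam=1e-2):
    W = rng.standard_normal((X.shape[1], n_hidden))       # random, untrained input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([H, X])                                 # direct link: hidden units + raw features
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([np.tanh(X @ W + b), X])
    return (D @ beta).argmax(axis=1)                      # one-hot targets -> class index

X, Y = rng.standard_normal((100, 1280)), np.eye(3)[rng.integers(0, 3, 100)]
W, b, beta = rvfl_fit(X, Y)
print(rvfl_predict(X[:5], W, b, beta))
```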
Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve its performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its search ability. The probability update strategy helps to improve the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on the Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machine (SVM) model using the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University. The Accuracy of the AFLA-SCNN model reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
Hyperspectral (HS) image classification plays a crucial role in numerous areas including remote sensing (RS), agriculture, and environmental monitoring. Optimal band selection in HS images is crucial for improving the efficiency and accuracy of image classification. This process involves selecting the most informative spectral bands, which leads to a reduction in data volume. Focusing on these key bands also enhances the accuracy of classification algorithms, as redundant or irrelevant bands, which can introduce noise and lower model performance, are excluded. In this paper, we propose an approach for HS image classification using deep Q-learning (DQL) and a novel multi-objective binary grey wolf optimizer (MOBGWO). We investigate the MOBGWO for optimal band selection to further enhance the accuracy of HS image classification. In the suggested MOBGWO, a new sigmoid function is introduced as a transfer function to modify the wolves' positions. The primary objective of this classification is to reduce the number of bands while maximizing classification accuracy. To evaluate the effectiveness of our approach, we conducted experiments on publicly available HS image datasets, including the Pavia University, Washington Mall, and Indian Pines datasets. We compared the performance of our proposed method with several state-of-the-art deep learning (DL) and machine learning (ML) algorithms, including long short-term memory (LSTM), deep neural network (DNN), recurrent neural network (RNN), support vector machine (SVM), and random forest (RF). Our experimental results demonstrate that the hybrid MOBGWO-DQL significantly improves classification accuracy compared to traditional optimization and DL techniques. MOBGWO-DQL shows greater accuracy in classifying most categories in both datasets used. For the Indian Pines dataset, the MOBGWO-DQL architecture achieved a kappa coefficient (KC) of 97.68% and an overall accuracy (OA) of 94.32%, accompanied by the lowest root mean square error (RMSE) of 0.94, indicating very precise predictions with minimal error. In the case of the Pavia University dataset, the MOBGWO-DQL model demonstrated outstanding performance with the highest KC of 98.72% and an impressive OA of 96.01%. It also recorded the lowest RMSE at 0.63, reinforcing its accuracy in predictions. The results clearly demonstrate that the proposed MOBGWO-DQL architecture not only reaches a highly accurate model more quickly but also maintains superior performance throughout the training process.
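The sigmoid transfer function mentioned above maps a wolf's continuous position to a binary band-selection mask; the sketch below uses the standard S-shaped variant for illustration rather than the paper's new function.

```python
# Turn a continuous wolf position into a binary band mask (1 = keep the spectral band).
import numpy as np

rng = np.random.default_rng(0)

def binarize_position(position):
    prob = 1.0 / (1.0 + np.exp(-position))     # map each dimension to a keep-probability
    return (rng.random(position.shape) < prob).astype(int)

continuous_wolf = rng.normal(size=200)         # one candidate solution over 200 spectral bands
band_mask = binarize_position(continuous_wolf)
print(band_mask.sum(), "bands selected out of", band_mask.size)
```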
This paper emphasizes faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing several methods that include picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature-reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and they investigate two background-correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured-spine detection algorithms using the transform domain, the Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. Feature-reduction program code has been built to improve the processing speed for picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
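A small sketch of transform-domain texture features via the 2-D DCT, keeping a low-frequency block as a compact feature vector; the block size is an assumed reduction choice.

```python
# 2-D DCT of an image patch; low-frequency coefficients serve as compact texture features,
# which is one reason DCT features can be both small and fast to extract.
import numpy as np
from scipy.fft import dctn

def dct_texture_features(patch, keep=8):
    coeffs = dctn(patch.astype(float), norm="ortho")   # 2-D DCT of the patch
    return coeffs[:keep, :keep].ravel()                # keep x keep low-frequency block

patch = np.random.rand(64, 64)
print(dct_texture_features(patch).shape)               # (64,) compact feature vector
```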
Few-shot image classification is the task of classifying novel classes using extremely limited labelled samples. To perform classification using the limited samples, one solution is to learn the feature alignment (FA) information between the labelled and unlabelled sample features. Most FA methods use the feature mean as the class prototype and calculate the correlation between the prototype and unlabelled features to learn an alignment strategy. However, mean prototypes tend to degrade informative features because spatial features at the same position may not be equally important for the final classification, leading to inaccurate correlation calculations. Therefore, the authors propose an effective intra-class FA strategy that aggregates semantically similar spatial features from an adaptive reference prototype in a low-dimensional feature space to obtain an informative prototype feature map for precise correlation computation. Moreover, a dual correlation module that learns hard and soft correlations was developed by the authors. This module combines the correlation information between the prototype and unlabelled features in both the original and learnable feature spaces, aiming to produce a comprehensive cross-correlation between the prototypes and unlabelled features. Using both the FA and cross-attention modules, the model can maintain informative class features and capture important shared features for classification. Experimental results on three few-shot classification benchmarks show that the proposed method outperformed related methods and resulted in a 3% performance boost in the 1-shot setting when the proposed module was inserted into the related methods.
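To make the prototype-correlation idea concrete, the sketch below computes a mean prototype and a position-wise cosine correlation between prototype and query feature maps; tensor shapes are illustrative and the adaptive reference prototype itself is not reproduced.

```python
# Mean prototype from support features, then a spatial cross-correlation with a query map.
import torch
import torch.nn.functional as F

def mean_prototype(support_feats):            # (n_shot, C, H, W) -> (C, H, W)
    return support_feats.mean(dim=0)

def spatial_correlation(prototype, query):    # both (C, H, W)
    p = F.normalize(prototype.flatten(1), dim=0)   # unit-norm channel vector per spatial position
    q = F.normalize(query.flatten(1), dim=0)
    return p.t() @ q                                # (H*W, H*W) position-to-position correlation

support = torch.randn(5, 64, 10, 10)          # 5-shot support features
query = torch.randn(64, 10, 10)
corr = spatial_correlation(mean_prototype(support), query)
print(corr.shape)                             # torch.Size([100, 100])
```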
基金Project supported by the Natural Science Foundation of Shandong Province,China (Grant No. ZR2021MF049)the Joint Fund of Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001)。
文摘We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby propose a novel hybrid quantum deep neural network(HQDNN) used for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation(INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary classification and three classification experiments on the MNIST(Modified National Institute of Standards and Technology) data set. In the first binary classification, the accuracy of 0 and 4 exceeds98%. Then we compare the performance of three classification with other algorithms, the results on two datasets show that the classification accuracy is higher than that of quantum deep neural network and general quantum convolutional neural network.
基金by Makkah Digital Gate Initiative under grant no.(MDP-IRI-3-2020).
文摘Desertification has become a global threat and caused a crisis,especially in Middle Eastern countries,such as Saudi Arabia.Makkah is one of the most important cities in Saudi Arabia that needs to be protected from desertification.The vegetation area in Makkah has been damaged because of desertification through wind,floods,overgrazing,and global climate change.The damage caused by desertification can be recovered provided urgent action is taken to prevent further degradation of the vegetation area.In this paper,we propose an automatic desertification detection system based on Deep Learning techniques.Aerial images are classified using Convolutional Neural Networks(CNN)to detect land state variation in real-time.CNNs have been widely used for computer vision applications,such as image classification,image segmentation,and quality enhancement.The proposed CNN model was trained and evaluated on the Arial Image Dataset(AID).Compared to state-of-the-art methods,the proposed model has better performance while being suitable for embedded implementation.It has achieved high efficiency with 96.47% accuracy.In light of the current research,we assert the appropriateness of the proposed CNN model in detecting desertification from aerial images.
基金the National Nature Science Foundation of China (60372057).
文摘To study the scene classification in the Synthetic Aperture Radar (SAR) image, a novel method based on kernel estimate, with the Maxkov context and Dempster-Shafer evidence theory is proposed. Initially, a nonpaxametric Probability Density Function (PDF) estimate method is introduced, to describe the scene of SAR images. And then under the Maxkov context, both the determinate PDF and the kernel estimate method axe adopted respectively, to form a primary classification. Next, the primary classification results are fused using the evidence theory in an unsupervised way to get the scene classification. Finally, a regularization step is used, in which an iterated maximum selecting approach is introduced to control the fragments and modify the errors of the classification. Use of the kernel estimate and evidence theory can describe the complicated scenes with little prior knowledge and eliminate the ambiguities of the primary classification results. Experimental results on real SAR images illustrate a rather impressive performance.
基金funded by Researchers Supporting Program at King Saud University,(RSPD2024R809).
文摘In blood or bone marrow,leukemia is a form of cancer.A person with leukemia has an expansion of white blood cells(WBCs).It primarily affects children and rarely affects adults.Treatment depends on the type of leukemia and the extent to which cancer has established throughout the body.Identifying leukemia in the initial stage is vital to providing timely patient care.Medical image-analysis-related approaches grant safer,quicker,and less costly solutions while ignoring the difficulties of these invasive processes.It can be simple to generalize Computer vision(CV)-based and image-processing techniques and eradicate human error.Many researchers have implemented computer-aided diagnosticmethods andmachine learning(ML)for laboratory image analysis,hopefully overcoming the limitations of late leukemia detection and determining its subgroups.This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification(MPADL-LCC)algorithm onMedical Images.The projectedMPADL-LCC system uses a bilateral filtering(BF)technique to pre-process medical images.The MPADL-LCC system uses Faster SqueezeNet withMarine Predators Algorithm(MPA)as a hyperparameter optimizer for feature extraction.Lastly,the denoising autoencoder(DAE)methodology can be executed to accurately detect and classify leukemia cancer.The hyperparameter tuning process using MPA helps enhance leukemia cancer classification performance.Simulation results are compared with other recent approaches concerning various measurements and the MPADL-LCC algorithm exhibits the best results over other recent approaches.
基金funding from Prince Sattam bin Abdulaziz University through the Project Number(PSAU/2023/01/24607).
文摘In the domain ofmedical imaging,the accurate detection and classification of brain tumors is very important.This study introduces an advanced method for identifying camouflaged brain tumors within images.Our proposed model consists of three steps:Feature extraction,feature fusion,and then classification.The core of this model revolves around a feature extraction framework that combines color-transformed images with deep learning techniques,using the ResNet50 Convolutional Neural Network(CNN)architecture.So the focus is to extract robust feature fromMRI images,particularly emphasizingweighted average features extracted fromthe first convolutional layer renowned for their discriminative power.To enhance model robustness,we introduced a novel feature fusion technique based on the Marine Predator Algorithm(MPA),inspired by the hunting behavior of marine predators and has shown promise in optimizing complex problems.The proposed methodology can accurately classify and detect brain tumors in camouflage images by combining the power of color transformations,deep learning,and feature fusion via MPA,and achieved an accuracy of 98.72%on a more complex dataset surpassing the existing state-of-the-art methods,highlighting the effectiveness of the proposed model.The importance of this research is in its potential to advance the field ofmedical image analysis,particularly in brain tumor diagnosis,where diagnoses early,and accurate classification are critical for improved patient results.
基金Supported by the National Natural Science Foundation of China(No.61906066)the Zhejiang Provincial Philosophy and Social Science Planning Project(No.21NDJC021Z)+4 种基金Shenzhen Fund for Guangdong Provincial High-level Clinical Key Specialties(No.SZGSP014)Sanming Project of Medicine in Shenzhen(No.SZSM202011015)Shenzhen Science and Technology Planning Project(No.KCXFZ20211020163813019)the Natural Science Foundation of Ningbo City(No.202003N4072)the Postgraduate Research and Innovation Project of Huzhou University(No.2023KYCX52)。
文摘AIM:To conduct a classification study of high myopic maculopathy(HMM)using limited datasets,including tessellated fundus,diffuse chorioretinal atrophy,patchy chorioretinal atrophy,and macular atrophy,and minimize annotation costs,and to optimize the ALFA-Mix active learning algorithm and apply it to HMM classification.METHODS:The optimized ALFA-Mix algorithm(ALFAMix+)was compared with five algorithms,including ALFA-Mix.Four models,including Res Net18,were established.Each algorithm was combined with four models for experiments on the HMM dataset.Each experiment consisted of 20 active learning rounds,with 100 images selected per round.The algorithm was evaluated by comparing the number of rounds in which ALFA-Mix+outperformed other algorithms.Finally,this study employed six models,including Efficient Former,to classify HMM.The best-performing model among these models was selected as the baseline model and combined with the ALFA-Mix+algorithm to achieve satisfactor y classification results with a small dataset.RESULTS:ALFA-Mix+outperforms other algorithms with an average superiority of 16.6,14.75,16.8,and 16.7 rounds in terms of accuracy,sensitivity,specificity,and Kappa value,respectively.This study conducted experiments on classifying HMM using several advanced deep learning models with a complete training set of 4252 images.The Efficient Former achieved the best results with an accuracy,sensitivity,specificity,and Kappa value of 0.8821,0.8334,0.9693,and 0.8339,respectively.Therefore,by combining ALFA-Mix+with Efficient Former,this study achieved results with an accuracy,sensitivity,specificity,and Kappa value of 0.8964,0.8643,0.9721,and 0.8537,respectively.CONCLUSION:The ALFA-Mix+algorithm reduces the required samples without compromising accuracy.Compared to other algorithms,ALFA-Mix+outperforms in more rounds of experiments.It effectively selects valuable samples compared to other algorithms.In HMM classification,combining ALFA-Mix+with Efficient Former enhances model performance,further demonstrating the effectiveness of ALFA-Mix+.
文摘The results of the development of the new fast-speed method of classification images using a structural approach are presented.The method is based on the system of hierarchical features,based on the bitwise data distribution for the set of descriptors of image description.The article also proposes the use of the spatial data processing apparatus,which simplifies and accelerates the classification process.Experiments have shown that the time of calculation of the relevance for two descriptions according to their distributions is about 1000 times less than for the traditional voting procedure,for which the sets of descriptors are compared.The introduction of the system of hierarchical features allows to further reduce the calculation time by 2–3 times while ensuring high efficiency of classification.The noise immunity of the method to additive noise has been experimentally studied.According to the results of the research,the marginal degree of the hierarchy of features for reliable classification with the standard deviation of noise less than 30 is the 8-bit distribution.Computing costs increase proportionally with decreasing bit distribution.The method can be used for application tasks where object identification time is critical.
基金Hunan University of Arts and Science provided doctoral research funding for this study (grant number 16BSQD23)Fund of Geography Subject ([2022]351)also provided funding.
文摘Recently,the convolutional neural network(CNN)has been dom-inant in studies on interpreting remote sensing images(RSI).However,it appears that training optimization strategies have received less attention in relevant research.To evaluate this problem,the author proposes a novel algo-rithm named the Fast Training CNN(FST-CNN).To verify the algorithm’s effectiveness,twenty methods,including six classic models and thirty archi-tectures from previous studies,are included in a performance comparison.The overall accuracy(OA)trained by the FST-CNN algorithm on the same model architecture and dataset is treated as an evaluation baseline.Results show that there is a maximal OA gap of 8.35%between the FST-CNN and those methods in the literature,which means a 10%margin in performance.Meanwhile,all those complex roadmaps,e.g.,deep feature fusion,model combination,model ensembles,and human feature engineering,are not as effective as expected.It reveals that there was systemic suboptimal perfor-mance in the previous studies.Most of the CNN-based methods proposed in the previous studies show a consistent mistake,which has made the model’s accuracy lower than its potential value.The most important reasons seem to be the inappropriate training strategy and the shift in data distribution introduced by data augmentation(DA).As a result,most of the performance evaluation was conducted based on an inaccurate,suboptimal,and unfair result.It has made most of the previous research findings questionable to some extent.However,all these confusing results also exactly demonstrate the effectiveness of FST-CNN.This novel algorithm is model-agnostic and can be employed on any image classification model to potentially boost performance.In addition,the results also show that a standardized training strategy is indeed very meaningful for the research tasks of the RSI-SC.
基金The authors received specific funding for this research-Project Number IF-PSAU-2021/01/18487.
文摘The problem of image recognition in the computer vision systems is being studied.The results of the development of efficient classification methods,given the figure of processing speed,based on the analysis of the segment representation of the structural description in the form of a set of descriptors are provided.We propose three versions of the classifier according to the following principles:“object-etalon”,“object descriptor-etalon”and“vector description of the object-etalon”,which are not similar in level of integration of researched data analysis.The options for constructing clusters over the whole set of descriptions of the etalon database,separately for each of the etalons,as well as the optimal method to compare sets of segment centers for the etalons and object,are implemented.An experimental rating of the efficiency of the created classifiers in terms of productivity,processing time,and classification quality has been realized of the applied.The proposed methods classify the set of etalons without error.We have formed the inference about the efficiency of classification approaches based on segment centers.The time of image processing according to the developedmethods is hundreds of times less than according to the traditional one,without reducing the accuracy.
文摘Today, many eye diseases jeopardize our everyday lives, such as Diabetic Retinopathy (DR), Age-related Macular Degeneration (AMD), and Glaucoma.Glaucoma is an incurable and unavoidable eye disease that damages the vision ofoptic nerves and quality of life. Classification of Glaucoma has been an active fieldof research for the past ten years. Several approaches for Glaucoma classification areestablished, beginning with conventional segmentation methods and feature-extraction to deep-learning techniques such as Convolution Neural Networks (CNN). Incontrast, CNN classifies the input images directly using tuned parameters of convolution and pooling layers by extracting features. But, the volume of training datasetsdetermines the performance of the CNN;the model trained with small datasets,overfit issues arise. CNN has therefore developed with transfer learning. The primary aim of this study is to explore the potential of EfficientNet with transfer learning for the classification of Glaucoma. The performance of the current workcompares with other models, namely VGG16, InceptionV3, and Xception usingpublic datasets such as RIM-ONEV2 & V3, ORIGA, DRISHTI-GS1, HRF, andACRIMA. The dataset has split into training, validation, and testing with the ratioof 70:15:15. The assessment of the test dataset shows that the pre-trained EfficientNetB4 has achieved the highest performance value compared to other models listedabove. The proposed method achieved 99.38% accuracy and also better results forother metrics, such as sensitivity, specificity, precision, F1_score, Kappa score, andArea Under Curve (AUC) compared to other models.
文摘A brain tumor is a mass of abnormal cells in the brain. Brain tumors can be benign (noncancerous) or malignant (cancerous). Conventional diagnosis of a brain tumor by the radiologist is done by examining a set of images produced by magnetic resonance imaging (MRI). Many computer-aided detection (CAD) systems have been developed in order to help the radiologists reach their goal of correctly classifying the MRI image. Convolutional neural networks (CNNs) have been widely used in the classification of medical images. This paper presents a novel CAD technique for the classification of brain tumors in MRI images. The proposed system extracts features from the brain MRI images by utilizing the strong energy compactness property exhibited by the Discrete Wavelet Transform (DWT). The Wavelet features are then applied to a CNN to classify the input MRI image. Experimental results indicate that the proposed approach outperforms other commonly used methods and gives an overall accuracy of 99.3%.
基金supported by theCONAHCYT(Consejo Nacional deHumanidades,Ciencias y Tecnologias).
文摘The use of Explainable Artificial Intelligence(XAI)models becomes increasingly important for making decisions in smart healthcare environments.It is to make sure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms.These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence.Nevertheless,the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images.This research presents an advanced investigation of XAI models to classify cancer images.It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications.In addition,this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques.The proposed model integrates several techniques,including end-to-end explainable evaluation,rule-based explanation,and useradaptive explanation.The proposed XAI reaches 97.72%accuracy,90.72%precision,93.72%recall,96.72%F1-score,9.55%FDR,9.66%FOR,and 91.18%DOR.It will discuss the potential applications of the proposed XAI models in the smart healthcare environment.It will help ensure trust and accountability in AI-based decisions,which is essential for achieving a safe and reliable smart healthcare environment.
基金This research was funded by Prince Sattam bin Abdulaziz University(Project Number PSAU/2023/01/25387).
文摘The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors.The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency.The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations.It is proposed to reduce the descriptions for the etalon database by selecting the most significant descriptor components according to the information content criterion.The informativeness of an etalon descriptor is estimated by the difference of the closest distances to its own and other descriptions.The developed method determines the relevance of the full description of the recognized object with the reduced description of the etalons.Several practical models of the classifier with different options for establishing the correspondence between object descriptors and etalons are considered.The results of the experimental modeling of the proposed methods for a database including images of museum jewelry are presented.The test sample is formed as a set of images from the etalon database and out of the database with the application of geometric transformations of scale and rotation in the field of view.The practical problems of determining the threshold for the number of votes,based on which a classification decision is made,have been researched.Modeling has revealed the practical possibility of tenfold reducing descriptions with full preservation of classification accuracy.Reducing the descriptions by twenty times in the experiment leads to slightly decreased accuracy.The speed of the analysis increases in proportion to the degree of reduction.The use of reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification,which guarantees a decent level of accuracy.
Abstract: This paper presents a novel multiclass system designed to detect pleural effusion and pulmonary edema in chest X-ray images, addressing the critical need for early detection in healthcare. A new comprehensive dataset was formed by combining 28,309 samples from the ChestX-ray14, PadChest, and CheXpert databases, with 10,287, 6,022, and 12,000 samples representing Pleural Effusion, Pulmonary Edema, and Normal cases, respectively. The preprocessing step involves applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) method to boost the local contrast of the X-ray samples, then resizing the images to 380×380, followed by data augmentation. The classification task employs a deep learning model based on the EfficientNet-V1-B4 architecture trained with the AdamW optimizer. The proposed multiclass system achieved an accuracy (ACC) of 98.3%, recall of 98.3%, precision of 98.7%, and F1-score of 98.7%. Moreover, the robustness of the model was revealed by Receiver Operating Characteristic (ROC) analysis, which demonstrated an Area Under the Curve (AUC) of 1.00 for edema and normal cases and 0.99 for effusion. The experimental results demonstrate the superiority of the proposed multiclass system, which has the potential to assist clinicians in timely and accurate diagnosis, leading to improved patient outcomes. Notably, Ablation-CAM visualization at the last convolutional layer further enhanced the diagnostic capabilities with heat maps on the X-ray images, which will aid clinicians in interpreting and localizing abnormalities more effectively.
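A minimal preprocessing sketch along the lines described in the abstract, assuming OpenCV; the CLAHE clip limit and tile grid size are illustrative values, not the authors' settings.

```python
# Sketch of the described preprocessing: CLAHE contrast enhancement followed by
# resizing to 380x380. clipLimit and tileGridSize are illustrative choices.
import cv2
import numpy as np

def preprocess_cxr(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                      # chest X-ray, 8-bit grayscale
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                                            # boost local contrast
    return cv2.resize(img, (380, 380), interpolation=cv2.INTER_AREA)  # network input size
```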
Funding: Institutional Fund Projects under Grant No. (IFPIP: 638-830-1443).
Abstract: The use of visual attention enhances the performance of image classification tasks. Previous attention-based models have demonstrated notable performance, but many of them exhibit reduced accuracy when confronted with inter-class and intra-class similarities and differences. Neural Controlled Differential Equations (N-CDEs) and Neural Ordinary Differential Equations (NODEs) are extensively utilized in this context. N-CDEs possess the capacity to illustrate both inter-class and intra-class similarities and differences with enhanced clarity. To this end, an attentive neural network has been proposed to generate attention maps, using two different types of N-CDEs: one for adopting hidden layers and the other for generating attention values. Two distinct attention techniques are implemented: time-wise attention, also referred to as bottom N-CDEs, and element-wise attention, called top N-CDEs. Additionally, a training methodology is proposed to guarantee that the training problem is sufficiently presented. Two classification tasks, fine-grained visual classification and multi-label classification, are used to evaluate the proposed model. The proposed methodology is evaluated on five publicly available datasets: CUB-200-2011, ImageNet-1K, PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. The obtained visualizations demonstrate that N-CDEs are better suited to attention-based activities than conventional NODEs.
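For illustration only, the sketch below shows a generic element-wise attention gate in plain PyTorch; it stands in for the paper's N-CDE-generated attention values (which require a CDE solver and are not reproduced here) and merely demonstrates how per-element attention values rescale a backbone feature map before classification.

```python
# Generic element-wise attention gate (plain PyTorch); a stand-in, not the
# paper's N-CDE-based attention.
import torch
import torch.nn as nn

class ElementWiseAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, channels, kernel_size=1)  # per-element attention scores

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.score(feats))   # attention values in (0, 1)
        return feats * attn                       # rescale every spatial element

x = torch.randn(2, 64, 14, 14)                    # hypothetical backbone features
print(ElementWiseAttention(64)(x).shape)          # torch.Size([2, 64, 14, 14])
```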
Funding: Supported by the National Key R&D Program of China (No. 2022YFC2803903), the Key R&D Program of Zhejiang Province (No. 2021C03013), and the Zhejiang Provincial Natural Science Foundation of China (No. LZ20F020003).
Abstract: The ocean plays an important role in maintaining the equilibrium of Earth's ecology and provides humans access to a wealth of resources. To obtain a high-precision underwater image classification model, we propose a model that combines an EfficientnetB0 neural network and a two-hidden-layer random vector functional link network (EfficientnetB0-TRVFL). The features of underwater images were extracted using the EfficientnetB0 network pretrained on ImageNet, and a new fully connected layer was trained on the underwater image dataset using transfer learning. Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model. Subsequently, a TRVFL was proposed to improve the classification performance of the model. The two-hidden-layer network construction exhibited high accuracy when the same number of hidden-layer nodes was used. The parameters of the second hidden layer were obtained using a novel calculation method, which reduced the output error and mitigated the performance instability caused by the random generation of RVFL parameters. Finally, the TRVFL classifier was used to classify the features and obtain the classification results. The proposed EfficientnetB0-TRVFL classification model achieved 87.28%, 74.06%, and 99.59% accuracy on the MLC2008, MLC2009, and Fish-gres datasets, respectively. Comparisons with the best convolutional neural networks and existing methods were carried out using box plots and Kolmogorov-Smirnov tests, respectively. The observed gains indicate improved consistency in underwater image classification tasks. The proposed image classification model offers important performance advantages and better stability compared with existing methods.
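To make the RVFL idea concrete, here is a standard single-hidden-layer RVFL classifier with a direct input-output link, written in NumPy; the paper's TRVFL additionally computes the parameters of a second hidden layer analytically rather than drawing them at random, which is not reproduced here. In practice, X would hold the extracted EfficientnetB0 features and Y one-hot class labels.

```python
# Standard single-hidden-layer RVFL with a direct input-output link (NumPy).
# X: (n_samples, n_features) extracted image features; Y: one-hot labels.
import numpy as np

def rvfl_fit(X, Y, n_hidden=256, reg=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    D = np.hstack([H, X])                         # direct link: hidden features + raw inputs
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)  # ridge solution
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([np.tanh(X @ W + b), X])
    return D @ beta                               # class scores; take argmax for labels
```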
Funding: Natural Science Foundation of Shandong Province, China (Grant No. ZR202111230202).
Abstract: Hyperspectral image classification is a pivotal task in remote sensing, yet achieving high-precision classification remains a significant challenge. In response, a Spectral Convolutional Neural Network model based on an Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve its performance. Gaussian mutation helps the algorithm avoid falling into local optima and improves its search ability. The probability update strategy improves the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is first validated on 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machine (SVM) model on the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University. The Accuracy of the AFLA-SCNN model reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
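Two of the generic ingredients named in the abstract can be sketched as follows; the decay schedule, mutation rate, and the placeholder attraction step are assumptions for illustration and do not reproduce the Fick's-Law motion equations.

```python
# Illustrative sketch: an adaptive weight factor that decays with the iteration
# count and Gaussian mutation of candidate solutions. All constants are assumptions.
import numpy as np

def adaptive_weight(t: int, t_max: int, w_max: float = 0.9, w_min: float = 0.2) -> float:
    return w_max - (w_max - w_min) * t / t_max          # shrinks as iterations progress

def gaussian_mutation(x: np.ndarray, sigma: float = 0.1, rate: float = 0.2,
                      rng=np.random.default_rng(0)) -> np.ndarray:
    mask = rng.random(x.shape) < rate                   # mutate a fraction of dimensions
    return x + mask * rng.normal(0.0, sigma, size=x.shape)

# One hypothetical iteration of a population-based search over two encoded
# hyperparameters (e.g. numEpochs and miniBatchSize):
rng = np.random.default_rng(1)
pop = rng.uniform(-1, 1, size=(20, 2))
best = pop[0]                                           # pretend the first candidate is the best so far
w = adaptive_weight(t=10, t_max=100)
pop = pop + w * (best - pop)                            # placeholder attraction toward the best
pop = np.array([gaussian_mutation(x) for x in pop])
```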
Abstract: Hyperspectral (HS) image classification plays a crucial role in numerous areas, including remote sensing (RS), agriculture, and environmental monitoring. Optimal band selection in HS images is crucial for improving the efficiency and accuracy of image classification. This process involves selecting the most informative spectral bands, which leads to a reduction in data volume. Focusing on these key bands also enhances the accuracy of classification algorithms, as redundant or irrelevant bands, which can introduce noise and lower model performance, are excluded. In this paper, we propose an approach for HS image classification using deep Q learning (DQL) and a novel multi-objective binary grey wolf optimizer (MOBGWO). We investigate the MOBGWO for optimal band selection to further enhance the accuracy of HS image classification. In the suggested MOBGWO, a new sigmoid function is introduced as a transfer function to modify the wolves' positions. The primary objective of this classification is to reduce the number of bands while maximizing classification accuracy. To evaluate the effectiveness of our approach, we conducted experiments on publicly available HS image datasets, including the Pavia University, Washington Mall, and Indian Pines datasets. We compared the performance of our proposed method with several state-of-the-art deep learning (DL) and machine learning (ML) algorithms, including long short-term memory (LSTM), deep neural network (DNN), recurrent neural network (RNN), support vector machine (SVM), and random forest (RF). Our experimental results demonstrate that the hybrid MOBGWO-DQL significantly improves classification accuracy compared to traditional optimization and DL techniques. MOBGWO-DQL shows greater accuracy in classifying most categories in both datasets used. For the Indian Pines dataset, the MOBGWO-DQL architecture achieved a kappa coefficient (KC) of 97.68% and an overall accuracy (OA) of 94.32%. This was accompanied by the lowest root mean square error (RMSE) of 0.94, indicating very precise predictions with minimal error. On the Pavia University dataset, the MOBGWO-DQL model demonstrated outstanding performance with the highest KC of 98.72% and an impressive OA of 96.01%. It also recorded the lowest RMSE at 0.63, reinforcing its accuracy in predictions. The results clearly demonstrate that the proposed MOBGWO-DQL architecture not only reaches a highly accurate model more quickly but also maintains superior performance throughout the training process.
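A minimal sketch of binarizing a continuous wolf position into a band-selection mask with a sigmoid transfer function, as in standard binary GWO variants; the paper introduces a new sigmoid form that is not specified here, so the classical logistic form below is an assumption.

```python
# Sigmoid transfer function for binary band selection (classical form, assumed).
import numpy as np

def sigmoid_transfer(position: np.ndarray, rng=np.random.default_rng(0)) -> np.ndarray:
    prob = 1.0 / (1.0 + np.exp(-position))                   # map each dimension to (0, 1)
    return (rng.random(position.shape) < prob).astype(int)   # 1 = spectral band selected

wolf = np.random.default_rng(1).normal(size=200)             # one candidate over 200 spectral bands
band_mask = sigmoid_transfer(wolf)
print(int(band_mask.sum()), "bands selected")
```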
Funding: The authors extend their appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through Project Number R-2024-922.
Abstract: This paper emphasizes faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing several methods, including picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and investigate two background correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured-spine detection algorithms using the transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program has been built to improve the processing speed of picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
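A short sketch of DCT-based texture features, assuming SciPy: take the 2-D DCT of a grayscale region and keep the low-frequency coefficients as the feature vector. The block size and the number of retained coefficients are illustrative choices, not the paper's settings.

```python
# DCT texture features: low-frequency 2-D DCT coefficients of an image block.
import numpy as np
from scipy.fft import dctn

def dct_texture_features(block: np.ndarray, k: int = 8) -> np.ndarray:
    coeffs = dctn(block.astype(float), norm="ortho")   # 2-D DCT of the image block
    return coeffs[:k, :k].ravel()                      # low-frequency texture descriptor

patch = np.random.default_rng(0).integers(0, 256, size=(64, 64))  # hypothetical X-ray patch
print(dct_texture_features(patch).shape)               # (64,)
```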
Funding: Institute of Information & Communications Technology Planning & Evaluation, Grant/Award Number: 2022-0-00074.
Abstract: Few-shot image classification is the task of classifying novel classes using extremely limited labelled samples. To perform classification with such limited samples, one solution is to learn the feature alignment (FA) information between the labelled and unlabelled sample features. Most FA methods use the feature mean as the class prototype and calculate the correlation between the prototype and unlabelled features to learn an alignment strategy. However, mean prototypes tend to degrade informative features, because spatial features at the same position may not be equally important for the final classification, leading to inaccurate correlation calculations. Therefore, the authors propose an effective intra-class FA strategy that aggregates semantically similar spatial features from an adaptive reference prototype in a low-dimensional feature space to obtain an informative prototype feature map for precise correlation computation. Moreover, a dual correlation module learning hard and soft correlations was developed. This module combines the correlation information between the prototype and unlabelled features in both the original and learnable feature spaces, aiming to produce a comprehensive cross-correlation between the prototypes and unlabelled features. Using both the FA and cross-attention modules, the model can maintain informative class features and capture important shared features for classification. Experimental results on three few-shot classification benchmarks show that the proposed method outperformed related methods and yielded a 3% performance boost in the 1-shot setting when the proposed module was inserted into the related methods.
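For context, the mean-prototype baseline that this abstract improves on can be sketched in a few lines of PyTorch: class prototypes are feature means, and cosine correlation between prototypes and query (unlabelled) features drives classification. The paper's adaptive reference prototype and dual correlation module are not reproduced here.

```python
# Mean-prototype baseline with cosine correlation (prototypical-network style).
import torch
import torch.nn.functional as F

def mean_prototypes(support_feats: torch.Tensor) -> torch.Tensor:
    """support_feats: (n_classes, n_shots, dim) -> (n_classes, dim) prototypes."""
    return support_feats.mean(dim=1)

def cosine_correlation(prototypes: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
    """Returns (n_queries, n_classes) similarity scores used for classification."""
    return F.normalize(queries, dim=-1) @ F.normalize(prototypes, dim=-1).T

support = torch.randn(5, 1, 640)           # hypothetical 5-way 1-shot support features
queries = torch.randn(15, 640)             # hypothetical query features
scores = cosine_correlation(mean_prototypes(support), queries)
print(scores.shape)                        # torch.Size([15, 5])
```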