Few-shot image classification is the task of classifying novel classes using extremely limited labelled samples. To perform classification using the limited samples, one solution is to learn the feature alignment (FA) information between the labelled and unlabelled sample features. Most FA methods use the feature mean as the class prototype and calculate the correlation between prototype and unlabelled features to learn an alignment strategy. However, mean prototypes tend to degenerate informative features because spatial features at the same position may not be equally important for the final classification, leading to inaccurate correlation calculations. Therefore, the authors propose an effective intraclass FA strategy that aggregates semantically similar spatial features from an adaptive reference prototype in low-dimensional feature space to obtain an informative prototype feature map for precise correlation computation. Moreover, a dual correlation module to learn the hard and soft correlations was developed by the authors. This module combines the correlation information between the prototype and unlabelled features in both the original and learnable feature spaces, aiming to produce a comprehensive cross-correlation between the prototypes and unlabelled features. Using both the FA and cross-attention modules, the model can maintain informative class features and capture important shared features for classification. Experimental results on three few-shot classification benchmarks show that the proposed method outperformed related methods and yielded a 3% performance boost in the 1-shot setting when the proposed module was inserted into those methods.
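As a point of reference, the mean-prototype baseline this abstract critiques can be sketched in a few lines: each class prototype is the mean of its support features, and a query is assigned to the most similar prototype. The feature vectors and class names below are hypothetical toy data, not from the paper.

```python
# Mean-prototype few-shot classification (the baseline being improved on).
import math

def mean_prototype(features):
    """Element-wise mean of a list of feature vectors."""
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(query, prototypes):
    """Assign the query to the class with the most similar prototype."""
    return max(prototypes, key=lambda c: cosine(query, prototypes[c]))

# Hypothetical 2-D support features for a 2-way, 2-shot episode.
support = {"cat": [[1.0, 0.1], [0.9, 0.2]], "dog": [[0.1, 1.0], [0.2, 0.8]]}
prototypes = {c: mean_prototype(fs) for c, fs in support.items()}
label = classify([0.95, 0.15], prototypes)   # nearest prototype wins
```

The paper's point is precisely that this unweighted mean can wash out informative spatial features, which motivates its adaptive reference prototype.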
The utilization of visual attention enhances the performance of image classification tasks. Previous attention-based models have demonstrated notable performance, but many of these models exhibit reduced accuracy when confronted with inter-class and intra-class similarities and differences. Neural Controlled Differential Equations (N-CDEs) and Neural Ordinary Differential Equations (NODEs) are extensively utilized within this context. N-CDEs possess the capacity to illustrate both inter-class and intra-class similarities and differences with enhanced clarity. To this end, an attentive neural network has been proposed to generate attention maps, which uses two different types of N-CDEs: one for adopting hidden layers and the other to generate attention values. Two distinct attention techniques are implemented: time-wise attention, also referred to as bottom N-CDEs, and element-wise attention, called top N-CDEs. Additionally, a training methodology is proposed to guarantee that the training problem is sufficiently well posed. Two classification tasks, fine-grained visual classification and multi-label classification, are used to evaluate the proposed model. The proposed methodology is employed on five publicly available datasets: CUB-200-2011, ImageNet-1K, PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. The obtained visualizations demonstrate that N-CDEs are better suited to attention-based tasks than conventional NODEs.
Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multiscale fusion and a transformer network for hyperspectral image classification. Firstly, low-level spatial-spectral features are extracted by a multi-scale residual structure. Secondly, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land cover classification accuracy of hyperspectral images.
The ocean plays an important role in maintaining the equilibrium of Earth's ecology and providing humans access to a wealth of resources. To obtain a high-precision underwater image classification model, we propose a classification model that combines an EfficientnetB0 neural network and a two-hidden-layer random vector functional link network (EfficientnetB0-TRVFL). The features of underwater images were extracted using the EfficientnetB0 neural network pretrained on ImageNet, and a new fully connected layer was trained on the underwater image dataset using the transfer learning method. Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model. Subsequently, a TRVFL was proposed to improve the classification performance of the model. The two-hidden-layer construction exhibited high accuracy when the same number of hidden layer nodes was used. The parameters of the second hidden layer were obtained using a novel calculation method, which reduced the outcome error and mitigated the performance instability caused by the random generation of RVFL parameters. Finally, the TRVFL classifier was used to classify features and obtain classification results. The proposed EfficientnetB0-TRVFL classification model achieved 87.28%, 74.06%, and 99.59% accuracy on the MLC2008, MLC2009, and Fish-gres datasets, respectively. The model was compared against the best convolutional neural networks and existing methods using box plots and Kolmogorov-Smirnov tests, respectively; the results indicate improved consistency in underwater image classification tasks. The image classification model offers important performance advantages and better stability compared with existing methods.
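To illustrate the classifier family involved, here is a minimal single-hidden-layer random vector functional link (RVFL) sketch in NumPy: hidden weights are random and fixed, and only the output weights are solved in closed form. The paper's TRVFL adds a second hidden layer with analytically computed parameters; this simpler sketch, run on made-up toy data, only shows the underlying mechanism.

```python
# Single-hidden-layer RVFL: random hidden weights, direct input-output link,
# and ridge-regularized least-squares output weights (a sketch, not TRVFL).
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, n_hidden=64, reg=1e-3):
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # random nonlinear hidden features
    D = np.hstack([X, H])                   # direct link: raw inputs + hidden
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return (D @ beta).argmax(axis=1)

# Toy stand-in for extracted image features: two separable Gaussian blobs.
X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(2, 0.3, (50, 4))])
labels = np.repeat([0, 1], 50)
Y = np.eye(2)[labels]                       # one-hot targets
W, b, beta = rvfl_fit(X, Y)
acc = (rvfl_predict(X, W, b, beta) == labels).mean()
```

Because only `beta` is learned, training reduces to one linear solve, which is why RVFL variants are attractive on top of frozen pretrained feature extractors.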
Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) constitutes a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights as the number of iterations changes to improve its performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its search capability. The probability update strategy helps improve the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model using the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on both datasets: its Accuracy reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
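The general pattern behind metaheuristic hyperparameter tuning like AFLA can be sketched as a loop that mutates candidates with Gaussian noise whose scale shrinks over iterations (an adaptive weight factor) and greedily keeps the best. The quadratic objective below is a hypothetical stand-in for validation accuracy, not the paper's SCNN, and the update rule is a strong simplification of AFLA.

```python
# Gaussian-mutation search over two hyperparameters with a shrinking step
# size, as a toy illustration of adaptive metaheuristic tuning.
import random

random.seed(0)

def objective(num_epochs, batch_size):
    # Hypothetical surrogate for validation score; peaks at (80, 32).
    return -((num_epochs - 80) ** 2 / 400 + (batch_size - 32) ** 2 / 64)

best = {"numEpochs": 10, "miniBatchSize": 128}
best_score = objective(best["numEpochs"], best["miniBatchSize"])
for t in range(1, 201):
    w = 1.0 - t / 200                       # adaptive weight: anneal step size
    sigma = 40 * w + 1
    cand = {
        "numEpochs": max(1, round(best["numEpochs"] + random.gauss(0, sigma))),
        "miniBatchSize": max(1, round(best["miniBatchSize"] + random.gauss(0, sigma))),
    }
    score = objective(cand["numEpochs"], cand["miniBatchSize"])
    if score > best_score:                  # greedy acceptance of improvements
        best, best_score = cand, score
```

Real metaheuristics add population dynamics, mutation operators, and acceptance probabilities on top of this skeleton; the annealed step size is the part that corresponds to AFLA's adaptive weight factor.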
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract the multi-scale feature information of the input image.
Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
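The parameter savings from depthwise separable convolution (the mechanism MobileNetV1 and the DDSC layer build on) can be checked with simple arithmetic; the layer sizes below are illustrative, and the counts ignore biases and batch-norm parameters.

```python
# Weight counts for a standard conv layer vs. a depthwise separable one.
def standard_conv_params(c_in, c_out, k):
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    depthwise = k * k * c_in        # one k x k filter per input channel
    pointwise = c_in * c_out        # 1 x 1 conv mixes channels afterwards
    return depthwise + pointwise

# Illustrative layer: 128 -> 256 channels with 3 x 3 kernels.
std = standard_conv_params(128, 256, 3)          # 294912 weights
sep = depthwise_separable_params(128, 256, 3)    # 1152 + 32768 = 33920
ratio = sep / std                                # about 1/k^2 + 1/c_out
```

Dilating the depthwise filters, as in the DDSC layer, enlarges the receptive field without adding any weights to these counts.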
This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing several methods that include picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and they look into two background correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured spine detection algorithms using the transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program has been built to improve the processing speed of picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
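Since the DCT of the texture features gave the best classification rate, a minimal sketch of the transform itself may help. Production code would use an optimized 2-D implementation from a scientific library; this pure-Python DCT-II just shows how a flat, texture-less signal compacts into a single DC coefficient, which is the energy-compaction property that makes DCT coefficients compact texture features.

```python
# Type-II DCT of a 1-D signal (unnormalized), applied to one image row.
import math

def dct2(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

row = [10, 10, 10, 10, 10, 10, 10, 10]   # flat (texture-less) image row
coeffs = dct2(row)
# All energy lands in the DC term; the AC coefficients vanish, so a few
# low-frequency coefficients suffice to describe smooth regions.
```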
Breast cancer detection heavily relies on medical imaging, particularly ultrasound, for early diagnosis and effective treatment. This research addresses the challenges associated with computer-aided diagnosis (CAD) of breast cancer from ultrasound images. The primary challenge is accurately distinguishing between malignant and benign tumors, complicated by factors such as speckle noise, variable image quality, and the need for precise segmentation and classification. The main objective of the research is to develop an advanced methodology for breast ultrasound image classification, focusing on speckle noise reduction, precise segmentation, feature extraction, and machine learning-based classification. A unique approach is introduced that combines Enhanced Speckle Reduced Anisotropic Diffusion (SRAD) filters for speckle noise reduction, U-NET-based segmentation, Genetic Algorithm (GA)-based feature selection, and Random Forest and Bagging Tree classifiers, resulting in a novel and efficient model. To test and validate the hybrid model, rigorous experiments were performed; the results show that the proposed hybrid model achieved an accuracy of 99.9%, outperforming other existing techniques while also significantly reducing computational time. This enhanced accuracy, along with improved sensitivity and specificity, makes the proposed hybrid model a valuable addition to CAD systems in breast cancer diagnosis, ultimately enhancing diagnostic accuracy in clinical applications.
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources like Pakistan. This study aims to conduct an extensive comparative analysis of various machine learning classifiers for ASD detection using facial images to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and its automated image processing capabilities are utilized. The findings unequivocally establish VGG16 as the most effective classifier with a 5-fold cross-validation approach. Specifically, VGG16, with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a remarkable validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducts further testing on a dataset sourced from autism centers in Pakistan, resulting in an accuracy rate of 85%. This reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
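The 5-fold cross-validation protocol used to validate VGG16 can be sketched generically: every sample lands in exactly one validation fold, so each model is scored on data it never saw during training. The integer indices below stand in for image samples.

```python
# Generic k-fold split generator: k disjoint validation folds covering the
# whole dataset, each paired with the remaining samples as training data.
def k_fold_splits(n_samples, k=5):
    idx = list(range(n_samples))
    folds = [idx[i::k] for i in range(k)]      # round-robin fold assignment
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(k_fold_splits(20, k=5))
# 5 splits; each validation fold holds 4 of the 20 samples, with no overlap
# between a split's train and validation sets.
```

Averaging a metric over the k validation folds gives a less optimistic estimate than a single train/test split, which is why the study reports cross-validated figures alongside held-out test accuracy.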
Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), which is a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. The proposed methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested the model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% in DR detection and evaluation tests. The proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
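The majority-voting combination rule at the core of the proposed model can be sketched directly; the per-classifier predictions below are hypothetical stand-ins for the SVM-RBF, DT, and KNN outputs.

```python
# Hard majority voting: each classifier casts one vote per sample, and the
# most common label wins.
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of labels per classifier, aligned by sample."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical per-sample outputs from three base classifiers.
svm_preds = ["DR", "noDR", "DR", "DR"]
dt_preds  = ["DR", "noDR", "noDR", "DR"]
knn_preds = ["noDR", "noDR", "DR", "DR"]
final = majority_vote([svm_preds, dt_preds, knn_preds])
# → ["DR", "noDR", "DR", "DR"]
```

With an odd number of voters on a binary task, ties cannot occur; the ensemble corrects any single classifier that disagrees with the other two.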
With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), in which two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can well extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of the different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, in which the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms state-of-the-art models.
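A hedged illustration of what a low-pass graph filter does to node features, and why oversmoothing arises: one propagation step averages each node with its neighbours, and repeating it collapses all features toward a consensus value, which is the effect the DHMG's dense connections are designed to counteract. The 3-node path graph below is a toy example, not an HSI graph.

```python
# One step of neighbourhood averaging: the simplest low-pass graph filter.
def smooth(features, adjacency):
    out = []
    for i, f in enumerate(features):
        neigh = [features[j] for j in adjacency[i]] + [f]   # closed neighbourhood
        out.append(sum(neigh) / len(neigh))
    return out

adjacency = {0: [1], 1: [0, 2], 2: [1]}     # path graph 0 - 1 - 2
features = [0.0, 3.0, 6.0]
once = smooth(features, adjacency)          # [1.5, 3.0, 4.5]: edges smoothed

many = features
for _ in range(50):
    many = smooth(many, adjacency)
# After many steps every node carries (nearly) the same value: oversmoothing,
# where node features become indistinguishable and class information is lost.
```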
To create a green and healthy living environment, people have put forward higher requirements for the refined management of ecological resources. A variety of technologies, including satellite remote sensing, the Internet of Things, artificial intelligence, and big data, can build a smart environmental monitoring system. Remote sensing image classification is an important research topic in ecological environmental monitoring. Remote sensing images contain rich spatial and multi-temporal information, but they also bring challenges such as the difficulty of obtaining classification labels and low classification accuracy. To solve this problem, this study develops a transductive transfer dictionary learning (TTDL) algorithm. In the TTDL, the source and target domains are transformed from the original sample space to a common subspace. TTDL trains a shared discriminative dictionary in this subspace, establishes associations between domains, and also obtains sparse representations of the source and target domain data. To obtain an effective shared discriminative dictionary, a triplet-induced ordinal locality preserving term, a Fisher discriminant term, and a graph Laplacian regularization term are introduced into the TTDL. The triplet-induced ordinal locality preserving term on the subspace projection preserves the local structure of data in low-dimensional subspaces. The Fisher discriminant term on the dictionary improves differences among different sub-dictionaries through intra-class and inter-class scatters. The graph Laplacian regularization term on the sparse representation maintains the manifold structure using a semi-supervised weight graph matrix, which can indirectly improve the discriminative performance of the dictionary. The TTDL is tested on several remote sensing image datasets and shows strong discriminative classification performance.
Remote sensing image (RSI) classification plays a vital role in earth observation technology, and remote sensing (RS) data are extensively exploited in both military and civil fields. More recently, as novel DL approaches have developed, techniques for RSI classification with DL have attained important breakthroughs, providing a new opportunity for the research and development of RSI classifiers. This study introduces an Improved Slime Mould Optimization with a graph convolutional network for hyperspectral remote sensing image classification (ISMOGCN-HRSC) model. The ISMOGCN-HRSC model concentrates on identifying and classifying distinct kinds of RSIs. In the presented model, a synergic deep learning (SDL) model is exploited to produce feature vectors, and the GCN model is utilized for image classification to identify the proper class labels of the RSIs. The ISMO algorithm, derived by integrating chaotic concepts into the SMO algorithm, is used to enhance the classification efficiency of the GCN method. The experimental assessment of the ISMOGCN-HRSC method is conducted on a benchmark dataset.
The detection of rice leaf disease is significant because, as an agricultural and rice-exporting country, Pakistan needs to advance in production and lower the risk of diseases. In this era of rapid globalization, the use of information technology has increased, and a sensing system is needed to detect rice diseases using Artificial Intelligence (AI). AI is being adopted in all medical and plant science fields to access and measure the accuracy of results and detection while lowering the risk of diseases. The Deep Neural Network (DNN) is a technique that helps detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. Further, in this paper, the adoption of a Deep Convolutional Neural Network (Deep CNN), a mixed-method approach, has assisted the research in increasing the effectiveness of the proposed method. A Deep CNN is a class of deep-learning neural networks used for image recognition, a field in which CNNs are popular and widely applied. A dataset of images with three main leaf diseases was selected for training and testing the proposed model. After the image acquisition and preprocessing process, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
AIM: To conduct a classification study of high myopic maculopathy (HMM) using limited datasets, including tessellated fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy, and macular atrophy, to minimize annotation costs, and to optimize the ALFA-Mix active learning algorithm and apply it to HMM classification. METHODS: The optimized ALFA-Mix algorithm (ALFA-Mix+) was compared with five algorithms, including ALFA-Mix. Four models, including ResNet18, were established, and each algorithm was combined with the four models for experiments on the HMM dataset. Each experiment consisted of 20 active learning rounds, with 100 images selected per round. The algorithms were evaluated by comparing the number of rounds in which ALFA-Mix+ outperformed the other algorithms. Finally, this study employed six models, including EfficientFormer, to classify HMM. The best-performing model among them was selected as the baseline model and combined with the ALFA-Mix+ algorithm to achieve satisfactory classification results with a small dataset. RESULTS: ALFA-Mix+ outperforms the other algorithms, with an average superiority of 16.6, 14.75, 16.8, and 16.7 rounds in terms of accuracy, sensitivity, specificity, and Kappa value, respectively. This study also conducted experiments on classifying HMM using several advanced deep learning models with a complete training set of 4252 images; EfficientFormer achieved the best results, with an accuracy, sensitivity, specificity, and Kappa value of 0.8821, 0.8334, 0.9693, and 0.8339, respectively. By combining ALFA-Mix+ with EfficientFormer, this study achieved an accuracy, sensitivity, specificity, and Kappa value of 0.8964, 0.8643, 0.9721, and 0.8537, respectively. CONCLUSION: The ALFA-Mix+ algorithm reduces the required samples without compromising accuracy. Compared to other algorithms, ALFA-Mix+ wins in more rounds of experiments and effectively selects valuable samples. In HMM classification, combining ALFA-Mix+ with EfficientFormer enhances model performance, further demonstrating the effectiveness of ALFA-Mix+.
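ALFA-Mix+ is a pool-based active learning strategy: each round it queries the unlabelled images expected to be most informative. As a simplified stand-in for its feature-interpolation criterion, the sketch below uses entropy-based uncertainty sampling over hypothetical model confidences; the image names and probabilities are made up.

```python
# One round of pool-based active learning with uncertainty (entropy) sampling.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical model confidences for an unlabelled pool of 5 images.
pool = {
    "img_a": [0.98, 0.02],
    "img_b": [0.55, 0.45],
    "img_c": [0.80, 0.20],
    "img_d": [0.51, 0.49],   # most uncertain → queried first
    "img_e": [0.99, 0.01],
}

def select_batch(pool, budget):
    """Pick the `budget` most uncertain samples to send for annotation."""
    ranked = sorted(pool, key=lambda k: entropy(pool[k]), reverse=True)
    return ranked[:budget]

queried = select_batch(pool, budget=2)
```

In the study's protocol, each of the 20 rounds selects 100 images this way, retrains, and repeats, so annotation effort concentrates on the samples the current model finds hardest.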
Accurate histopathology classification is a crucial factor in the diagnosis and treatment of cholangiocarcinoma (CCA). Hyperspectral images (HSI) provide richer spectral information than ordinary RGB images, making them more useful for medical diagnosis. The Convolutional Neural Network (CNN) is commonly employed in hyperspectral image classification due to its remarkable capacity for feature extraction and image classification. However, many existing CNN-based HSI classification methods tend to ignore the importance of image spatial context information and the interdependence between spectral channels, leading to unsatisfactory classification performance. To address these issues, this paper proposes a Spatial-Spectral Joint Network (SSJN) model for hyperspectral image classification that utilizes spatial self-attention and spectral feature extraction. The SSJN model is derived from the ResNet18 network and implemented with non-local and Coordinate Attention (CA) modules, which extract long-range dependencies in image space, and it enhances spatial features through a Branch Attention (BA) module to emphasize the region of interest. Furthermore, the SSJN model employs Conv-LSTM modules to extract long-range dependencies in the image spectral domain. This addresses the gradient disappearance/explosion phenomena and enhances the model's classification accuracy. The experimental results show that the proposed SSJN model is more efficient in leveraging the spatial and spectral information of hyperspectral images on multidimensional microspectral datasets of CCA, leading to higher classification accuracy, and may provide useful references for the medical diagnosis of CCA.
Fine-grained classification of ships in remote sensing images makes it possible to identify specific ship types, and it has broad application prospects in civil and military fields. However, current models do not adequately handle the properties of ship targets in remote sensing images, which exhibit mixed multi-granularity features and complicated backgrounds, so there remains room to improve classification performance. To address these characteristics, this paper proposes a Metaformer and Residual fusion network based on the Visual Attention Network (VAN-MR) for fine-grained classification tasks. For the complex backgrounds of remote sensing images, the VAN-MR model adopts a parallel structure of large kernel attention and spatial attention to enhance the model's ability to extract features of targets of interest and improve the classification performance on remote sensing ship targets. For the problem of multi-granularity feature mixing, the VAN-MR model uses a Metaformer structure and a parallel network of residual modules to extract ship features. The parallel branches have different depths, considering both high-level and low-level semantic information, so the model achieves better classification performance on remote sensing ship images with multi-granularity mixing. Finally, the model achieves 88.73% and 94.56% accuracy on the public fine-grained ship collection-23 (FGSC-23) and FGSCR-42 datasets, respectively, while the parameter size is only 53.47 M and the floating-point operations are 9.9 G. The experimental results show that the classification performance of VAN-MR is superior to that of traditional CNN models and visual models with a Transformer structure under the same parameter count.
Recently, deep learning has achieved considerable results in hyperspectral image (HSI) classification. However, most available deep networks require ample and authentic samples to train the models well, which is expensive and inefficient in practical tasks. Existing few-shot learning (FSL) methods generally ignore the potential relationships between non-local spatial samples that would better represent the underlying features of HSI. To solve these issues, a novel deep transformer and few-shot learning (DT-FSL) classification framework is proposed, attempting to realize fine-grained classification of HSI with only a few instances. Specifically, spatial attention and spectral query modules are introduced to overcome the constraint of the convolution kernel and consider the information between long-distance (non-local) samples to reduce class uncertainty. Next, the network is trained with episodic and task-based learning strategies to learn a metric space, which can continuously enhance its modelling capability. Furthermore, the developed approach combines the advantages of domain adaptation to reduce the variation in inter-domain distribution and realize distribution alignment. On three publicly available HSI datasets, extensive experiments indicate that the proposed DT-FSL yields better results than state-of-the-art algorithms.
Computational intelligence (CI) is a group of nature-simulated computational models and processes for addressing difficult real-life problems. CI is useful in the UAV domain as it produces efficient, precise, and rapid solutions. In addition, unmanned aerial vehicles (UAVs) have become a hot research topic in the smart city environment. Despite the benefits of UAVs, security remains a major challenge. Furthermore, deep learning (DL)-enabled image classification is useful for several applications such as land cover classification and smart buildings. This paper proposes a novel metaheuristics with deep learning-driven secure UAV image classification (MDLS-UAVIC) model in a smart city environment. The major purpose of the MDLS-UAVIC algorithm is to securely encrypt images and classify them into distinct class labels. The proposed MDLS-UAVIC model follows a two-stage process: encryption and image classification. The encryption technique effectively encrypts the UAV images. Next, the image classification process involves an Xception-based deep convolutional neural network for feature extraction. Finally, shuffled shepherd optimization (SSO) with a recurrent neural network (RNN) model is applied for UAV image classification, showing the novelty of the work. The experimental validation of the MDLS-UAVIC approach is tested on a benchmark dataset, and the outcomes are examined using various measures. It achieved a high accuracy of 98%.
Funding: Institute of Information & Communications Technology Planning & Evaluation, Grant/Award Number: 2022-0-00074.
Abstract: Few-shot image classification is the task of classifying novel classes using extremely limited labelled samples. To perform classification with limited samples, one solution is to learn the feature alignment (FA) information between the labelled and unlabelled sample features. Most FA methods use the feature mean as the class prototype and calculate the correlation between the prototype and unlabelled features to learn an alignment strategy. However, mean prototypes tend to degrade informative features because spatial features at the same position may not be equally important for the final classification, leading to inaccurate correlation calculations. Therefore, the authors propose an effective intra-class FA strategy that aggregates semantically similar spatial features from an adaptive reference prototype in a low-dimensional feature space to obtain an informative prototype feature map for precise correlation computation. Moreover, the authors developed a dual correlation module to learn hard and soft correlations. This module combines the correlation information between the prototype and unlabelled features in both the original and learnable feature spaces, aiming to produce a comprehensive cross-correlation between the prototypes and unlabelled features. Using both the FA and cross-attention modules, the model can maintain informative class features and capture important shared features for classification. Experimental results on three few-shot classification benchmarks show that the proposed method outperforms related methods, and that inserting the proposed module into related methods yields a 3% performance boost in the 1-shot setting.
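To make the baseline concrete, the mean-prototype scheme that most FA methods build on can be sketched in a few lines of Python; the class names and 2-D features below are hypothetical, and the sketch shows only the baseline the authors improve upon, not their FA strategy:

```python
import math

def mean_prototype(support_feats):
    # Class prototype: element-wise mean of the labelled support features.
    dim = len(support_feats[0])
    return [sum(f[i] for f in support_feats) / len(support_feats) for i in range(dim)]

def cosine(u, v):
    # Correlation between a prototype and an unlabelled feature vector.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical 2-D features for two classes and one unlabelled query.
prototypes = {
    "ship": mean_prototype([[1.0, 0.1], [0.9, 0.0]]),
    "plane": mean_prototype([[0.1, 1.0], [0.0, 0.8]]),
}
query = [0.8, 0.2]
predicted = max(prototypes, key=lambda c: cosine(query, prototypes[c]))
print(predicted)  # the query correlates most strongly with the "ship" prototype
```

The abstract's point is that this plain mean can wash out spatially informative entries, which is what the adaptive reference prototype is designed to avoid.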
Funding: Institutional Fund Projects under Grant No. (IFPIP: 638-830-1443).
Abstract: The utilization of visual attention enhances the performance of image classification tasks. Previous attention-based models have demonstrated notable performance, but many of these models exhibit reduced accuracy when confronted with inter-class and intra-class similarities and differences. Neural Controlled Differential Equations (N-CDEs) and Neural Ordinary Differential Equations (NODEs) are extensively utilized in this context. N-CDEs possess the capacity to illustrate both inter-class and intra-class similarities and differences with enhanced clarity. To this end, an attentive neural network has been proposed to generate attention maps, which uses two different types of N-CDEs: one for the hidden layers and the other to generate attention values. Two distinct attention techniques are implemented: time-wise attention, also referred to as bottom N-CDEs, and element-wise attention, called top N-CDEs. Additionally, a training methodology is proposed to guarantee that the training problem is sufficiently well posed. Two classification tasks, fine-grained visual classification and multi-label classification, are used to evaluate the proposed model. The proposed methodology is employed on five publicly available datasets: CUB-200-2011, ImageNet-1K, PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. The obtained visualizations demonstrate that N-CDEs are better suited to attention-based tasks than conventional NODEs.
Funding: National Natural Science Foundation of China (No. 62201457); Natural Science Foundation of Shaanxi Province (Nos. 2022JQ-668, 2022JQ-588).
Abstract: Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multiscale fusion and a transformer network for hyperspectral image classification. Firstly, low-level spatial-spectral features are extracted by a multi-scale residual structure. Secondly, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land cover classification accuracy of hyperspectral images.
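The multiscale idea can be illustrated with a deliberately simplified stand-in for the multi-scale residual structure: pooling a 1-D signal at several window sizes and concatenating the results, so both fine and coarse context are kept. The scales and input values are illustrative only:

```python
def avg_pool(seq, k):
    # Non-overlapping average pooling with window size k.
    return [sum(seq[i:i + k]) / k for i in range(0, len(seq) - k + 1, k)]

def multiscale_features(seq, scales=(1, 2, 4)):
    # Concatenate pooled views at several scales into one feature vector.
    feats = []
    for s in scales:
        feats.extend(avg_pool(seq, s))
    return feats

x = [1.0, 3.0, 2.0, 4.0, 5.0, 7.0, 6.0, 8.0]
print(len(multiscale_features(x)))  # 8 + 4 + 2 = 14 features
```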
基金support of the National Key R&D Program of China(No.2022YFC2803903)the Key R&D Program of Zhejiang Province(No.2021C03013)the Zhejiang Provincial Natural Science Foundation of China(No.LZ20F020003).
Abstract: The ocean plays an important role in maintaining the equilibrium of Earth's ecology and provides humans access to a wealth of resources. To obtain a high-precision underwater image classification model, we propose a classification model that combines an EfficientNet-B0 neural network and a two-hidden-layer random vector functional link network (EfficientnetB0-TRVFL). The features of underwater images were extracted using the EfficientNet-B0 network pretrained on ImageNet, and a new fully connected layer was trained on the underwater image dataset using transfer learning. Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model. Subsequently, a TRVFL was proposed to improve the classification performance of the model. The network constructed with two hidden layers exhibited high accuracy when the same number of hidden-layer nodes was used. The parameters of the second hidden layer were obtained using a novel calculation method, which reduced the output error and mitigated the performance instability caused by the random generation of RVFL parameters. Finally, the TRVFL classifier was used to classify features and obtain classification results. The proposed EfficientnetB0-TRVFL classification model achieved 87.28%, 74.06%, and 99.59% accuracy on the MLC2008, MLC2009, and Fish-gres datasets, respectively. It was compared against the best convolutional neural networks and existing methods using box plots and Kolmogorov-Smirnov tests, respectively, and the results indicate improved consistency in underwater image classification tasks. The image classification model offers important performance advantages and better stability compared with existing methods.
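A minimal single-hidden-layer RVFL (a simplified sketch, not the paper's two-hidden-layer TRVFL or its novel second-layer calculation) shows the core idea: random, untrained hidden weights plus direct input links, with only the output weights solved in closed form. The toy data are hypothetical:

```python
import numpy as np

def rvfl_fit(X, y, n_hidden=16, reg=1e-3, seed=0):
    # Random vector functional link: the hidden layer is random and never
    # trained; only the output weights are solved, via ridge regression.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear features
    D = np.hstack([X, H])           # direct input links + hidden features
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Toy 1-D problem standing in for the extracted image features.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
W, b, beta = rvfl_fit(X, y)
print(np.round(rvfl_predict(X, W, b, beta), 2))
```

Because the output weights have a closed-form solution, training is fast and deterministic given the random seed, which is why RVFL variants suit repeated-evaluation settings like this one.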
Funding: Natural Science Foundation of Shandong Province, China (Grant No. ZR202111230202).
Abstract: Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) constitutes a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its searchability. The probability update strategy helps to improve the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model on the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on both datasets. The Accuracy of the AFLA-SCNN model reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
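The hyperparameter search can be illustrated with a deterministic grid search standing in for AFLA (the toy objective below replaces validation loss and is purely hypothetical; the real algorithm samples the space stochastically rather than exhaustively):

```python
import itertools

def grid_search(objective, space):
    # Deterministic stand-in for the metaheuristic: evaluate every
    # combination of the hyperparameters and keep the best one.
    best = min(
        (dict(zip(space, combo)) for combo in itertools.product(*space.values())),
        key=objective,
    )
    return best, objective(best)

# The two hyperparameters the paper tunes; values and objective are toy ones.
space = {"numEpochs": [50, 100, 150, 200], "miniBatchSize": [16, 32, 64]}
loss = lambda cfg: (cfg["numEpochs"] - 100) ** 2 + (cfg["miniBatchSize"] - 32) ** 2
best_cfg, best_val = grid_search(loss, space)
print(best_cfg, best_val)  # {'numEpochs': 100, 'miniBatchSize': 32} 0
```

A metaheuristic such as AFLA replaces the exhaustive loop with guided sampling, which matters once each evaluation is a full network training run.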
Abstract: Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
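The parameter saving from depthwise separable convolution, which both the MobileNetV1 baseline and the proposed DDSC layer exploit, follows from a simple count (bias terms are ignored; the channel sizes below are illustrative):

```python
def standard_conv_params(k, c_in, c_out):
    # Weights of a standard k×k convolution: one k×k×c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise k×k filter per input channel, then a 1×1 pointwise mix.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)        # 18432
dws = depthwise_separable_params(3, 32, 64)  # 2336
print(std, dws, round(dws / std, 4))  # the ratio is 1/c_out + 1/k^2 ≈ 0.1267
```

Dilating the depthwise filter, as in the DDSC layer, enlarges the receptive field without adding any weights to this count.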
Funding: The authors extend their appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through Project Number R-2024-922.
Abstract: This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing several methods that include picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and they look into two background correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured spine detection algorithms using the transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program has been built to improve the processing speed for picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and extracting features from this transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
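The DCT the abstract favours can be sketched as an unnormalized 1-D DCT-II (a simplified stand-in for the 2-D transform applied to image blocks; the input is illustrative):

```python
import math

def dct2(signal):
    # Unnormalized DCT-II of a 1-D sequence; low-index coefficients capture
    # coarse texture, high-index ones fine detail, so truncating the tail
    # is a cheap form of feature reduction.
    n = len(signal)
    return [
        sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n)) for i, x in enumerate(signal))
        for k in range(n)
    ]

coeffs = dct2([1.0, 1.0, 1.0, 1.0])
print(coeffs[0])  # 4.0: a constant block has all its energy in the DC term
```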
Funding: Funded through Researchers Supporting Project Number (RSPD2024R996), King Saud University, Riyadh, Saudi Arabia.
Abstract: Breast cancer detection heavily relies on medical imaging, particularly ultrasound, for early diagnosis and effective treatment. This research addresses the challenges associated with computer-aided diagnosis (CAD) of breast cancer from ultrasound images. The primary challenge is accurately distinguishing between malignant and benign tumors, complicated by factors such as speckle noise, variable image quality, and the need for precise segmentation and classification. The main objective of the research is to develop an advanced methodology for breast ultrasound image classification, focusing on speckle noise reduction, precise segmentation, feature extraction, and machine learning-based classification. A unique approach is introduced that combines Enhanced Speckle Reduced Anisotropic Diffusion (SRAD) filters for speckle noise reduction, U-NET-based segmentation, Genetic Algorithm (GA)-based feature selection, and Random Forest and Bagging Tree classifiers, resulting in a novel and efficient model. To test and validate the hybrid model, rigorous experiments were performed, and the results show that the proposed hybrid model achieved an accuracy rate of 99.9%, outperforming other existing techniques while also significantly reducing computational time. This enhanced accuracy, along with improved sensitivity and specificity, makes the proposed hybrid model a valuable addition to CAD systems in breast cancer diagnosis, ultimately enhancing diagnostic accuracy in clinical applications.
Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources like Pakistan. This study aims to conduct an extensive comparative analysis of various machine learning classifiers for ASD detection using facial images, to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and its automated image processing capabilities are utilized. The findings unequivocally establish VGG16 as the most effective classifier with a 5-fold cross-validation approach. Specifically, VGG16, with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a remarkable validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, a precision of 85%, and a recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducts further testing on a dataset sourced from autism centers in Pakistan, resulting in an accuracy rate of 85%. This reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
Funding: This research was funded by the National Natural Science Foundation of China (Nos. 71762010, 62262019, 62162025, 61966013, 12162012), the Hainan Provincial Natural Science Foundation of China (Nos. 823RC488, 623RC481, 620RC603, 621QN241, 620RC602, 121RC536), the Haikou Science and Technology Plan Project of China (No. 2022-016), and the project supported by the Education Department of Hainan Province (No. Hnky2021-23).
Abstract: Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), which is a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. The proposed methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% for DR detection. Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
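The combination step is plain majority voting over the base classifiers and can be sketched directly; the classifier outputs below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    # Final label = the class predicted by the most base classifiers.
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-classifier labels (e.g. SVM-RBF, DT, KNN) for one image.
print(majority_vote(["VTDR", "VTDR", "no-DR"]))  # VTDR
```

With an odd number of voters on a binary decision, ties cannot occur, which is one practical reason ensembles like this use three classifiers.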
Abstract: With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can well extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of the different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, in which the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms the state-of-the-art models.
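The oversmoothing problem that the dense structure addresses can be demonstrated with a toy low-pass graph filter (a crude stand-in for the spectral/ARMA filters, not the DHMG architecture): repeated neighbourhood averaging drives all node features toward a common value, erasing class-discriminative differences.

```python
def propagate(adj, feats, steps=1):
    # One low-pass graph-filter step: replace each node's feature by the
    # mean over itself and its neighbours.
    for _ in range(steps):
        feats = [
            (feats[u] + sum(feats[v] for v in adj[u])) / (1 + len(adj[u]))
            for u in range(len(adj))
        ]
    return feats

# A 4-node path graph with scalar node features from two "classes".
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [0.0, 0.0, 1.0, 1.0]
once = propagate(adj, x, steps=1)
many = propagate(adj, x, steps=200)
print(once, many)  # after many steps the features are nearly identical
```

Preserving the pre-smoothing features at each depth, as a dense connection does, keeps the local structure that this repeated filtering destroys.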
Funding: This research was funded in part by the Natural Science Foundation of Jiangsu Province under Grant BK20211333 and by the Science and Technology Project of Changzhou City (CE20215032).
Abstract: To create a green and healthy living environment, people have put forward higher requirements for the refined management of ecological resources. A variety of technologies, including satellite remote sensing, the Internet of Things, artificial intelligence, and big data, can build a smart environmental monitoring system. Remote sensing image classification is an important research topic in ecological environmental monitoring. Remote sensing images contain rich spatial and multi-temporal information, but they also bring challenges such as the difficulty of obtaining classification labels and low classification accuracy. To solve this problem, this study develops a transductive transfer dictionary learning (TTDL) algorithm. In TTDL, the source and target domains are transformed from the original sample space to a common subspace. TTDL trains a shared discriminative dictionary in this subspace, establishes associations between domains, and also obtains sparse representations of the source and target domain data. To obtain an effective shared discriminative dictionary, a triplet-induced ordinal locality preserving term, a Fisher discriminant term, and a graph Laplacian regularization term are introduced into TTDL. The triplet-induced ordinal locality preserving term on the subspace projection preserves the local structure of data in low-dimensional subspaces. The Fisher discriminant term on the dictionary improves differences among different sub-dictionaries through intra-class and inter-class scatters. The graph Laplacian regularization term on the sparse representation maintains the manifold structure using a semi-supervised weight graph matrix, which can indirectly improve the discriminative performance of the dictionary. TTDL is tested on several remote sensing image datasets and shows strong discriminative classification performance.
Abstract: Remote sensing image (RSI) classification plays a vital role in Earth observation technology, and remote sensing (RS) data are extensively exploited in both military and civil fields. More recently, as novel DL approaches have developed, techniques for RSI classification with DL have attained important breakthroughs, providing a new opportunity for the research and development of RSI classifiers. This study introduces an Improved Slime Mould Optimization with a graph convolutional network for hyperspectral remote sensing image classification (ISMOGCN-HRSC) model. The ISMOGCN-HRSC model concentrates on identifying and classifying distinct kinds of RSIs. In the presented ISMOGCN-HRSC model, the synergic deep learning (SDL) model is exploited to produce feature vectors. The GCN model is utilized for image classification to identify the proper class labels of the RSIs. The ISMO algorithm, derived by integrating chaotic concepts into the SMO algorithm, is used to enhance the classification efficiency of the GCN method. The experimental assessment of the ISMOGCN-HRSC method is tested using a benchmark dataset.
Funding: Funded by the University of Haripur, KP, Pakistan, Researchers Supporting Project Number (PKURFL2324L33).
Abstract: The detection of rice leaf disease is significant because, as an agricultural and rice-exporting country, Pakistan needs to advance production and lower the risk of diseases. In this era of rapid globalization, the use of information technology has increased. A sensing system is mandatory to detect rice diseases using Artificial Intelligence (AI), which is being adopted across medical and plant science fields to assess and measure the accuracy of results and detection while lowering the risk of diseases. The Deep Neural Network (DNN) is a novel technique that can help detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. Further in this paper, the adoption of a mixed-method approach with a Deep Convolutional Neural Network (Deep CNN) has assisted the research in increasing the effectiveness of the proposed method. Deep CNNs are used for image recognition and are a class of deep learning neural networks; CNNs are popular and mostly used in the field of image recognition. A dataset of images with three main leaf diseases was selected for training and testing the proposed model. After the image acquisition and preprocessing process, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
Funding: Supported by the National Natural Science Foundation of China (No. 61906066), the Zhejiang Provincial Philosophy and Social Science Planning Project (No. 21NDJC021Z), the Shenzhen Fund for Guangdong Provincial High-level Clinical Key Specialties (No. SZGSP014), the Sanming Project of Medicine in Shenzhen (No. SZSM202011015), the Shenzhen Science and Technology Planning Project (No. KCXFZ20211020163813019), the Natural Science Foundation of Ningbo City (No. 202003N4072), and the Postgraduate Research and Innovation Project of Huzhou University (No. 2023KYCX52).
Abstract: AIM: To conduct a classification study of high myopic maculopathy (HMM) using limited datasets, including tessellated fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy, and macular atrophy, while minimizing annotation costs, and to optimize the ALFA-Mix active learning algorithm and apply it to HMM classification. METHODS: The optimized ALFA-Mix algorithm (ALFA-Mix+) was compared with five algorithms, including ALFA-Mix. Four models, including ResNet18, were established, and each algorithm was combined with the four models for experiments on the HMM dataset. Each experiment consisted of 20 active learning rounds, with 100 images selected per round. The algorithms were evaluated by comparing the number of rounds in which ALFA-Mix+ outperformed the other algorithms. Finally, this study employed six models, including EfficientFormer, to classify HMM; the best-performing model was selected as the baseline and combined with the ALFA-Mix+ algorithm to achieve satisfactory classification results with a small dataset. RESULTS: ALFA-Mix+ outperforms the other algorithms with an average superiority of 16.6, 14.75, 16.8, and 16.7 rounds in terms of accuracy, sensitivity, specificity, and Kappa value, respectively. This study also classified HMM using several advanced deep learning models with a complete training set of 4252 images; EfficientFormer achieved the best results, with an accuracy, sensitivity, specificity, and Kappa value of 0.8821, 0.8334, 0.9693, and 0.8339, respectively. By combining ALFA-Mix+ with EfficientFormer, this study achieved an accuracy, sensitivity, specificity, and Kappa value of 0.8964, 0.8643, 0.9721, and 0.8537, respectively. CONCLUSION: The ALFA-Mix+ algorithm reduces the required samples without compromising accuracy. Compared to the other algorithms, ALFA-Mix+ wins in more rounds of experiments and effectively selects valuable samples. In HMM classification, combining ALFA-Mix+ with EfficientFormer enhances model performance, further demonstrating the effectiveness of ALFA-Mix+.
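An active-learning selection round like those described above picks the most informative unlabelled images for annotation. A minimal margin-based round (a generic stand-in for, not a description of, the ALFA-Mix+ criterion) looks like this; the softmax outputs are hypothetical:

```python
def select_batch(probs, budget):
    # Rank unlabelled samples by the top-1/top-2 probability margin and
    # return the indices of the `budget` most ambiguous ones.
    def margin(p):
        top = sorted(p, reverse=True)
        return top[0] - top[1]
    return sorted(range(len(probs)), key=lambda i: margin(probs[i]))[:budget]

# Hypothetical softmax outputs for four unlabelled fundus images.
probs = [
    [0.98, 0.01, 0.01],  # confident -> low annotation priority
    [0.40, 0.35, 0.25],  # ambiguous -> high priority
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],  # most ambiguous
]
print(select_batch(probs, budget=2))  # [3, 1]
```

Repeating such a selection for 20 rounds of 100 images mirrors the experimental protocol in the abstract, with the acquisition rule being what distinguishes the compared algorithms.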
Funding: Supported by the National Natural Science Foundation of China (No. 62101040).
Abstract: Accurate histopathology classification is a crucial factor in the diagnosis and treatment of cholangiocarcinoma (CCA). Hyperspectral images (HSI) provide richer spectral information than ordinary RGB images, making them more useful for medical diagnosis. The Convolutional Neural Network (CNN) is commonly employed in hyperspectral image classification due to its remarkable capacity for feature extraction and image classification. However, many existing CNN-based HSI classification methods tend to ignore the importance of image spatial context information and the interdependence between spectral channels, leading to unsatisfactory classification performance. To address these issues, this paper proposes a Spatial-Spectral Joint Network (SSJN) model for hyperspectral image classification that utilizes spatial self-attention and spectral feature extraction. The SSJN model is derived from the ResNet18 network and implemented with the non-local and Coordinate Attention (CA) modules, which extract long-range dependencies in image space, and it enhances spatial features through the Branch Attention (BA) module to emphasize the region of interest. Furthermore, the SSJN model employs Conv-LSTM modules to extract long-range dependencies in the image spectral domain. This addresses the gradient disappearance/explosion phenomena and enhances the model's classification accuracy. The experimental results show that the proposed SSJN model is more efficient in leveraging the spatial and spectral information of hyperspectral images on multidimensional microspectral datasets of CCA, leading to higher classification accuracy, and may provide useful references for the medical diagnosis of CCA.
Abstract: Fine-grained classification of remote sensing ships makes it possible to identify specific ship types in remote sensing images, and it has broad application prospects in civil and military fields. However, current models do not examine the properties of ship targets in remote sensing images with mixed multi-granularity features and complicated backgrounds, so there is still room to improve classification performance. To solve the challenges brought by the above characteristics, this paper proposes a Metaformer and Residual fusion network based on the Visual Attention Network (VAN-MR) for fine-grained classification tasks. For the complex backgrounds of remote sensing images, the VAN-MR model adopts a parallel structure of large kernel attention and spatial attention to enhance the model's ability to extract features of targets of interest and improve classification performance on remote sensing ship targets. For the problem of multi-granularity feature mixing in remote sensing images, the VAN-MR model uses a Metaformer structure and a parallel network of residual modules to extract ship features. The parallel network has different depths, considering both high-level and low-level semantic information, so the model achieves better classification performance on remote sensing ship images with multi-granularity mixing. Finally, the model achieves 88.73% and 94.56% accuracy on the public fine-grained ship collection-23 (FGSC-23) and FGSCR-42 datasets, respectively, while the parameter size is only 53.47 M and the floating-point operations are 9.9 G. The experimental results show that the classification performance of VAN-MR is superior to that of traditional CNN models and visual models with Transformer structures under the same parameter quantity.
Funding: Supported by the National Natural Science Foundation of China under Grant 62161160336 and Grant 42030111.
Abstract: Recently, deep learning has achieved considerable results in hyperspectral image (HSI) classification. However, most available deep networks require ample, authentic samples to train well, which is expensive and inefficient in practical tasks. Existing few-shot learning (FSL) methods generally ignore the potential relationships between non-local spatial samples that would better represent the underlying features of HSI. To solve these issues, a novel deep transformer and few-shot learning (DT-FSL) classification framework is proposed, attempting to realize fine-grained classification of HSI with only a few instances. Specifically, spatial attention and spectral query modules are introduced to overcome the constraint of the convolution kernel and consider the information between long-distance (non-local) samples to reduce class uncertainty. Next, the network is trained with episodic, task-based learning strategies to learn a metric space, which continuously enhances its modelling capability. Furthermore, the developed approach combines the advantages of domain adaptation to reduce the variation in inter-domain distribution and realize distribution alignment. Extensive experiments on three publicly available HSI datasets indicate that the proposed DT-FSL yields better results than state-of-the-art algorithms.
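The episodic, metric-space training described above can be illustrated with the simplest metric-based few-shot classifier: class prototypes are built as centroids of embedded support samples, and queries are assigned to the nearest prototype. This is a generic prototypical-network-style sketch, assuming Euclidean distance; DT-FSL's actual embedding and metric modules differ.

```python
import numpy as np

def prototypical_classify(support, support_labels, query, n_classes):
    """Nearest-prototype classification for one few-shot episode.

    support: (n_support, d) embedded support samples
    query:   (n_query, d) embedded query samples
    Returns the predicted class index for each query sample.
    """
    # class prototype = mean embedding of that class's support samples
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_classes)])        # (n_classes, d)
    # squared Euclidean distance from every query to every prototype
    d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# 2-way 3-shot episode with well-separated synthetic embeddings
rng = np.random.default_rng(42)
d = 8
support = np.concatenate([rng.normal(0.0, 0.1, (3, d)),
                          rng.normal(5.0, 0.1, (3, d))])
labels = np.array([0, 0, 0, 1, 1, 1])
query = np.concatenate([rng.normal(0.0, 0.1, (2, d)),
                        rng.normal(5.0, 0.1, (2, d))])
preds = prototypical_classify(support, labels, query, 2)
print(preds)  # [0 0 1 1]
```

During episodic training, the embedding network is optimized so that exactly this nearest-prototype rule succeeds, which is why the learned metric space keeps improving as episodes accumulate.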
Abstract: A brain tumor is an uncharacteristic growth of tissue in the brain. Brain tumors are very deadly, and if not diagnosed at an early stage, they can shorten the affected patient's life span. Hence, their classification and detection play a critical role in treatment. Traditional brain tumor detection is done by biopsy, which is quite challenging and usually not preferred at an early stage of the disease. Detection instead relies on Magnetic Resonance Imaging (MRI), which is essential for evaluating the tumor. This paper aims to identify and detect brain tumors based on their location in the brain. To achieve this, the paper proposes a model that uses an extended deep Convolutional Neural Network (CNN) named Contour Extraction based Extended EfficientNet-B0 (CE-EEN-B0), a feed-forward neural network with the EfficientNet layers; three convolutional layers with max-pooling layers; and finally, a global average pooling layer. The site of a tumor in the brain is one feature that determines its effect on an individual's functioning. Thus, this CNN architecture classifies brain tumors into four categories: no tumor, pituitary tumor, meningioma tumor, and glioma tumor. The network achieves an accuracy of 97.24%, a precision of 96.65%, and an F1 score of 96.86%, which is better than existing pre-trained networks, and aims to help health professionals cross-diagnose an MRI image. This model will reduce complications in detection and aid radiologists without requiring invasive steps.
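The precision and F1 figures reported above are macro-averaged over the four tumor categories. As a reference for how such scores are computed from predicted labels, here is a small self-contained sketch (the toy label arrays are illustrative, not the paper's data):

```python
import numpy as np

CLASS_NAMES = ["no tumor", "pituitary", "meningioma", "glioma"]

def macro_scores(y_true, y_pred, n_classes):
    """Macro-averaged precision, recall, and F1 over all classes."""
    precisions, recalls, f1s = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(precisions)), float(np.mean(recalls)), float(np.mean(f1s))

# toy evaluation: 8 MRI scans, one meningioma misclassified as glioma
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 0, 1, 1, 2, 3, 3, 3])
precision, recall, f1 = macro_scores(y_true, y_pred, len(CLASS_NAMES))
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```

Macro averaging weights every class equally, which matters here because a "no tumor" majority class would otherwise dominate micro-averaged scores.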
Funding: The authors thank the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number RI-44-0446.
Abstract: Computational intelligence (CI) is a group of nature-inspired computational models and processes for addressing difficult real-life problems. CI is useful in the UAV domain as it produces efficient, precise, and rapid solutions. Meanwhile, unmanned aerial vehicles (UAVs) have become a hot research topic in the smart city environment. Despite the benefits of UAVs, security remains a major challenge. In addition, deep learning (DL) enabled image classification is useful for several applications such as land cover classification, smart buildings, etc. This paper proposes a novel metaheuristics with deep learning-driven secure UAV image classification (MDLS-UAVIC) model in a smart city environment. The major purpose of the MDLS-UAVIC algorithm is to securely encrypt images and classify them into distinct class labels. The proposed MDLS-UAVIC model follows a two-stage process: encryption and image classification. The encryption technique effectively encrypts the UAV images. Next, the image classification process employs an Xception-based deep convolutional neural network for feature extraction. Finally, shuffled shepherd optimization (SSO) with a recurrent neural network (RNN) model is applied for UAV image classification, showing the novelty of the work. The MDLS-UAVIC approach is experimentally validated on a benchmark dataset, and the outcomes are examined using various measures; it achieved a high accuracy of 98%.
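The abstract does not detail the encryption scheme, so as an illustration of the encrypt-then-classify pipeline, here is a minimal reversible keystream cipher over image pixels (a generic placeholder, not the authors' method): any image encrypted with a key is exactly recovered by applying the same operation with the same key before classification.

```python
import numpy as np

def xor_encrypt(image, key):
    """Reversible XOR stream cipher over uint8 pixel values.

    A keyed PRNG generates a keystream the same shape as the image;
    XOR-ing twice with the same key restores the original pixels.
    """
    rng = np.random.default_rng(key)
    keystream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return image ^ keystream

image = np.arange(64, dtype=np.uint8).reshape(8, 8)  # stand-in UAV image patch
encrypted = xor_encrypt(image, key=1234)             # stage 1: secure the image
decrypted = xor_encrypt(encrypted, key=1234)         # same key reverses the cipher
print(np.array_equal(decrypted, image))  # True
```

In a two-stage system like MDLS-UAVIC, the classifier's feature extractor would consume `decrypted` (or features computed in the secure domain), so only key holders can recover usable imagery.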