Fine-grained recognition of ships based on remote sensing images is crucial to safeguarding maritime rights and interests and maintaining national security. Currently, with the emergence of massive high-resolution multi-modality images, the use of multi-modality images for fine-grained recognition has become a promising technology. Fine-grained recognition of multi-modality images imposes higher requirements on dataset samples. The key problem is how to extract and fuse the complementary features of multi-modality images to obtain more discriminative fused features. The attention mechanism helps the model pinpoint the key information in an image, significantly improving its performance. In this paper, a dataset for fine-grained recognition of ships based on visible and near-infrared multi-modality remote sensing images, named the Dataset for Multimodal Fine-grained Recognition of Ships (DMFGRS), is first proposed. It includes 1,635 pairs of visible and near-infrared remote sensing images divided into 20 categories, collated from digital orthophoto models provided by commercial remote sensing satellites. DMFGRS provides two types of annotation format files, as well as segmentation mask images corresponding to the ship targets. Then, a Multimodal Information Cross-Enhancement Network (MICE-Net), which fuses features of visible and near-infrared remote sensing images, is proposed. In the network, a dual-branch feature extraction and fusion module is designed to obtain more expressive features. The Feature Cross Enhancement Module (FCEM) achieves fusion enhancement of the two modal features by making channel attention and spatial attention act cross-wise on the feature maps. A benchmark is established by evaluating state-of-the-art object recognition algorithms on DMFGRS. In experiments on DMFGRS, MICE-Net reached a precision, recall, mAP0.5 and mAP0.5:0.95 of 87%, 77.1%, 83.8% and 63.9%, respectively. Extensive experiments demonstrate that the proposed MICE-Net performs better on DMFGRS. Built on the lightweight YOLO network, the model generalizes well and thus has good potential for application in real-life scenarios.
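The cross-applied attention idea behind FCEM can be sketched in a few lines: each modality's feature map is re-weighted by channel and spatial attention computed from the *other* modality. The sketch below is not the paper's implementation; the sigmoid-gated average-pooling attention and the additive fusion are illustrative assumptions.

```python
import numpy as np

def channel_attention(x):
    # Squeeze spatial dims: one weight per channel, (C, H, W) -> (C, 1, 1)
    w = x.mean(axis=(1, 2), keepdims=True)   # global average pooling
    return 1.0 / (1.0 + np.exp(-w))          # sigmoid gate

def spatial_attention(x):
    # Squeeze channel dim: one weight per location, (C, H, W) -> (1, H, W)
    w = x.mean(axis=0, keepdims=True)
    return 1.0 / (1.0 + np.exp(-w))

def cross_enhance(f_vis, f_nir):
    """Apply each modality's attention to the other modality's features."""
    vis_out = f_vis * channel_attention(f_nir) * spatial_attention(f_nir)
    nir_out = f_nir * channel_attention(f_vis) * spatial_attention(f_vis)
    return vis_out + nir_out                 # fused feature map

f_vis = np.random.rand(8, 4, 4)   # toy visible-band features (C, H, W)
f_nir = np.random.rand(8, 4, 4)   # toy near-infrared features
fused = cross_enhance(f_vis, f_nir)
```

The point of the cross-wise arrangement is that salient regions found in one band guide feature selection in the other, rather than each branch attending only to itself.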
Asparagus stem blight, also known as "asparagus cancer", is a serious plant disease with a regional distribution. Its widespread occurrence has had a negative impact on the yield and quality of asparagus and has become one of the main problems threatening asparagus production. To improve the ability to accurately identify and localize phenotypic lesions of stem blight in asparagus and to enhance detection accuracy, a YOLOv8-CBAM detection algorithm for asparagus stem blight based on YOLOv8 is proposed. The algorithm aims to achieve rapid detection of phenotypic images of asparagus stem blight and to provide effective assistance in controlling the disease. To enhance the model's capacity to capture subtle lesion features, the Convolutional Block Attention Module (CBAM) is added after C2f in the head. Simultaneously, the original CIoU loss function in YOLOv8 is replaced with the Focal-EIoU loss function, ensuring that the updated loss function emphasizes higher-quality bounding boxes. The YOLOv8-CBAM algorithm can effectively detect asparagus stem blight phenotypic images with a mean average precision (mAP) of 95.51%, which is 0.22%, 14.99%, 1.77%, and 5.71% higher than the YOLOv5, YOLOv7, YOLOv8, and Mask R-CNN models, respectively. This greatly enhances the efficiency of asparagus growers in identifying asparagus stem blight, aids in improving its prevention and control, and is crucial for the application of computer vision in agriculture.
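The Focal-EIoU idea can be illustrated with a minimal single-pair sketch: EIoU adds center-distance, width, and height penalty terms to the IoU loss, and the focal weighting scales the loss by a power of the IoU so high-quality boxes contribute more. Boxes are axis-aligned (x1, y1, x2, y2); the focal exponent and exact formulation here are illustrative, not necessarily the paper's.

```python
def eiou_loss(box, gt, gamma=0.5):
    """Focal-EIoU loss for one predicted box against one ground-truth box."""
    # Intersection and union
    ix = max(0.0, min(box[2], gt[2]) - max(box[0], gt[0]))
    iy = max(0.0, min(box[3], gt[3]) - max(box[1], gt[1]))
    inter = ix * iy
    area_b = (box[2] - box[0]) * (box[3] - box[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_b + area_g - inter)
    # Smallest enclosing box: width cw, height ch, squared diagonal c2
    cw = max(box[2], gt[2]) - min(box[0], gt[0])
    ch = max(box[3], gt[3]) - min(box[1], gt[1])
    c2 = cw ** 2 + ch ** 2
    # Center distance plus width/height differences (the extra EIoU terms)
    dx = (box[0] + box[2]) / 2 - (gt[0] + gt[2]) / 2
    dy = (box[1] + box[3]) / 2 - (gt[1] + gt[3]) / 2
    wd = (box[2] - box[0]) - (gt[2] - gt[0])
    hd = (box[3] - box[1]) - (gt[3] - gt[1])
    eiou = 1 - iou + (dx**2 + dy**2) / c2 + wd**2 / cw**2 + hd**2 / ch**2
    return iou ** gamma * eiou   # focal weighting emphasizes high-IoU boxes
```

A perfectly aligned pair yields zero loss, while any offset or size mismatch is penalized through the explicit distance, width, and height terms rather than only through the IoU.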
Complex plasma widely exists in thin film deposition, material surface modification, and waste gas treatment in industrial plasma processes. During complex plasma discharge, the configuration, distribution, and size of particles, as well as the discharge glow, strongly depend on the discharge parameters. However, traditional manual methods for recognizing discharge parameters from discharge images are complicated to operate, have low accuracy, are time-consuming, and place high demands on instruments. To solve these problems, by combining two mechanisms, the attention mechanism (strengthening the extraction of channel features) and the shortcut connection (enabling input information to be transmitted directly to deep layers and avoiding vanishing or exploding gradients), a network of squeeze-and-excitation convolution with shortcut (SECS) for complex plasma image recognition is proposed to effectively improve model performance. The results show that the accuracy, precision, recall and F1-score of our model are superior to those of other models in complex plasma image recognition, with a recognition accuracy of 97.38%. Moreover, the recognition accuracy on the publicly available Flowers and Chest X-ray datasets reaches 97.85% and 98.65%, respectively, demonstrating the robustness of our model. This study shows that the proposed model provides a new method for the diagnosis of complex plasma images and offers technical support for the application of plasma in industrial production.
Expanding photovoltaic (PV) resources in rural-grid areas is an essential means to augment the share of solar energy in the energy landscape, aligning with the "carbon peaking and carbon neutrality" objectives. However, rural power grids often lack digitalization; thus, the load distribution within these areas is not fully known. This hinders the calculation of the available PV capacity and deduction of node voltages. This study proposes a load-distribution modeling approach based on remote-sensing image recognition in pursuit of a scientific framework for developing distributed PV resources in rural grid areas. First, houses in remote-sensing images are accurately recognized using deep-learning techniques based on the YOLOv5 model. The distribution of the houses is then used to estimate the load distribution in the grid area. Next, equally spaced and clustered distribution models are used to adaptively determine the location of the nodes and load power in the distribution lines. Finally, by calculating the connectivity matrix of the nodes, a minimum spanning tree is extracted, the topology of the network is constructed, and the node parameters of the load-distribution model are calculated. The proposed scheme is implemented in a software package and its efficacy is demonstrated by analyzing typical remote-sensing images of rural grid areas. The results underscore the ability of the proposed approach to effectively discern the distribution-line structure and compute the node parameters, thereby offering vital support for determining PV access capability.
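The last step, extracting a minimum spanning tree from the node connectivity matrix to obtain the line topology, can be sketched with Prim's algorithm. The node distances below are made up for illustration; the paper's actual distance metric and node set come from the recognized house clusters.

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm on a symmetric distance matrix; returns tree edges."""
    n = len(dist)
    in_tree = {0}                 # start the tree from node 0
    edges = []
    while len(in_tree) < n:
        # Cheapest edge crossing from the tree to a node not yet in it
        best = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy pairwise distances for four load nodes on a distribution line
d = [[0, 1, 4, 6],
     [1, 0, 2, 5],
     [4, 2, 0, 3],
     [6, 5, 3, 0]]
tree = minimum_spanning_tree(d)   # -> [(0, 1), (1, 2), (2, 3)]
```

An n-node spanning tree always has n - 1 edges, which matches the radial structure typical of rural distribution lines.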
Objective To build a dataset encompassing a large number of stained tongue coating images and process it using deep learning to automatically recognize stained tongue coating images. Methods A total of 1001 images of stained tongue coating from healthy students at Hunan University of Chinese Medicine and 1007 images of pathological (non-stained) tongue coating from hospitalized patients with lung cancer, diabetes, and hypertension at The First Hospital of Hunan University of Chinese Medicine were collected. The tongue images were randomized into training, validation, and testing datasets in a 7:2:1 ratio. A deep learning model was constructed using ResNet50 for recognizing stained tongue coating in the training and validation datasets. The training period was 90 epochs. The model's performance was evaluated by its accuracy, loss curve, recall, F1 score, confusion matrix, receiver operating characteristic (ROC) curve, and precision-recall (PR) curve in the task of predicting stained tongue coating images in the testing dataset. The accuracy of the deep learning model was compared with that of attending physicians of traditional Chinese medicine (TCM). Results The training results showed that after 90 epochs, the model presented an excellent classification performance. The loss curve and accuracy were stable, showing no signs of overfitting. The model achieved an accuracy, recall, and F1 score of 92%, 91%, and 92%, respectively. The confusion matrix revealed an accuracy of 92% for the model and 69% for TCM practitioners. The areas under the ROC and PR curves were 0.97 and 0.95, respectively. Conclusion The deep learning model constructed using ResNet50 can effectively recognize stained tongue coating images with greater accuracy than visual inspection by TCM practitioners. This model has the potential to assist doctors in identifying false tongue coating and preventing misdiagnosis.
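The reported metrics follow directly from the confusion matrix; a minimal sketch of the definitions, using hypothetical counts chosen only to illustrate the arithmetic (they are not the study's actual confusion matrix):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, recall and F1 score from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, recall, f1

# Hypothetical counts for a stained / non-stained test split of 200 images
acc, rec, f1 = binary_metrics(tp=91, fp=7, fn=9, tn=93)
```

With these toy counts the three metrics round to 0.92, 0.91, and 0.92, the same shape of result reported in the abstract.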
This study delves into the applications, challenges, and future directions of deep learning techniques in the field of image recognition. Deep learning, particularly Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs), has become key to enhancing the precision and efficiency of image recognition. These models are capable of processing complex visual data, facilitating efficient feature extraction and image classification. However, acquiring and annotating high-quality, diverse datasets, addressing imbalances in datasets, and model training and optimization remain significant challenges in this domain. The paper proposes strategies for improving data augmentation, optimizing model architectures, and employing automated model optimization tools to address these challenges, while also emphasizing the importance of considering ethical issues in technological advancements. As technology continues to evolve, the application of deep learning in image recognition will further demonstrate its potent capability to solve complex problems, driving society towards more inclusive and diverse development.
With the arrival of new data acquisition platforms derived from the Internet of Things (IoT), this paper goes beyond traditional remote sensing technologies. The deep fusion of remote sensing and computer vision has reached industry and makes it possible to apply artificial intelligence to problems such as automatic information extraction and image interpretation. However, due to the complex architecture of IoT and the lack of a unified security protection mechanism, devices in remote sensing are vulnerable to privacy leaks when sharing data. Since traditional encryption methods are based on computational complexity, it is necessary to design a security scheme suitable for computation-limited IoT devices. Visual Cryptography (VC) is a threshold scheme for images that can be decoded directly by the human visual system when the encrypted images are superimposed. The stacking-to-see feature and simple Boolean decryption operation make VC an ideal solution for privacy-preserving recognition of large-scale remote sensing images in IoT. In this study, the secure and efficient transmission of high-resolution remote sensing images by meaningful VC is achieved. By diffusing the error between the encryption block and the original block to adjacent blocks, the quality degradation of recovered images is mitigated. By fine-tuning a model pre-trained on large-scale datasets, we improve the recognition performance on small encrypted remote sensing image datasets. The experimental results show that the proposed lightweight privacy-preserving recognition framework maintains high recognition performance while enhancing security.
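The stacking-to-see property can be demonstrated with the classic (2, 2) visual cryptography scheme, a textbook construction with 1x2 subpixel expansion, not the meaningful-VC scheme of the paper. Each share alone is random noise; overlaying the two (a pixel-wise OR) reveals the secret as a contrast difference.

```python
import random

# The two complementary 1x2 subpixel patterns of the classic (2, 2) scheme
PATTERNS = [(0, 1), (1, 0)]   # 1 = black subpixel

def make_shares(secret, rng=random.Random(7)):
    """Split a binary image (list of 0/1 rows) into two noise-like shares."""
    s1, s2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            p = rng.choice(PATTERNS)
            r1.extend(p)
            # Black pixel: complementary pattern, so the stack is all black.
            # White pixel: identical pattern, so the stack is half black.
            r2.extend((1 - p[0], 1 - p[1]) if pixel else p)
        s1.append(r1)
        s2.append(r2)
    return s1, s2

def stack(s1, s2):
    """Physically overlaying transparencies is a pixel-wise OR."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0], [0, 1]]
sh1, sh2 = make_shares(secret)
recovered = stack(sh1, sh2)
```

Decryption needs no computation at all, which is exactly why VC suits computation-limited IoT devices.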
The challenge faced by visually impaired persons in their day-to-day lives is interpreting text from documents. In this context, the objective of this work is to develop an efficient text recognition system that allows the isolation, extraction, and recognition of text in documents with a textured background, degraded colors, and poor quality, and to synthesize it into speech. This system consists of three algorithms: a text localization and detection algorithm based on the mathematical morphology method (MMM); a text extraction algorithm based on the gamma correction method (GCM); and an optical character recognition (OCR) algorithm for text recognition. A detailed complexity study of the different blocks of this text recognition system has been carried out. Following this study, an accelerated version of the GCM algorithm (AGCM) is proposed. The AGCM algorithm reduces the complexity of the text recognition system by 70% while keeping the same text recognition quality as the original method. To assist visually impaired persons, a graphical interface for the entire text recognition chain has been developed, allowing the capture of images from a camera, rapid and intuitive visualization of the text recognized from the image, and text-to-speech synthesis. Our text recognition system provides an improvement of 6.8% in recognition rate and 7.6% in F-measure relative to the GCM and AGCM algorithms.
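Gamma correction itself is a one-line power-law mapping; a minimal sketch of that transform (the full GCM extraction pipeline in the paper involves considerably more than this single step):

```python
def gamma_correct(pixel, gamma):
    """Map a [0, 255] intensity through the power-law curve I' = 255*(I/255)**gamma.
    gamma < 1 brightens dark, low-contrast regions; gamma > 1 darkens them."""
    return round(255 * (pixel / 255) ** gamma)

dark_row = [10, 40, 90, 200]                         # a dim scanline
brightened = [gamma_correct(p, 0.5) for p in dark_row]
```

Stretching the dark end of the intensity range in this way is what lets faint text stand out from a textured or degraded background before binarization.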
As the COVID-19 epidemic spread across the globe, people around the world were advised or mandated to wear masks in public places to prevent it from spreading further. In some cases, not wearing a mask could result in a fine. To monitor mask wearing, and to prevent the spread of future epidemics, this study proposes an image recognition system consisting of a camera, an infrared thermal array sensor, and a convolutional neural network trained in mask recognition. The infrared sensor monitors body temperature and displays the results in real time on a liquid crystal display screen. The proposed system reduces the inefficiency of traditional object detection by providing training data tailored to the specific needs of the user and by applying You Only Look Once version 4 (YOLOv4) object detection technology, which experiments show has more efficient training parameters and higher accuracy in object recognition. All datasets are uploaded to the cloud for storage using Google Colaboratory, saving human resources and achieving high efficiency at low cost.
Congenital heart defects, accounting for about 30% of congenital defects, are the most common type. Data show that congenital heart defects have seriously affected the birth rate of healthy newborns. In fetal and neonatal cardiology, medical imaging technology (2D ultrasound, MRI) has proved helpful in detecting congenital defects of the fetal heart and assists sonographers in prenatal diagnosis. Manually recognizing the 2D fetal heart ultrasonic standard plane (FHUSP) is a highly complex task. Compared with manual identification, automatic identification through artificial intelligence can save a lot of time, ensure diagnostic efficiency, and improve diagnostic accuracy. In this study, a feature extraction method based on texture features (Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG)) combined with the Bag of Words (BOW) model is carried out, and feature fusion is then performed. Finally, a Support Vector Machine (SVM) is adopted to realize automatic recognition and classification of FHUSP. The data include 788 standard plane samples and 448 normal and abnormal plane samples. Compared with other methods and single-feature models, the classification accuracy of our model is markedly improved, with the highest accuracy reaching 87.35%. We also verify the performance of the model on normal and abnormal planes; the average accuracy in classifying abnormal and normal planes is 84.92%. The experimental results show that this method can effectively classify and predict different FHUSP and can provide assistance for sonographers in diagnosing fetal congenital heart disease.
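The LBP texture descriptor thresholds each pixel's eight neighbours against the centre and packs the results into one byte; a minimal sketch on a toy patch (bit-ordering conventions vary, and the clockwise-from-top-left ordering here is just one common choice):

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour Local Binary Pattern code of pixel (r, c)."""
    centre = img[r][c]
    # Clockwise from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:  # neighbour at least as bright -> 1
            code |= 1 << bit
    return code

patch = [[5, 9, 1],
         [4, 6, 7],
         [2, 3, 8]]
code = lbp_code(patch, 1, 1)   # neighbours 9, 7, 8 exceed centre 6 -> bits 1, 3, 4
```

A histogram of such codes over an image region is the texture feature that is then fed, alongside HOG, into the BOW model.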
Optical Character Recognition (OCR) refers to technology that uses image processing and character recognition algorithms to identify characters in an image. This paper is a deep study of the recognition performance of OCR based on Artificial Intelligence (AI) algorithms, in which the different AI algorithms for OCR analysis are classified and reviewed. Firstly, the mechanisms and characteristics of artificial neural network-based OCR are summarized. Secondly, this paper explores machine learning-based OCR and concludes that, despite good recognition performance and relatively high accuracy, the algorithms available for this form of OCR are still in their infancy, with low generalization and fixed recognition errors. Finally, this paper explores several of the latest approaches, such as deep learning and pattern recognition algorithms, and concludes that OCR requires algorithms with higher recognition accuracy.
In this paper, artificial intelligence image recognition technology is used to improve the recognition rate of individual domestic fish and reduce the recognition time, addressing the problem that it is difficult to observe the species and growth of domestic fish in an underwater non-uniform light field environment. First, starting from image data collected by polarization imaging technology, this paper uses subpixel convolution reconstruction to enhance the images, uses image translation and filling to build the domestic fish database, and builds the Adam-Dropout-CNN (A-D-CNN) network model, whose convolution kernel size is 3 × 3. Max pooling is used for downsampling, and a dropout operation is added after the fully connected layer to avoid network overfitting. The adaptive moment estimation (Adam) algorithm is used to address the sparse-gradient problem. Experiments show that the recognition rate of A-D-CNN is 96.97% when the model is trained on the domestic fish image database, which solves the problem of low recognition rate and slow recognition speed of domestic fish in a non-uniform light field.
The monitoring system designed in this paper is based on YOLOv5 (You Only Look Once) to monitor foreign objects on railway tracks and can broadcast the monitoring information to the locomotive in real time. First, the general structure of the system is determined through demand and feasibility analysis, the foreign object intrusion recognition algorithm is designed, and the dataset required for foreign object intrusion recognition is built. Secondly, according to the functional demands, a suitable neural network is selected and implemented. Finally, the system is simulated to validate its functionality (identification and classification of track intrusions and determination of a safe operating zone).
The fine-grained ship image recognition task aims to identify various classes of ships. However, small inter-class differences, large intra-class differences between ships, and the lack of training samples make the task difficult. Therefore, to enhance the accuracy of fine-grained ship image recognition, we design a fine-grained ship image recognition network based on a bilinear convolutional neural network (BCNN) with Inception and additive margin Softmax (AM-Softmax). This network improves the BCNN in two aspects. Firstly, introducing Inception branches into the BCNN network helps enhance the ability to extract comprehensive ship features. Secondly, by adding margin values to the decision boundary, the AM-Softmax function can better extend the inter-class differences and reduce the intra-class differences. In addition, as there are few publicly available datasets for fine-grained ship image recognition, we construct a Ship-43 dataset containing 47,300 ship images belonging to 43 categories. Experimental results on the constructed Ship-43 dataset demonstrate that our method can effectively improve the accuracy of ship image recognition, which is 4.08% higher than the BCNN model. Moreover, comparison results on three other public fine-grained datasets (Cub, Cars, and Aircraft) further validate the effectiveness of the proposed method.
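The additive-margin idea is that a margin m is subtracted from the target-class cosine logit before the scaled softmax, so a sample must beat the other classes by at least m, tightening intra-class clusters. A minimal sketch, with margin and scale values chosen for illustration rather than taken from the paper:

```python
import math

def am_softmax_prob(logits, target, margin=0.35, scale=30.0):
    """Probability of the target class under additive margin Softmax.
    logits are assumed to be cosine similarities in [-1, 1]."""
    adjusted = [(z - margin) if i == target else z for i, z in enumerate(logits)]
    exps = [math.exp(scale * z) for z in adjusted]
    return exps[target] / sum(exps)

cosines = [0.8, 0.3, 0.1]       # toy per-class similarities for one sample
p_plain = am_softmax_prob(cosines, 0, margin=0.0)    # ordinary scaled softmax
p_margin = am_softmax_prob(cosines, 0, margin=0.35)  # margin makes it harder
```

Because the margin lowers the target probability at training time, the network is pushed to enlarge the true-class similarity gap, which is what widens inter-class separation at test time.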
Traditional synthetic aperture radar (SAR) image recognition techniques focus on the electromagnetic (EM) scattering centers, ignoring the important role of shadow information in SAR image recognition. It is difficult to classify targets by shadow information alone, because the shadow shape depends on the radar aspect angle, the depression angle and the resolution. Moreover, the shadow shapes of different targets are similar. When multiple SAR images of one target from different aspects are available, recognition performance can be improved. To this end, a multi-aspect SAR image recognition technique based on shadow information is developed. It extracts shadow profiles from SAR images and takes chain codes as the feature vectors of targets. Then, feature vectors from multiple aspects of the same target are combined into feature sequences, and a hidden Markov model (HMM) is applied to the feature sequences for target recognition. The simulation results show the effectiveness of the method.
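Chain-code features encode a traced boundary as a sequence of discrete step directions; a minimal Freeman 8-direction sketch (the paper's actual shadow-profile extraction from SAR imagery is not reproduced here):

```python
# Freeman 8-direction chain codes: 0=E, 1=NE, 2=N, ..., 7=SE (x right, y up)
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(profile):
    """Encode an ordered boundary polyline as a chain-code sequence."""
    codes = []
    for (x0, y0), (x1, y1) in zip(profile, profile[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# Toy shadow boundary traced point by point
boundary = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 0)]
codes = chain_code(boundary)   # -> [0, 0, 2, 4, 5]
```

Such symbol sequences, one per aspect angle, are a natural observation stream for an HMM, which is why the method concatenates them across aspects before decoding.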
The task of food image recognition, a nuanced subset of fine-grained image recognition, grapples with substantial intra-class variation and minimal inter-class differences. These challenges are compounded by the irregular and multi-scale nature of food images. Addressing these complexities, our study introduces an advanced model that leverages multiple attention mechanisms and multi-stage local fusion, grounded in the ConvNeXt architecture. Our model employs hybrid attention (HA) mechanisms to pinpoint critical discriminative regions within images, substantially mitigating the influence of background noise. Furthermore, it introduces a multi-stage local fusion (MSLF) module, fostering long-distance dependencies between feature maps at varying stages. This approach facilitates the assimilation of complementary features across scales, significantly bolstering the model's capacity for feature extraction. We also constructed a dataset named Roushi60, which consists of 60 different categories of common meat dishes. Empirical evaluation on the ETH Food-101, ChineseFoodNet, and Roushi60 datasets reveals that our model achieves recognition accuracies of 91.12%, 82.86%, and 92.50%, respectively. These figures not only mark an improvement of 1.04%, 3.42%, and 1.36% over the foundational ConvNeXt network but also surpass the performance of most contemporary food image recognition methods. Such advancements underscore the efficacy of our proposed model in navigating the intricate landscape of food image recognition, setting a new benchmark for the field.
This research focuses on addressing the challenges associated with image detection in low-light environments, particularly by applying artificial intelligence techniques to machine vision and object recognition systems. The primary goal is to tackle issues related to recognizing objects with low brightness levels. In this study, the Intel RealSense LiDAR Camera L515 is used to simultaneously capture color images and 16-bit depth images. The detection scenarios are categorized into normal-brightness and low-brightness situations. When the system determines a normal-brightness environment, images are recognized using deep learning methods. In low-brightness situations, three methods are proposed for recognition. The first is the Segmentation with Depth image (SD) method, which involves segmenting the depth image, creating a mask from the segmented depth image, mapping the obtained mask onto the true color (RGB) image to obtain a background-reduced RGB image, and recognizing the segmented image. The second is the HDV method (hue, depth, value), which combines RGB images converted to HSV images (hue, saturation, value) with depth images D to form HDV images for recognition. The third is the HSD method (hue, saturation, depth), which similarly combines RGB images converted to HSV images with depth images D to form HSD images for recognition. In the experiments, in normal-brightness environments, the average recognition rate obtained using image recognition methods is 91%. For low-brightness environments, the SD method, using original images for training and segmented images for recognition, achieves an average recognition rate of over 82%. The HDV method achieves an average recognition rate of over 70%, while the HSD method achieves an average recognition rate of over 84%. The HSD method allows for a quick and convenient low-light object recognition system. This research outcome can be applied to nighttime surveillance systems or nighttime road safety systems.
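Constructing an HSD pixel amounts to an RGB-to-HSV conversion with the value channel swapped for normalized depth; a minimal per-pixel sketch using Python's standard-library colorsys (the 8-bit RGB and 16-bit depth normalization constants are assumptions about the sensor data, not taken from the paper):

```python
import colorsys

def to_hsd(rgb_pixel, depth):
    """Replace the V channel of an HSV pixel with normalized depth (HSD)."""
    r, g, b = (c / 255 for c in rgb_pixel)     # 8-bit RGB -> [0, 1]
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)     # value channel discarded
    return (h, s, depth / 65535)               # 16-bit depth -> [0, 1]

pixel = to_hsd((200, 60, 60), depth=32768)     # reddish pixel at mid-range depth
```

The appeal of this substitution is that depth is unaffected by darkness, so the third channel stays informative when brightness-derived value collapses.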
In standard iris recognition systems, a cooperative imaging framework is employed that includes a light source with a near-infrared wavelength to reveal iris texture, look-and-stare constraints, and a close distance requirement to the capture device. When these conditions are relaxed, the system's performance significantly deteriorates due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer ring region for local minima to identify the darkest area in the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32 on UBIRIS.v1 and an error rate of 1.59 on UBIRIS.v2.
Sparse representation is an effective data classification algorithm that depends on known training samples to categorise a test sample. It has been widely used in various image classification tasks. Sparseness in sparse representation means that only a few of the instances selected from all training samples can effectively convey the essential class-specific information of the test sample, which is very important for classification. For deformable images such as human faces, pixels at the same location in different images of the same subject usually have different intensities. Therefore, extracting features from and correctly classifying such deformable objects is very hard. Moreover, lighting, pose and occlusion cause further difficulty. Considering the problems and challenges listed above, a novel image representation and classification algorithm is proposed. First, the authors' algorithm generates virtual samples by a non-linear variation method. This method can effectively extract the low-frequency information of the spatial-domain features of the original image, which is very useful for representing deformable objects. The combination of the original and virtual samples is more beneficial for improving the classification performance and robustness of the algorithm. The authors' algorithm then calculates the representation coefficients of the original and virtual samples separately using the sparse representation principle and obtains the final score by an efficient score fusion scheme. The weighting coefficients in the score fusion scheme are set entirely automatically. Finally, the algorithm classifies the samples based on the final scores. The experimental results show that our method performs better than conventional sparse representation algorithms.
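The representation-plus-score-fusion pipeline can be sketched with ordinary least-squares class residuals standing in for true sparse coding, and a fixed fusion weight standing in for the automatic weighting (both are simplifications of the paper's method):

```python
import numpy as np

def class_scores(train, labels, test):
    """Per-class reconstruction residual of the test sample (smaller = better)."""
    scores = {}
    for c in set(labels):
        # Columns are the training samples of class c
        a = np.stack([x for x, l in zip(train, labels) if l == c], axis=1)
        coef, *_ = np.linalg.lstsq(a, test, rcond=None)
        scores[c] = float(np.linalg.norm(test - a @ coef))
    return scores

def fuse(s_orig, s_virt, w=0.6):
    """Weighted sum of original-sample and virtual-sample scores."""
    return {c: w * s_orig[c] + (1 - w) * s_virt[c] for c in s_orig}

train = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0]),   # class 0
         np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.1, 0.9])]   # class 1
labels = [0, 0, 1, 1]
test = np.array([0.95, 0.05, 0.0])

s_orig = class_scores(train, labels, test)
s_virt = class_scores(train, labels, test)  # stand-in for virtual-sample scores
fused = fuse(s_orig, s_virt)
predicted = min(fused, key=fused.get)       # class with the smallest residual
```

The test sample lies in the span of the class-0 training columns, so its class-0 residual is near zero while the class-1 residual stays large, and the fused score picks class 0.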
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Abstract: Fine-grained recognition of ships based on remote sensing images is crucial to safeguarding maritime rights and interests and maintaining national security. Currently, with the emergence of massive high-resolution multi-modality images, the use of multi-modality images for fine-grained recognition has become a promising technology. Fine-grained recognition of multi-modality images imposes higher requirements on dataset samples. The key problem is how to extract and fuse the complementary features of multi-modality images to obtain more discriminative fusion features. The attention mechanism helps the model pinpoint the key information in an image, resulting in a significant improvement in the model's performance. In this paper, a dataset for fine-grained recognition of ships based on visible and near-infrared multi-modality remote sensing images, named the Dataset for Multimodal Fine-grained Recognition of Ships (DMFGRS), is proposed first. It includes 1,635 pairs of visible and near-infrared remote sensing images divided into 20 categories, collated from digital orthophoto models provided by commercial remote sensing satellites. DMFGRS provides two types of annotation format files, as well as segmentation mask images corresponding to the ship targets. Then, a Multimodal Information Cross-Enhancement Network (MICE-Net), which fuses features of visible and near-infrared remote sensing images, is proposed. In the network, a dual-branch feature extraction and fusion module is designed to obtain more expressive features. The Feature Cross Enhancement Module (FCEM) achieves fusion enhancement of the two modal features by making channel attention and spatial attention work cross-functionally on the feature maps. A benchmark is established by evaluating state-of-the-art object recognition algorithms on DMFGRS. In experiments on DMFGRS, MICE-Net achieves precision, recall, mAP0.5 and mAP0.5:0.95 of 87%, 77.1%, 83.8% and 63.9%, respectively. Extensive experiments demonstrate that the proposed MICE-Net performs excellently on DMFGRS. Built on the lightweight YOLO network, the model generalizes well and thus has good potential for application in real-life scenarios.
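The cross-functional attention idea in FCEM can be sketched in miniature. The toy below is an assumption-laden illustration, not the published network: attention maps are mean-pooled rather than learned, and each modality's feature map is gated by the channel and spatial attention computed from the *other* modality before the two enhanced maps are summed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); squeeze spatial dims, gate channels
    pooled = feat.mean(axis=(1, 2))            # (C,)
    return sigmoid(pooled)[:, None, None]      # (C, 1, 1)

def spatial_attention(feat):
    # feat: (C, H, W); pool over channels, gate each location
    pooled = feat.mean(axis=0)                 # (H, W)
    return sigmoid(pooled)[None, :, :]         # (1, H, W)

def cross_enhance(vis, nir):
    # Cross scheme (assumed): each modality is reweighted by
    # attention maps computed from the other modality.
    vis_out = vis * channel_attention(nir) * spatial_attention(nir)
    nir_out = nir * channel_attention(vis) * spatial_attention(vis)
    return vis_out + nir_out                   # fused feature map

rng = np.random.default_rng(0)
vis = rng.standard_normal((8, 4, 4))           # toy visible features
nir = rng.standard_normal((8, 4, 4))           # toy near-infrared features
fused = cross_enhance(vis, nir)
```

The fused map keeps the input shape, so it can feed a standard detection head downstream.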
Funding: Supported by the Feicheng Artificial Intelligence Robot and Smart Agriculture Service Platform (381387).
Abstract: Asparagus stem blight, also known as "asparagus cancer", is a serious plant disease with a regional distribution. The widespread occurrence of the disease has had a negative impact on the yield and quality of asparagus and has become one of the main problems threatening asparagus production. To improve the ability to accurately identify and localize phenotypic lesions of stem blight in asparagus and to enhance detection accuracy, a YOLOv8-CBAM detection algorithm for asparagus stem blight based on YOLOv8 is proposed. The algorithm aims to achieve rapid detection of phenotypic images of asparagus stem blight and to provide effective assistance in the control of asparagus stem blight. To enhance the model's capacity to capture subtle lesion features, the Convolutional Block Attention Module (CBAM) is added after C2f in the head. Simultaneously, the original CIoU loss function in YOLOv8 is replaced with the Focal-EIoU loss function, ensuring that the updated loss function emphasizes higher-quality bounding boxes. The YOLOv8-CBAM algorithm can effectively detect asparagus stem blight phenotypic images with a mean average precision (mAP) of 95.51%, which is 0.22%, 14.99%, 1.77%, and 5.71% higher than the YOLOv5, YOLOv7, YOLOv8, and Mask R-CNN models, respectively. This greatly enhances the efficiency of asparagus growers in identifying asparagus stem blight, aids in improving the prevention and control of asparagus stem blight, and is crucial for the application of computer vision in agriculture.
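The Focal-EIoU substitution can be illustrated on a single box pair. The sketch below is a simplified single-box formulation of the EIoU family with a common focal weight `iou ** gamma`, not YOLOv8's batched implementation:

```python
def focal_eiou_loss(box_p, box_t, gamma=0.5):
    """Focal-EIoU between two axis-aligned boxes (x1, y1, x2, y2).
    Penalises the IoU gap plus centre, width, and height differences,
    then up-weights higher-IoU (higher-quality) boxes."""
    # Intersection and union for plain IoU
    ix1, iy1 = max(box_p[0], box_t[0]), max(box_p[1], box_t[1])
    ix2, iy2 = min(box_p[2], box_t[2]), min(box_p[3], box_t[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wt, ht = box_t[2] - box_t[0], box_t[3] - box_t[1]
    union = wp * hp + wt * ht - inter
    iou = inter / union
    # Smallest enclosing box, used to normalise the three penalties
    cw = max(box_p[2], box_t[2]) - min(box_p[0], box_t[0])
    ch = max(box_p[3], box_t[3]) - min(box_p[1], box_t[1])
    cx_p, cy_p = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cx_t, cy_t = (box_t[0] + box_t[2]) / 2, (box_t[1] + box_t[3]) / 2
    centre_gap = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    eiou = (1 - iou
            + centre_gap / (cw ** 2 + ch ** 2)
            + (wp - wt) ** 2 / cw ** 2
            + (hp - ht) ** 2 / ch ** 2)
    return iou ** gamma * eiou          # focal weighting

perfect = focal_eiou_loss((0, 0, 10, 10), (0, 0, 10, 10))  # identical boxes
shifted = focal_eiou_loss((2, 2, 12, 12), (0, 0, 10, 10))  # offset prediction
```

A perfectly matched box yields zero loss, while any offset or shape mismatch adds penalty terms even when the boxes still overlap.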
Funding: This study was supported by a grant from the National Natural Science Foundation of China (No. 12075315).
Abstract: Complex plasma widely exists in thin film deposition, material surface modification, and waste gas treatment in industrial plasma processes. During complex plasma discharge, the configuration, distribution, and size of particles, as well as the discharge glow, strongly depend on discharge parameters. However, traditional manual diagnosis methods for recognizing discharge parameters from discharge images are complicated to operate, with low accuracy, long processing times, and demanding instrument requirements. To solve these problems, by combining two mechanisms, the attention mechanism (strengthening the extraction of channel features) and the shortcut connection (enabling input information to be transmitted directly to deep layers, avoiding vanishing or exploding gradients), a network of squeeze-and-excitation convolution with shortcut (SECS) for complex plasma image recognition is proposed to effectively improve model performance. The results show that the accuracy, precision, recall and F1-score of the model are superior to other models in complex plasma image recognition, and the recognition accuracy reaches 97.38%. Moreover, the recognition accuracy on the publicly available Flowers and Chest X-ray datasets reaches 97.85% and 98.65%, respectively, demonstrating the model's robustness. This study shows that the proposed model provides a new method for the diagnosis of complex plasma images and also provides technical support for the application of plasma in industrial production.
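The two combined mechanisms can be sketched together in a few lines. This is a toy squeeze-and-excitation block with a residual shortcut under assumed shapes and random toy MLP weights, not the SECS architecture itself:

```python
import numpy as np

def se_shortcut(feat, w1, w2):
    """Squeeze-and-Excitation with a residual shortcut (SECS-style sketch).
    feat: (C, H, W); w1: (C, C//r) and w2: (C//r, C) are toy MLP weights."""
    z = feat.mean(axis=(1, 2))                 # squeeze: global average pool
    s = np.maximum(z @ w1, 0.0)                # excitation MLP, ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))        # sigmoid channel weights
    recalibrated = feat * s[:, None, None]     # scale each channel
    return feat + recalibrated                 # shortcut: input passes through

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 3, 3))          # 4 channels, 3x3 toy map
w1 = rng.standard_normal((4, 2))               # reduction ratio r = 2
w2 = rng.standard_normal((2, 4))
out = se_shortcut(feat, w1, w2)
```

Because the input is added back unchanged, gradients always have a direct path to earlier layers, which is the point of the shortcut.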
Funding: Supported by the State Grid Science & Technology Project of China (5400-202224153A-1-1-ZN).
Abstract: Expanding photovoltaic (PV) resources in rural-grid areas is an essential means to augment the share of solar energy in the energy landscape, aligning with the "carbon peaking and carbon neutrality" objectives. However, rural power grids often lack digitalization; thus, the load distribution within these areas is not fully known. This hinders the calculation of the available PV capacity and the deduction of node voltages. This study proposes a load-distribution modeling approach based on remote-sensing image recognition in pursuit of a scientific framework for developing distributed PV resources in rural grid areas. First, houses in remote-sensing images are accurately recognized using deep-learning techniques based on the YOLOv5 model. The distribution of the houses is then used to estimate the load distribution in the grid area. Next, equally spaced and clustered distribution models are used to adaptively determine the location of the nodes and the load power in the distribution lines. Finally, by calculating the connectivity matrix of the nodes, a minimum spanning tree is extracted, the topology of the network is constructed, and the node parameters of the load-distribution model are calculated. The proposed scheme is implemented in a software package and its efficacy is demonstrated by analyzing typical remote-sensing images of rural grid areas. The results underscore the ability of the proposed approach to effectively discern the distribution-line structure and compute the node parameters, thereby offering vital support for determining PV access capability.
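The minimum-spanning-tree step can be shown generically. The sketch below runs Prim's algorithm over a dense symmetric distance matrix standing in for the node connectivity matrix; it is a textbook illustration of the step described, not the authors' software package:

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm on a dense symmetric distance matrix.
    Returns the tree as a list of (parent, child) node-index edges."""
    n = len(dist)
    in_tree = [False] * n
    in_tree[0] = True                       # grow the tree from node 0
    edges = []
    while len(edges) < n - 1:
        best = None
        # Pick the cheapest edge leaving the current tree
        for u in range(n):
            if not in_tree[u]:
                continue
            for v in range(n):
                if in_tree[v]:
                    continue
                if best is None or dist[u][v] < dist[best[0]][best[1]]:
                    best = (u, v)
        u, v = best
        in_tree[v] = True
        edges.append((u, v))
    return edges

# Toy 4-node grid area: entry [i][j] is the line length between nodes i and j
d = [[0, 1, 4, 9],
     [1, 0, 2, 8],
     [4, 2, 0, 3],
     [9, 8, 3, 0]]
tree = minimum_spanning_tree(d)
```

The resulting edge list directly encodes a radial line topology, which is why an MST is a natural model for rural distribution feeders.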
Funding: National Natural Science Foundation of China (82274411); Science and Technology Innovation Program of Hunan Province (2022RC1021); Leading Research Project of Hunan University of Chinese Medicine (2022XJJB002).
Abstract: Objective To build a dataset encompassing a large number of stained tongue coating images and process it using deep learning to automatically recognize stained tongue coating images. Methods A total of 1001 images of stained tongue coating from healthy students at Hunan University of Chinese Medicine and 1007 images of pathological (non-stained) tongue coating from hospitalized patients with lung cancer, diabetes, and hypertension at The First Hospital of Hunan University of Chinese Medicine were collected. The tongue images were randomized into training, validation, and testing datasets in a 7:2:1 ratio. A deep learning model was constructed using ResNet50 to recognize stained tongue coating in the training and validation datasets. The training period was 90 epochs. The model's performance was evaluated by its accuracy, loss curve, recall, F1 score, confusion matrix, receiver operating characteristic (ROC) curve, and precision-recall (PR) curve on the task of predicting stained tongue coating images in the testing dataset. The accuracy of the deep learning model was compared with that of attending physicians of traditional Chinese medicine (TCM). Results The training results showed that after 90 epochs, the model presented excellent classification performance. The loss curve and accuracy were stable, showing no signs of overfitting. The model achieved an accuracy, recall, and F1 score of 92%, 91%, and 92%, respectively. The confusion matrix revealed an accuracy of 92% for the model and 69% for TCM practitioners. The areas under the ROC and PR curves were 0.97 and 0.95, respectively. Conclusion The deep learning model constructed using ResNet50 can effectively recognize stained coating images with greater accuracy than visual inspection by TCM practitioners. This model has the potential to assist doctors in identifying false tongue coating and preventing misdiagnosis.
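The 7:2:1 randomization step is easy to make concrete. The helper below is a hypothetical sketch (the function name and seed are not from the paper) that shuffles image paths reproducibly and slices them at 70% and 20%:

```python
import random

def split_dataset(paths, seed=42):
    """Randomize image paths into train/val/test at a 7:2:1 ratio.
    Hypothetical helper illustrating the split described in the abstract."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)      # reproducible shuffle
    n = len(paths)
    n_train, n_val = int(n * 0.7), int(n * 0.2)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])        # remainder goes to test

# 1001 stained + 1007 non-stained images = 2008 total
images = [f"img_{i:04d}.jpg" for i in range(2008)]
train, val, test = split_dataset(images)
```

Note that integer truncation leaves the leftover images in the test slice, so the three parts always cover the whole dataset exactly once.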
Abstract: This study delves into the applications, challenges, and future directions of deep learning techniques in the field of image recognition. Deep learning, particularly Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs), has become key to enhancing the precision and efficiency of image recognition. These models are capable of processing complex visual data, facilitating efficient feature extraction and image classification. However, acquiring and annotating high-quality, diverse datasets, addressing imbalances in datasets, and model training and optimization remain significant challenges in this domain. The paper proposes strategies for improving data augmentation, optimizing model architectures, and employing automated model optimization tools to address these challenges, while also emphasizing the importance of considering ethical issues in technological advancements. As technology continues to evolve, the application of deep learning in image recognition will further demonstrate its potent capability to solve complex problems, driving society towards more inclusive and diverse development.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62250410365 and 62071084; the Guangdong Basic and Applied Basic Research Foundation of China (2022A1515011542); and the Guangzhou Science and Technology Program of China (202201010606).
Abstract: With the arrival of new data acquisition platforms derived from the Internet of Things (IoT), this paper goes beyond the understanding of traditional remote sensing technologies. The deep fusion of remote sensing and computer vision has hit the industrial world and makes it possible to apply artificial intelligence to solve problems such as automatic extraction of information and image interpretation. However, due to the complex architecture of IoT and the lack of a unified security protection mechanism, devices in remote sensing are vulnerable to privacy leaks when sharing data. It is necessary to design a security scheme suitable for computation-limited devices in IoT, since traditional encryption methods are based on computational complexity. Visual Cryptography (VC) is a threshold scheme for images that can be decoded directly by the human visual system when superimposing encrypted images. The stacking-to-see feature and simple Boolean decryption operation make VC an ideal solution for privacy-preserving recognition of large-scale remote sensing images in IoT. In this study, the secure and efficient transmission of high-resolution remote sensing images by meaningful VC is achieved. By diffusing the error between the encryption block and the original block to adjacent blocks, the degradation of quality in recovered images is mitigated. By fine-tuning a model pre-trained on large-scale datasets, we improve the recognition performance on small encrypted datasets of remote sensing images. The experimental results show that the proposed lightweight privacy-preserving recognition framework maintains high recognition performance while enhancing security.
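The "stacking-to-see" Boolean decryption can be demonstrated with the classic (2,2) scheme on a flat binary image. This is the textbook construction, not the paper's meaningful-VC variant: each pixel expands to two subpixels per share, and physically stacking the transparencies corresponds to a per-subpixel OR.

```python
import random

def vc_encrypt(bits, rng=None):
    """(2,2) visual cryptography on a flat binary image (1 = black).
    White pixels get identical subpixel pairs in both shares; black
    pixels get complementary pairs, so stacking turns them solid black."""
    rng = rng or random.Random(7)
    share1, share2 = [], []
    for b in bits:
        pattern = rng.choice([(0, 1), (1, 0)])   # random subpixel pair
        share1.append(pattern)
        comp = (1 - pattern[0], 1 - pattern[1])
        share2.append(pattern if b == 0 else comp)
    return share1, share2

def vc_stack(share1, share2):
    # Physical stacking of transparencies = per-subpixel OR
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = vc_encrypt(secret)
stacked = vc_stack(s1, s2)
# Black secret pixels stack to (1, 1); white ones keep one clear subpixel
recovered = [1 if p == (1, 1) else 0 for p in stacked]
```

Each share on its own is a uniformly random pattern, which is why no computation-heavy decryption is needed on the device side.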
Funding: This work was funded by the Deanship of Scientific Research at Jouf University under Grant Number DSR2022-RG-0114.
Abstract: The challenge faced by visually impaired persons in their day-to-day lives is to interpret text from documents. In this context, to help these people, the objective of this work is to develop an efficient text recognition system that allows the isolation, extraction, and recognition of text in the case of documents having a textured background, a degraded aspect of colors, and poor quality, and to synthesize it into speech. This system basically consists of three algorithms: a text localization and detection algorithm based on the mathematical morphology method (MMM); a text extraction algorithm based on the gamma correction method (GCM); and an optical character recognition (OCR) algorithm for text recognition. A detailed complexity study of the different blocks of this text recognition system has been realized. Following this study, an acceleration of the GCM algorithm (AGCM) is proposed. The AGCM algorithm reduces the complexity of the text recognition system by 70% and keeps the same quality of text recognition as the original method. To assist visually impaired persons, a graphical interface for the entire text recognition chain has been developed, allowing the capture of images from a camera, rapid and intuitive visualization of the recognized text from the image, and text-to-speech synthesis. Our text recognition system provides an improvement of 6.8% in the recognition rate and 7.6% in the F-measure relative to the GCM and AGCM algorithms.
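The core of any gamma-correction-based extraction step is the power-law intensity remapping. The sketch below shows the generic transform with a lookup table (the usual acceleration trick); the specific gamma value and the paper's AGCM refinements are not reproduced here:

```python
def gamma_correct(pixels, gamma=0.5):
    """Gamma correction on 8-bit grayscale values: out = 255*(in/255)^gamma.
    gamma < 1 brightens dark, degraded regions so text stands out.
    Generic sketch of the GCM idea, not the paper's exact algorithm."""
    # Precompute a 256-entry lookup table: one power per level, not per pixel
    table = [round(255 * (v / 255) ** gamma) for v in range(256)]
    return [table[v] for v in pixels]

row = [0, 16, 64, 128, 255]
bright = gamma_correct(row)
```

Because the transform is monotonic, endpoints 0 and 255 are fixed while mid-tones shift; the lookup table makes the per-pixel cost a single array access, the kind of saving a complexity study would target.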
Abstract: As the COVID-19 epidemic spread across the globe, people around the world were advised or mandated to wear masks in public places to prevent it from spreading further. In some cases, not wearing a mask could result in a fine. To monitor mask wearing, and to prevent the spread of future epidemics, this study proposes an image recognition system consisting of a camera, an infrared thermal array sensor, and a convolutional neural network trained in mask recognition. The infrared sensor monitors body temperature and displays the results in real time on a liquid crystal display screen. The proposed system reduces the inefficiency of traditional object detection by providing training data according to the specific needs of the user and by applying You Only Look Once Version 4 (YOLOv4) object detection technology, which experiments show has more efficient training parameters and a higher level of accuracy in object recognition. All datasets are uploaded to the cloud for storage using Google Colaboratory, saving human resources and achieving a high level of efficiency at a low cost.
Funding: Supported by the Fujian Provincial Science and Technology Major Project (No. 2020HZ02014); grants from the National Natural Science Foundation of Fujian (2021J01133, 2021J011404); and the Quanzhou Scientific and Technological Planning Projects (Nos. 2018C113R, 2019C028R, 2019C029R, 2019C076R and 2019C099R).
Abstract: Congenital heart defect, accounting for about 30% of congenital defects, is the most common one. Data show that congenital heart defects have seriously affected the birth rate of healthy newborns. In fetal and neonatal cardiology, medical imaging technology (2D ultrasound, MRI) has been proved helpful in detecting congenital defects of the fetal heart and assists sonographers in prenatal diagnosis. It is a highly complex task to recognize 2D fetal heart ultrasound standard planes (FHUSP) manually. Compared with manual identification, automatic identification through artificial intelligence can save a lot of time, ensure the efficiency of diagnosis, and improve the accuracy of diagnosis. In this study, a feature extraction method based on texture features (Local Binary Pattern (LBP) and Histogram of Oriented Gradient (HOG)) combined with the Bag of Words (BOW) model is carried out, and then feature fusion is performed. Finally, a Support Vector Machine (SVM) is adopted to realize automatic recognition and classification of FHUSP. The data include a standard plane dataset of 788 images and a normal and abnormal plane dataset of 448 images. Compared with some other methods and single-method models, the classification accuracy of our model is obviously improved, with the highest accuracy reaching 87.35%. Similarly, we also verify the performance of the model on normal and abnormal planes, and the average accuracy in classifying abnormal and normal planes is 84.92%. The experimental results show that this method can effectively classify and predict different FHUSP and can provide certain assistance for sonographers in diagnosing fetal congenital heart disease.
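The LBP half of the texture descriptor is simple enough to show directly. The sketch below computes the basic 3x3 LBP code for one pixel; real pipelines histogram these codes over the image before the BOW step, and the exact neighbourhood ordering here is one common convention, not necessarily the paper's:

```python
def lbp_code(img, r, c):
    """Basic 3x3 Local Binary Pattern for pixel (r, c) of a 2-D grayscale
    grid: threshold the 8 neighbours against the centre, read clockwise
    from the top-left, most significant bit first."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:   # neighbour at least as bright
            code |= 1 << (7 - bit)
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
code = lbp_code(img, 1, 1)   # 8-bit texture code for the centre pixel
```

Here the four neighbours brighter than the centre set the low-order bits, giving the binary pattern 00011110, i.e. code 30.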
Funding: Supported by science and technology projects of Gansu State Grid Corporation of China (52272220002U).
Abstract: Optical Character Recognition (OCR) refers to a technology that uses image processing technology and character recognition algorithms to identify characters in an image. This paper is a deep study of the recognition effect of OCR based on Artificial Intelligence (AI) algorithms, in which the different AI algorithms for OCR analysis are classified and reviewed. Firstly, the mechanisms and characteristics of artificial neural network-based OCR are summarized. Secondly, this paper explores machine learning-based OCR and draws the conclusion that the algorithms available for this form of OCR are still in their infancy, with low generalization and fixed recognition errors, albeit with better recognition effect and higher recognition accuracy. Finally, this paper explores several of the latest algorithms such as deep learning and pattern recognition algorithms. This paper concludes that OCR requires algorithms with higher recognition accuracy.
Abstract: In this paper, artificial intelligence image recognition technology is used to improve the recognition rate of individual domestic fish and reduce the recognition time, aiming at the problem that it is difficult to easily observe the species and growth of domestic fish in an underwater non-uniform light field environment. First, starting from the image data collected by polarizing imaging technology, this paper uses subpixel convolution reconstruction to enhance the image, uses image translation and fill technology to build the domestic fish database, and builds the Adam-Dropout-CNN (A-D-CNN) network model, whose convolution kernel size is 3 × 3. Max pooling was used for downsampling, and a dropout operation was added after the fully connected layer to avoid network overfitting. The adaptive moment estimation (Adam) algorithm was used to address the sparse gradient problem. The experiment shows that the recognition rate of A-D-CNN is 96.97% when the model is trained on the domestic fish image database, which solves the problem of the low recognition rate and slow recognition speed of domestic fish in a non-uniform light field.
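Subpixel convolution reconstruction ends in a deterministic rearrangement often called pixel shuffle: a learned convolution produces r² channels per output channel, which are then interleaved into an r-times-larger image. The sketch below shows only the rearrangement (the learned convolution is omitted), following the standard (C·r², H, W) → (C, H·r, W·r) convention:

```python
import numpy as np

def pixel_shuffle(feat, r):
    """Sub-pixel rearrangement: (C*r*r, H, W) -> (C, H*r, W*r).
    Channel index c*r*r + i*r + j lands at spatial offset (i, j)
    inside each upscaled r x r block."""
    cr2, h, w = feat.shape
    c = cr2 // (r * r)
    out = feat.reshape(c, r, r, h, w)      # split channels into r x r grid
    out = out.transpose(0, 3, 1, 4, 2)     # interleave with spatial dims
    return out.reshape(c, h * r, w * r)

feat = np.arange(16, dtype=float).reshape(4, 2, 2)  # 4 channels, 2x2 map
up = pixel_shuffle(feat, 2)                          # 1 channel, 4x4 image
```

Doing the upscaling this way keeps all computation at low resolution until the final reshuffle, which is what makes subpixel reconstruction cheap.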
Abstract: The monitoring system designed in this paper is based on YOLOv5 (You Only Look Once) to monitor foreign objects on railway tracks and can broadcast the monitoring information to the locomotive in real time. First, the general structure of the system is determined through demand analysis and feasibility analysis, the foreign object intrusion recognition algorithm is designed, and the dataset required for foreign object intrusion recognition is made. Secondly, according to the functional demands, the system selects a suitable neural network and implements the program accordingly. At last, the system is simulated to validate its functionality (identification and classification of track intrusions and determination of a safe operating zone).
Funding: This work is supported by the National Natural Science Foundation of China (61806013, 61876010, 62176009, and 61906005); the General Project of the Science and Technology Plan of Beijing Municipal Education Commission (KM202110005028); the Beijing Municipal Education Commission Project (KZ201910005008); the Project of the Interdisciplinary Research Institute of Beijing University of Technology (2021020101); and the International Research Cooperation Seed Fund of Beijing University of Technology (2021A01).
Abstract: The fine-grained ship image recognition task aims to identify various classes of ships. However, small inter-class differences, large intra-class differences between ships, and a lack of training samples are the reasons that make the task difficult. Therefore, to enhance the accuracy of fine-grained ship image recognition, we design a fine-grained ship image recognition network based on the bilinear convolutional neural network (BCNN) with Inception and additive margin Softmax (AM-Softmax). This network improves the BCNN in two aspects. Firstly, introducing Inception branches into the BCNN network helps enhance the ability to extract comprehensive features from ships. Secondly, by adding margin values to the decision boundary, the AM-Softmax function can better extend the inter-class differences and reduce the intra-class differences. In addition, as there are few publicly available datasets for fine-grained ship image recognition, we construct a Ship-43 dataset containing 47,300 ship images belonging to 43 categories. Experimental results on the constructed Ship-43 dataset demonstrate that our method can effectively improve the accuracy of ship image recognition, which is 4.08% higher than the BCNN model. Moreover, comparison results on three other public fine-grained datasets (Cub, Cars, and Aircraft) further validate the effectiveness of the proposed method.
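The bilinear pooling at the heart of a BCNN can be sketched compactly. The code below follows the standard B-CNN recipe (per-location outer product of the two branch features, sum-pooled, then signed square-root and L2 normalisation); the branch features themselves are random toys, not Inception outputs:

```python
import numpy as np

def bilinear_pool(fa, fb):
    """B-CNN bilinear pooling of two branch features.
    fa, fb: (C, H*W) flattened feature maps from the two branches.
    Returns a C*C-dim descriptor: sum of per-location outer products,
    signed-sqrt'd and L2-normalised as in the standard B-CNN recipe."""
    b = fa @ fb.T                            # (C, C): sum of outer products
    b = b.flatten()
    b = np.sign(b) * np.sqrt(np.abs(b))      # signed square-root
    return b / (np.linalg.norm(b) + 1e-12)   # L2 normalisation

rng = np.random.default_rng(3)
fa = rng.standard_normal((6, 9))             # toy branch A: 6 ch, 3x3 map
fb = rng.standard_normal((6, 9))             # toy branch B
desc = bilinear_pool(fa, fb)                 # 36-dim bilinear descriptor
```

The outer product captures pairwise channel interactions, which is what makes bilinear features sensitive to the subtle part-level differences between ship classes.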
Abstract: Traditional synthetic aperture radar (SAR) image recognition techniques focus on electromagnetic (EM) scattering centers, ignoring the important role of shadow information in SAR image recognition. It is difficult to classify targets by shadow information independently, because the shadow shape depends on the radar aspect angle, the depression angle and the resolution. Moreover, the shadow shapes of different targets are similar. When multiple SAR images of one target from different aspects are available, the performance of target recognition can be improved. Aimed at this problem, a multi-aspect SAR image recognition technique based on shadow information is developed. It extracts shadow profiles from SAR images and takes chain codes as the feature vectors of targets. Then, feature vectors from multiple aspects of the same target are combined into feature sequences, and the hidden Markov model (HMM) is applied to the feature sequences for target recognition. The simulation result shows the effectiveness of the method.
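The chain-code feature can be shown on a toy contour. The sketch below computes the standard 8-direction Freeman chain code of an ordered boundary, the generic construction behind the shadow-profile features described (the paper's exact direction convention may differ):

```python
def chain_code(contour):
    """8-direction Freeman chain code for an ordered contour of integer
    points. Direction 0 = east, counting counter-clockwise in 45° steps."""
    dirs = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    code = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        code.append(dirs[(x1 - x0, y1 - y0)])   # step between successive points
    return code

# Closed unit-square contour traced counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
code = chain_code(square)   # [0, 2, 4, 6]: east, north, west, south
```

Because the code is a discrete symbol sequence, concatenating the codes from several aspect angles yields exactly the kind of observation sequence an HMM is built to classify.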
Funding: The support of this research by the Hubei Provincial Natural Science Foundation (2022CFB449) and the Science Research Foundation of the Education Department of Hubei Province (B2020061) is gratefully acknowledged.
Abstract: The task of food image recognition, a nuanced subset of fine-grained image recognition, grapples with substantial intra-class variation and minimal inter-class differences. These challenges are compounded by the irregular and multi-scale nature of food images. Addressing these complexities, our study introduces an advanced model that leverages multiple attention mechanisms and multi-stage local fusion, grounded in the ConvNeXt architecture. Our model employs hybrid attention (HA) mechanisms to pinpoint critical discriminative regions within images, substantially mitigating the influence of background noise. Furthermore, it introduces a multi-stage local fusion (MSLF) module, fostering long-distance dependencies between feature maps at varying stages. This approach facilitates the assimilation of complementary features across scales, significantly bolstering the model's capacity for feature extraction. Furthermore, we constructed a dataset named Roushi60, which consists of 60 different categories of common meat dishes. Empirical evaluation on the ETH Food-101, ChineseFoodNet, and Roushi60 datasets reveals that our model achieves recognition accuracies of 91.12%, 82.86%, and 92.50%, respectively. These figures not only mark an improvement of 1.04%, 3.42%, and 1.36% over the foundational ConvNeXt network but also surpass the performance of most contemporary food image recognition methods. Such advancements underscore the efficacy of our proposed model in navigating the intricate landscape of food image recognition, setting a new benchmark for the field.
Funding: Supported by the National Science and Technology Council of Taiwan under Grant NSTC 112-2221-E-130-005.
Abstract: This research focuses on addressing the challenges associated with image detection in low-light environments, particularly by applying artificial intelligence techniques to machine vision and object recognition systems. The primary goal is to tackle issues related to recognizing objects with low brightness levels. In this study, the Intel RealSense Lidar Camera L515 is used to simultaneously capture color information and 16-bit depth information images. The detection scenarios are categorized into normal-brightness and low-brightness situations. When the system determines a normal-brightness environment, normal-brightness images are recognized using deep learning methods. In low-brightness situations, three methods are proposed for recognition. The first is the Segmentation with Depth image (SD) method, which involves segmenting the depth image, creating a mask from the segmented depth image, mapping the obtained mask onto the true color (RGB) image to obtain a background-reduced RGB image, and recognizing the segmented image. The second is the HDV (hue, depth, value) method, which combines RGB images converted to HSV (hue, saturation, value) images with depth images D to form HDV images for recognition. The third is the HSD (hue, saturation, depth) method, which similarly combines RGB images converted to HSV images with depth images D to form HSD images for recognition. In the experimental results, in normal-brightness environments, the average recognition rate obtained using image recognition methods is 91%. For low-brightness environments, using the SD method with original images for training and segmented images for recognition achieves an average recognition rate of over 82%. The HDV method achieves an average recognition rate of over 70%, while the HSD method achieves an average recognition rate of over 84%. The HSD method allows for a quick and convenient low-light object recognition system. This research outcome can be applied to nighttime surveillance systems or nighttime road safety systems.
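The channel naming suggests a per-pixel substitution: HDV replaces the saturation channel with depth, HSD replaces the value channel with depth. That reading is an assumption on our part; the sketch below illustrates it for a single pixel using the standard-library `colorsys` conversion, with depth pre-normalised to [0, 1]:

```python
import colorsys

def make_hdv_hsd(rgb_pixel, depth):
    """Build HDV and HSD pixels from one RGB pixel (0-255 ints) and a
    depth value normalised to [0, 1]. Assumed construction: HDV swaps
    depth in for saturation, HSD swaps it in for value."""
    r, g, b = (v / 255.0 for v in rgb_pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)   # all channels in [0, 1]
    hdv = (h, depth, v)   # hue, depth, value
    hsd = (h, s, depth)   # hue, saturation, depth
    return hdv, hsd

hdv, hsd = make_hdv_hsd((255, 0, 0), depth=0.25)   # pure red pixel
```

Applied image-wide, either fusion keeps chromatic cues in channels the camera still measures reliably in the dark while injecting geometry from the lidar.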
Funding: The authors extend their appreciation to the Arab Open University, Saudi Arabia, for funding this work through AOU research fund No. AOURG-2023-009.
Abstract: In standard iris recognition systems, a cooperative imaging framework is employed that includes a light source with a near-infrared wavelength to reveal iris texture, look-and-stare constraints, and a close distance requirement to the capture device. When these conditions are relaxed, the system's performance significantly deteriorates due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer ring region for local minima values to identify the darkest area in the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32 on UBIRIS.v1 and an error rate of 1.59 on UBIRIS.v2.