Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62175156, 81827807, 81770940), the Science and Technology Commission of Shanghai Municipality (22S31903000, 16DZ0501100), and the Collaborative Innovation Project of Shanghai Institute of Technology (XTCX2022-27).
Abstract: Diabetic retinopathy (DR) is one of the major causes of visual impairment in adults with diabetes. Optical coherence tomography angiography (OCTA) is now widely used as the gold standard for diagnosing DR. Recently, wide-field OCTA (WF-OCTA) has provided richer information, including peripheral retinal degenerative changes, and can contribute to the accurate diagnosis of DR. Given the large diabetic population and the prevalence of retinopathy, an automatic DR diagnostic system based on WF-OCTA images is attracting increasing attention. In this study, automatic diagnosis of DR was performed with a vision transformer using fovea-centered WF-OCTA images (12 mm × 12 mm single scan) as the dataset. WF-OCTA images were automatically classified into four classes: no DR, mild nonproliferative diabetic retinopathy (NPDR), moderate to severe NPDR, and proliferative diabetic retinopathy (PDR). On the test set, the proposed method detects DR with an accuracy of 99.55%, a sensitivity of 99.49%, and a specificity of 99.57%. Its accuracy for DR staging reaches 99.20%, higher than that attained by classical convolutional neural network models. These results show that automatic diagnosis of DR based on a vision transformer and WF-OCTA images is more effective than classical CNN approaches for detecting and staging DR.
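To make the setup above concrete, the following is a minimal sketch of how such a four-class DR classifier might be fine-tuned. The abstract does not specify the backbone or training recipe, so torchvision's ViT-B/16, the optimizer, and all hyperparameters here are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: fine-tuning a ViT backbone for four-class DR staging.
# ViT-B/16 is a stand-in for the paper's unspecified backbone.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_CLASSES = 4  # no DR, mild NPDR, moderate-to-severe NPDR, PDR

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
# Replace the ImageNet head with a four-class classification head.
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed lr

# One illustrative training step on a dummy batch standing in for
# 12 mm x 12 mm WF-OCTA scans resized to the backbone's 224 x 224 input.
images = torch.randn(8, 3, 224, 224)          # placeholder batch
labels = torch.randint(0, NUM_CLASSES, (8,))  # placeholder stage labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```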
Funding: Researchers Supporting Project Number (RSPD2024R 553), King Saud University, Riyadh, Saudi Arabia.
Abstract: Wheat is a critical crop, extensively consumed worldwide, and enhancing its production is essential to meet escalating demand. Diseases such as stem rust, leaf rust, yellow rust, and tan spot significantly diminish wheat yield, making early and precise identification of these diseases vital for effective disease management. With advances in deep learning, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises from the scarcity of RGB images covering multiple diseases, class imbalance in existing public datasets, and the difficulty of extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method based on Transfer Generative Adversarial Networks is proposed for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized Vision Transformer (ViT) architecture in which the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model-Agnostic Meta-Learning (MAML) based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model performs better in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
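The feature-concatenation step can be illustrated as below. This is a hedged sketch only: the graph construction, all layer sizes, and the five-class output are assumptions, since the abstract does not give the architecture's dimensions.

```python
# Sketch of fusing a ViT feature vector with a GNN feature vector by
# concatenation before classification. Shapes are illustrative.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One graph-convolution step: aggregate neighbor features through a
    row-normalized adjacency matrix, then apply a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (nodes, in_dim); adj: (nodes, nodes), rows sum to 1
        return torch.relu(self.linear(adj @ x))

class ConcatFusionHead(nn.Module):
    """Concatenate ViT and pooled GNN features, then classify."""
    def __init__(self, vit_dim, gnn_dim, num_classes):
        super().__init__()
        self.classifier = nn.Linear(vit_dim + gnn_dim, num_classes)

    def forward(self, vit_feat, gnn_feat):
        fused = torch.cat([vit_feat, gnn_feat], dim=-1)
        return self.classifier(fused)

# Illustrative shapes: one image, 16 graph nodes (e.g., patch regions).
vit_feat = torch.randn(1, 768)                    # assumed ViT embedding
nodes = torch.randn(16, 64)                       # placeholder node features
adj = torch.softmax(torch.randn(16, 16), dim=-1)  # placeholder adjacency
gnn_feat = SimpleGraphConv(64, 64)(nodes, adj).mean(dim=0, keepdim=True)
head = ConcatFusionHead(768, 64, num_classes=5)   # e.g., 4 pathogens + healthy
logits = head(vit_feat, gnn_feat)
```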
Abstract: To address the insufficient sampling of spectral-temporal and spatial information features by deep learning models in current remote-sensing crop classification research, where crop extraction still suffers from blurred boundaries, omissions, and false extractions, a deep learning model named Vision Transformer-long short-term memory (ViTL) is proposed. The ViTL model integrates three key modules: dual-path Vision Transformer feature extraction, spatio-temporal feature fusion, and long short-term memory (LSTM) temporal classification. The dual-path Vision Transformer feature extraction module captures the spatio-temporal feature correlations of the imagery, with one path extracting spatial classification features and the other extracting temporal change features; the spatio-temporal feature fusion module cross-fuses the multi-temporal feature information; and the LSTM temporal classification module captures multi-temporal dependencies and outputs the classification. Applying remote-sensing theory and methods based on multi-temporal satellite imagery, crop information was extracted for Nehe City, Qiqihar, Heilongjiang Province. The results show that the ViTL model performs well, achieving an overall accuracy (OA) of 0.8676, a mean intersection over union (MIoU) of 0.6987, and an F1 score of 0.8175. Compared with other widely used deep learning methods, including three-dimensional convolutional neural networks (3-D CNN), two-dimensional convolutional neural networks (2-D CNN), and LSTM, the ViTL model improves the F1 score by 9% to 12%, demonstrating clear superiority. The ViTL model overcomes the insufficient sampling of temporal and spatial information features in crop classification from multi-temporal remote-sensing imagery, providing a new approach for accurate and efficient crop classification.
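A compact sketch of the ViTL idea follows: transformer features are extracted per acquisition date and fed to an LSTM for temporal classification. The feature dimensions, the single shared encoder standing in for the dual-path extractor, and the mean-pooling fusion are all assumptions made for illustration.

```python
# Sketch: per-date transformer encoding of patch features, then an LSTM
# over the date sequence for crop classification. Sizes are placeholders.
import torch
import torch.nn as nn

class ViTLSketch(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, num_classes=5):
        super().__init__()
        # Stand-in for the dual-path Vision Transformer extractor: one
        # shared encoder applied to each date's patch-feature sequence.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=4, batch_first=True)
        self.spatial_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.temporal_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, patches, feat_dim) -- one feature map per date
        b, t, p, d = x.shape
        x = self.spatial_encoder(x.reshape(b * t, p, d)).mean(dim=1)
        x = x.reshape(b, t, d)              # one fused vector per date
        out, _ = self.temporal_lstm(x)      # capture temporal dependencies
        return self.classifier(out[:, -1])  # classify from last time step

model = ViTLSketch()
series = torch.randn(2, 6, 49, 128)  # 2 samples, 6 dates, 49 patches
print(model(series).shape)           # -> torch.Size([2, 5])
```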
Abstract: This study evaluates the performance and reliability of a vision transformer (ViT) against a convolutional neural network (CNN), the ResNet50 model, in classifying lung cancer from CT images into four categories: lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), large cell carcinoma (LULC), and normal. Although CNNs have advanced medical imaging considerably, their limited capacity to capture long-range dependencies has motivated the exploration of ViTs, which leverage self-attention for a more comprehensive global understanding of images. The study used a dataset of 748 lung CT images to train both models with standardized input sizes, assessing performance through conventional metrics (accuracy, precision, recall, F1 score, specificity, and AUC) as well as cross entropy as a measure of prediction uncertainty. Both models achieved similar accuracy (95%), with ViT showing a slight edge over ResNet50 in precision and F1 scores for specific classes. However, ResNet50 exhibited higher recall for LULC, indicating fewer missed cases. Cross-entropy analysis showed that the ViT model had lower average uncertainty than ResNet50, particularly in the LUAD, normal, and LUSC classes, suggesting that ViT predictions are generally more reliable, although ResNet50 performed better for LULC. The study underscores that accuracy alone is insufficient for model comparison: cross entropy offers deeper insight into the reliability and confidence of model predictions, and incorporating it alongside traditional metrics gives a more comprehensive and nuanced evaluation of deep learning models in medical image classification. While the ViT outperformed the CNN-based ResNet50 on cross-entropy values, the performance differences were minor and may not be clinically significant, so it may be premature to consider replacing CNNs with ViTs in this application.
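The per-class cross-entropy comparison described above can be reproduced in a few lines. The arrays below are placeholders, not the study's data; only the computation itself is shown.

```python
# Sketch: per-class mean cross entropy of softmax predictions against the
# true labels, used as a proxy for prediction uncertainty (lower = more
# confident, better-calibrated predictions for that class).
import torch
import torch.nn.functional as F

def per_class_mean_cross_entropy(logits, labels, num_classes):
    # Per-sample cross entropy, then averaged within each true class.
    ce = F.cross_entropy(logits, labels, reduction="none")
    means = []
    for c in range(num_classes):
        mask = labels == c
        means.append(ce[mask].mean().item() if mask.any() else float("nan"))
    return means

logits = torch.randn(100, 4)          # placeholder model outputs
labels = torch.randint(0, 4, (100,))  # placeholder ground truth
print(per_class_mean_cross_entropy(logits, labels, num_classes=4))
```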
Abstract: To address the inadequate use of local information when Vision Transformers are applied to PolSAR image classification in existing studies, this paper proposes LIViT, a Vision Transformer method that takes local information into account. In the feature embedding, the method replaces the image patch sequence with a polarimetric feature sequence and uses convolution for the mapping to preserve spatial detail. In addition, a wavelet transform branch makes the network attend more closely to the shape and edge information of target features, improving the extraction of local edge information. Results on data from Wuhan, China and Flevoland, the Netherlands show that considering local information when applying a Vision Transformer to PolSAR image classification effectively improves classification accuracy and gives the method a clear advantage.
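The two mechanisms the abstract highlights, convolutional token embedding of polarimetric features and a wavelet branch for edge information, might be sketched as follows. Channel counts, the patch size, and the choice of a one-level Haar transform are assumptions for illustration, not the paper's specification.

```python
# Sketch: (1) strided-convolution embedding that maps per-pixel
# polarimetric feature maps to a token sequence while keeping spatial
# detail; (2) a Haar wavelet decomposition exposing edge information.
import torch
import torch.nn as nn

def haar_dwt2(x):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) subbands.
    x: (batch, channels, H, W) with even H and W."""
    lo = (x[..., ::2, :] + x[..., 1::2, :]) / 2  # row averages
    hi = (x[..., ::2, :] - x[..., 1::2, :]) / 2  # row differences
    ll = (lo[..., ::2] + lo[..., 1::2]) / 2      # smooth approximation
    lh = (lo[..., ::2] - lo[..., 1::2]) / 2      # horizontal detail
    hl = (hi[..., ::2] + hi[..., 1::2]) / 2      # vertical detail
    hh = (hi[..., ::2] - hi[..., 1::2]) / 2      # diagonal detail
    return ll, lh, hl, hh

class ConvTokenEmbedding(nn.Module):
    """Map stacked polarimetric feature maps to a token sequence with a
    strided convolution instead of flat non-overlapping patch slicing."""
    def __init__(self, in_ch=9, embed_dim=64, patch=4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):                  # x: (B, in_ch, H, W)
        tokens = self.proj(x)              # (B, embed_dim, H/patch, W/patch)
        return tokens.flatten(2).transpose(1, 2)  # (B, tokens, embed_dim)

feats = torch.randn(1, 9, 32, 32)    # e.g., 9 polarimetric feature maps
tokens = ConvTokenEmbedding()(feats) # token sequence for the transformer
ll, lh, hl, hh = haar_dwt2(feats)    # edge-sensitive wavelet subbands
print(tokens.shape, hh.shape)        # (1, 64, 64) and (1, 9, 16, 16)
```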