To address the insufficient sampling of spectral-temporal and spatial features by deep learning models in current remote-sensing crop classification research, which leaves crop extraction prone to blurred boundaries, omissions, and false extractions, this study proposes a deep learning model named Vision Transformer-long short-term memory (ViTL). The ViTL model integrates three key modules: dual-path Vision Transformer feature extraction, spatio-temporal feature fusion, and long short-term memory (LSTM) temporal classification. The dual-path Vision Transformer module captures spatio-temporal feature correlations in the imagery, with one path extracting spatial classification features and the other extracting temporal change features; the spatio-temporal fusion module cross-fuses the multi-temporal feature information; and the LSTM module captures multi-temporal dependencies and outputs the classification. Drawing on remote-sensing theory and methods for multi-temporal satellite imagery, crop information was extracted for Nehe City, Qiqihar, Heilongjiang Province. The results show that the ViTL model performs well, achieving an overall accuracy (OA) of 0.8676, a mean intersection over union (MIoU) of 0.6987, and an F1 score of 0.8175. Compared with other widely used deep learning methods, including the three-dimensional convolutional neural network (3-D CNN), the two-dimensional convolutional neural network (2-D CNN), and the LSTM, the ViTL model improves the F1 score by 9% to 12%, a clear advantage. The ViTL model overcomes the insufficient sampling of temporal and spatial features in crop classification from multi-temporal remote-sensing imagery and offers a new approach to accurate and efficient crop classification.
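The three-module pipeline described above can be sketched at the shape level as follows. This is a minimal illustration of the idea (one Transformer path attending over patches within a date, one attending over dates for each patch, a linear cross-fusion, then an LSTM over the time axis), not the authors' implementation; all layer sizes, the patch size, and the pooling step are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ViTLSketch(nn.Module):
    """Shape-level sketch of the ViTL idea: dual-path Transformer feature
    extraction (spatial + temporal), feature fusion, LSTM classification.
    All dimensions are illustrative, not taken from the paper."""

    def __init__(self, in_ch=4, patch=8, dim=64, n_classes=5):
        super().__init__()
        # Patch embedding shared by both paths.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        make_enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)
        self.spatial_enc = make_enc()   # attends over patches within one date
        self.temporal_enc = make_enc()  # attends over dates for one patch
        self.fuse = nn.Linear(2 * dim, dim)  # cross-fuse the two paths
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                           # x: (B, T, C, H, W)
        B, T, C, H, W = x.shape
        tokens = self.patch_embed(x.flatten(0, 1))  # (B*T, dim, H/p, W/p)
        tokens = tokens.flatten(2).transpose(1, 2)  # (B*T, N, dim)
        N = tokens.shape[1]
        # Spatial path: attention over the N patches of each date.
        spat = self.spatial_enc(tokens).view(B, T, N, -1)
        # Temporal path: attention over the T dates of each patch.
        temp = tokens.view(B, T, N, -1).transpose(1, 2).flatten(0, 1)  # (B*N, T, dim)
        temp = self.temporal_enc(temp).view(B, N, T, -1).transpose(1, 2)
        fused = self.fuse(torch.cat([spat, temp], dim=-1))  # (B, T, N, dim)
        seq = fused.mean(dim=2)          # pool patches -> (B, T, dim)
        out, _ = self.lstm(seq)          # LSTM over the time dimension
        return self.head(out[:, -1])     # (B, n_classes)

model = ViTLSketch()
logits = model(torch.randn(2, 6, 4, 32, 32))  # 2 samples, 6 dates, 4 bands
print(tuple(logits.shape))                    # (2, 5)
```

For the pixel-level crop maps evaluated with MIoU in the study, the per-patch pooling would be replaced by a decoder that restores full spatial resolution; the sketch keeps a single class prediction per sample for brevity.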
This study evaluates the performance and reliability of a vision transformer (ViT) compared to convolutional neural networks (CNNs) using the ResNet50 model in classifying lung cancer from CT images into four categories: lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), large cell carcinoma (LULC), and normal. Although CNNs have made significant advancements in medical imaging, their limited capacity to capture long-range dependencies has led to the exploration of ViTs, which leverage self-attention mechanisms for a more comprehensive global understanding of images. The study utilized a dataset of 748 lung CT images to train both models with standardized input sizes, assessing their performance through conventional metrics—accuracy, precision, recall, F1 score, specificity, and AUC—as well as cross entropy, a novel metric for evaluating prediction uncertainty. Both models achieved similar accuracy rates (95%), with ViT demonstrating a slight edge over ResNet50 in precision and F1 scores for specific classes. However, ResNet50 exhibited higher recall for LULC, indicating fewer missed cases. Cross entropy analysis showed that the ViT model had lower average uncertainty, particularly in the LUAD, Normal, and LUSC classes, compared to ResNet50. This finding suggests that ViT predictions are generally more reliable, though ResNet50 performed better for LULC. The study underscores that accuracy alone is insufficient for model comparison, as cross entropy offers deeper insights into the reliability and confidence of model predictions.
The results highlight the importance of incorporating cross entropy alongside traditional metrics for a more comprehensive evaluation of deep learning models in medical image classification, providing a nuanced understanding of their performance and reliability. While the ViT outperformed the CNN-based ResNet50 in lung cancer classification based on cross-entropy values, the performance differences were minor and may not hold clinical significance. Therefore, it may be premature to consider replacing CNNs with ViTs in this specific application.
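The uncertainty metric used above is straightforward to compute: against a one-hot label, the cross entropy of a softmax output reduces to the negative log of the probability assigned to the true class, so a confident correct prediction scores near zero and a hesitant one scores high. A minimal illustration (the probability vectors are made up, not taken from the study):

```python
import math

def cross_entropy(probs, true_idx):
    """Cross entropy of a softmax output against a one-hot label:
    -log(probability assigned to the true class)."""
    return -math.log(probs[true_idx])

# Two hypothetical predictions for a LUAD case (class index 0)
# over the four classes LUAD, LUSC, LULC, normal.
confident = [0.95, 0.02, 0.02, 0.01]   # low uncertainty
hesitant  = [0.40, 0.30, 0.20, 0.10]   # high uncertainty

print(round(cross_entropy(confident, 0), 3))  # 0.051
print(round(cross_entropy(hesitant, 0), 3))   # 0.916
```

Both predictions pick the correct class, so they are indistinguishable under accuracy alone; cross entropy separates them, which is why the study reports it alongside the conventional metrics.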