Abstract: To address the insufficient sampling of spectral, temporal, and spatial features by deep learning models in current remote-sensing crop classification, which leaves crop extraction prone to blurred boundaries, omissions, and false extractions, this study proposes a deep learning model named Vision Transformer-long short-term memory (ViTL). The ViTL model integrates three key modules: dual-path Vision Transformer feature extraction, spatio-temporal feature fusion, and long short-term memory (LSTM) temporal classification. The dual-path Vision Transformer module captures spatio-temporal feature correlations in the imagery, with one path extracting spatial classification features and the other extracting temporal change features; the spatio-temporal feature fusion module cross-fuses the multi-temporal feature information; and the LSTM temporal classification module captures multi-temporal dependencies and outputs the classification. Applying remote-sensing theory and methods based on multi-temporal satellite imagery, crop information was extracted for Nehe City, Qiqihar, Heilongjiang Province. The results show that the ViTL model performs well, achieving an overall accuracy (OA) of 0.8676, a mean intersection over union (MIoU) of 0.6987, and an F1 score of 0.8175. Compared with other widely used deep learning methods, including the three-dimensional convolutional neural network (3-D CNN), the two-dimensional convolutional neural network (2-D CNN), and LSTM, the ViTL model improves the F1 score by 9% to 12%, showing a clear advantage. The ViTL model overcomes the insufficient sampling of temporal and spatial features in crop classification from multi-temporal remote-sensing imagery and offers a new approach to accurate and efficient crop classification.
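Below is a minimal PyTorch sketch of the dual-path Transformer + fusion + LSTM idea this abstract describes. It uses torch.nn.TransformerEncoder as a stand-in for the paper's Vision-Transformer blocks, and the module names, layer sizes, and simple concatenation-based fusion are illustrative assumptions rather than the authors' actual ViTL implementation.

```python
# Sketch of a ViTL-style model: dual-path Transformer feature extraction,
# spatio-temporal fusion, and an LSTM temporal classifier. All hyperparameters
# are placeholders for illustration only.
import torch
import torch.nn as nn

class ViTLSketch(nn.Module):
    def __init__(self, in_dim=64, embed_dim=128, num_classes=6, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, embed_dim)
        enc_layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        # Dual-path feature extraction: one branch attends over spatial tokens,
        # the other over the temporal sequence of observations.
        self.spatial_branch = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.temporal_branch = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Spatio-temporal fusion: here a simple linear projection of the
        # concatenated branch outputs (the paper's fusion is more elaborate).
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)
        # LSTM head models dependencies across the multi-temporal sequence.
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, patches, in_dim) -- multi-temporal patch features
        b, t, p, _ = x.shape
        x = self.embed(x)
        # Spatial branch: attend over patches within each acquisition date.
        s = self.spatial_branch(x.reshape(b * t, p, -1)).mean(dim=1).reshape(b, t, -1)
        # Temporal branch: attend over dates for each patch, then pool patches.
        tt = x.permute(0, 2, 1, 3).reshape(b * p, t, -1)
        tt = self.temporal_branch(tt).reshape(b, p, t, -1).mean(dim=1)
        fused = self.fuse(torch.cat([s, tt], dim=-1))   # (batch, time, embed_dim)
        out, _ = self.lstm(fused)
        return self.classifier(out[:, -1])              # classify from the last step

x = torch.randn(2, 8, 16, 64)         # 2 samples, 8 dates, 16 patches, 64 features
print(ViTLSketch()(x).shape)          # torch.Size([2, 6])
```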
Abstract: This study evaluates the performance and reliability of a vision transformer (ViT) compared to convolutional neural networks (CNNs) using the ResNet50 model in classifying lung cancer from CT images into four categories: lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), large cell carcinoma (LULC), and normal. Although CNNs have made significant advancements in medical imaging, their limited capacity to capture long-range dependencies has led to the exploration of ViTs, which leverage self-attention mechanisms for a more comprehensive global understanding of images. The study utilized a dataset of 748 lung CT images to train both models with standardized input sizes, assessing their performance through conventional metrics—accuracy, precision, recall, F1 score, specificity, and AUC—as well as cross entropy, a novel metric for evaluating prediction uncertainty. Both models achieved similar accuracy rates (95%), with ViT demonstrating a slight edge over ResNet50 in precision and F1 scores for specific classes. However, ResNet50 exhibited higher recall for LULC, indicating fewer missed cases. Cross entropy analysis showed that the ViT model had lower average uncertainty, particularly in the LUAD, Normal, and LUSC classes, compared to ResNet50. This finding suggests that ViT predictions are generally more reliable, though ResNet50 performed better for LULC. The study underscores that accuracy alone is insufficient for model comparison, as cross entropy offers deeper insights into the reliability and confidence of model predictions. The results highlight the importance of incorporating cross entropy alongside traditional metrics for a more comprehensive evaluation of deep learning models in medical image classification, providing a nuanced understanding of their performance and reliability. While the ViT outperformed the CNN-based ResNet50 in lung cancer classification based on cross-entropy values, the performance differences were minor and may not hold clinical significance. Therefore, it may be premature to consider replacing CNNs with ViTs in this specific application.
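The study's uncertainty comparison rests on per-prediction cross entropy. A minimal PyTorch sketch of how that quantity can be computed from model logits is shown below; the logits, labels, and class ordering are toy placeholders, not the study's data or models.

```python
# Per-sample cross entropy as an uncertainty measure: lower values indicate
# more confident, correct predictions. Toy tensors stand in for two models.
import torch
import torch.nn.functional as F

def per_sample_cross_entropy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross entropy of each prediction against its true label."""
    return F.cross_entropy(logits, labels, reduction="none")

labels = torch.tensor([0, 1, 2, 3])   # e.g. LUAD, LUSC, LULC, Normal (illustrative order)
vit_logits = torch.tensor([[4.0, 0.5, 0.2, 0.1],
                           [0.3, 3.5, 0.4, 0.2],
                           [0.5, 0.6, 2.0, 0.9],
                           [0.1, 0.2, 0.3, 4.2]])
resnet_logits = torch.tensor([[2.0, 1.0, 0.8, 0.5],
                              [0.9, 2.2, 0.7, 0.6],
                              [0.4, 0.5, 2.8, 0.6],
                              [0.5, 0.6, 0.7, 2.5]])

print("ViT mean CE:   ", per_sample_cross_entropy(vit_logits, labels).mean().item())
print("ResNet mean CE:", per_sample_cross_entropy(resnet_logits, labels).mean().item())
```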
Abstract: Effective road crack detection is key to ensuring road safety. To address the low efficiency of existing road crack detection methods and the sensitivity of their results to the detection environment, this paper combines deep learning with computer vision and proposes Crack-Deeplab, a road crack detection network built on the DeepLabV3+ architecture for complex road scenes. Crack-Deeplab introduces novel network modules and structural designs, and is lightweight, generalizes well, and is capable of fine-grained segmentation. In experiments on the Crack500 dataset, the crack intersection over union (IoU) reaches 0.67 on the validation set and 0.58 on the test set, a clear improvement over existing networks. In addition, the network was validated in a practical engineering setting using images of campus roads at Guangzhou University captured in complex environments: trained only on Crack500 and without any additional training data, Crack-Deeplab accurately identified and segmented cracks in campus roads across different scenes and environments, demonstrating the method's effectiveness, robustness, and value in real engineering applications.
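The crack IoU reported above compares predicted and ground-truth crack masks. A minimal NumPy sketch of that metric, assuming binary masks, is given below; the function name and toy masks are illustrative, not the paper's evaluation code.

```python
# Foreground (crack) IoU between a predicted mask and a ground-truth mask.
import numpy as np

def crack_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union of the crack (foreground) class."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / (union + eps))

# Toy example: the prediction overlaps 2 of the 3 ground-truth crack pixels
# and adds 1 false positive, so IoU = 2 / 4 = 0.5.
gt   = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 1, 0]])
print(round(crack_iou(pred, gt), 3))   # 0.5
```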
Abstract: To address the poor performance of paddy-field weed recognition in complex environments with light occlusion, algae and duckweed interference, and rice leaf tips that resemble weeds in shape, this study proposes a weed recognition method based on a combination of deep learning models. MSRCP (multi-scale retinex with color preservation) is introduced to enhance the images, improving brightness and contrast, and a ViT classification network is added to remove interfering background and improve recognition of small weed targets in complex environments. In the YOLOv7 model, the backbone feature extraction network is replaced with GhostNet and a coordinate attention (CA) mechanism is introduced, strengthening the backbone's weed feature extraction while reducing parameter and computation costs. Ablation experiments show that the improved YOLOv7 model achieves a mean average precision (mAP) of 88.2%, 3.3 percentage points higher than the original YOLOv7, while reducing parameters by 10.43 M and computation by 66.54×10^9 operations. With MSRCP image enhancement applied before recognition, the improved YOLOv7 model's mAP increases by 2.6 percentage points over the original model, and by 5.3, 3.6, and 3.1 percentage points under light occlusion, algae and duckweed interference, and similar rice leaf tip shapes, respectively. After adding the ViT classification network, the overall mAP rises by 4.4 percentage points over the original model, and by 6.2, 6.1, and 5.7 percentage points in those same complex conditions. The ViT-improved YOLOv7 model reaches an mAP of 92.6%, which is 11.6, 10.1, 5.0, 4.2, and 4.4 percentage points higher than YOLOv5s, YOLOXs, MobilenetV3-YOLOv7, YOLOv8, and the improved YOLOv7, respectively. These results can support accurate weed recognition in complex paddy-field environments.
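One of the modifications described above is inserting a coordinate attention (CA) block into the YOLOv7 backbone. Below is a minimal PyTorch sketch of a CA block in the spirit of Hou et al.'s Coordinate Attention, assuming average pooling along each spatial axis; the channel count, reduction ratio, and class name are illustrative and not the paper's exact implementation.

```python
# Coordinate Attention sketch: pool along H and W separately, encode jointly,
# then re-weight the feature map with direction-aware attention maps.
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Encode spatial context along each axis separately.
        x_h = x.mean(dim=3, keepdim=True)                       # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (b, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (b, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(1, 64, 20, 20)       # a backbone feature map
print(CoordAttention(64)(feat).shape)   # torch.Size([1, 64, 20, 20])
```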