The concept of classification through deep learning is to build a model that skillfully separates a closely related image dataset into different classes, despite the small but continuous variations that take place in physical systems over time and have a substantial effect. This study identifies ozone depletion through classification using a Faster Region-Based Convolutional Neural Network (F-RCNN). The main advantage of F-RCNN is that it accumulates bounding boxes on images to differentiate depleted from non-depleted regions. Furthermore, the primary goal of image classification is to accurately predict the target class of each minutely varied case in the dataset based on ozone saturation. Permanent changes in climate are of serious concern. The leading causes behind these destructive variations are ozone layer depletion, greenhouse gas release, deforestation, pollution, contamination of water resources, and UV radiation. This research focuses on prediction by identifying ozone layer depletion, because it causes many health issues, e.g., skin cancer, damage to marine life, crop damage, and impacts on the immune systems of living beings. We classify the ozone image dataset into two major classes, depleted and non-depleted regions, and extract the required discriminative features through F-RCNN. In the existing literature, a CNN has been used for feature extraction, and the diverse extracted RoIs are passed on to the CNN for grouping; those RoIs are difficult to manage and differentiate after grouping, which negatively affects the results. The classification outcomes of the F-RCNN approach are proficient and demonstrate an overall accuracy between 91% and 93% in identifying climate variation through ozone concentration classification, i.e., whether the region in the image under consideration is depleted or non-depleted. Our proposed model achieved 93% accuracy and outperforms the prevailing techniques.
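The box-level discrimination described above ultimately rests on comparing predicted and ground-truth bounding boxes, usually via intersection-over-union (IoU). A minimal sketch of that comparison (an illustrative helper, not code from the study):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted box is typically counted as matching a labeled depleted region when its IoU with that region exceeds a threshold such as 0.5.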
To address the poor detection of small-target water-floating garbage, whose shape varies and whose resolution and information content are low, an improved Faster-RCNN (Faster Regions with Convolutional Neural Network) detection algorithm, MP-Faster-RCNN (Faster-RCNN with Multi-scale feature and Polarized self-attention), is proposed. First, a dataset of small-target floating garbage on the Lanzhou section of the Yellow River is established, and dilated convolution combined with ResNet-50 replaces the original VGG-16 (Visual Geometry Group 16) as the backbone feature-extraction network, enlarging the receptive field to extract more small-target features. Second, multi-scale features are exploited in the Region Proposal Network (RPN) by adding two convolutional layers, 3×3 and 1×1, to compensate for the feature loss caused by a single sliding window. Finally, polarized self-attention is added before the RPN to further exploit multi-scale and channel features, extracting finer-grained multi-scale spatial information and inter-channel dependencies and generating feature maps with global features for more precise bounding-box localization. Experimental results show that MP-Faster-RCNN effectively improves floating-garbage detection accuracy: compared with the original Faster-RCNN, the mean average precision (mAP) increases by 6.37 percentage points, the model size drops from 521 MB to 108 MB, and the model converges faster under the same training batch.
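Why dilated convolution enlarges the receptive field follows from the effective kernel size k_eff = k + (k − 1)(d − 1) for kernel k and dilation d. A small sketch of the arithmetic for a stack of stride-1 convolutions (an illustrative calculation, not code from the paper):

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stack of stride-1 convs, given as (k, d) pairs.

    Each layer grows the receptive field by (effective kernel - 1)."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf
```

For example, three stride-1 3×3 convolutions see a 7×7 region, while the same stack with dilation 2 sees 13×13 at the same parameter cost, which is the motivation for combining dilated convolution with the ResNet-50 backbone.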
Based on an improved Faster Region Convolutional Neural Networks (Faster R-CNN) model, this paper proposes a pedestrian recognition system design. It introduces common computer-vision techniques and methods and the steps of pedestrian detection, analyzes the advantages and disadvantages of mainstream algorithms, extracts image features with deep learning, and then performs object detection with the improved Faster R-CNN model. The improved model adopts adaptive scale pooling and enhanced Region of Interest (RoI) pooling, which improve detection accuracy and speed.
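The paper does not spell out its pooling variants, but the core idea behind RoI pooling — mapping a variable-sized region onto a fixed output grid — can be sketched as follows (a simplified NumPy max-pool, an assumed illustration rather than the paper's implementation):

```python
import numpy as np

def roi_max_pool(feature, roi, out_size=(2, 2)):
    """Max-pool a rectangular RoI (x1, y1, x2, y2) of a 2-D feature map
    into a fixed out_size grid, whatever the RoI's original size."""
    x1, y1, x2, y2 = roi
    region = feature[y1:y2, x1:x2]
    oh, ow = out_size
    h, w = region.shape
    # Integer bin edges that partition the RoI into oh x ow cells
    ys = np.linspace(0, h, oh + 1).astype(int)
    xs = np.linspace(0, w, ow + 1).astype(int)
    out = np.empty((oh, ow), dtype=feature.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out
```

Because every RoI is reduced to the same grid, proposals of any scale can feed the same fully connected classifier head, which is what makes detection over pedestrians of varying size tractable.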
This paper helps with leguminous seed detection and smart farming. There are hundreds of kinds of seeds, and it can be very difficult to distinguish between them. Botanists and those who study plants, however, can identify the type of seed at a glance. As far as we know, this is the first work to consider leguminous seed images with different backgrounds, sizes, and crowding. Machine learning is used to automatically classify and locate 11 different seed types. We chose leguminous seeds of 11 types as the objects of this study. Those types are of different colors, sizes, and shapes, adding variety and complexity to the research. The image dataset of the leguminous seeds was manually collected, annotated, and then split randomly into three sub-datasets, train, validation, and test (predictions), with a ratio of 80%, 10%, and 10%, respectively. The images reflect the variability between different leguminous seed types. They were captured on five different backgrounds: white A4 paper, a black pad, a dark blue pad, a dark green pad, and a green pad. Different heights and shooting angles were considered. The crowdedness of the seeds also varied randomly between 1 and 50 seeds per image. Different combinations and arrangements of the 11 types were considered. Two image-capturing devices were used: a SAMSUNG smartphone camera and a Canon digital camera. A total of 828 images were obtained, including 9801 seed objects (labels). The dataset contained images of different backgrounds, heights, angles, crowdedness, arrangements, and combinations.
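The random 80/10/10 split described above can be sketched as a small helper (a hypothetical implementation with an assumed fixed seed for reproducibility, not the authors' code):

```python
import random

def split_dataset(items, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle items and split them into train/validation/test sublists."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * ratios[0])
    n_val = int(len(shuffled) * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

With 828 images this yields roughly 662 training, 82 validation, and 84 test images (integer truncation leaves the remainder in the test split).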
The TensorFlow framework was used to construct the Faster Region-based Convolutional Neural Network (R-CNN) model, and CSPDarknet53, which is based on DenseNet and designed to connect layers within the convolutional network, is used as the backbone for YOLOv4. Using transfer learning, we optimized the seed-detection models. The performances of the currently dominant object-detection methods, Faster R-CNN and YOLOv4, were compared experimentally. The mAP (mean average precision) of the Faster R-CNN and YOLOv4 models was 84.56% and 98.52%, respectively. YOLOv4 had a significant advantage in detection speed over Faster R-CNN, which makes it suitable for real-time identification where high accuracy and low false positives are needed. The results showed that YOLOv4 had better accuracy and detection ability, as well as faster detection speed, beating Faster R-CNN by a large margin. The model can be applied effectively under a variety of backgrounds, image sizes, seed sizes, shooting angles, and shooting heights, as well as different levels of seed crowding. It constitutes an effective and efficient method for detecting different leguminous seeds in complex scenarios. This study provides a reference for further seed testing and enumeration applications.
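The mAP figures above average, over classes, the area under each class's precision-recall curve. A minimal all-point-interpolation AP sketch for one class, assuming detections have already been matched to ground truth (an illustrative routine, not the evaluation code used in the study):

```python
def average_precision(matches, num_gt):
    """All-point-interpolation AP for one class.

    matches: list of (score, is_true_positive) for every detection.
    num_gt:  number of ground-truth boxes of this class.
    """
    matches = sorted(matches, key=lambda m: m[0], reverse=True)
    tp = fp = 0
    points = []  # (recall, precision) walked in descending score order
    for _, is_tp in matches:
        tp += is_tp
        fp += not is_tp
        points.append((tp / num_gt, tp / (tp + fp)))
    # Integrate precision over recall, using the best precision
    # achievable at each recall level or beyond (the PR envelope)
    ap, prev_recall = 0.0, 0.0
    for i, (recall, _) in enumerate(points):
        best_prec = max(p for _, p in points[i:])
        ap += (recall - prev_recall) * best_prec
        prev_recall = recall
    return ap
```

mAP is then the mean of this quantity over the 11 seed classes, which is how the 84.56% and 98.52% figures would be aggregated.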