Funding: This work was funded by the Innovative Research Program of the International Research Center of Big Data for Sustainable Development Goals (Grant No. CBAS2022IRP04), the Sichuan Natural Resources Department Project (Grant No. 510201202076888), the Project of the Geological Exploration Management Department of the Ministry of Natural Resources (Grant No. 073320180876/2), the Key Research and Development Program of Guangxi (Guike-AB22035060), the National Natural Science Foundation of China (Grant No. 42171291), and the Chengdu University of Technology Postgraduate Innovative Cultivation Program: Tunnel Geothermal Disaster Susceptibility Evaluation in Sichuan-Tibet Railway Based on Deep Learning (CDUT2022BJCX015).
Abstract: Remote sensing and deep learning are widely combined in tasks such as urban planning and disaster prevention. However, owing to interference from density, overlap, and occlusion, tiny object detection in remote sensing images has long been a difficult problem. We therefore propose a novel TO-YOLOX (Tiny Object-You Only Look Once) model. TO-YOLOX possesses a MiSo (Multiple-in-Single-out) feature fusion structure with a spatial-shift design; the model balances positive and negative samples and enhances information interaction among local patches of remote sensing images. TO-YOLOX utilizes an adaptive IOU-T (Intersection over Union-Tiny) loss to improve the localization accuracy of tiny objects, and it applies a Group-CBAM (group convolutional block attention module) attention mechanism to enhance the perception of tiny objects in remote sensing images. To verify the effectiveness and efficiency of TO-YOLOX, we evaluated it on three aerial-photography tiny object detection datasets, namely VisDrone2021, Tiny Person, and DOTA-HBB, recording mean average precision (mAP) values of 45.31% (+10.03%), 28.9% (+9.36%), and 63.02% (+9.62%), respectively. In recognizing tiny objects, TO-YOLOX exhibits a stronger ability than Faster R-CNN, RetinaNet, YOLOv5, YOLOv6, YOLOv7, and YOLOX, while remaining fast to compute.
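The abstract does not specify how the adaptive IOU-T loss is computed; as a point of reference, the sketch below shows only the generic IoU term that IoU-based localization losses build on. The box format (x1, y1, x2, y2) and the plain `1 - IoU` loss are assumptions for illustration, not the paper's formulation.

```python
# Hypothetical sketch of an IoU-based localization loss (pure Python).
# Boxes are axis-aligned, given as (x1, y1, x2, y2).

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    """Basic IoU loss: 1 - IoU (smaller is better)."""
    return 1.0 - iou(pred, target)

# A 10x10 prediction shifted 5 px against a 10x10 target overlaps by
# half its width, so IoU = 50 / 150 = 1/3 and the loss is 2/3:
print(round(iou_loss((0, 0, 10, 10), (5, 0, 15, 10)), 4))  # → 0.6667
```

For tiny objects the gradient of such a loss vanishes once boxes stop overlapping (IoU = 0), which is one motivation for adaptive variants like the IOU-T loss described above.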
Funding: This work was supported by the National Key R&D Program of China (Grant Nos. 2018YFB2101100 and 2019YFB2101600), the National Natural Science Foundation of China (Grant No. 62176016), the Guizhou Province Science and Technology Project: Research and Demonstration of Science and Technology Big Data Mining Technology Based on Knowledge Graph (Qiankehe [2021] General 382), the Training Program of the Major Research Plan of the National Natural Science Foundation of China (Grant No. 92046015), and the Beijing Natural Science Foundation Program and Scientific Research Key Program of Beijing Municipal Commission of Education (Grant No. KZ202010025047).
Abstract: As one of the key technologies of intelligent vehicles, traffic sign detection remains a challenging task because of the tiny size of its target objects. To address this challenge, we present a novel detection network, improved from YOLOv3, for tiny traffic signs with high precision in real time. First, a visual multi-scale attention module (MSAM), a lightweight yet effective module, is devised to fuse multi-scale feature maps with channel weights and spatial masks. It increases the representational power of the network by emphasizing useful features and suppressing unnecessary ones. Second, we effectively exploit fine-grained features of tiny objects from the shallower layers by modifying the Darknet-53 backbone and adding one prediction head to YOLOv3. Finally, a receptive field block is added to the neck of the network to broaden the receptive field. Experiments demonstrate the effectiveness of our network in both quantitative and qualitative terms: its mAP@0.5 reaches 0.965 and its detection speed is 55.56 FPS for 512 × 512 images on the challenging Tsinghua-Tencent 100K (TT100K) dataset.
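The channel-weight and spatial-mask idea behind modules like MSAM can be sketched in CBAM style: pool over space to reweight channels, then pool over channels to mask spatial positions. The sketch below is a minimal NumPy illustration of that two-step pattern; the actual MSAM fuses multiple scales and uses learned MLP/convolution layers, which are deliberately omitted here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Apply CBAM-style attention to a (C, H, W) feature map:
    channel weights from global avg/max pooling over space, then a
    spatial mask from avg/max pooling over channels."""
    # Channel attention: pool each channel over space, squash to (C, 1, 1).
    avg_pool = feat.mean(axis=(1, 2))          # (C,)
    max_pool = feat.max(axis=(1, 2))           # (C,)
    ch_weights = sigmoid(avg_pool + max_pool)  # learned shared MLP omitted
    feat = feat * ch_weights[:, None, None]
    # Spatial attention: pool over channels, squash to (1, H, W).
    avg_map = feat.mean(axis=0)                # (H, W)
    max_map = feat.max(axis=0)                 # (H, W)
    sp_mask = sigmoid(avg_map + max_map)       # learned 7x7 conv omitted
    return feat * sp_mask[None, :, :]

feat = np.random.rand(8, 16, 16)
out = channel_spatial_attention(feat)
print(out.shape)  # → (8, 16, 16)
```

Because both the channel weights and the spatial mask lie in (0, 1), the module only rescales activations; the network learns (via the omitted MLP/conv parameters) which features to emphasize and which to suppress.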