Funding: supported by the Natural Science Foundation of Sichuan, China (No. 2022NSFSC0571); the Sichuan Science and Technology Program (No. 2018JY0273, No. 2019YJ0532); the V.C. & V.R. Key Lab of Sichuan Province (No. SCVCVR2020.05VS); and the China Scholarship Council (No. 201908510026).
Abstract: In the field of traffic sign recognition, traffic signs usually occupy very small areas of the input image. Most object detection algorithms directly resize the original image to a fixed input size during detection, which leads to the loss of small-object information. Additionally, classification tasks are more sensitive to information loss than localization tasks. This paper proposes a novel traffic sign recognition approach that incorporates a lightweight pre-locator network and a refined classification network. The pre-locator network locates the sub-regions containing traffic signs in the original image, and the refined classification network performs fine-grained recognition within those sub-regions. Moreover, an innovative module (named SPP-ST) is proposed, which combines the Spatial Pyramid Pooling (SPP) module and the Swin-Transformer module into a new feature extractor that learns the spatial characteristics of traffic signs effectively. Experimental results show that the proposed method is superior to state-of-the-art methods (82.1 mAP on 218 categories of the TT100K dataset, an improvement of 19.7 percentage points over the previous method). Both the result analysis and the output visualizations further demonstrate the effectiveness of the proposed method. The source code and datasets of this work are available at https://github.com/DijiesitelaQ/TSOD.
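The abstract does not give the internals of SPP-ST, but the SPP half of the module is a well-known operation: the feature map is divided into bins at several pyramid levels, each bin is max-pooled, and the results are concatenated into a fixed-length vector regardless of input size. The sketch below is a minimal NumPy illustration of that pooling idea only, not the paper's actual SPP-ST implementation; the function name `spp_pool` and the level set `(1, 2, 4)` are illustrative assumptions.

```python
import numpy as np

def spp_pool(feat, levels=(1, 2, 4)):
    """Spatial Pyramid Pooling sketch: max-pool a C x H x W feature map
    into l x l bins at each pyramid level l and concatenate the results.
    Output length is C * sum(l*l for l in levels), independent of H and W."""
    c, h, w = feat.shape
    out = []
    for l in levels:
        # bin edges that cover the whole map even when l does not divide H or W
        ys = np.linspace(0, h, l + 1, dtype=int)
        xs = np.linspace(0, w, l + 1, dtype=int)
        for i in range(l):
            for j in range(l):
                bin_ = feat[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                out.append(bin_.max(axis=(1, 2)))  # per-channel max in this bin
    return np.concatenate(out)

feat = np.random.rand(8, 13, 17)   # arbitrary spatial size
vec = spp_pool(feat)
print(vec.shape)                   # (8 * (1 + 4 + 16),) = (168,)
```

Because the output length depends only on the channel count and the pyramid levels, features from differently sized sign sub-regions can be fed to a fixed-size classifier head.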
Abstract: The phenotypic parameters of maize ears are important indicators of maize growth status, which directly affects maize yield and quality. To enable the vision system of an unmanned inspection robot to acquire maize phenotypic parameters automatically and at high throughput, this study proposes SwinT-YOLACT, a maize-ear segmentation model based on YOLACT (You Only Look At Coefficients) that balances accuracy and speed. First, Swin-Transformer is used as the backbone feature-extraction network to improve the model's feature-extraction capability. Second, an efficient channel attention mechanism is introduced before the feature pyramid network to suppress redundant feature information and strengthen the fusion of key features. Finally, the smoother Mish activation function replaces the model's original ReLU activation, further improving accuracy while maintaining the original speed. The model was trained and tested on a self-built maize-ear dataset. Experimental results show that SwinT-YOLACT achieves a mask mean average precision of 79.43% at an inference speed of 35.44 frames/s. Compared with the original YOLACT and its improved variant YOLACT++, mask mean average precision improves by 3.51 and 3.38 percentage points, respectively; compared with YOLACT, YOLACT++, and Mask R-CNN, inference speed improves by 3.39, 2.58, and 28.64 frames/s, respectively. The model segments maize ears well and is suitable for deployment on the vision systems of unmanned inspection robots, providing technical support for monitoring maize growth status.
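The Mish activation mentioned above has the standard closed form Mish(x) = x · tanh(softplus(x)). The sketch below illustrates that definition and its contrast with ReLU using only the standard library; it is a didactic scalar version, not the model's actual (tensor-based) activation layer.

```python
import math

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x):
    # Mish(x) = x * tanh(softplus(x)); smooth and non-monotonic,
    # it keeps a small negative response where ReLU outputs exactly 0
    return x * math.tanh(softplus(x))

def relu(x):
    return max(x, 0.0)

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):.4f}  mish={mish(x):.4f}")
```

The smoothness of Mish (no hard kink at zero, nonzero gradient for negative inputs) is the property the abstract credits for the accuracy gain over ReLU at unchanged inference speed.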