Abstract: In the field of object detection, the family of loss functions based on intersection over union (IoU) has certain limitations, leaving room to improve the accuracy and stability of bounding box regression. To address this, a bounding box regression loss function based on a nonlinear Gaussian squared distance is proposed. First, the bounding box is modeled as a Gaussian distribution, jointly accounting for three factors: overlap, center-point distance, and aspect ratio. Then, a Gaussian squared distance is proposed to measure the gap between probability distributions. Finally, a nonlinear function consistent with the optimization trend is designed to convert the Gaussian squared distance into a loss function well suited to neural network learning. Experimental results show that, compared with the IoU loss, the proposed method improves the mean average precision by 0.3%, 1.1%, and 2.3% on the mask region-based convolutional neural network, the one-stage fully convolutional object detector, and the adaptive feature selection object detector, respectively, demonstrating that the method effectively improves object detection performance while also benefiting high-precision bounding box regression.
Funding: This study was supported by the National Natural Science Foundation of China (No. 61861007), the Guizhou Provincial Department of Education Innovative Group Project (QianJiaohe KY[2021]012), and the Guizhou Science and Technology Plan Project (Guizhou Science Support [2023] General 412).
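As an illustrative sketch only (the abstract does not give the paper's exact formulas), the Gaussian modeling step can be approximated as follows, assuming boxes are given as (cx, cy, w, h) and using the closed-form squared 2-Wasserstein distance between axis-aligned Gaussians as a stand-in for the paper's Gaussian squared distance; the nonlinear transform shown (a log-based mapping into [0, 1)) is likewise an assumption, not the paper's design:

```python
import math

def box_to_gaussian(box):
    # (cx, cy, w, h) -> mean (cx, cy) and per-axis standard deviations (w/2, h/2)
    cx, cy, w, h = box
    return (cx, cy), (w / 2.0, h / 2.0)

def gaussian_sq_distance(box_a, box_b):
    # Squared 2-Wasserstein distance between two axis-aligned Gaussians:
    # ||mu_a - mu_b||^2 + ||sigma_a - sigma_b||^2 (closed form for diagonal covariances)
    (mx1, my1), (sx1, sy1) = box_to_gaussian(box_a)
    (mx2, my2), (sx2, sy2) = box_to_gaussian(box_b)
    center_term = (mx1 - mx2) ** 2 + (my1 - my2) ** 2   # penalizes center offset
    shape_term = (sx1 - sx2) ** 2 + (sy1 - sy2) ** 2    # penalizes width/height mismatch
    return center_term + shape_term

def gaussian_loss(box_a, box_b, tau=1.0):
    # Nonlinear transform mapping distance in [0, inf) to a bounded loss in [0, 1);
    # identical boxes give distance 0 and hence loss 0
    d2 = gaussian_sq_distance(box_a, box_b)
    return 1.0 - 1.0 / (tau + math.log1p(d2))
```

Because the distance is defined on distribution parameters rather than on the overlap area, it remains informative even when the boxes do not overlap, which is one motivation for Gaussian-based losses over plain IoU.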
Abstract: Owing to the complex environment of university laboratories and their dense flow of personnel, irregular behavior by personnel can easily create safety risks. Monitoring with mainstream detection algorithms suffers from low detection accuracy and slow speed. As a result, personnel behavior is currently managed mainly through institutional constraints, education and training, and on-site supervision, which is time-consuming and ineffective. Given this situation, this paper proposes an improved You Only Look Once version 7 (YOLOv7) to quickly detect irregular behaviors of laboratory personnel while maintaining high detection accuracy. First, to better capture the shape features of targets, deformable convolutional networks (DCN) replace conventional convolutions in the backbone of the model, improving detection accuracy and speed. Second, to enhance the extraction of important features and suppress useless ones, this paper proposes a new convolutional block attention module with efficient channel attention (CBAM_E), embedded in the neck network, to improve the model's ability to extract features from complex scenes. Finally, to reduce the influence of the angle factor and improve bounding box regression accuracy, this paper proposes a new α-SCYLLA intersection over union (α-SIoU) to replace the complete intersection over union (CIoU), improving regression accuracy while increasing convergence speed. Comparison experiments on public and homemade datasets show that the improved algorithm outperforms the original algorithm on all evaluation indexes, with increases of 2.92% in precision, 4.14% in recall, 0.0356 in the weighted harmonic mean, and 3.60% in mAP@0.5, along with reductions in parameter count and complexity. Compared with mainstream algorithms, the improved algorithm has higher detection accuracy, faster convergence, and better recognition in practice, indicating the effectiveness of the improvements in this paper and their potential for practical application in laboratory scenarios.
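The α-power idea behind α-SIoU can be sketched in isolation. The snippet below is a minimal illustration, not the paper's loss: it implements plain IoU for axis-aligned boxes given as (x1, y1, x2, y2) and the α-power generalization 1 − IoU^α; the SIoU-specific angle, distance, and shape cost terms that the paper combines with this power are omitted here:

```python
def iou(box_a, box_b):
    # Intersection over union for axis-aligned boxes (x1, y1, x2, y2)
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width, 0 if disjoint
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height, 0 if disjoint
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def alpha_iou_loss(box_a, box_b, alpha=3.0):
    # Alpha-power generalization: for alpha > 1 the gradient is re-weighted
    # toward high-IoU examples, which tends to sharpen high-precision regression
    return 1.0 - iou(box_a, box_b) ** alpha
```

With alpha = 1 this reduces to the ordinary IoU loss, so the power is a drop-in generalization of the IoU-family losses discussed in the abstract.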
Abstract: The efficient and accurate scene text (EAST) detection algorithm is fast and structurally simple, but owing to the particular structure of text, smaller text instances tend to be missed during detection, while longer text is detected incompletely. To address these problems of the EAST algorithm, a new natural scene text detection model is proposed. The method uses the neural architecture search feature pyramid network (NAS-FPN) to design a search space that covers all possible cross-scale connections for extracting features from natural scene images. The output layer is modified in two ways: on one hand, generalized intersection over union (GIoU) is used as a metric to improve bounding box regression; on the other, the loss function is modified to address class imbalance. The model outputs detection boxes for text regions of arbitrary orientation in scene images. The method achieves good detection results on the ICDAR2013 and ICDAR2015 datasets, with clear improvements over other text detection methods.
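The GIoU metric mentioned above has a standard closed form for axis-aligned boxes: GIoU = IoU − |C \ (A ∪ B)| / |C|, where C is the smallest box enclosing both A and B. A minimal self-contained sketch, assuming boxes are given as (x1, y1, x2, y2) (the abstract's model regresses oriented text boxes, for which the axis-aligned form would need adaptation):

```python
def giou(box_a, box_b):
    # GIoU = IoU - |C \ (A ∪ B)| / |C|, with C the smallest enclosing box
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # Smallest enclosing box C spans the extreme coordinates of both boxes
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    iou_val = inter / union if union > 0 else 0.0
    return iou_val - (c_area - union) / c_area
```

Unlike plain IoU, GIoU is negative for disjoint boxes and grows as they approach each other, so it still provides a useful regression signal when predictions do not yet overlap the target text region.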