Sea cucumber detection is widely recognized as the key to automatic culture. The underwater light environment is complex, and sea cucumbers are easily obscured by mud, sand, reefs, and other underwater organisms. To date, research on sea cucumber detection has mostly concentrated on distinguishing prospective objects from the background. However, the key to proper distinction is the effective extraction of sea cucumber feature information. In this study, the edge-enhanced scaling You Only Look Once-v4 (YOLOv4) (ESYv4) was proposed for sea cucumber detection. To emphasize target features and reduce the misjudgment of sea cucumbers caused by varying underwater hues and brightness, a bidirectional cascade network (BDCN) was used to extract an overall edge greyscale map from the image, which was then combined with the original RGB image as the detection input. Meanwhile, the backbone of the YOLOv4 detection model was scaled, reducing the number of parameters to 48% of the original. Validation results on 783 images indicated that the detection precision for positive sea cucumber samples reached 0.941. This improvement shows that the algorithm more effectively exploits the edge feature information of the target, and it thus contributes to the automatic multi-objective detection of underwater sea cucumbers.
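The edge-enhancement step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the BDCN edge map is available as a single-channel probability array in [0, 1], and it interprets "combined with the original RGB image" as element-wise addition of the edge response onto each channel (the function name `fuse_edge_with_rgb` is ours, not from the paper):

```python
import numpy as np

def fuse_edge_with_rgb(rgb, edge):
    """Add a single-channel edge greyscale map onto each RGB channel.

    rgb  : (H, W, 3) uint8 image
    edge : (H, W) float edge-probability map in [0, 1],
           e.g. the output of an edge detector such as BDCN
    """
    # Broadcast the edge map across the three colour channels,
    # scale it to the 0-255 range, and clip to valid pixel values.
    boosted = rgb.astype(np.float32) + edge[..., None] * 255.0
    return np.clip(boosted, 0.0, 255.0).astype(np.uint8)
```

The fused array keeps the original image shape, so it can be fed to the detector in place of the plain RGB frame; an alternative design would concatenate the edge map as a fourth input channel instead of summing it.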
To improve the speed and accuracy of traffic sign recognition, a recognition method based on the Yolov4 (You Only Look Once version 4) deep learning framework was proposed and compared with the SSD (Single Shot MultiBox Detector) and Yolov3 (You Only Look Once version 3) algorithms; the proposed model's parameter count increased significantly. The Yolov4 backbone feature-extraction network and multi-scale outputs were therefore further adjusted, yielding a lightweight Yolov4 algorithm. Simulation experiments show that this algorithm detects traffic signs quickly and effectively, offering both real-time performance and practical applicability.
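Lightweight backbones of the kind proposed above typically cut parameters by replacing standard convolutions with cheaper factorized ones. The abstract does not specify the exact scaling used, so the following is only a generic illustration of why such substitutions shrink a model, comparing the weight count of a standard convolution with a depthwise-separable one:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Weights in a depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out
```

For a 3 x 3 layer with 32 input and 64 output channels, the standard form needs 18,432 weights versus 2,336 for the separable form, roughly an 8x reduction at that layer.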
To address the shortcomings of current gesture tracking methods in accuracy and speed, a real-time hand tracking method combining the deep-learning You Only Look Once version 4 (YOLOv4) model with a Kalman filter was proposed. The new algorithm addresses several problems in hand tracking technology, including detection speed, accuracy, and stability. The convolutional neural network (CNN) model YOLOv4 detects the target in the current frame, and a Kalman filter predicts the target's next position and bounding box size from its current position. The target is tracked by matching this estimate against the detection in the next frame, and the real-time hand movement track is then displayed. The experimental results validate the proposed algorithm, which achieves an overall success rate of 99.43% at 41.822 frame/s, outperforming the comparison algorithms.
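The predict/update cycle described above can be sketched with a standard linear Kalman filter. This is a generic constant-velocity formulation over the box state (cx, cy, w, h), not the paper's tuned implementation; the class name and noise settings are illustrative assumptions:

```python
import numpy as np

class BoxKalman:
    """Constant-velocity Kalman filter over a box (cx, cy, w, h)."""

    def __init__(self, box, dt=1.0):
        self.x = np.array(list(box) + [0.0] * 4)  # state: box + velocities
        self.P = np.eye(8)                        # state covariance
        self.F = np.eye(8)                        # transition: pos += vel * dt
        for i in range(4):
            self.F[i, i + 4] = dt
        self.H = np.eye(4, 8)                     # we observe only the box
        self.Q = np.eye(8) * 1e-2                 # process noise (assumed)
        self.R = np.eye(4) * 1e-1                 # measurement noise (assumed)

    def predict(self):
        """Project the state one frame ahead; returns the predicted box."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, z):
        """Correct the state with a detected box z from the next frame."""
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(8) - K @ self.H) @ self.P
        return self.x[:4]
```

Per frame, the tracker calls `predict()` to get the search estimate, matches it to the detector's output (e.g. by IoU), and then calls `update()` with the matched detection.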
In industrial production, safety helmets provide vital protection for the head. On construction sites, checking whether workers are wearing helmets relies mainly on manual inspection, which is highly inefficient. To address helmet detection and recognition on construction sites, a helmet detection method based on a deep cascade network model is proposed. First, a You Only Look Once version 4 (YOLOv4) detection network detects the workers; then an attention-mechanism residual classification network classifies each worker's ROI to determine whether a helmet is worn. The method was implemented on Ubuntu 18.04 with the PyTorch deep learning framework and was trained and tested on a self-built industrial-scene helmet dataset. Experimental results show that, compared with the YOLOv4 algorithm alone, the deep cascade network model improves accuracy by 2 percentage points, effectively enhancing helmet detection for construction workers.
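The two-stage cascade above (detect people, then classify each ROI) has a simple skeleton. The sketch below uses stand-in callables for the two networks; `detect_people`, `classify_helmet`, and `cascade_helmet_check` are our illustrative names, not APIs from the paper:

```python
def cascade_helmet_check(frame, detect_people, classify_helmet):
    """Two-stage cascade: person detection, then helmet classification per ROI.

    detect_people(frame)  -> list of (x1, y1, x2, y2) person boxes
    classify_helmet(roi)  -> True if a helmet is worn
    Both callables stand in for the YOLOv4 detector and the
    attention-residual classifier described in the abstract.
    """
    results = []
    for (x1, y1, x2, y2) in detect_people(frame):
        roi = frame[y1:y2, x1:x2]              # crop the person region
        results.append(((x1, y1, x2, y2), classify_helmet(roi)))
    return results
```

Keeping the classifier separate from the detector is what lets the second stage specialize on small head regions instead of whole scenes, which is where the reported 2-point accuracy gain comes from.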
Funding: supported by the Scientific Research Project of Tianjin Education Commission (Nos. 2020KJ091, 2018KJ184), the National Key Research and Development Program of China (No. 2020YFD0900600), the Earmarked Fund for CARS (No. CARS-47), and the Tianjin Mariculture Industry Technology System Innovation Team Construction Project (No. ITTMRS2021000).