Funding: the National Natural Science Foundation of China (No. 61976080), the Academic Degrees & Graduate Education Reform Project of Henan Province (No. 2021SJGLX195Y), the Teaching Reform Research and Practice Project of Henan Undergraduate Universities (No. 2022SYJXLX008), and the Key Project on Research and Practice of Henan University Graduate Education and Teaching Reform (No. YJSJG2023XJ006).
Abstract: Unsupervised multi-modal image translation is an emerging area of computer vision whose goal is to transform an image from a source domain into many diverse styles in a target domain. However, state-of-the-art approaches employ a multi-generator mechanism to model the different domain mappings, which leads to inefficient network training and mode collapse, and in turn to poor diversity in the generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that performs multi-modal translation with a single generator. Specifically, a domain code is first introduced to explicitly control the different generation tasks. Second, the framework incorporates a squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. Qualitative and quantitative experiments on several unpaired benchmark image translation datasets demonstrate the advantages of the proposed method over existing techniques. Overall, the experimental results show that the proposed method is versatile and scalable.
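As a concrete illustration of the squeeze-and-excitation mechanism this abstract refers to, the PyTorch sketch below implements a standard SE block. The reduction ratio of 16 and the layer layout are common defaults assumed for the example, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction=16 is an assumed default
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                     # excitation: per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # rescale feature maps channel-wise

# Usage: gate a (batch, channel, height, width) feature map.
out = SEBlock(256)(torch.randn(2, 256, 32, 32))
```

The gate is cheap (two small linear layers) and can be dropped into any convolutional stage, which is why both papers in this listing reuse it.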
Abstract: In autonomous-driving scenarios, applying YOLOv5 to object detection yields a clear performance gain over previous versions, but detection accuracy at high running speed remains insufficient. This paper therefore proposes a vehicle-side object detection method based on an improved YOLOv5. To avoid manually designing initial anchor-box sizes when training on different datasets, adaptive anchor-box computation is introduced. A squeeze-and-excitation (SE) module is added to the backbone to select channel-wise feature information and strengthen feature representation. To improve accuracy on objects of different sizes, an attention mechanism is fused into the detection network: a convolutional block attention module (CBAM) is integrated into the Neck so that the model attends to important features when detecting objects of different sizes, improving feature extraction. A spatial pyramid pooling (SPP) module is used in the backbone so that the model accepts input images of arbitrary aspect ratio and size. For activation, the Hardswish function is applied after convolution operations throughout the network. For the loss function, CIoU is used as the bounding-box regression loss to remedy low localization accuracy and slow box-regression convergence during training. Experimental results on the KITTI 2D dataset show that the improved detection model raises precision by 2.5%, recall by 5.1%, and mean average precision (mAP) by 2.3%.
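The two PyTorch sketches below illustrate components this abstract names: the CBAM attention module and the CIoU regression loss. Both are minimal reference implementations of the published formulations, not the paper's code; the reduction ratio, spatial kernel size, and the (x1, y1, x2, y2) box convention are assumptions made for the example.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional block attention: a channel gate followed by a spatial gate."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(             # shared MLP for avg- and max-pooled statistics
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```

CIoU extends IoU with a center-distance penalty and an aspect-ratio consistency term, which is what speeds up box regression relative to a plain IoU loss:

```python
import math
import torch

def ciou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """CIoU loss for (N, 4) boxes given as (x1, y1, x2, y2); returns 1 - CIoU per pair."""
    # Intersection and union for plain IoU.
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = inter_w * inter_h
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared center distance over the squared diagonal of the smallest enclosing box.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4

    # Aspect-ratio consistency term, weighted by a trade-off factor alpha.
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wt, ht = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(wt / (ht + eps)) - torch.atan(wp / (hp + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
```

Because the distance and aspect-ratio penalties are nonzero even when boxes do not overlap, gradients remain informative early in training, which matches the abstract's claim of faster box-regression convergence.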