Abstract
With the rise of intelligent transportation, aerial vehicle detection based on UAV imagery has found more and more applications. CNN-based object detection methods such as Faster R-CNN and YOLO have reached a high level of accuracy in vehicle detection, but they require a large amount of labeled data for training. Obtaining training samples through image generation is a feasible solution. However, general generative models either generate only vehicles without any background information, or fit only the background while producing severely distorted vehicles. To address this, building on pix2pixGAN, we propose a multi-condition constrained generative adversarial network that generates vehicles with positional annotation information in real aerial scene images. By setting up multiple discriminators in the generative adversarial network to separately constrain the fitting of the background and the generation of vehicles in the image, the noise regions preset in the image are converted into realistic vehicle images. Comparative experiments show that the proposed vehicle generation model can generate fairly realistic vehicles in aerial images.
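The abstract only describes the multi-discriminator idea at a high level. The following PyTorch snippet is a minimal sketch, not the authors' implementation: a pix2pix-style generator translates an aerial image whose vehicle regions are pre-filled with noise, while one discriminator judges the whole image (background fidelity) and a second judges crops around the annotated vehicle boxes (vehicle realism). All module architectures, loss weights, and the crop-based vehicle branch are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy stand-in (no skip connections) for the pix2pix U-Net generator."""
    def __init__(self, ch=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, 64, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(64, ch, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        return self.dec(self.enc(x))

class PatchDiscriminator(nn.Module):
    """Toy PatchGAN-style discriminator, reused for the background and vehicle branches.
    Input is the conditioning image concatenated with the real/fake image (6 channels)."""
    def __init__(self, ch=6):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                                 nn.Conv2d(64, 1, 4, 1, 1))
    def forward(self, x):
        return self.net(x)

def crop_boxes(img, boxes, size=64):
    """Crop the annotated vehicle boxes (x, y, w, h) and resize them to a fixed size."""
    crops = [F.interpolate(img[:, :, y:y + h, x:x + w], size=(size, size),
                           mode='bilinear', align_corners=False)
             for (x, y, w, h) in boxes]
    return torch.cat(crops, dim=0)

G = TinyGenerator()
D_bg, D_veh = PatchDiscriminator(), PatchDiscriminator()

# One illustrative generator update on dummy data (hypothetical shapes and boxes).
cond = torch.rand(1, 3, 128, 128)              # aerial image with noise-filled vehicle regions
real = torch.rand(1, 3, 128, 128)              # corresponding real aerial image
boxes = [(16, 16, 32, 32), (64, 80, 32, 32)]   # hypothetical vehicle annotations

fake = G(cond)
pred_bg = D_bg(torch.cat([cond, fake], dim=1))                                   # whole image
pred_veh = D_veh(torch.cat([crop_boxes(cond, boxes), crop_boxes(fake, boxes)], dim=1))  # vehicle crops
adv_bg = F.binary_cross_entropy_with_logits(pred_bg, torch.ones_like(pred_bg))
adv_veh = F.binary_cross_entropy_with_logits(pred_veh, torch.ones_like(pred_veh))
loss_G = adv_bg + adv_veh + 100.0 * F.l1_loss(fake, real)   # pix2pix-style L1 term; weight assumed
loss_G.backward()

Each discriminator would be trained with the usual real/fake objective on its own input (full images for D_bg, vehicle crops for D_veh); only the generator step is shown here.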
Authors
TAO Xiao-li, LIU Ning-zhong (School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)
Source
Computer Technology and Development (计算机技术与发展), 2019, No. 12, pp. 162-166 (5 pages)
Funding
National Natural Science Foundation of China (61375021)
Open Fund of the Graduate Innovation Base (Laboratory) of Nanjing University of Aeronautics and Astronautics (kfjj20171608)
Fundamental Research Funds for the Central Universities