
Hybrid distortion image correction method based on improved U-Net networks
Abstract: Image geometric distortion correction is a key pre-processing step for many computer vision applications. Current deep-learning-based geometric distortion correction methods mainly address the correction of a single distortion type. This paper therefore proposes a hybrid distortion correction method based on an improved U-Net. First, a method for constructing hybrid-distortion image datasets is proposed, addressing the scarcity of training data and the limitation of single distortion types. Second, a U-Net combined with a spatial attention mechanism is used for image feature extraction and reconstruction of the distortion coordinate map, turning image correction into a per-pixel prediction of the coordinate displacements of the distorted image; a loss function combining a coordinate-difference loss and an image-resampling loss is designed, effectively improving correction accuracy. Finally, ablation experiments verify the performance of each module of the proposed method. Compared with recent deep-learning-based distortion correction methods, the experimental results show that the proposed method performs better on both quantitative metrics and subjective evaluation, achieving a mean absolute error of 0.2519 for the spatial coordinate correction of distorted images. Correction experiments on optical images acquired with a GoPro camera further verify the effectiveness of the proposed method in practice.
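The combined objective described in the abstract can be sketched as follows. The paper's exact loss weights, norm choices, and sampling scheme are not given on this page, so this is a minimal NumPy sketch assuming mean absolute error for both terms and nearest-neighbour resampling; the names `combined_loss`, `w_coord`, and `w_img` are illustrative, not from the paper.

```python
import numpy as np

def resample(image, coord_map):
    """Warp `image` by sampling it at the source positions in `coord_map`.

    image:     (H, W) array.
    coord_map: (H, W, 2) array of (y, x) source coordinates per output pixel.
    Nearest-neighbour sampling, clipped to the image bounds.
    """
    h, w = image.shape
    ys = np.clip(np.rint(coord_map[..., 0]).astype(int), 0, h - 1)
    xs = np.clip(np.rint(coord_map[..., 1]).astype(int), 0, w - 1)
    return image[ys, xs]

def combined_loss(pred_coords, gt_coords, distorted, original,
                  w_coord=1.0, w_img=1.0):
    """Weighted sum of a coordinate-difference loss and an image-resampling loss."""
    # Per-pixel MAE between predicted and ground-truth coordinate maps.
    coord_loss = np.mean(np.abs(pred_coords - gt_coords))
    # Photometric MAE between the image warped by the predicted map and the original.
    img_loss = np.mean(np.abs(resample(distorted, pred_coords).astype(float)
                              - original.astype(float)))
    return w_coord * coord_loss + w_img * img_loss
```

With an identity coordinate map and an undistorted input, both terms vanish and the loss is zero; any predicted displacement away from the ground truth increases both terms.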
Authors: SONG Wei, SHI Li-biao, GENG Li-jia, MA Zhen-ling, DU Yan-ling (College of Information, Shanghai Ocean University, Shanghai 201306, China; East China Sea Standard Metrology Center, State Oceanic Administration, Shanghai 201306, China)
Published in: Chinese Journal of Liquid Crystals and Displays (《液晶与显示》), indexed in CAS and CSCD, Peking University Core Journal, 2023, Issue 11, pp. 1580-1589 (10 pages)
Funding: National Natural Science Foundation of China (No. 42101443, No. 61972240).
Keywords: hybrid distortion correction; U-Net; spatial attention; coordinate difference loss; resampling loss