Abstract
To improve the accuracy and robustness of visibility deep learning models under small-sample conditions, a multi-modal visibility deep learning method based on visible and far-infrared images is proposed. First, image registration is used to obtain visible/far-infrared input image pairs with identical field of view and resolution. Then, a multi-modal feature-fusion network with a three-branch parallel structure is constructed: atmospheric features of different natures are extracted from the visible image, the far-infrared image, and their accumulated feature map, and the feature information of the branches achieves modal complementarity and fusion through the network structure. Finally, the visibility level corresponding to the image scene is output at the end of the network. Real outdoor visible/far-infrared images collected with a binocular camera under different weather conditions are used as experimental data. Experimental results on multiple performance metrics and from multiple perspectives show that, compared with conventional single-modal visibility deep learning models, the multi-modal model significantly improves the accuracy and robustness of visibility detection under small-sample conditions.
In order to enhance the accuracy and robustness of visibility deep learning models on small training datasets, this paper proposes a multi-modal visibility deep learning model based on visible-infrared image pairs. Unlike conventional visibility deep learning models, visible-infrared image pairs are used as the observation data. First, the raw dataset is preprocessed with image registration to generate visible-infrared image pairs of identical resolution and field of view. Then a new convolutional neural network comprising three parallel CNN streams is constructed; the feature maps of each stream are extracted and fused by propagation from the shallow layers to the deep layers. Finally, the visibility level is classified by a softmax layer from the feature descriptor output by the fully connected layer. The experimental results demonstrate that, compared with conventional visibility deep learning models, both accuracy and robustness are strongly enhanced by the proposed method, especially for small training datasets.
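The three-branch fusion described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the branch function, layer sizes, and the use of an element-wise sum as the "accumulated feature map" for the third branch are illustrative assumptions; each `conv_branch` stands in for a full CNN stream.

```python
import numpy as np

def conv_branch(x, w):
    """Stand-in for one CNN stream: global-average-pool, then a ReLU linear map."""
    pooled = x.mean(axis=(0, 1))        # (C,) global average pooling over H, W
    return np.maximum(pooled @ w, 0.0)  # ReLU(pooled @ W)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
H, W, C, D, K = 64, 64, 3, 16, 5        # image size, channels, branch feature dim, visibility levels

visible = rng.random((H, W, C))         # registered visible image (placeholder data)
infrared = rng.random((H, W, C))        # registered far-infrared image (placeholder data)
accumulated = visible + infrared        # assumed input of the third branch

# Randomly initialized weights: one per branch, plus a classifier head.
w_vis, w_ir, w_acc = (rng.standard_normal((C, D)) for _ in range(3))
w_cls = rng.standard_normal((3 * D, K))

# Fuse the three branch outputs by concatenation, then classify with softmax.
features = np.concatenate([
    conv_branch(visible, w_vis),
    conv_branch(infrared, w_ir),
    conv_branch(accumulated, w_acc),
])
probs = softmax(features @ w_cls)       # probability over K visibility levels
level = int(np.argmax(probs))           # predicted visibility level
```

In the paper's network the fusion happens layer by layer from shallow to deep; here it is collapsed into a single concatenation at the end purely to show the data flow from the two modalities to a visibility-level prediction.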
Authors
Shen Kecheng
Shi Quan
Wang Han
Shen Kecheng, Shi Quan, Wang Han (School of Information Science and Technology, Nantong University, Nantong 226019; School of Transportation and Civil Engineering, Nantong University, Nantong 226019)
Source
《计算机辅助设计与图形学学报》
EI
CSCD
PKU Core (北大核心)
2021, No. 6, pp. 939-946 (8 pages)
Journal of Computer-Aided Design & Computer Graphics
Funding
National Natural Science Foundation of China (61872425)
Nantong Municipal Science and Technology Program (MS12019051).