Abstract
In practical applications, the mismatch between the event clock and the frame rate of a conventional camera often introduces a spatiotemporal discrepancy between the two modalities, making it difficult to obtain precisely aligned one-to-one event-image data pairs and thus preventing effective supervised training of the network. To address this spatiotemporal matching problem, an unsupervised video reconstruction method for event cameras is proposed that exploits the ability of cycle-consistent generative adversarial networks to learn the overall distribution of images, realizing unsupervised reconstruction from event data without spatiotemporally matched pairs. Experimental results show that, compared with existing event-camera-based video reconstruction methods, the proposed method improves on three metrics: structural similarity (SSIM), mean-square error (MSE), and the blind/referenceless image spatial quality evaluator (BRISQUE). Even in the absence of spatiotemporally matched data, the proposed method can reconstruct relatively clear, high-frame-rate video.
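The key property the abstract relies on is cycle consistency: mapping events to an image and back should recover the original events, so no paired supervision is needed. The following is a minimal, hypothetical sketch of that constraint only; the generators `G` and `F` and the matrices `W_g`, `W_f` are toy linear stand-ins for illustration, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two generators of a cycle-consistent GAN:
# G maps an event representation to an image, F maps an image back to events.
# In the paper's setting these would be CNNs; here they are fixed linear maps,
# with W_f chosen as the pseudo-inverse of W_g so the cycle is nearly closed.
W_g = rng.normal(size=(16, 16))
W_f = np.linalg.pinv(W_g)

def G(events):
    return events @ W_g

def F(image):
    return image @ W_f

def cycle_consistency_loss(events, image):
    # || F(G(events)) - events ||_1  +  || G(F(image)) - image ||_1
    loss_events = np.abs(F(G(events)) - events).mean()
    loss_image = np.abs(G(F(image)) - image).mean()
    return loss_events + loss_image

events = rng.normal(size=(4, 16))  # a batch of event representations
image = rng.normal(size=(4, 16))   # a batch of (flattened) images
print(cycle_consistency_loss(events, image))
```

Because the two toy maps are near-inverses, the loss is close to zero; during training, this term is minimized jointly with the adversarial losses that push `G`'s outputs toward the distribution of real images.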
Authors
LIU Fan; YU Lei (School of Electronic Information, Wuhan University, Wuhan 430072, China)
Source
Engineering Journal of Wuhan University (《武汉大学学报(工学版)》)
Indexed in: CAS, CSCD, Peking University Core Journals
2024, No. 8, pp. 1150-1159 (10 pages)
Funding
National Natural Science Foundation of China (Grant Nos. 62271354, 61871297)
Natural Science Foundation of Hubei Province (Grant No. 2021CFB467)