Adaptive Reconstruction for Distributed Compressive Video Sensing Based on Texture Features
Abstract: Distributed compressive video sensing (DCVS), with its hardware-friendly characteristics, has emerged in response to the new challenges posed by large-scale video data. Traditional analytical-model-based reconstruction methods for DCVS are computationally complex and struggle to meet real-time requirements, so deep learning techniques have gradually been introduced. However, existing deep learning-based reconstruction methods ignore the texture characteristics of frames, which limits reconstruction performance. Because frames within the same group of pictures are highly similar, adjacent frames can serve as references for the texture features of the current frame. To address this problem, a texture-feature-based adaptive reconstruction network for DCVS, named TF-DCVSNet, is proposed. Specifically, TF-DCVSNet uses the texture features of already-reconstructed adjacent frames to activate the reconstruction network modules of the current frame and perform adaptive reconstruction. Extensive experiments demonstrate the effectiveness of TF-DCVSNet.
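To make the mechanism concrete, the following is a minimal, hypothetical PyTorch-style sketch of texture-guided adaptive reconstruction; it is not the authors' published architecture. It assumes a block-wise gradient-energy score computed on the already-reconstructed adjacent frame, which then gates each block of the current frame's initial reconstruction into either a deeper or a lighter refinement branch. All names (block_texture_level, AdaptiveReconstructor), layer widths, block size, and threshold are illustrative placeholders.

# Hypothetical sketch: texture-guided adaptive reconstruction for DCVS.
# Assumptions (not from the paper): block-wise gradient energy of the
# reconstructed adjacent frame is the texture measure, and each block of the
# current frame is routed to a deep or shallow refinement branch accordingly.
import torch
import torch.nn as nn
import torch.nn.functional as F

def block_texture_level(ref_frame, block=32, thresh=0.01):
    # Per-block texture score of the reference frame, approximated by mean
    # gradient magnitude, thresholded into a binary high-texture mask.
    gx = ref_frame[..., :, 1:] - ref_frame[..., :, :-1]
    gy = ref_frame[..., 1:, :] - ref_frame[..., :-1, :]
    energy = F.pad(gx.abs(), (0, 1)) + F.pad(gy.abs(), (0, 0, 0, 1))
    score = F.avg_pool2d(energy, block)            # (B, 1, H/block, W/block)
    return (score > thresh).float()                # 1 = high-texture block

class AdaptiveReconstructor(nn.Module):
    # High-texture regions pass through a deeper branch, smooth regions
    # through a lighter one; the gate comes from the adjacent frame only.
    def __init__(self, ch=32):
        super().__init__()
        self.shallow = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
        self.deep = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, init_rec, ref_frame):
        gate = block_texture_level(ref_frame)
        gate = F.interpolate(gate, size=init_rec.shape[-2:], mode="nearest")
        # Residual refinement: each pixel takes the branch chosen for its block.
        return init_rec + gate * self.deep(init_rec) + (1 - gate) * self.shallow(init_rec)

# Toy usage (random tensors stand in for the measurement-domain initial
# reconstruction of a non-key frame and its reconstructed key frame):
frame = torch.rand(1, 1, 128, 128)
ref = torch.rand(1, 1, 128, 128)
out = AdaptiveReconstructor()(frame, ref)

The appeal of such a gate is that the adaptive decision costs only a pooled gradient map of a frame that has already been reconstructed, which is consistent with the low-complexity spirit of DCVS.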
Authors: CHEN Can (陈灿); ZHOU Chao (周超); ZHANG Dengyin (张登银) (College of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003, China)
Published in: Chinese Journal of Sensors and Actuators (《传感技术学报》), 2024, No. 1, pp. 58-63 (6 pages); indexed by CAS, CSCD, and the Peking University Core Journals list
Funding: National Natural Science Foundation of China (61872423); Natural Science Research Program of Jiangsu Higher Education Institutions (22KJB510008); Natural Science Foundation of Nanjing University of Posts and Telecommunications (NY221094); Scientific Research Start-up Fund for Introduced Talents of Nanjing University of Posts and Telecommunications (NY221023)
Keywords: distributed compressive video sensing; video reconstruction; deep learning; texture features