
易错网络中面向虚拟视点绘制的宏块模型
A significance model for virtual view synthesis in error-prone networks
Cited by: 2
Abstract: To reduce the impact of transmission distortion on virtual view quality in free-viewpoint video, a macroblock-level significance model applicable to both texture (colour) maps and depth maps is proposed. The model has two parts. First, by exploiting the temporal and spatial correlation of the coding structure and accounting for error propagation caused by lost packets, a recursive macroblock-level transmission distortion model is derived. Second, the effects of texture-map distortion and depth-map distortion on the synthesised virtual view are analysed separately, and a low-complexity significance model based on this synthesis distortion analysis is proposed. Experimental results show that, compared with random packet loss at the same packet loss rate (PLR), the proposed method markedly improves the objective quality of the virtual view: at a PLR of 20%, the average peak signal-to-noise ratio (PSNR) gain reaches up to 15.65 dB, and the subjective quality is close to that of transmission without losses.
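The abstract describes two components: a recursive macroblock-level estimate of expected transmission distortion under packet loss, and a significance score that weights texture and depth macroblocks by their estimated impact on the synthesised view, so that losses can be steered towards unimportant blocks without changing the PLR. The following is a minimal Python sketch of those two ideas under simplified assumptions; it is not the authors' implementation, and the attenuation factor alpha, the depth weight w_depth, the gradient term, and all function names are illustrative choices rather than quantities from the paper.

```python
# Minimal sketch (assumed, simplified) of the two ideas in the abstract:
#   1) recursive macroblock-level expected transmission distortion with
#      error propagation from the reference frame;
#   2) a significance score combining texture and depth concealment
#      distortion, used to steer packet losses at a fixed loss rate.
import numpy as np


def expected_transmission_distortion(d_conceal, propagated, plr, alpha=0.9):
    """Recursive per-macroblock expected distortion.

    d_conceal  : distortion if this MB's packet is lost and concealed
    propagated : expected distortion accumulated in its reference MB
                 (0 for intra-coded blocks)
    plr        : channel packet loss rate
    alpha      : assumed attenuation of propagated error
    """
    return plr * d_conceal + (1.0 - plr) * alpha * propagated


def significance(d_tex, d_depth, grad_tex, w_depth=0.5):
    """Illustrative per-MB significance for view synthesis.

    A lost texture MB distorts the synthesised view roughly in proportion
    to its own error; a lost depth MB shifts warped pixels, so its impact
    is scaled here by the local texture gradient (edges suffer most).
    w_depth is an assumed balancing weight.
    """
    return d_tex + w_depth * d_depth * grad_tex


def prioritised_drop(significances, plr):
    """Drop the fraction `plr` of MBs with the LOWEST significance,
    keeping the overall loss rate unchanged."""
    n_drop = int(round(plr * significances.size))
    order = np.argsort(significances.ravel())       # ascending
    mask = np.zeros(significances.size, dtype=bool)
    mask[order[:n_drop]] = True
    return mask.reshape(significances.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_tex = rng.random((68, 120))    # per-MB texture concealment distortion
    d_depth = rng.random((68, 120))  # per-MB depth concealment distortion
    grad = rng.random((68, 120))     # local texture gradient magnitude

    # Toy demonstration of the recursive part over four inter-coded frames.
    plr, prop = 0.20, np.zeros_like(d_tex)
    for _ in range(4):
        prop = expected_transmission_distortion(d_tex, prop, plr)
    print("mean expected transmission distortion:", prop.mean())

    sig = significance(d_tex, d_depth, grad)
    drop_mask = prioritised_drop(sig, plr)
    print("dropped", drop_mask.sum(), "of", drop_mask.size, "macroblocks")
```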
Source: Journal of Optoelectronics·Laser (《光电子·激光》), indexed in EI, CAS, CSCD and the Peking University Core list, 2015, No. 2, pp. 328-335 (8 pages)
Funding: National Key Technology R&D Program of China (2012BAH67F01); Key Program of the National Natural Science Foundation of China (U1301257); Scientific Research Program of the Zhejiang Provincial Education Department (Y201327703); Zhejiang Provincial Science and Technology Department Innovation Team project (2012R10009-08); Ningbo Science and Technology Innovation Team Program (2011B81002)
Keywords: transmission distortion model; virtual view synthesis distortion; free-viewpoint video
