
Spatio-temporal shape prediction and efficient coding

Cited by: 1
摘要 (translated from Chinese): Objective: Shape is a key feature of visual objects, and shape coding is a key technique in object-based image and video processing. However, existing lossless shape-coding methods generally offer low compression efficiency. A new, efficient lossless shape-coding algorithm based on chain-code representation and spatio-temporal prediction is therefore proposed. Method: First, the shape contour of each visual object is extracted frame by frame and converted into a chain-code representation. Then, based on the inter-frame activity of the object contours, the shape sequence is divided into intra-prediction frames and inter-prediction frames, which are compensated and predicted in the spatial and temporal domains, respectively, by exploiting the spatial and temporal correlations of the contour chain codes. Finally, the displacement vectors and prediction residuals are efficiently encoded by exploiting the direction constraints of the chain codes. Result: To evaluate its performance, the proposed algorithm was tested on the MPEG-4 standard shape test sequences. Compared with existing major methods, it improves compression efficiency by 6% to 71.6%. Conclusion: The proposed algorithm can be widely applied to object-based coding, content-based image retrieval, image analysis and understanding, and related fields.

Abstract: Objective The use of a shape is a popular way to define objects, and efficient shape coding is a key technique in object-based applications. Shape coding is also an active research topic in image and video signal processing, and many shape-coding techniques have been proposed. Among them, chain coding is a popular technique for lossless shape coding. However, most existing chain-based shape-coding methods do not exploit the spatio-temporal redundancy contained within shape image sequences. Just as strong spatio-temporal redundancy exists within and among video textures, strong redundancy also exists within and between object contours, and this redundancy can be exploited to improve coding efficiency. Hence, in this paper, a novel chain-based lossless shape-coding scheme is proposed that exploits the spatio-temporal correlations among object contours to achieve high coding efficiency. Method First, for a given shape image sequence, the contours of visual objects are extracted, thinned to single-pixel width, and transformed into a chain-based representation frame by frame. Second, the activity of the object contours in each frame is detected and evaluated, and the shape frames are classified into two coding categories on the basis of this activity: intra-coding frames and inter-coding frames. If the contour activity in a frame exceeds a preset threshold, the frame is encoded as an inter-coding frame; otherwise, it is encoded as an intra-coding frame. For an intra-coding frame, the spatial correlations within object contours are exploited through chain-based spatial prediction and compensation. For an inter-coding frame, the temporal correlations among object contours are exploited through chain-based temporal prediction and compensation. Finally, a new method is introduced to efficiently encode the prediction residuals and motion displacements by analyzing the constraints among chain links. Result To evaluate the performance of the proposed scheme, experiments were conducted and comparisons were made against several well-known existing methods: the lossless coding scheme of the Joint Bi-level Image Experts Group (JBIG), the improved lossless coding scheme JBIG2, the context-based arithmetic encoding of MPEG-4 in intra mode (CAE Intra) and inter mode (CAE Inter), and digital straight line segments-based coding in intra mode (DSLSC Intra) and inter mode (DSLSC Inter). The experimental results show that the average code length of the proposed scheme is only 28.4% of that of JBIG, 32.3% of JBIG2, 39.9% of CAE Intra, 78.1% of CAE Inter, 48.4% of DSLSC Intra, and 94.0% of DSLSC Inter. Conclusion Overall, the proposed scheme outperforms all of the compared techniques. To the best of our knowledge, DSLSC Inter is the most efficient existing lossless shape-coding approach; even compared with DSLSC Inter, the proposed scheme reduces the average code length by 6%. The proposed scheme has wide prospects in object-based image and video applications, such as object-based coding, object-based editing, and object-based retrieval.
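The Method part of the abstract describes three steps: converting object contours to chain codes, classifying each frame as intra- or inter-coded from a contour-activity measure, and encoding residuals by exploiting constraints among chain links. The Python sketch below illustrates these ideas in a simplified form; the function names, the activity measure, the threshold value, and the differential-link coding are illustrative assumptions, not the authors' actual algorithm.

```python
# Minimal sketch (not the paper's implementation) of the ideas described in the
# abstract: (1) tracing a binary shape mask into an 8-direction Freeman chain
# code, (2) classifying a shape frame as intra- or inter-coded from a simple
# contour-activity measure, and (3) re-expressing chain links differentially,
# which concentrates symbols on small turns and suits entropy coding.

import numpy as np

# 8-connected Freeman directions in (row, col) offsets, indexed 0..7
# counter-clockwise starting from east.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain_code(mask: np.ndarray) -> list[int]:
    """Trace the outer contour of a binary mask and return the sequence of
    Freeman direction symbols.  Uses a simplified Moore-style trace with a
    simplified stopping criterion (stop when the start pixel is revisited)."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return []
    start = (ys.min(), xs[ys == ys.min()].min())  # top-most, then left-most pixel
    chain, cur, d = [], start, 0
    while True:
        for k in range(8):
            cand = (d + 6 + k) % 8                # resume search from the backtrack side
            ny, nx = cur[0] + DIRS[cand][0], cur[1] + DIRS[cand][1]
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                chain.append(cand)
                cur, d = (ny, nx), cand
                break
        else:
            break                                  # isolated pixel: no foreground neighbour
        if cur == start:
            break                                  # contour closed
    return chain

def classify_frame(prev_mask, cur_mask, threshold: float = 0.05) -> str:
    """Toy contour-activity measure: fraction of pixels whose label changed
    between consecutive shape frames.  The paper measures activity on the
    contour/chain representation; this pixel-based proxy is only illustrative."""
    if prev_mask is None:
        return "intra"
    activity = np.mean(prev_mask != cur_mask)
    return "inter" if activity > threshold else "intra"

def differential_links(chain: list[int]) -> list[int]:
    """Express each chain link as the turn relative to the previous link
    (values in -4..3).  Illustrative of, but not identical to, the
    constraint-based residual coding described in the abstract."""
    return [((c - p + 4) % 8) - 4 for p, c in zip(chain, chain[1:])]
```

Under these conventions, a 2x2 square of foreground pixels traces to the chain [6, 0, 2, 4] and the differential links [2, 2, 2], i.e. three identical 90-degree turns, which is exactly the kind of low-entropy symbol stream that the residual coding stage is meant to exploit.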
Source: Journal of Image and Graphics (《中国图象图形学报》, CSCD, Peking University core journal), 2016, No. 1, pp. 1-7 (7 pages).
Funding: National Natural Science Foundation of China (60902066); Natural Science Foundation of Zhejiang Province (LY14F010006); Natural Science Foundation of Ningbo (2015A610136); Ministry of Human Resources and Social Security Selective Funding Project for Returned Overseas Scholars (2013-277); Ministry of Education Scientific Research Foundation for Returned Overseas Chinese Scholars (2014-1685).
Keywords: visual objects; shape coding; predictive coding; high-efficiency coding

References (15 in total; 10 listed)

  • 1 Zhu Z J, Wang Y E, Jiang G Y. On multi-view video segmentation for object-based coding [J]. Digital Signal Processing, 2012, 22(6): 954-960. [DOI: 10.1016/j.dsp.2012.05.006]
  • 2 Aghito S M, Forchhammer S. Efficient coding of shape and transparency for video objects [J]. IEEE Transactions on Image Processing, 2007, 16(9): 2234-2244. [DOI: 10.1109/TIP.2007.903902]
  • 3 Liu Q, Ngan K N. Arbitrarily shaped object coding based on H.264/AVC [C]//International Symposium on Intelligent Signal Processing and Communication Systems. Kanazawa, Japan: IEEE, 2009: 343-346. [DOI: 10.1109/ISPACS.2009.5383832]
  • 4 ISO/IEC JTC1/SC29. ISO/IEC-11544 Coded Representation of Picture and Audio Information - Progressive Bi-level Image Compression [S]. Japan: ISO/IEC, 1993.
  • 5 ISO/IEC JTC1/SC29. ISO/IEC-14492 Coded Representation of Picture and Audio Information - Lossy/Lossless Coding of Bi-level Images (JBIG2) [S]. Japan: ISO/IEC, 2000.
  • 6 ISO/IEC JTC1/SC29. ISO/IEC-14496-2 Information Technology - Coding of Audio-visual Objects - Part 2: Visual [S]. Japan: ISO/IEC, 1999.
  • 7 Martin K, Lukac R, Plataniotis K N. SPIHT-based coding of the shape and texture of arbitrarily shaped visual objects [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2006, 16(10): 1196-1208. [DOI: 10.1109/TCSVT.2006.882388]
  • 8 Bandyopadhyay S K, Kondi L P. Optimal bit allocation for joint texture-aware contour-based shape coding and shape-adaptive texture coding [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2008, 18(6): 840-844. [DOI: 10.1109/TCSVT.2008.918784]
  • 9 Kim K J, Suh J Y, Kang M G. Generalized inter-frame vertex-based shape encoding scheme for video sequences [J]. IEEE Transactions on Image Processing, 2000, 9(10): 1667-1676. [DOI: 10.1109/83.869178]
  • 10 Nunes P, Marqués F, Pereira F, et al. A contour-based approach to binary shape coding using a multiple grid chain code [J]. Signal Processing: Image Communication, 2000, 15(7): 585-599. [DOI: 10.1016/S0923-5965(99)00041-7]
