Abstract
The cross-component linear model (CCLM) prediction introduced in the new-generation video coding standard H.266/versatile video coding (VVC) improves compression efficiency. The luma and chroma components are highly correlated, but this correlation is difficult to model explicitly. To address this, an algorithm for neural network based cross-component prediction (NNCCP) is proposed. The algorithm selects reference pixels that are strongly correlated with the pixel to be predicted, according to the luma difference between the reference pixels and the current pixel, to form a reference subset. Based on this subset and the luma differences, the chroma prediction is obtained from a lightweight fully connected network. Experimental results demonstrate that, compared with the VVC test model version 10.0 (VTM10.0), the proposed algorithm improves chroma prediction accuracy and achieves bitrate savings of 0.27%, 1.54%, and 1.84% on Y, Cb, and Cr, respectively. Moreover, the algorithm has the advantage that a unified network structure can be used across different block sizes and coding parameters.
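The two-stage pipeline described in the abstract (luma-difference-based reference selection, then a lightweight fully connected network) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the subset size `k`, the feature layout (selected chroma values concatenated with their luma differences), and the one-hidden-layer network shape and weights are all assumptions for demonstration.

```python
import numpy as np

def select_reference_subset(ref_luma, ref_chroma, cur_luma, k=4):
    """Pick the k reference pixels whose reconstructed luma is closest
    to the current pixel's luma (the selection rule described in the
    abstract; k is an illustrative choice, not the paper's value)."""
    diff = np.abs(ref_luma - cur_luma)
    idx = np.argsort(diff)[:k]          # k smallest luma differences
    return ref_luma[idx], ref_chroma[idx], diff[idx]

def predict_chroma_mlp(features, w1, b1, w2, b2):
    """A toy fully connected network (one hidden ReLU layer) mapping
    the selected references to a single chroma prediction. In the
    paper the weights would be learned; here they are inputs."""
    h = np.maximum(features @ w1 + b1, 0.0)  # hidden layer, ReLU
    return float(h @ w2 + b2)                # scalar chroma value
```

A typical call would concatenate the selected chroma values with their luma differences into one feature vector, `np.concatenate([chroma_subset, luma_diffs])`, and feed it to `predict_chroma_mlp`; because the network input size depends only on `k`, the same structure serves blocks of any size, which matches the "unified network" claim in the abstract.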
Authors
HUO Junyan; WANG Danni; MA Yanzhuo; WAN Shuai; YANG Fuzheng (State Key Laboratory of Integrated Services Network, Xidian University, Xi’an 710071, China; School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China)
Source
《通信学报》
EI
CSCD
Peking University Core Journal (北大核心)
2022, Issue 2, pp. 143-155 (13 pages)
Journal on Communications
Funding
Supported by the National Natural Science Foundation of China (No. 62101409, No. 62171353).