Funding: Supported by the National Natural Science Foundation of China (No. 61971062) and the BUPT Excellent Ph.D. Students Foundation (CX2022153).
Abstract: Video transmission requires considerable bandwidth, and the schemes currently in wide use prove inadequate when confronted with scenes in which talking heads feature prominently. Motivated by the strides in talking-head generative technology, this paper introduces a semantic transmission system tailored for talking-head videos. The system captures semantic information from the talking-head video and faithfully reconstructs the source video at the receiver; only a one-shot reference frame and compact semantic features are required for the entire transmission. Specifically, we analyze video semantics in the pixel domain frame by frame and jointly process multi-frame semantic information to seamlessly incorporate spatial and temporal information. Variational modeling is utilized to evaluate the diversity of importance among group semantics, thereby guiding the allocation of bandwidth resources to semantics and enhancing system efficiency. The whole end-to-end system is modeled as an optimization problem equivalent to acquiring optimal rate-distortion performance. We evaluate our system on both reference-frame and video transmission; experimental results demonstrate that our system improves the efficiency and robustness of communications. Compared to classical approaches, our system saves over 90% of bandwidth while delivering comparable user-perceived quality.
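The abstract does not spell out the optimization objective. A conventional rate-distortion Lagrangian that such an end-to-end system would typically minimize, with all symbols below assumed for illustration rather than taken from the paper, is:

\[
\min_{\theta}\; \mathbb{E}_{x \sim p(x)} \Big[ \underbrace{-\log_2 q_{\theta}\!\left(\hat{z} \mid x\right)}_{\text{rate } R} \;+\; \lambda\, \underbrace{d\!\left(x, \hat{x}\right)}_{\text{distortion } D} \Big],
\]

where \(x\) is a source frame, \(\hat{z}\) the transmitted compact semantic features, \(\hat{x}\) the frame reconstructed at the receiver from the one-shot reference frame and \(\hat{z}\), \(q_{\theta}\) a variational entropy model over the features, and \(\lambda\) the trade-off between bandwidth cost and perceptual distortion.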
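As one illustration of how variational modeling can steer bandwidth allocation across semantic groups, the minimal PyTorch sketch below scores an information-content proxy per group and splits channel symbols proportionally. Every class, function, and parameter name here is hypothetical and not taken from the paper; it is a sketch of the general technique, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GroupRateAllocator(nn.Module):
    """Minimal sketch (assumption, not the paper's architecture): a Gaussian
    variational model scores how many bits each semantic group carries, and
    channel bandwidth is split in proportion to those scores."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Hyper-network predicting mean and scale of each group's latent distribution.
        self.hyper = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 2 * feat_dim),
        )

    def forward(self, group_feats: torch.Tensor, total_symbols: int):
        # group_feats: (num_groups, feat_dim) semantic features, one row per group.
        mu, log_scale = self.hyper(group_feats).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mu, log_scale.exp())
        # Negative log-likelihood is a differentiable proxy for each group's rate.
        nll = -dist.log_prob(group_feats)                      # (num_groups, feat_dim)
        bits_per_group = nll.sum(dim=-1) / torch.log(torch.tensor(2.0))
        # Allocate channel symbols proportionally to estimated information content.
        weights = bits_per_group / bits_per_group.sum()
        symbols_per_group = (weights * total_symbols).round().long()
        return bits_per_group, symbols_per_group


# Toy usage: eight semantic groups, 64-dim features, 512 channel symbols to split.
allocator = GroupRateAllocator(feat_dim=64)
bits, symbols = allocator(torch.randn(8, 64), total_symbols=512)
```

The proportional split is only one possible policy; the point is that a variational rate estimate gives a per-group importance signal that bandwidth allocation can follow.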