Abstract
Deep neural networks are widely used in semantic compression coding; however, it is difficult to judge whether the generated semantic features still contain redundant information and room for further compression. To address this problem, a loss function is first formulated based on the information bottleneck theory, and a channel-adaptive compression module is introduced to construct the system model. Then, an upper bound of the loss function is derived using the vCLUB mutual information estimator and a variational approximation, and structures such as the mutual information estimation network are designed. Experimental results show that, compared with baseline methods, the proposed information-bottleneck-based channel-adaptive semantic compression coding method achieves higher intelligent-task performance and lower communication overhead.
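For background on the two objectives named in the abstract, the standard formulations are given below. These are general textbook/literature forms, not equations quoted from the paper itself, and the notation (encoder output $Z$, input $X$, task variable $Y$, variational network $q_\theta$) is illustrative. The information bottleneck loss minimized over the encoder is

```latex
\[
\mathcal{L}_{\mathrm{IB}} = I(X;Z) - \beta\, I(Z;Y),
\]
```

where $\beta > 0$ trades compression of the semantic feature $Z$ against its relevance to the task $Y$. Since $I(X;Z)$ is intractable for deep encoders, the vCLUB estimator upper-bounds it with a learned variational distribution $q_\theta(z\mid x)$:

```latex
\[
I_{\mathrm{vCLUB}}(X;Z) = \mathbb{E}_{p(x,z)}\!\left[\log q_\theta(z\mid x)\right]
 - \mathbb{E}_{p(x)}\mathbb{E}_{p(z)}\!\left[\log q_\theta(z\mid x)\right],
\]
```

which is the role played by the "mutual information estimation network" mentioned in the abstract.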
Authors
LI Jie; GUO Caili; ZHU Meiyi; DU Zhongtian (Beijing University of Posts and Telecommunications, Beijing 100876, China; China Telecom Digital Intelligence Technology Co., Ltd., Beijing 100035, China)
Source
Mobile Communications (《移动通信》)
2023, No. 4, pp. 65-70 (6 pages)
Funding
Supported by the Fundamental Research Funds for the Central Universities (2021XD-A01-1).
Keywords
semantic communication
semantic coding
information bottleneck
deep learning