Abstract
In recent years, the emerging Deep Learning (DL) technology has made progress in the field of decoding. Existing neural network decoders for polar codes achieve faster convergence and better Bit Error Rate (BER) performance than Belief Propagation (BP) decoding, but they still suffer from high computational complexity. To address this problem, this paper improves the message update within the iterative process and proposes a Recurrent Neural Network (RNN) Offset Min-Sum (OMS) approximate BP decoding algorithm with an improved left-message update (RNN-OMS-BP-L). Simulation results show that, compared with the Deep Neural Network (DNN) BP (DNN-BP) decoding algorithm, the proposed algorithm replaces all multiplication operations at an addition cost of only 6.25%. Compared with the currently superior RNN OMS approximate BP (RNN-OMS-BP) decoding algorithm, the proposed algorithm uses the improved message update to reduce the number of addition operations by 25% with almost no loss in BER performance while saving part of the storage overhead, and it reduces the number of iterations by 37.5% at the same BER performance.
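The abstract does not reproduce the update equations, but the OMS approximation it refers to is conventionally the replacement of the BP check-node (box-plus) operation by a sign-and-minimum rule with a correction offset, which removes all multiplications and hyperbolic functions. The sketch below is a minimal Python/NumPy illustration of that substitution only; the function names, the fixed offset value beta and the clipping constant are illustrative assumptions, and the paper's improved left-message update and trained RNN weights are not reproduced here.

```python
import numpy as np

def boxplus_exact(a, b):
    # Exact BP check-node update in the log-likelihood-ratio (LLR) domain:
    # 2 * atanh(tanh(a/2) * tanh(b/2)); needs multiplications and tanh/atanh.
    prod = np.clip(np.tanh(a / 2.0) * np.tanh(b / 2.0), -0.999999, 0.999999)
    return 2.0 * np.arctanh(prod)

def boxplus_oms(a, b, beta=0.25):
    # Offset Min-Sum (OMS) approximation: the product of tanh terms is replaced
    # by sign flips, a minimum over magnitudes and one subtraction of the offset
    # beta, so only comparisons and additions remain. In an RNN-style neural BP
    # decoder the offset would typically be a trainable parameter shared across
    # iterations; here it is a fixed constant for illustration.
    return np.sign(a) * np.sign(b) * np.maximum(np.minimum(np.abs(a), np.abs(b)) - beta, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.normal(0, 2, 8), rng.normal(0, 2, 8)
    print("exact:", np.round(boxplus_exact(a, b), 3))
    print("OMS  :", np.round(boxplus_oms(a, b), 3))
```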
Authors
DENG Xue-lu; PENG Da-qin (School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)
Source
Study on Optical Communications, 2022, No. 4, pp. 17-22 (6 pages)
Funding
Supported by the National Natural Science Foundation of China (E020B2018023).