Abstract
A neural-network-based model of the indoor visible light channel is proposed to overcome the difficulty the Lambert model has in accounting for noise and error in indoor visible light channels. To address the large volume of fingerprint data required, the difficulty of collecting it, and the slow iteration caused by the large number of training parameters, a generative adversarial network (GAN) is used to generate a simulated data set that is merged with the original sparse fingerprint database, producing a fingerprint database large enough for training; a one-dimensional convolutional neural network (CNN) is then used to extract data features, reducing the number of training parameters and speeding up iteration. A sparse fingerprint database is collected in a 5 m×5 m×3 m indoor environment, and indoor visible light channel models based on a back-propagation neural network (BPNN) and on the one-dimensional CNN are compared. Simulation results show that the fingerprint database generated by the GAN has a mean absolute error of 0.04 and enlarges the data volume by 300%. On the same fingerprint database, the BPNN channel model has an error of 3.81 and converges after 500 iterations, whereas the CNN channel model has an error of 0.79 and converges after 100 iterations. The proposed visible light channel model, which combines a GAN-augmented fingerprint database with a CNN, offers high accuracy, small error, fast convergence, and strong generalization, providing a new approach to indoor visible light channel modeling.
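The abstract pairs GAN-based augmentation of a sparse fingerprint database with a one-dimensional CNN channel model. The sketch below illustrates that pipeline only in outline; it assumes PyTorch as the framework, and the fingerprint length FEAT_DIM, all layer sizes, and all class names are hypothetical rather than taken from the paper. The adversarial and regression training loops are omitted.

```python
# Minimal sketch (assumption: PyTorch; the paper does not name a framework).
# Two pieces from the abstract: a GAN that synthesizes fingerprint vectors to
# augment a sparse database, and a 1-D CNN that maps a fingerprint vector to a
# channel-gain estimate. Sizes and names are illustrative only.
import torch
import torch.nn as nn

FEAT_DIM = 16   # hypothetical length of one fingerprint vector

class Generator(nn.Module):
    """Maps random noise to a synthetic fingerprint vector."""
    def __init__(self, noise_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 64), nn.ReLU(),
            nn.Linear(64, FEAT_DIM),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores whether a fingerprint vector is measured or generated
    (adversarial training loop not shown)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Cnn1dChannelModel(nn.Module):
    """1-D CNN regressor: fingerprint vector -> channel gain estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Linear(16 * (FEAT_DIM // 4), 1)

    def forward(self, x):                    # x: (batch, FEAT_DIM)
        h = self.features(x.unsqueeze(1))    # add the channel dimension
        return self.head(h.flatten(1))

if __name__ == "__main__":
    # Augment: draw synthetic fingerprints and merge them with the sparse database.
    g = Generator()
    sparse_db = torch.randn(100, FEAT_DIM)         # stand-in for measured data
    synthetic = g(torch.randn(300, 8))             # 300 synthetic samples = +300% data
    merged_db = torch.cat([sparse_db, synthetic.detach()], dim=0)
    # Fit the channel model on the merged database (training loop omitted).
    model = Cnn1dChannelModel()
    print(model(merged_db[:4]).shape)              # torch.Size([4, 1])
```

Relative to a fully connected BPNN of comparable depth, the weight sharing in the Conv1d layers keeps the parameter count low, which matches the abstract's stated reason for the CNN model's faster convergence.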
Authors
LU Yuxi (卢宇希), ZHANG Huiying (张慧颖), LIANG Yu (梁誉), WANG Kai (王凯)
College of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, Jilin 132022, China
Source
Journal of Optoelectronics·Laser (《光电子·激光》), 2023, No. 11, pp. 1201-1209 (9 pages)
Indexed in CAS, CSCD, and the Peking University Core Journals list (北大核心)
Funding
Supported by the Natural Science Foundation of Jilin Province (Joint Fund YDZJ202101ZYTS189) and the Scientific Research Project of Jilin Institute of Chemical Technology (2021050).