
Spread Spectrum and Conventional Modulation Signal Recognition Method Based on Generative Adversarial Network and Multi-modal Attention Mechanism

Cited by: 1
Abstract: To address the low classification accuracy of spread spectrum and conventional modulation signals under low signal-to-noise ratio (SNR) conditions, this paper proposes a multi-modal attention mechanism signal modulation recognition method based on a Generative Adversarial Network (GAN), a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network. First, Time-Frequency Images (TFIs) of the signals to be recognized are generated, and a GAN is used to denoise the TFIs. Second, the In-phase and Quadrature (I/Q) data of the signals, together with the TFIs, are used as model inputs, and a CNN-based TFI recognition branch and an LSTM-based I/Q data recognition branch are built. Finally, an attention mechanism is added to the model to strengthen the contribution of important features in the I/Q data and TFIs to the classification result. Experimental results show that, compared with single-modality recognition models and other baseline models, the proposed method improves the overall classification accuracy by 2% to 7% and exhibits stronger feature representation capability and robustness under low-SNR conditions.
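The dual-branch architecture described in the abstract (a CNN branch over the denoised TFIs, an LSTM branch over the I/Q sequences, and attention-weighted fusion of the two modality features) can be illustrated with a minimal PyTorch-style sketch. All layer sizes, module names, and the specific attention fusion below are illustrative assumptions rather than the authors' implementation, and the GAN-based TFI denoising stage is omitted.

```python
# Minimal sketch (PyTorch assumed). Layer sizes, names, and the attention
# fusion are illustrative guesses, not the method from the paper.
import torch
import torch.nn as nn

class TFIBranch(nn.Module):
    """CNN branch over time-frequency images (TFIs)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, tfi):                  # tfi: (B, 1, H, W)
        x = self.cnn(tfi).flatten(1)         # (B, 32)
        return self.proj(x)                  # (B, feat_dim)

class IQBranch(nn.Module):
    """LSTM branch over raw I/Q sequences."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=feat_dim, batch_first=True)

    def forward(self, iq):                   # iq: (B, T, 2)
        _, (h, _) = self.lstm(iq)
        return h[-1]                         # last hidden state: (B, feat_dim)

class MultiModalAMR(nn.Module):
    """Attention-weighted fusion of the TFI and I/Q modality features."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.tfi_branch = TFIBranch(feat_dim)
        self.iq_branch = IQBranch(feat_dim)
        self.attn = nn.Linear(feat_dim, 1)   # scores each modality feature
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, tfi, iq):
        feats = torch.stack([self.tfi_branch(tfi), self.iq_branch(iq)], dim=1)  # (B, 2, D)
        weights = torch.softmax(self.attn(feats), dim=1)                        # (B, 2, 1)
        fused = (weights * feats).sum(dim=1)                                    # (B, D)
        return self.classifier(fused)

# Example forward pass on dummy inputs
model = MultiModalAMR(num_classes=8)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1024, 2))
print(logits.shape)  # torch.Size([4, 8])
```

In this sketch the attention weights are computed per modality and softmax-normalized, so the network can shift its reliance between the TFI and I/Q features as conditions (e.g. SNR) change; the paper's actual attention design may differ.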
Authors: WANG Huahua, ZHANG Ruizhe, HUANG Yonghong (School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Chongqing Key Laboratory of Mobile Communication Technology, Chongqing 400065, China; School of Cyber Security and Information Law, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)
Source: Journal of Electronics & Information Technology (电子与信息学报; indexed in EI, CAS, CSCD, Peking University Core), 2024, Issue 4, pp. 1212-1221 (10 pages)
Funding: National Natural Science Foundation of China (61701063); Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0454).
Keywords: Deep learning; Automatic Modulation Recognition (AMR); Generative Adversarial Network (GAN); Multi-modal features; Time-frequency distribution