Abstract: To improve the performance of electromagnetic signal modulation recognition under low-complexity constraints, a modulation recognition method based on a sparse deep neural network (SDNN) is proposed. First, the in-phase (I) and quadrature (Q) components of the electromagnetic signal are extracted to plot the signal's constellation diagram, which serves as a shallow feature representation of the signal. Then, the constellation diagram is colored according to the density of signal points at each location, enhancing the signal features in the diagram. Finally, the enhanced constellation diagrams are classified by the SDNN. Experimental results show that, with an appropriate pruning rate, the SDNN model effectively reduces model storage and computation: model parameters are compressed by 72% and floating-point operations by 45%. Compared with the original model's overall recognition rate of 97%, the sparsified model achieves 96.8%, greatly reducing model complexity at the cost of only a small loss in recognition accuracy.
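The density-based coloring step described above can be sketched as follows. This is a minimal illustration only: the grid size, the min-max normalization, and the function name are assumptions for exposition, not the paper's implementation.

```python
from collections import Counter

def density_colored_constellation(iq_samples, grid=32):
    """Map (I, Q) sample pairs onto a grid x grid image whose pixel value
    is the number of signal points falling in each cell -- the point
    density that the paper uses to color the constellation diagram.
    NOTE: illustrative sketch; the paper's exact binning may differ."""
    i_vals = [i for i, _ in iq_samples]
    q_vals = [q for _, q in iq_samples]
    i_min, i_max = min(i_vals), max(i_vals)
    q_min, q_max = min(q_vals), max(q_vals)
    counts = Counter()
    for i, q in iq_samples:
        # normalize each component to [0, 1), then bin into grid cells;
        # the tiny epsilon keeps the maximum sample inside the last cell
        x = min(int((i - i_min) / (i_max - i_min + 1e-12) * grid), grid - 1)
        y = min(int((q - q_min) / (q_max - q_min + 1e-12) * grid), grid - 1)
        counts[(x, y)] += 1
    return counts
```

The resulting per-cell counts can then be mapped to a color scale to produce the enhanced constellation image that is fed to the classifier.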
Funding: Fundamental Research Funds for the Central Universities, China (No. 2232021A-10); Shanghai Sailing Program, China (No. 22YF1401300); Natural Science Foundation of Shanghai, China (No. 20ZR1400400); Shanghai Pujiang Program, China (No. 22PJ1423400).
Abstract: Deep neural networks are extremely vulnerable to intentionally generated adversarial examples, which are produced by overlaying tiny noise on clean images. However, most existing transfer-based attack methods add perturbations to every pixel of the original image with equal weight, resulting in redundant noise in the adversarial examples and making them easier to detect. To address this, a novel attention-guided sparse adversarial attack strategy with gradient dropout, which can be readily incorporated into existing gradient-based methods, is introduced to minimize both the intensity and the scale of perturbations while preserving the effectiveness of the adversarial examples. Specifically, in the gradient dropout phase, some relatively unimportant gradient information is randomly discarded to limit the intensity of the perturbation. In the attention-guided phase, the influence of each pixel on the model output is evaluated with a soft mask-refined attention mechanism, and the perturbation of pixels with smaller influence is limited to restrict the scale of the perturbation. Thorough experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset show that the proposed strategy significantly diminishes the superfluous noise in adversarial examples while keeping their attack efficacy intact. For instance, in attacks on adversarially trained models, integrating the strategy reduces the average level of noise injected into images by 8.32%, while the average attack success rate decreases by only 0.34%. Furthermore, the strategy can substantially raise the attack success rate while introducing only a slight degree of perturbation.
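The gradient dropout phase can be sketched as follows. This is a hedged illustration of the general idea: the magnitude-based importance criterion, the parameter names, and the keep-fraction are assumptions for exposition, and the paper's exact formulation may differ.

```python
import random

def gradient_dropout(grad, drop_rate, importance_rate=0.5, seed=0):
    """Sparsify a gradient vector before it is turned into a perturbation:
    the largest-magnitude entries (assumed 'important') are always kept,
    while each of the remaining entries is randomly zeroed with
    probability `drop_rate`, limiting the perturbation's intensity.
    NOTE: illustrative sketch; not the paper's exact method."""
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    n = len(grad)
    # rank indices by descending gradient magnitude as a stand-in
    # importance measure (an assumption of this sketch)
    order = sorted(range(n), key=lambda i: -abs(grad[i]))
    keep = set(order[: int(n * importance_rate)])  # always-kept entries
    out = []
    for i, g in enumerate(grad):
        if i in keep or rng.random() >= drop_rate:
            out.append(g)       # important, or survived the dropout
        else:
            out.append(0.0)     # discarded: this pixel gets no noise
    return out
```

In a gradient-based attack such as FGSM, the sparsified gradient would then replace the raw gradient when computing the signed perturbation, so fewer pixels receive noise.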