Abstract
Objective: Most existing steganographic algorithms based on adversarial images can only craft adversarial images against a single steganalyzer, and they cannot withstand detection by the latest convolutional neural network (CNN) based steganalyzers such as the steganalysis residual network (SRNet) and Zhu-Net. To address this situation, a high-security image steganography method that combines multiple adversarial training with channel attention is proposed.

Method: A generative adversarial network with a U-Net-based generator is adopted to produce adversarial example images. The self-learning property of adversarial networks is exploited to iteratively optimize the parameters of the multiple-adversarial steganographic network, and adversarial training against several steganalysis algorithms yields cover images better suited to hiding content. Meanwhile, several lightweight channel attention modules are added to the generator to adaptively adjust the distribution of the adversarial noise over the original image, which improves the anti-steganalysis ability of the generated adversarial images. In addition, a dynamic weighting scheme combining multiple discriminant losses with the mean squared error loss is designed to further improve the quality of the adversarial images and to guarantee fast and stable network convergence.

Result: Experiments on the BOSS Base 1.01 dataset compare the proposed method with four current mainstream methods. After the steganalyzers are trained on the original stego images, the proposed method reduces the average detection accuracy of five high-performance steganalyzers by 1.6% relative to the four compared methods, including the U-Net-based generative multiple-adversarial steganographic algorithm; after the steganalyzers are retrained with adversarial images and enhanced stego images, it still reduces their average detection accuracy by 6.8% relative to the four compared methods. The quality of the adversarial images is also analyzed: the 2,000 adversarial images generated from the test set reach an average peak signal-to-noise ratio (PSNR) of 39.9251 dB. The experimental results show that the proposed steganographic network greatly improves the security of the steganographic algorithm.

Conclusion: The proposed method achieves excellent performance in terms of steganographic security, and the generated adversarial images have high visual quality.
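The abstract describes the lightweight channel attention modules only at a high level. A minimal sketch of one plausible form, a squeeze-and-excitation-style module in PyTorch, is given below; the class name, reduction ratio, and usage are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Illustrative SE-style channel attention: learned per-channel weights
    # rescale the feature maps, so the generator can concentrate adversarial
    # noise in the channels that most perturb the steganalyzers.
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                        # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # channel-wise rescaling

# Usage: reweight a 64-channel U-Net feature map.
feat = torch.randn(4, 64, 128, 128)
out = ChannelAttention(64)(feat)                 # same shape as feat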
Objective: The advancement of current steganographic techniques faces many challenges. Methods that modify the original image to hide secret information leave traces, rendering them susceptible to detection by steganalyzers. Coverless steganography improves security but has limitations of its own, such as a small embedding capacity, the need for a large image database, and difficulty in extracting the secret information. Cover-image generative steganography likewise produces small and unnatural images. Adversarial examples provide a new approach to these limitations: subtle perturbations are added to the original image to form an adversarial image that is visually indistinguishable from the original yet causes a classifier to output wrong results with high confidence, thereby enhancing the security of image steganography. However, most existing steganographic algorithms based on adversarial examples can only design adversarial examples for one steganalyzer, making them vulnerable to the latest convolutional neural network based steganalyzers such as SRNet and Zhu-Net. In response to this problem, a high-security image steganography method combining multiple adversarial training and channel attention is proposed in this study.

Method: In the proposed method, we generate the adversarial noise V with the generator G, which employs a U-Net architecture with added channel attention modules; V is then added to the original image X to obtain the adversarial image. The pixel-space mean squared error loss MSE_loss is adopted to train the generator network G, so that high-quality and semantically meaningful adversarial images are generated. Next, we generate a stego image from the original image X with the steganography network (SN) and input X and its corresponding stego image into the steganalysis optimization network to optimize its parameters. Moreover, we build multiple steganalysis adversarial networks (SANs) that discriminate between the original image X and its adversarial image, assign different scores to the two, and provide the multiple discriminant loss SDO_loss1. Furthermore, we embed secret messages into the adversarial image through the SN to generate the enhanced stego image. The adversarial image and the enhanced stego image are re-input into the optimized multiple steganalyzers to improve the anti-steganalysis performance of the adversarial image; the SANs evaluate the data-hiding capability of the adversarial image and provide the multiple discriminant loss SDO_loss2. The weighted superposition of MSE_loss and the multiple steganalysis discrimination losses SDO_loss1 and SDO_loss2 is employed as the cumulative loss function of generator G to improve both the image quality and the anti-steganalysis ability of the adversarial image. Finally, the proposed method enables fast and stable network convergence and yields stego images with high visual quality and anti-steganalysis ability.
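The abstract names the three loss terms but not their exact weighting. As a hedged reconstruction, the generator objective can be written as the weighted sum below, where the weights \lambda_1, \lambda_2, \lambda_3 are placeholder symbols standing in for the paper's dynamic weighting scheme, and the MSE term is spelled out for an H x W image:

\begin{aligned}
  \mathrm{MSE\_loss} &= \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\bigl(X_{ij} - (X+V)_{ij}\bigr)^{2}, \\
  L_{G} &= \lambda_{1}\,\mathrm{MSE\_loss} + \lambda_{2}\,\mathrm{SDO\_loss1} + \lambda_{3}\,\mathrm{SDO\_loss2}.
\end{aligned}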
Result: First, we select four high-performance deep-learning steganalyzers, namely Xu-Net, Ye-Net, SRNet, and Zhu-Net, for simultaneous adversarial training to improve the anti-steganalysis ability of the adversarial images. However, training against four steganalysis networks simultaneously sharply increases the number of model parameters, resulting in slow training and a long training period. Furthermore, during adversarial image generation, each iteration of adversarial noise is produced according to the gradient feedback of all four steganalysis networks, so the original image is subjected to excessive, unnecessary adversarial noise, which leads to low-quality adversarial images. In response to this issue, we execute ablation experiments on the steganalysis networks employed in training, aiming to decrease model parameters, reduce training time, and ultimately enhance the quality of the adversarial images and their anti-steganalysis capability. The generator produces adversarial noise that is incorporated into the original image to form adversarial images; different positions of the adversarial noise in the original image perturb the steganalysis network differently and affect the quality of the generated adversarial images. We therefore run ablation experiments that add the channel attention module at various positions of the generator to examine its effectiveness, and we fine-tune the parameters of the generator loss function through a further ablation experiment. Subsequently, we generate 2,000 adversarial images with the proposed model and evaluate their quality. The average peak signal-to-noise ratio (PSNR) of the 2,000 generated adversarial images is 39.9251 dB; more than 99.55% of them have a PSNR above 39 dB, and more than 75% have a PSNR above 40 dB. The average structural similarity index measure (SSIM) of the generated adversarial images is 0.9625; more than 69.85% have an SSIM above 0.955, and more than 55.6% have an SSIM above 0.960. These results indicate that the generated adversarial images are visually highly similar to the original images. Finally, we conduct a comparative study against current state-of-the-art methods on the BOSS Base 1.01 dataset. Compared with the four reference methods, the proposed method decreases the average accuracy of the five steganalysis methods by 1.6% after training on the original stego images, and by 6.8% after further training with adversarial images and enhanced stego images. The experimental results indicate that the proposed steganographic method significantly improves the security of the steganographic algorithm.

Conclusion: In this study, we propose a steganographic architecture based on the U-Net framework with lightweight channel attention modules to generate adversarial images that can resist multiple steganalysis networks. The experimental results demonstrate that the security and generalization of the proposed algorithm exceed those of the compared steganographic methods.
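The PSNR and SSIM statistics above can be reproduced for any cover/adversarial image pair with standard tooling. The sketch below uses scikit-image's metric functions (these are the library's names, not the paper's code); the toy images are illustrative only, and in the paper the metrics are averaged over 2,000 generated images.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(original: np.ndarray, adversarial: np.ndarray):
    """Return (PSNR in dB, SSIM) for a pair of 8-bit grayscale images."""
    psnr = peak_signal_noise_ratio(original, adversarial, data_range=255)
    ssim = structural_similarity(original, adversarial, data_range=255)
    return psnr, ssim

# Toy example: a random 256 x 256 "cover" and a +/-1 perturbed version.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
noise = rng.integers(-1, 2, size=cover.shape)            # values in {-1, 0, 1}
adv = np.clip(cover.astype(int) + noise, 0, 255).astype(np.uint8)

print(image_quality(cover, adv))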
Authors
Ma Bin, Li Kun, Xu Jian, Wang Chunpeng, Li Jian, Zhang Liwei
(School of Cyber Security, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China; Shandong Provincial Key Laboratory of Computer Networks, Jinan 250098, China; School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan 250014, China; Integrated Electronic Systems Lab Co., Ltd., Jinan 250104, China)
Source
Journal of Image and Graphics (《中国图象图形学报》)
Indexed in CSCD and the Peking University Core Journal list
2024, No. 2, pp. 355-368 (14 pages)
Funding
National Natural Science Foundation of China (62272255)
National Key Research and Development Program of China (2021YFC3340602)
Shandong Provincial Natural Science Foundation Innovation and Development Joint Fund (ZR2022LZH011)
Shandong Provincial Natural Science Foundation (ZR2020MF054)
Shandong Province Capability Enhancement Project for Science and Technology SMEs (2022TSGC2485)
Jinan City Leading Researcher Studio Project (2020GXRC056)
Jinan City Introduced Innovation Team Project (202228016)
Youth Innovation Team Project of Shandong Provincial Universities (2022KJ124)
Ministry of Education "Chunhui Plan" Scientific Research Cooperation Project (HZKY20220482)
Keywords
steganography
steganalysis
adversarial images
channel attention
generative adversarial network (GAN)