
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients

Abstract: Adversarial examples are a well-known, serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of a DNN model, for both adversarial and benign examples, follow the generalized Gaussian distribution (GGD) but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, and uniform), and is therefore more likely to approximate the intrinsic distributions of internal responses than any single specific distribution. Moreover, since the shape factor is more robust across databases than the other two parameters, we propose to construct discriminative features for adversarial detection from the shape factor, employing the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from the responses. Finally, a support vector machine is trained on the MBF features as an adversarial detector. Extensive experiments on image classification demonstrate that the proposed detector is much more effective and robust than state-of-the-art adversarial detection methods at detecting adversarial examples produced by different crafting methods and from different sources.
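The feature extraction described above can be sketched as follows. This is an illustrative sketch, not the paper's exact implementation: the function name `mbf_features`, the choice of coefficient orders, and the preprocessing are assumptions; only the underlying definition (the magnitude of Benford-Fourier coefficients, i.e., the empirical characteristic function of log10|x| at integer frequencies, which depends mainly on the GGD shape factor) follows the abstract.

```python
# Illustrative sketch (not the paper's exact implementation) of MBF
# feature extraction: magnitudes of Benford-Fourier coefficients of a
# layer's responses. In the paper, an SVM detector is then trained on
# such features.
import numpy as np

def mbf_features(responses, orders=range(1, 6)):
    """Magnitudes of the Benford-Fourier coefficients of `responses`.

    The n-th Benford-Fourier coefficient is the n-th Fourier coefficient
    of the distribution of the fractional part of log10|x|; its magnitude
    is governed mainly by the GGD shape factor rather than by the mean
    or variance, which makes it a shape-sensitive feature.
    """
    x = np.abs(np.asarray(responses, dtype=np.float64))
    x = x[x > 0]                        # log10 is undefined at zero
    logs = np.log10(x)
    # Empirical characteristic function of log10|x| at integer frequencies.
    return np.array([abs(np.mean(np.exp(-2j * np.pi * n * logs)))
                     for n in orders])

# Hypothetical usage: responses drawn from GGDs with different shape
# factors (Gaussian: shape 2; Laplacian: shape 1) yield different MBF
# features, which is what the detector exploits.
rng = np.random.default_rng(0)
benign_like = mbf_features(rng.normal(size=50_000))
adversarial_like = mbf_features(rng.laplace(size=50_000))
```

In the paper's pipeline, such feature vectors are computed from the DNN's outputs and internal responses and fed to a support vector machine; the simulated responses here serve only to show that the features distinguish distributions with different shape factors.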
Source: Machine Intelligence Research (EI, CSCD), 2023, No. 5, pp. 666-682 (17 pages).
Funding: Supported by the Natural Science Foundation of China (No. 62076213), the Shenzhen Science and Technology Program, China (No. RCYX20210609103057050), the University Development Fund of The Chinese University of Hong Kong, Shenzhen, China (No. 01001810), and the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen, China.
