Deep learning models are vulnerable to adversarial examples in the task of image classification. In this paper, a cluster-based method for defending against adversarial examples is proposed. Before an adversarial example reaches the classifier, it is reconstructed by a clustering algorithm according to its pixel values. The MNIST database of handwritten digits was used to assess the defence performance of the method against the fast gradient sign method (FGSM) and the DeepFool algorithm. The proposed defence model is simple, and the trained classifier does not need to be retrained.
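The abstract does not specify the clustering algorithm, but one plausible reading is colour/intensity quantization: cluster a grayscale image's pixel values with k-means and snap each pixel to its cluster centroid, smoothing out small adversarial perturbations before classification. The sketch below assumes this interpretation; the function name and the choice of k are illustrative, not the authors' settings.

```python
# Minimal sketch (assumed interpretation): reconstruct an image by clustering
# its pixel values and replacing each pixel with its cluster centroid.
import numpy as np
from sklearn.cluster import KMeans

def cluster_reconstruct(image: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Quantize a grayscale image (e.g., 28x28 MNIST) via k-means on intensities."""
    pixels = image.reshape(-1, 1).astype(np.float64)   # one feature: pixel intensity
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    centroids = km.cluster_centers_.ravel()
    # Map every pixel to the centroid of its cluster, then restore the shape.
    return centroids[km.labels_].reshape(image.shape)

# Usage: pass the (possibly adversarial) input through the filter first,
# so the already-trained classifier never needs retraining.
# clean_input = cluster_reconstruct(adversarial_image)
# prediction = classifier.predict(clean_input[None, ..., None])
```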
Cross-site scripting (XSS) attacks are among the most serious risks in Web security. With the widespread use of Web technologies such as Web services and APIs, and the emergence of new programming styles such as AJAX, CSS, and HTML5, the threat posed by XSS attacks has grown more severe, and handling XSS security risks has therefore become an important focus of Web security research. Based on a survey of recent XSS attack detection and defence techniques, and classifying attacks by whether they are evasive, this paper is the first to review the state of the art in XSS detection and defence from both the non-adversarial and the adversarial perspective. First, covering both non-adversarial and adversarial attack detection, it analyses machine learning methods that learn attack features from data to predict attacks, as well as reinforcement learning methods that identify or generate adversarial examples to optimise detection models. Second, it describes non-adversarial defences: rule-based filtering of XSS attacks, moving target defence (MTD) that uses randomization to lower the attack success rate, and isolation sandboxes that prevent XSS attacks from spreading (a rule-based filtering sketch follows below). Finally, it identifies open problems and offers an outlook for future XSS detection and defence research, covering sample features, model characteristics, the limitations of CSP, and the ubiquity of upload functionality.
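To make the rule-based filtering idea concrete, here is a minimal, purely illustrative sketch: match user input against a few well-known XSS payload signatures and HTML-escape whatever passes. The patterns and the `sanitize` helper are hypothetical examples, not drawn from the surveyed systems; production filters cover far more attack vectors.

```python
# Illustrative sketch of rule-based (non-adversarial) XSS filtering.
import html
import re

# A few common payload signatures; real rule sets are much larger.
XSS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),       # inline <script> tags
    re.compile(r"javascript\s*:", re.IGNORECASE),   # javascript: URIs
    re.compile(r"on\w+\s*=", re.IGNORECASE),        # event handlers (onerror=, onload=, ...)
]

def sanitize(user_input: str) -> str:
    """Reject input matching a known signature; otherwise HTML-escape it."""
    for pattern in XSS_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("potential XSS payload rejected")
    return html.escape(user_input)

# sanitize('<img src=x onerror=alert(1)>')  # raises ValueError
```

Signature matching of this kind is exactly what evasive (adversarial) payloads are crafted to bypass, which motivates the ML- and RL-based detection approaches the survey covers.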
From fraud detection to speech recognition to price prediction, Machine Learning (ML) applications are manifold and can significantly improve many areas. Nevertheless, machine learning models are vulnerable and are exposed to various security and privacy attacks. Hence, these issues should be addressed when using ML models in order to preserve the security and privacy of the data involved. There is a particular need to secure ML models in the training phase, to preserve the privacy of the training datasets and to minimise information leakage. In this paper, we present an overview of ML threats and vulnerabilities, and we highlight current progress in research proposing defence techniques against ML security and privacy attacks. The relevant background for the attacks occurring in both the training and testing/inference phases is introduced before a detailed overview of Membership Inference Attacks (MIA) and the related countermeasures. We introduce a countermeasure against MIA on Convolutional Neural Networks (CNN) based on dropout and L2 regularization. Through experimental analysis, we demonstrate that this defence technique can mitigate the risks of MIA while maintaining acceptable model accuracy. Training a CNN on the CIFAR-10 and CIFAR-100 datasets, we empirically verify the ability of our defence strategy to decrease the impact of MIA on our model, and we compare the results of five different classifiers. Moreover, we present a solution that achieves a trade-off between model performance and MIA mitigation.
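Since MIA largely exploits overfitting (member samples receive more confident predictions than non-members), the described countermeasure regularizes the network so it generalises rather than memorises. Below is a minimal sketch of a CIFAR-10 CNN combining dropout and L2 regularization as the abstract describes; the layer sizes, dropout rate, and L2 weight are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch: a CNN regularized with dropout and L2 to blunt
# membership inference, per the countermeasure described above.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_regularized_cnn(num_classes: int = 10, l2: float = 1e-3,
                          drop_rate: float = 0.5) -> tf.keras.Model:
    reg = regularizers.l2(l2)  # L2 penalty discourages memorizing training data
    model = tf.keras.Sequential([
        layers.Input(shape=(32, 32, 3)),              # CIFAR-10/100 input size
        layers.Conv2D(32, 3, activation="relu", kernel_regularizer=reg),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", kernel_regularizer=reg),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(drop_rate),                    # dropout against overconfidence
        layers.Dense(128, activation="relu", kernel_regularizer=reg),
        layers.Dropout(drop_rate),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Tuning `l2` and `drop_rate` is one way to realise the performance-versus-mitigation trade-off the abstract mentions: stronger regularization narrows the member/non-member confidence gap at some cost in accuracy.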
Funding: the National NSF of China (61602125, 61772150, 61862011, 61862012); the China Postdoctoral Science Foundation (2018M633041); the NSF of Guangxi (2016GXNSFBA380153, 2017GXNSFAA198192, 2018GXNSFAA138116, 2018GXNSFAA281232, 2018GXNSFDA281054); the Guangxi Science and Technology Plan Project (AD18281065); the Guangxi Key R&D Program (AB17195025); the Guangxi Key Laboratory of Cryptography and Information Security (GCIS201625, GCIS201704); the National Cryptography Development Fund of China (MMJJ20170217); the research start-up grants of Dongguan University of Technology; and the Postgraduate Education Innovation Project of Guilin University of Electronic Technology (2018YJCX51, 2019YCXS052).