Abstract
Face-spoofing (liveness) detection based on deep convolutional neural networks has achieved good performance in recent years. However, deep neural networks have been shown to be vulnerable to adversarial examples, which compromises the security of face-based application systems. To build better defenses, the mechanism by which adversarial examples are generated for the spoofing-detection task needs to be studied thoroughly. Compared with general classification problems, face-spoofing detection has smaller inter-class distances, and its perturbations are harder to place. Motivated by this, the study proposes an algorithm for generating adversarial examples for face-spoofing detection based on minimum perturbation dimensions and the visual characteristics of the human eye: the perturbation is concentrated on a few dimensions, and spacing constraints between the perturbed points are added to account for the eye's tendency to perceive nearby points as a group, so that the generated adversarial examples are less likely to be noticed by humans. On average, the method needs to change only 1.36% of the input dimensions to fool the network into producing the desired classification result. In evaluations with volunteer observers, the human perception rate of the proposed method is about 20% lower than that of DeepFool.
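The abstract only outlines the idea; the following is a minimal, hypothetical sketch (in PyTorch) of how a sparse perturbation with a spacing constraint could be realized. The greedy gradient-based pixel selection, the Chebyshev spacing window, the step size, and the names sparse_spaced_attack and model are assumptions made for illustration and are not taken from the paper.

# Illustrative sketch only, not the authors' algorithm: greedily pick a few
# high-gradient pixel locations, keep them at least `min_gap` pixels apart,
# and perturb only those locations until the classifier outputs the desired label.
# Assumes `model` is a CNN live/spoof classifier and that `model` and `x` share a device.
import torch
import torch.nn.functional as F

def sparse_spaced_attack(model, x, target, max_pixels=50, min_gap=8, step=0.2):
    """x: image tensor (C, H, W) in [0, 1]; target: desired class index."""
    x_adv = x.clone().detach()
    chosen = []                                      # (row, col) of perturbed pixels
    for _ in range(max_pixels):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))
        if logits.argmax(dim=1).item() == target:    # stop as soon as the attack works
            break
        loss = F.cross_entropy(logits, torch.tensor([target]))
        grad, = torch.autograd.grad(loss, x_adv)
        saliency = grad.abs().sum(dim=0)             # per-location gradient magnitude
        for r, c in chosen:                          # spacing constraint: block the
            saliency[max(0, r - min_gap): r + min_gap + 1,         # window around
                     max(0, c - min_gap): c + min_gap + 1] = -1.0  # perturbed locations
        flat = int(saliency.argmax())
        r, c = divmod(flat, saliency.shape[1])
        chosen.append((r, c))
        x_adv = x_adv.detach().clone()
        with torch.no_grad():                        # change only the selected pixel,
            x_adv[:, r, c] = (x_adv[:, r, c]         # descending the target-class loss
                              - step * grad[:, r, c].sign()).clamp(0.0, 1.0)
    return x_adv.detach(), chosen

In this sketch, the blocked window around previously chosen pixels is what keeps the perturbed points from clustering into a visually salient blob; the paper's actual spacing constraint and optimization procedure may differ.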
Authors
马玉琨
毋立芳
简萌
刘方昊
杨洲
MA Yu-Kun; WU Li-Fang; JIAN Meng; LIU Fang-Hao; YANG Zhou (Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453000, China)
Source
《软件学报》 (Journal of Software)
Indexed in: EI, CSCD, Peking University Core Journals (北大核心)
2019, No. 2, pp. 469-480 (12 pages)
Funding
Science and Technology Innovation Project of Beijing Municipal Education Commission (KZ201510005012)
National Natural Science Foundation of China (61702022)
China Postdoctoral Science Foundation (2017M610026, 2017M610027)
Keywords
face-spoofing detection
adversarial example
convolutional neural network
adversarial perturbation
visual concentration