Adversarial Attacks on Face Recognition System in Physical Domain
Abstract: Adversarial attacks expose both the potential insecurity of face recognition systems and the ways such attacks can be carried out. Most existing adversarial attacks on face recognition systems take place in the digital domain; recent literature, however, shows growing interest in attaching physical objects carrying adversarial perturbations to the face and its surrounding region, such as eyeglass frames, stickers, and caps, so as to mount attacks in the physical domain. This new type of attack can easily bypass most face liveness detection methods currently on the market and directly affect the decision of the face recognition system. Although many methods for generating adversarial samples in the digital domain have been published, reproducing those samples in the physical domain is neither easy nor cheap. This paper proposes a method for generating adversarial samples that extends conveniently from the digital domain to the physical domain: adversarial perturbations of specific shapes are added to an original face sample so that the recognition system takes the face for someone else's (a dodging attack) or for a designated person's (an impersonation attack). The main contributions are threefold. First, face landmarks are used to construct a shape-specific perturbation mask fitted to each face. Second, an adversarial loss function is designed, and a generator is trained with it to produce adversarial samples in the digital domain. Third, a printing score loss function is designed to reduce the color difference between display and printout so that the samples can be reproduced in the physical domain; sample quality is further improved by data augmentation that simulates eyeglass wearing, real-world illumination changes, and similar conditions. Experimental results show that the generated samples not only attack the typical face recognition system VGGFace10 with a high success rate in the digital domain, but can also be reproduced conveniently and in quantity in the physical domain. The method exposes potential security risks of face recognition systems and offers useful guidance for designing defenses against such attacks.
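The abstract names the method's components, but this record carries no implementation detail. Purely as an illustration of the first two contributions (a landmark-derived perturbation mask, and an embedding-space adversarial loss that distinguishes dodging from impersonation), here is a minimal PyTorch sketch. Everything in it is an assumption rather than the authors' code: the dlib-style 68-point landmark indexing, the embedder network, and the rectangular mask (the paper builds shape-specific masks fitted to each face).

    import torch
    import torch.nn.functional as F

    def eyeglass_mask(landmarks, image_size):
        """Binary mask over the eye region, derived from face landmarks.

        landmarks: (68, 2) tensor of (x, y) points in dlib's 68-point
        scheme (indices 17-26 are eyebrows, 36-47 are eyes). Illustrative
        only: the paper constructs its own shape-specific masks."""
        h, w = image_size
        mask = torch.zeros(1, 1, h, w)
        region = landmarks[17:48]                      # eyebrows + eyes
        x0, y0 = region.min(dim=0).values.tolist()
        x1, y1 = region.max(dim=0).values.tolist()
        pad = int(0.05 * h)                            # margin for the "frame" area
        mask[..., max(0, int(y0) - pad):min(h, int(y1) + pad),
                  max(0, int(x0) - pad):min(w, int(x1) + pad)] = 1.0
        return mask

    def adversarial_loss(embedder, face, patch, mask, target_emb=None):
        """Loss to minimize. Dodging: penalize similarity to the victim's
        own embedding. Impersonation: reward similarity to a target's."""
        adv = face * (1 - mask) + patch * mask         # paste patch inside mask
        emb = F.normalize(embedder(adv), dim=-1)
        if target_emb is None:                         # dodging attack
            own = F.normalize(embedder(face), dim=-1)
            return F.cosine_similarity(emb, own, dim=-1).mean()
        tgt = F.normalize(target_emb, dim=-1)          # impersonation attack
        return -F.cosine_similarity(emb, tgt, dim=-1).mean()

In the paper a generator network is trained to produce the perturbation; in a sketch like this, patch would be that generator's output and the loss above one term of its training objective.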
Authors: CAI Chuxin; WANG Yufei; ZHANG Liepiao; ZHUO Sichao; ZHANG Juanmiao; HU Yongjian (School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China; Sino-Singapore International Joint Research Institute, Guangzhou 511356, China; Guangzhou GRG Vision Co., Ltd., Guangzhou 510663, China; Guangzhou GRG Banking Equipment Co., Ltd., Guangzhou 510663, China)
Source: Journal of Cyber Security (信息安全学报), CSCD, 2023, Issue 2, pp. 127-137 (11 pages)
Funding: Supported by the National Key Research and Development Program of China (No. 2019QY2202), the Guangzhou Development District International Cooperation Project (No. 2019GH16), and the Sino-Singapore International Joint Research Institute Project (No. 206-A018001).
Keywords: face recognition; adversarial sample attack; digital adversarial samples; physical adversarial samples; printing score loss function
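The keyword "printing score loss function" refers to the term that closes the color gap between display and printout. The paper's exact formulation is not reproduced on this page; a common choice in the physical-attack literature is the non-printability score (NPS) of Sharif et al., sketched below. The palette printable_colors is an assumption: in practice it is sampled from a printed-and-rephotographed calibration chart.

    import torch

    def printing_score_loss(patch, printable_colors):
        """NPS-style penalty on colors a printer cannot reproduce.

        patch: (3, H, W) RGB in [0, 1]
        printable_colors: (K, 3) RGB triplets measured from a printed chart

        Sharif et al. multiply the distances to all palette colors;
        taking the distance to the nearest printable color, as here,
        is a common and numerically stabler simplification."""
        px = patch.permute(1, 2, 0).reshape(-1, 1, 3)   # (H*W, 1, 3)
        pal = printable_colors.view(1, -1, 3)           # (1, K, 3)
        d2 = ((px - pal) ** 2).sum(dim=-1)              # (H*W, K) squared dists
        return d2.min(dim=1).values.mean()              # 0 if all pixels printable

The full training objective would then combine the adversarial term with this printability term (and typically a smoothness term) through weighting coefficients.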