VeriFace: Defending against Adversarial Attacks in Face Verification Systems
Authors: Awny Sayed, Sohair Kinlany, Alaa Zaki, Ahmed Mahfouz. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3151-3166 (16 pages)
Abstract: Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image to ensure the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
Keywords: adversarial attacks; face verification; adversarial detection; perturbation removal
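
The abstract describes a two-stage defense: an adversarial detector that flags perturbed inputs, followed by a perturbation-removal step applied before the face-verification model. The listing contains no implementation details, so the sketch below is only a minimal, hypothetical Python illustration of such a detect-then-remove pipeline; the names detector, purifier, verifier, and both thresholds are placeholders and are not the authors' actual components or settings.

import numpy as np


def verify_with_defense(image: np.ndarray,
                        reference: np.ndarray,
                        detector,                       # placeholder: image -> score in [0, 1], likelihood of adversarial perturbation
                        purifier,                       # placeholder: image -> cleaned image with perturbations removed
                        verifier,                       # placeholder: (image, image) -> similarity score
                        detect_threshold: float = 0.5,  # assumed detection cutoff
                        match_threshold: float = 0.6) -> bool:  # assumed verification cutoff
    """Return True if `image` and `reference` are judged to show the same person."""
    # Stage 1: adversarial detection on the input image.
    if detector(image) >= detect_threshold:
        # Stage 2: perturbation removal, applied only to inputs flagged as adversarial.
        image = purifier(image)
    # Final step: standard face verification on the (possibly cleaned) image.
    return verifier(image, reference) >= match_threshold

In this sketch, removal is gated on the detector so that clean images pass through untouched; whether VeriFace gates removal this way or purifies every input is not stated in the abstract.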